
Monday, October 6, 2014

Dominating Monopoly!?

No Princesses in jail, either.

Has there ever been a time, while playing a board game like Monopoly that requires you to roll dice, when you wondered, "How will my roll change the game?" You might roll poorly and land on an opponent's space. You might roll perfectly, letting you buy the property or space you want. You might even roll poorly, land on a Chance space, and then be sent to the very destination you wanted in the first place!

And we thought that Contra was hard.
These phenomena, and the probability of landing on certain spaces, can be derived using Markov chains. In programming (and economics, and even statistics), a Markov chain is a model that describes the probability of moving from one state to another. The transitions often follow a predictable pattern, but there is also a chance of not moving to the next state at all and instead remaining in the current one. For example, a Markov chain for a baby might include actions such as sleep, eat, cry, and play. The baby could start in the sleep state and, depending on the weights associated with our chain, will either enter a hungry (eating) state, enter an awake (crying) state, enter a cheerful (playing) state, or simply remain asleep.
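Just to make the idea concrete, here is a minimal sketch of that baby chain in C++. The state names and transition weights are made up purely for illustration: each row of the table sums to 1, and each update rolls a random number to pick the next state.

```cpp
#include <cstdio>
#include <random>

// Hypothetical baby states, purely for illustration.
enum State { Sleep, Eat, Cry, Play, StateCount };

// transition[from][to] = probability of moving from one state to another.
// Each row sums to 1.0, so every row is a valid probability distribution.
const double transition[StateCount][StateCount] = {
    // Sleep  Eat   Cry   Play
    { 0.50,  0.20, 0.20, 0.10 }, // from Sleep
    { 0.30,  0.10, 0.20, 0.40 }, // from Eat
    { 0.20,  0.40, 0.30, 0.10 }, // from Cry
    { 0.40,  0.20, 0.10, 0.30 }, // from Play
};

// Roll a random number in [0,1) and walk the cumulative probabilities
// of the current row to choose the next state.
State nextState(State current, std::mt19937& rng) {
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    double roll = dist(rng);
    double cumulative = 0.0;
    for (int s = 0; s < StateCount; ++s) {
        cumulative += transition[current][s];
        if (roll < cumulative) return static_cast<State>(s);
    }
    return current; // guard against floating-point rounding
}

int main() {
    std::mt19937 rng(std::random_device{}());
    const char* names[] = { "Sleep", "Eat", "Cry", "Play" };
    State baby = Sleep;
    for (int step = 0; step < 10; ++step) {
        baby = nextState(baby, rng);
        std::printf("Step %d: %s\n", step + 1, names[baby]);
    }
}
```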


So, how can one apply this to a board game like Monopoly? You can create a 40x40 matrix that holds the probability of moving from each space to every other space, because that is exactly what the chain describes. Certain cards make some moves more likely, such as a card that sends you to the nearest Railroad, or directly to Jail. Speaking of Jail, it is essentially the "pit" of the game board: more cards and actions send you to Jail than to any other spot on the board, making the spaces AFTER Jail more likely to be landed on!
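As a rough sketch of how you would let the chain churn, the code below builds a dice-only 40-space board (no Chance cards or jail rules, so the numbers are illustrative rather than the real Monopoly odds) and repeatedly multiplies a starting distribution by the transition matrix until the long-run landing probabilities settle:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

const int BOARD = 40;

int main() {
    // transition[from][to]: chance of moving from one space to another with
    // two six-sided dice (sums 2..12). A real Monopoly chain would also fold
    // Chance/Community Chest cards and "Go to Jail" into this matrix.
    std::vector<std::vector<double>> transition(BOARD, std::vector<double>(BOARD, 0.0));
    const double diceProb[13] = { 0, 0, 1/36.0, 2/36.0, 3/36.0, 4/36.0, 5/36.0,
                                  6/36.0, 5/36.0, 4/36.0, 3/36.0, 2/36.0, 1/36.0 };
    for (int from = 0; from < BOARD; ++from)
        for (int roll = 2; roll <= 12; ++roll)
            transition[from][(from + roll) % BOARD] += diceProb[roll];

    // Start on GO and apply the chain over and over; the distribution
    // converges toward the long-run landing probabilities.
    std::vector<double> prob(BOARD, 0.0), next(BOARD, 0.0);
    prob[0] = 1.0;
    for (int turn = 0; turn < 1000; ++turn) {
        std::fill(next.begin(), next.end(), 0.0);
        for (int from = 0; from < BOARD; ++from)
            for (int to = 0; to < BOARD; ++to)
                next[to] += prob[from] * transition[from][to];
        prob.swap(next);
    }
    for (int space = 0; space < BOARD; ++space)
        std::printf("Space %2d: %.4f\n", space, prob[space]);
}
```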


Markov chains can be used for a variety of situations, and are nice to use in AI programming when you want your entities to exhibit somewhat predictable behavior. Just be sure to include a Markov chain for once they are dead, too.

The living dead have a strange affinity for pixel blocks, too!




Thursday, September 25, 2014

Deciding which Behavior to use

Have you ever been to a restaurant, ordered the soup instead of the salad, and wondered why you didn't get the salad instead? What made you choose the boiling liquid over the frozen vegetables? Most of the choices we make each day depend on certain conditions and on the steps needed to act on them. For example, if it were a cold December day outside, maybe you would prefer the soup.

Relating this to video gaming, each AI character follows a tree that helps it determine what needs to be done in order to fulfill its goal. Say we wanted to drive to a store, and pick up some milk. What steps are needed in order to make it to the store? Refer to the following graphic:

Behavior Tree


We start with a selector node that determines which method we will use to get to the store: in this case, taking a car or a bike. Our AI will try the first option, which is usually the "best" option, unless we randomly choose a method instead. The arrow denotes our sequence node, which lists the steps, in order, needed for a method to be completed. If all of the steps in a sequence node can be completed, then that sequence succeeds and so does our method.


In our example, our character would first have to open the door of the car. This sounds like an easy task, but what if the car is locked? What if the door is jammed? What if something is blocking us and we can't even reach the door? An AI character has to consider potentially every possibility when completing simple tasks. If at any time a step in our sequence node fails, we look back at our selector node for a new path to take.
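Here is a bare-bones sketch of those selector and sequence nodes in C++. The node classes and tasks (a locked car door, riding the bike) are invented for this example rather than taken from any particular engine:

```cpp
#include <cstdio>
#include <functional>
#include <memory>
#include <vector>

enum class Status { Success, Failure };

struct Node { virtual ~Node() = default; virtual Status tick() = 0; };

// A leaf task wraps a plain function that reports success or failure.
struct Task : Node {
    std::function<Status()> action;
    explicit Task(std::function<Status()> a) : action(std::move(a)) {}
    Status tick() override { return action(); }
};

// Sequence: succeeds only if every child succeeds, in order.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& child : children)
            if (child->tick() == Status::Failure) return Status::Failure;
        return Status::Success;
    }
};

// Selector: tries each child in turn until one succeeds.
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& child : children)
            if (child->tick() == Status::Success) return Status::Success;
        return Status::Failure;
    }
};

int main() {
    bool carLocked = true; // pretend the car door is locked today

    auto driveToStore = std::make_unique<Sequence>();
    driveToStore->children.push_back(std::make_unique<Task>([&] {
        std::puts(carLocked ? "Car door is locked!" : "Opened the car door.");
        return carLocked ? Status::Failure : Status::Success;
    }));
    driveToStore->children.push_back(std::make_unique<Task>([] {
        std::puts("Drove to the store."); return Status::Success; }));

    auto bikeToStore = std::make_unique<Sequence>();
    bikeToStore->children.push_back(std::make_unique<Task>([] {
        std::puts("Rode the bike to the store."); return Status::Success; }));

    Selector getToStore;                       // try the car first, then the bike
    getToStore.children.push_back(std::move(driveToStore));
    getToStore.children.push_back(std::move(bikeToStore));
    getToStore.tick();
}
```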

Looks like we might be taking the bike...









Monday, September 22, 2014

Which State would a State be in if a State could State its current State?


We know how our characters move throughout an environment. We also know how they attempt to navigate through said environment. And we know that if an entity is in a certain state, it will perform according to that state. Or at least we will know, after this post!

Many games use what are called finite state machines (abbreviated FSM) to determine basic behavior patterns for characters with limited AI. Depending on what state of being an entity is in, it might have special properties or abilities. It may run faster, blend in with its environment, or entirely forsake other needs it would normally have until it leaves that state. As per usual, a picture is in order to help describe this setup.

Learn the secrets of the Ninja!

A typical autonomous entity will follow a set of loose commands and priorities based on its state. Our example character's default state would be its wander state (notice the use of steering behaviors!). Until something causes it to leave this state, our character will simply wander around the screen or area it spawned in. A player-controlled character getting close (or perhaps coming within vision) causes our entity to leave the wander state and enter the attack state: with this, it gains a new objective, to pursue and attack the player character. From this state, there are two possible outcomes: the AI character either goes back to wandering (should it lose sight of the player or if the player flees) or enters an evading state (to avoid being hit by the player).
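A stripped-down version of that ninja might look something like the sketch below. The vision and attack checks are stand-ins passed in as booleans just to keep the example self-contained:

```cpp
#include <cstdio>

enum class State { Wander, Attack, Evade };

struct Enemy {
    State state = State::Wander;

    // These inputs would normally come from the game world;
    // here they are plain parameters so the sketch stays self-contained.
    void update(bool playerVisible, bool playerAttacking) {
        switch (state) {
        case State::Wander:
            if (playerVisible) state = State::Attack;       // spotted the player
            break;
        case State::Attack:
            if (!playerVisible)       state = State::Wander; // lost sight, back to wandering
            else if (playerAttacking) state = State::Evade;  // dodge the incoming hit
            break;
        case State::Evade:
            if (!playerAttacking) state = State::Attack;     // safe again, resume the chase
            break;
        }
    }
};

int main() {
    Enemy ninja;
    ninja.update(true,  false); std::printf("State: %d\n", static_cast<int>(ninja.state)); // Attack
    ninja.update(true,  true);  std::printf("State: %d\n", static_cast<int>(ninja.state)); // Evade
    ninja.update(true,  false); std::printf("State: %d\n", static_cast<int>(ninja.state)); // Attack
    ninja.update(false, false); std::printf("State: %d\n", static_cast<int>(ninja.state)); // Wander
}
```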


State Machines can be very specific!

State machines are great tools for any event-driven scenario that might change a character's goal or objective: if A happens, then react with B. Another positive of state machines is that they can be set up so the entity is always in some state (and thus always doing something) until it reaches an end state (usually loss of health or despawning). State machines can also act on more conditions than basic decision making, which typically involves binary options. Instead of simply deciding whether or not to attack the player, our FSM could attack the player in different ways depending on its state.




Saturday, September 13, 2014

Finding the Correct Path (or is it?)

Just try pathing around me!
Ah, Pathfinder. It has been around for a while now, and it is absolutely one of the best things around. There is nothing more interesting than having your Paladin deal 4d6 weapon dice to undead and having them die on impact. There was even this one time when our group wen--.... wait, you mean PathFINDING? Oh, well I guess we can talk about that, too.

Pathfinding involves the techniques and algorithms used to move AI characters around a game world. Often we want our autonomous characters to move on their own. Pathfinding utilizes a combination of steering behaviors and weighted movement costs to determine the most appropriate path for a character to move from point A to point B.

An example of a fleeing mechanic.
What are steering behaviors? Steering behaviors influence the movement patterns used by an AI. For example, are your AI characters going to march back and forth? That would be an example of a wandering behavior. A seek behavior causes an entity to head toward a specific target (usually the player) or location (say, a thief moving toward a treasure chest). And if the character is running away from a location or another entity, it uses (you guessed it!) the flee behavior.
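Seek and flee boil down to the same little piece of vector math, just pointed in opposite directions. Here is a rough sketch; the Vec2 type and the maxSpeed value are assumptions made for the example:

```cpp
#include <cmath>
#include <cstdio>

struct Vec2 {
    float x, y;
    Vec2 operator-(Vec2 o) const { return { x - o.x, y - o.y }; }
    Vec2 operator*(float s) const { return { x * s, y * s }; }
};

// Scale a vector to a desired length (the entity's top speed).
Vec2 normalizedTo(Vec2 v, float length) {
    float mag = std::sqrt(v.x * v.x + v.y * v.y);
    if (mag == 0.0f) return { 0.0f, 0.0f };
    return v * (length / mag);
}

// Seek: head straight toward the target at full speed.
Vec2 seek(Vec2 position, Vec2 target, float maxSpeed) {
    return normalizedTo(target - position, maxSpeed);
}

// Flee: exactly the opposite, head straight away from the threat.
Vec2 flee(Vec2 position, Vec2 threat, float maxSpeed) {
    return normalizedTo(position - threat, maxSpeed);
}

int main() {
    Vec2 thief = { 0.0f, 0.0f }, chest = { 3.0f, 4.0f };
    Vec2 v = seek(thief, chest, 2.0f);
    std::printf("Seek velocity: (%.2f, %.2f)\n", v.x, v.y);   // (1.20, 1.60)
    v = flee(thief, chest, 2.0f);
    std::printf("Flee velocity: (%.2f, %.2f)\n", v.x, v.y);   // (-1.20, -1.60)
}
```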


Weighted movement costs describe the price a character pays to move. Once we know how a character will act (thanks to our steering behaviors), it must choose an appropriate path. However, movement always costs something, even traveling along a straight road. In order for an entity to move, it must give up something in exchange, whether it be energy, movement points, or, in the case of many 3D games, time. Let's use the following image as an example:

Blue equals Good Movements. All the blue!

Our player-controlled character (designated by the middle character, next to the unique model that actually has hair, true to Final Fantasy games) has a certain number of movement points available. In this case, she has 4. The blue squares show all the valid spaces the unit can move to. Note that in this case, "move to" means "end your turn on." The unit can move through friendly units, but cannot end its turn on them. The same goes for moving forward: while she can move 4 squares, she cannot stop in the same space as the Chocobo rider (there isn't enough space there as it is!). Each plain space costs 1 movement point for our unit: its weighted movement cost is 1. Also in the picture are rivers. While the character can move to the left, she does not have the movement points to cross the river, because the river has a higher movement cost (in this case, 2 movement points). Different paths can carry different values for pathing, and some require more effort and resources than others.

Once we figure out which way we are going, and how much each direction will cost us, the unit moves based on its pathfinding algorithm. The two most common algorithms are Dijkstra's and A* (pronounced "A-star"). Dijkstra's finds the "best" path to ALL the spaces a unit can validly reach-- our picture from Final Fantasy Tactics Advance displays this perfectly. However, some games aren't limited by movement costs, such as an open-world first-person shooter (excluding the game world's boundaries). In these situations, finding every valid path for an autonomous character would take a lot of processing power and therefore time, in a scenario where stopping to determine movement could be deadly (for the AI character, that is).
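For the grid-and-movement-points case, a compact Dijkstra sketch might look like this: it finds every tile a unit can reach within its movement points, with placeholder terrain costs of 1 for plains and 2 for rivers:

```cpp
#include <climits>
#include <cstdio>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

int main() {
    // 5x5 grid of movement costs to enter a tile: 1 = plains, 2 = river.
    const int W = 5, H = 5;
    int cost[5][5] = {
        { 1, 1, 1, 1, 1 },
        { 1, 2, 2, 1, 1 },
        { 1, 1, 1, 1, 1 },
        { 1, 1, 2, 2, 1 },
        { 1, 1, 1, 1, 1 },
    };
    const int movePoints = 4;
    const int startX = 2, startY = 2;

    // Dijkstra: always expand the cheapest frontier tile first, so the first
    // time a tile is settled we already know its lowest movement cost.
    std::vector<int> best(W * H, INT_MAX);
    using Entry = std::pair<int, int>;          // (cost so far, tile index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> frontier;
    best[startY * W + startX] = 0;
    frontier.push({ 0, startY * W + startX });

    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    while (!frontier.empty()) {
        auto [c, idx] = frontier.top(); frontier.pop();
        if (c > best[idx]) continue;            // stale entry, already improved
        int x = idx % W, y = idx / W;
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            int nc = c + cost[ny][nx];          // pay the terrain cost to enter
            if (nc <= movePoints && nc < best[ny * W + nx]) {
                best[ny * W + nx] = nc;
                frontier.push({ nc, ny * W + nx });
            }
        }
    }

    // Print the reachable tiles, Final Fantasy Tactics style.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::printf("%s", best[y * W + x] <= movePoints ? " # " : " . ");
        std::printf("\n");
    }
}
```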

Most game engines will instead use the A* method, an extended version of Dijkstra's algorithm. With A*, a heuristic is used to estimate the remaining distance to our goal, and once a valid path to the goal is found, the search for other paths stops. "Heuristic" in this case means to "guess" and get "close enough" to the target without overshooting it-- just like we learned from good old Bob Barker. If the heuristic overshoots (overestimates) the true distance too much, the algorithm can incorrectly eliminate an otherwise "OK" path!
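And here is a matching A* sketch on a small grid, using the Manhattan distance as its heuristic. On a 4-way grid with unit step costs that heuristic never overestimates, so it will not throw away a good path:

```cpp
#include <climits>
#include <cstdio>
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// A* on a small grid: like Dijkstra, but each frontier entry is ranked by
// (cost so far + heuristic estimate of the distance still to go).
int main() {
    const int W = 5, H = 5;
    int walkable[5][5] = {      // 1 = open ground, 0 = wall
        { 1, 1, 1, 1, 1 },
        { 1, 0, 0, 0, 1 },
        { 1, 1, 1, 0, 1 },
        { 0, 0, 1, 0, 1 },
        { 1, 1, 1, 1, 1 },
    };
    const int startX = 0, startY = 0, goalX = 4, goalY = 4;

    // Manhattan distance: an admissible guess on a 4-way grid, it never
    // overestimates, so A* won't discard the best path.
    auto heuristic = [&](int x, int y) {
        return std::abs(goalX - x) + std::abs(goalY - y);
    };

    std::vector<int> gCost(W * H, INT_MAX);
    using Entry = std::pair<int, int>;              // (f = g + h, tile index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> frontier;
    gCost[startY * W + startX] = 0;
    frontier.push({ heuristic(startX, startY), startY * W + startX });

    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    while (!frontier.empty()) {
        auto [f, idx] = frontier.top(); frontier.pop();
        int x = idx % W, y = idx / W;
        if (x == goalX && y == goalY) {              // goal popped: stop searching
            std::printf("Reached the goal in %d steps.\n", gCost[idx]);
            return 0;
        }
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H || !walkable[ny][nx]) continue;
            int ng = gCost[idx] + 1;                 // uniform step cost of 1
            if (ng < gCost[ny * W + nx]) {
                gCost[ny * W + nx] = ng;
                frontier.push({ ng + heuristic(nx, ny), ny * W + nx });
            }
        }
    }
    std::printf("No path to the goal.\n");
}
```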

And remember kids, spay and neuter
your bad coding habits!

Monday, September 8, 2014

What's our Vector, Victor?

Ah, vectors. The foundation of interpreting an object's movement. Believe it or not, you use vectors every day of your life: they keep you bound to your seat, and friction's vector keeps you from sliding away. But how exactly do vectors work? And how do they work in games?

Vectors work based on the concepts of direction and magnitude. A vector's direction denotes which way the vector applies its force, and its magnitude, or length, determines how much force is being exerted. For example, the Earth's gravity imposes a downward vector, a force pulling us toward the Earth's core. This force is strong enough to keep us bound to the planet's surface. Yet, by jumping into the air, we can exert our own vertical vector that lifts us off the ground. The sum of these vectors, however, causes us to be pulled back toward the Earth rather than float away.


Let's put this into a gaming perspective. Every force acting on an entity can be represented as a vector: maybe it is drag from traversing through water, maybe it is gravity. Whenever a character acts, it exerts its own force. When opposing forces balance, the entity is considered at rest. In the sample picture below, both the entity and the world are in balance (S). Note that while there is gravity in the game world, the floor exerts an upward force that keeps the entity from falling through the ground (i.e., it replicates a solid object). When the entity performs a jump, its vector exerts a force that pushes the character into the air: this force must exceed gravity in order for the entity to get off the ground (A). However, once the entity reaches a certain height (namely, the peak of the jump), it needs to be pulled back to the ground. At the peak, the vertical velocity zeroes out and gravity pulls the entity back down. The same thing happens when jumping across a gap (B); the difference being that multiple vectors are at work: not only is there the vector accelerating the jump, there is also the vector creating forward momentum.
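To make that jump concrete, here is a tiny frame-by-frame sketch. The gravity, jump, and timestep numbers are made up, and a real engine would integrate with the frame's actual delta time:

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

int main() {
    const float gravity = -9.8f;       // downward force vector (y is "up" here)
    const float dt = 0.1f;             // fixed timestep, just for the sketch

    Vec2 position = { 0.0f, 0.0f };    // standing on the ground (state S)
    Vec2 velocity = { 2.0f, 5.0f };    // jump: a forward push plus an upward push
                                       // that beats gravity for the first frames (A/B)

    for (int frame = 0; frame < 12; ++frame) {
        velocity.y += gravity * dt;    // gravity constantly drags the vertical vector down
        position.x += velocity.x * dt; // forward momentum carries the arc across
        position.y += velocity.y * dt;

        if (position.y <= 0.0f) {      // the ground pushes back: no falling through the floor
            position.y = 0.0f;
            velocity.y = 0.0f;
        }
        std::printf("frame %2d: pos=(%.2f, %.2f)\n", frame, position.x, position.y);
    }
}
```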


An Introduction, of Sorts



Fight? Run? ITEM????? My name is Brian Jeslis (also known to some as Ronark). I am 24 years old and continuing my education at DeVry University after obtaining my associate degree in Game Simulation and Programming. Welcome to my blog!


As the name suggests (and also my degree), I am working toward a bachelor's degree in programming and development for games and gaming in general. This includes languages like Lua, C++, and C#. By collecting all of my thoughts here on this blog, I know that I can help others with coding as well as show off what I can accomplish. I will also pull real examples from games and show how the concepts apply to anything from a sports game to a first-person shooter to a rhythm game. The choices are unlimited!


Of course, I have many hobbies besides coding. These include football, programming, gaming, hanging out with my friends, and tabletop gaming. When I am not spending time with my girlfriend, I am usually found on my computer, Xbox 360, YouTube, or PS4, meddling with various concepts. I will, in fact, be using YouTube to demonstrate techniques and examples when appropriate.

Any questions? Feel free to ask and comment!