AI Movement Algorithms

For this project I implemented several of the basic movement algorithms commonly used by CPU-controlled bots. The project was built with openFrameworks for Visual Studio (C++). These algorithms are based on those discussed in the book “Artificial Intelligence for Games” by Ian Millington. While I implemented them in 2D (for simplicity), converting them to 3D is straightforward.

You can download and play around with the project from my GitHub.

Kinematic Motion – A boid

The first part of the project focuses on creating a simple representation of an entity on the screen (called a boid). This part also involved setting up a kinematic representation of the boid, which keeps track of and updates its position, orientation, velocity, and rotation every frame. I also added my own forward-vector variable to help with drawing the boid.

boid.h
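As a rough illustration, here is a minimal sketch of what such a kinematic state could look like (the struct and member names are my own and not the actual contents of boid.h):

```cpp
#include "ofMain.h"

// Hypothetical sketch of the boid's kinematic state (not the actual boid.h).
struct Boid {
    ofVec2f position;     // where the boid is on screen
    float   orientation;  // facing angle in radians
    ofVec2f velocity;     // change in position per second
    float   rotation;     // change in orientation per second
    ofVec2f forward;      // extra forward vector used for drawing

    // Integrate the kinematic state once per frame.
    void update(float dt) {
        position    += velocity * dt;
        orientation += rotation * dt;
        forward      = ofVec2f(cosf(orientation), sinf(orientation));
    }
};
```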

Once this was set up, for testing purposes I added some simple logic for the boid to traverse the corners of the screen.

A boid moving around the screen

While setting up the architecture, I went with the assumption that the Boid’s local basis vectors were always fixed with respect to the global (screen) basis vectors. This made it easier to calculate its orientation.

Dynamic Motion – Seek Steering Behaviors

For the second part of the project I implemented the dynamic seek algorithm, which seeks the position of mouse clicks. Since the seek algorithm only updates the position (via a change in velocity), I had to implement a secondary algorithm called Dynamic Align, which updates the orientation (via a change in rotation).

The seek algorithm is very rudimentary, since it just calculates the direction to the target and moves the Boid that way at a supplied maximum acceleration. Logically this ends up creating a ping-pong behavior once the Boid reaches its destination, because there is no stop condition: once it overshoots, the direction of acceleration reverses, causing it to slow down and come back. While tweaking input values I noticed that having a max acceleration value larger than the max speed significantly reduced the ping-pong behavior. Oddly enough, if both the max speed and acceleration values were very large, the Boid started exhibiting interesting behaviors while traveling. For example, by setting maxAcc = 500.0f and maxSpeed = 250.0f and messing around with the target point, I got the Boid to exhibit circular motion around a point!
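For reference, here is a minimal sketch of the seek step, assuming the hypothetical Boid struct from earlier (the function and parameter names are mine):

```cpp
// Dynamic seek: accelerate at full strength towards the target.
// It only produces a linear acceleration; orientation is handled separately by Align.
ofVec2f dynamicSeek(const Boid& boid, const ofVec2f& target, float maxAcceleration) {
    ofVec2f direction = target - boid.position;          // vector towards the target
    return direction.getNormalized() * maxAcceleration;  // always full throttle, hence the ping-pong
}
```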

Dynamic Seek (the blue point is where the mouse was)

A better alternative to seek was Dynamic Arrive. While the basic principle is similar to seek, dynamic arrive also keeps track of a slow radius and a target radius. The applied acceleration is scaled down as the Boid travels further inside the slow radius and becomes zero inside the target radius, which causes the Boid to slow down smoothly. Messing with the input values, I found that keeping the max acceleration at least twice the max speed and the slow radius proportional to the max acceleration gave good results.

The only oddity I found was the “time to target” input. Theoretically, it means that the Boid should reach its destination within this much time. Practically, I realized it actually means that the Boid should reach its destination in at least this much time. Internally it represents the dt component in dv / dt. Keeping this value in (0, 1] gave much better results than values greater than 1. This is probably because dt represents unit time and, for this application, unit time should be the frame time (or a value close to it). But I might be wrong about this inference.
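Here is a sketch of how I understand arrive and where the time-to-target term fits in (all names are illustrative, not the project’s):

```cpp
// Dynamic arrive: scale the desired speed down inside the slow radius,
// then turn the velocity difference into an acceleration over timeToTarget (dv / dt).
ofVec2f dynamicArrive(const Boid& boid, const ofVec2f& target,
                      float maxAcceleration, float maxSpeed,
                      float targetRadius, float slowRadius, float timeToTarget) {
    ofVec2f direction = target - boid.position;
    float distance = direction.length();

    if (distance < targetRadius) return ofVec2f(0, 0);    // close enough: no acceleration

    float targetSpeed = (distance > slowRadius)
        ? maxSpeed
        : maxSpeed * distance / slowRadius;                // slow down inside the slow radius

    ofVec2f targetVelocity = direction.getNormalized() * targetSpeed;
    ofVec2f acceleration = (targetVelocity - boid.velocity) / timeToTarget;

    if (acceleration.length() > maxAcceleration)
        acceleration = acceleration.getNormalized() * maxAcceleration;
    return acceleration;
}
```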

Arrive with t = 0.1
Arrive with t = 2

Align works on the same principle as Arrive except that it affects the Boid’s rotation and thus is subject to the same inferences.
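As a sketch of that idea, with angles in radians and made-up parameter names:

```cpp
// Map an angle to the (-PI, PI] range so the boid always turns the short way.
float mapToRange(float angle) {
    while (angle >   PI) angle -= TWO_PI;
    while (angle <= -PI) angle += TWO_PI;
    return angle;
}

// Dynamic align: same structure as arrive, but applied to orientation/rotation.
float dynamicAlign(const Boid& boid, float targetOrientation,
                   float maxAngularAcc, float maxRotation,
                   float targetRadius, float slowRadius, float timeToTarget) {
    float rotationNeeded = mapToRange(targetOrientation - boid.orientation);
    float rotationSize   = fabsf(rotationNeeded);

    if (rotationSize < targetRadius) return 0.0f;          // already facing the target

    float targetRotation = (rotationSize > slowRadius)
        ? maxRotation
        : maxRotation * rotationSize / slowRadius;
    targetRotation *= rotationNeeded / rotationSize;        // restore the sign

    float angularAcc = (targetRotation - boid.rotation) / timeToTarget;
    if (fabsf(angularAcc) > maxAngularAcc)
        angularAcc = maxAngularAcc * (angularAcc > 0 ? 1.0f : -1.0f);
    return angularAcc;
}
```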

Being dynamic, the outputs of these algorithms are additive. As mentioned before, these algorithms do not directly affect the position and orientation values of the Boid, but rather its velocity and rotation, and those values are then applied per frame. Programmatically, these changes are added up before being applied to the Boid.
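In code, that just means summing the outputs before integrating, roughly like this (again reusing the hypothetical Boid struct):

```cpp
// Per-frame application: steering outputs only touch velocity and rotation,
// and the kinematic update then turns those into position and orientation.
void applySteering(Boid& boid,
                   const std::vector<ofVec2f>& linearOutputs,
                   const std::vector<float>& angularOutputs, float dt) {
    ofVec2f totalLinear(0, 0);
    float   totalAngular = 0.0f;
    for (const ofVec2f& a : linearOutputs) totalLinear  += a;  // additive by design
    for (float a : angularOutputs)         totalAngular += a;

    boid.velocity += totalLinear  * dt;
    boid.rotation += totalAngular * dt;
    boid.update(dt);   // integrate position and orientation
}
```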

Wander Steering Behavior

From here on, the algorithms start to closely resemble what an AI-controlled entity would use. In theory, the wander algorithm enables the Boid to move randomly but believably. Wander is a delegated behavior, meaning that once a target is established, the actual calculations are done by align, seek, or arrive, along with two other behaviors called Dynamic Face and Dynamic “Look Where You Go” (LWYG).

Both Dynamic Face and Dynamic Look Where You Go, on establishing a target orientation, delegate to Dynamic Align. Dynamic Face is pretty straightforward in what it does (i.e. given a target, orient in its direction). Dynamic LWYG, however, depends on the Boid’s velocity to calculate its target orientation. That means that for LWYG to work, the Boid must have some velocity (so it can orient in that direction).
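Both behaviors boil down to picking a target orientation and handing it to align; a sketch of the two (names are mine):

```cpp
// Dynamic Face: orient towards an explicit target position.
float faceTargetOrientation(const Boid& boid, const ofVec2f& target) {
    ofVec2f direction = target - boid.position;
    return atan2f(direction.y, direction.x);
}

// Look Where You Go: orient along the current velocity (needs a non-zero velocity).
bool lwygTargetOrientation(const Boid& boid, float& outOrientation) {
    if (boid.velocity.length() < 0.001f) return false;   // nothing to look along
    outOrientation = atan2f(boid.velocity.y, boid.velocity.x);
    return true;
}
// Either result is then fed into dynamicAlign(...) as the target orientation.
```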

From what I understood, Dynamic Wander differs from kinematic wander in where the target orientation is calculated from. For kinematic wander, the target orientation (used to get a random direction to move in) is calculated from the position of the Boid itself. This is the primary reason why kinematic wander movement is jittery in action. With Dynamic Wander, in contrast, since we affect the Boid’s velocity and rotation, the target orientation must be calculated from a point at a certain distance from the Boid. The expectation is that the Boid’s orientation will become equal to the target orientation by the time it reaches this offset point. This makes the actual orientation change more gradual and thus less jittery.
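A sketch of how such a wander target might be picked, using Millington’s random binomial idea (the parameter names are mine):

```cpp
// Random value in [-1, 1] that clusters around 0 (a "random binomial").
float randomBinomial() {
    return ofRandom(1.0f) - ofRandom(1.0f);
}

// Dynamic wander: nudge the wander angle slightly each frame, then project the
// target onto a circle placed wanderOffset ahead of the boid, so the resulting
// orientation changes stay gradual.
ofVec2f wanderTarget(const Boid& boid, float& wanderOrientation,
                     float wanderOffset, float wanderRadius, float wanderRate) {
    wanderOrientation += randomBinomial() * wanderRate;    // small random change
    float targetOrientation = boid.orientation + wanderOrientation;

    ofVec2f centre = boid.position
                   + ofVec2f(cosf(boid.orientation), sinf(boid.orientation)) * wanderOffset;
    return centre + ofVec2f(cosf(targetOrientation), sinf(targetOrientation)) * wanderRadius;
}
// The boid then Faces this target (or seeks it while using LWYG).
```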

I implemented this algorithm in two different ways. In the first, once the target was established using a random binomial, the Boid faced the direction of the target using Dynamic Face and then moved in that direction with a given max acceleration. The other way was to again pick a target, move to it using seek, and continuously adjust the Boid’s orientation in the direction of travel using Dynamic Look Where You Go.

Dynamic Wander (Look where you go)

The two differ in what happens first: the first variant adjusts the orientation and then moves the Boid, while the second moves the Boid while continuously adjusting its orientation.

Dynamic Wander (Face and accelerate)

From what I observed, keeping all inputs equal, both of these algorithms gave a generally good outcome. However, the variant that used Dynamic Face produced a more random path for the Boid than the other. Depending on what kind of movement is required, either may give an acceptable outcome.

Flocking Behavior

Flocking is the culmination of all the algorithms done so far, plus a few more. Most importantly, flocking requires blended steering. While Dynamic behaviors are additive, blended steering makes it possible to assign weights to each steering output so that, when combined, the effects of certain steering behaviors are more prominent than others.
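Blended steering is essentially a weighted sum of the individual outputs; a rough sketch (the SteeringOutput struct and names are assumptions):

```cpp
// One steering output: a linear acceleration plus an angular acceleration.
struct SteeringOutput {
    ofVec2f linear;            // defaults to (0, 0)
    float   angular = 0.0f;
};

// Blended steering: weight each behavior's output before summing, then clamp.
SteeringOutput blend(const std::vector<SteeringOutput>& outputs,
                     const std::vector<float>& weights, float maxAcceleration) {
    SteeringOutput result;
    for (size_t i = 0; i < outputs.size(); ++i) {
        result.linear  += outputs[i].linear  * weights[i];
        result.angular += outputs[i].angular * weights[i];
    }
    if (result.linear.length() > maxAcceleration)
        result.linear = result.linear.getNormalized() * maxAcceleration;
    return result;
}
```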

The other new algorithms I used to implement flocking were Dynamic Velocity and Dynamic Separation. Dynamic Velocity allows the Boids in a flock to match the velocity of a target Boid (the leader). Dynamic Separation prevents the Boids from overlapping with each other. Other than that, I used Dynamic Arrive to reach the leader and Dynamic LWYG to adjust orientation in the direction of travel. The leader, on the other hand, was a Boid using Dynamic Wander.
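Sketches of the two new behaviors, under the same naming assumptions as before:

```cpp
// Dynamic velocity matching: accelerate to close the gap to the leader's velocity.
ofVec2f matchVelocity(const Boid& boid, const Boid& leader, float timeToTarget) {
    return (leader.velocity - boid.velocity) / timeToTarget;
}

// Dynamic separation: push away from every neighbour inside the threshold,
// with a strength that falls off with the square of the distance.
ofVec2f separate(const Boid& boid, const std::vector<Boid>& flock,
                 float threshold, float decayCoefficient, float maxAcceleration) {
    ofVec2f result(0, 0);
    for (const Boid& other : flock) {
        ofVec2f away   = boid.position - other.position;
        float distance = away.length();
        if (distance <= 0.0f || distance >= threshold) continue;   // skip self and far boids
        float strength = ofClamp(decayCoefficient / (distance * distance),
                                 0.0f, maxAcceleration);
        result += away.getNormalized() * strength;
    }
    return result;
}
```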

For flocking to work, each Boid was assigned a weight, with the leader having a greater value than every other Boid. The algorithm then calculates the center of mass, which becomes the target point for Dynamic Arrive. Dynamic Velocity matches the flock’s velocity to that of the leader and Dynamic Separation prevents overlapping, all the while Dynamic LWYG adjusts the orientation of the Boids. Blended steering combines all the behaviors mentioned above to create a final steering value for each Boid.
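The arrive target itself comes from that weighted center of mass, roughly like this:

```cpp
// Weighted center of mass: the leader's larger weight pulls the target towards it.
ofVec2f centerOfMass(const std::vector<Boid>& flock, const std::vector<float>& weights) {
    ofVec2f sum(0, 0);
    float totalWeight = 0.0f;
    for (size_t i = 0; i < flock.size(); ++i) {
        sum         += flock[i].position * weights[i];
        totalWeight += weights[i];
    }
    return (totalWeight > 0.0f) ? sum / totalWeight : sum;
}
```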

I started with 8 Boids, adjusting the weights applied to velocity, separation, and arrive. From experimenting with different values, I found that for the best results the weight of separation had to be greater than the other two. Moreover, varying the difference gave different kinds of flocking behaviors. For example, with velocity weight = 2.0f and arrive weight = 15.0f, setting separation to 20 gave a flocking behavior similar to ducklings following their mother. For separation weight = 200, the flocking behavior was similar to a group of sheep. For separation weight = 2000, the behavior was more like a group of guards; at this weight, if the Boids got separated from the leader, they automatically ended up getting into a tight formation while moving towards the leader.

If the weight difference between Dynamic Separation and the rest became too big, the Boids started to jitter in place when close to the leader.

Flocking with 10 Boids

The other thing I tried varying was the number of Boids in the flock. Starting from 10, I increased the number of Boids to 200, and while the kind of identifiable formation seen at a lower count was no longer present, the flocking behavior was still observable.

Flocking with 100 Boids

All in all, implementing all of these different algorithms was interesting and informative. I came to understand that the Dynamic behaviors, by virtue of being additive, can give rise to emergent behavior based on the input variables. Flocking was just one interesting way of combining these different behaviors; there are many more that I plan on experimenting with in the future.
