One of the key problems in developing GRIP was creating AI bots that offered a challenge to the player in such a demanding driving environment. We've already briefly covered the difficulties and some of the solutions in this field when talking about the public perception of cheating AI, and in this article we'll dive into it more deeply to uncover more of the detail involved in developing such a system. We'll start with the fundamentals to establish the bedrock, then follow up a little later with some of the more interesting aspects of the AI design that we arrived at.
Where are we going again?
In the previous article on GRIP AI we talked about the use of splines and how these are critical in determining how to navigate around each of the tracks. As a quick reminder, at the start of a race each bot determines the nearest spline and attaches itself to it. It then takes a point on the spline a few metres ahead of the vehicle and aims for that - much like a donkey following a carrot, you might say. To that we add a sideways offset, and here the splines have a maneuvering-width property that varies along their length. This property allows the bots to weave around the centre line of the spline within the confines of that width, without going off track. It also erodes the convoy syndrome where all the bots tightly follow each other along the centre line. This simple maneuvering-width mechanic does much to break up the uniformity of the bot driving, and already we're starting to look good, but this is only the beginning.
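To make that concrete, here's a rough sketch of the kind of calculation involved - the names and structure are illustrative rather than GRIP's actual code, and it leans on Unreal's stock USplineComponent where GRIP has its own spline types:

struct FSplineFollower
{
    USplineComponent* Spline = nullptr; // the spline the bot is currently attached to
    float WeaveOffset = 0.0f;           // current sideways weaving offset, in metres

    // Compute the point the bot should steer towards: a spot a few metres ahead
    // along the spline, pushed sideways by the weaving offset and clamped to the
    // maneuvering width at that point (which in GRIP varies along the spline).
    FVector GetSteeringTarget(const FVector& vehicleLocation, float lookAheadMetres, float maneuveringWidthMetres) const
    {
        float key = Spline->FindInputKeyClosestToWorldLocation(vehicleLocation);
        float distance = Spline->GetDistanceAlongSplineAtSplineInputKey(key);
        float aheadDistance = distance + lookAheadMetres * 100.0f; // Unreal world units are centimetres

        FVector pointAhead = Spline->GetLocationAtDistanceAlongSpline(aheadDistance, ESplineCoordinateSpace::World);
        FVector right = Spline->GetRightVectorAtDistanceAlongSpline(aheadDistance, ESplineCoordinateSpace::World);

        // Keep the weave inside the maneuvering width so the bot never aims off the track.
        float offset = FMath::Clamp(WeaveOffset, -maneuveringWidthMetres, maneuveringWidthMetres);

        return pointAhead + right * offset * 100.0f;
    }
};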
As they drive around, the bots are continually looking at how to navigate ahead. Each spline contains a list of junctions where other splines head off in different directions, and these junctions are evaluated by the bots to determine which route to take. These decisions are a semi-random affair, with each route having a custom probability of being taken relative to the others, as well as an indication of whether it is good for collecting pickups or good for high speed. So, for example, if a bot has no pickups and is ahead of the player, then it is much more likely to take a route laden with pickups than one that is faster. Similarly, if a bot is lagging behind the player then it is more likely to take the faster routes.
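As an illustration of that decision making - again just a sketch, with made-up bias factors and data layout rather than GRIP's actual junction format - a weighted route choice might look something like this:

struct FRouteChoice
{
    int32 SplineIndex = 0;    // which spline this branch joins
    float Probability = 1.0f; // relative weight of taking this branch
    bool bGoodForPickups = false;
    bool bGoodForSpeed = false;
};

// Pick a branch at a junction, biasing towards pickup routes when the bot wants
// pickups and towards fast routes when it's lagging behind.
int32 ChooseRoute(const TArray<FRouteChoice>& choices, bool wantsPickups, bool wantsSpeed)
{
    TArray<float> weights;
    float totalWeight = 0.0f;

    for (const FRouteChoice& choice : choices)
    {
        float weight = choice.Probability;

        if (wantsPickups && choice.bGoodForPickups) weight *= 3.0f; // bias factors invented for the example
        if (wantsSpeed && choice.bGoodForSpeed) weight *= 3.0f;

        weights.Add(weight);
        totalWeight += weight;
    }

    // Roulette-wheel selection over the weighted branches.
    float pick = FMath::FRandRange(0.0f, totalWeight);

    for (int32 i = 0; i < weights.Num(); i++)
    {
        if (pick <= weights[i]) return choices[i].SplineIndex;
        pick -= weights[i];
    }

    return (choices.Num() > 0) ? choices.Last().SplineIndex : -1;
}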
What happens when a vehicle crashes off the track or otherwise deviates away from the spline it is currently following? Trying to follow a spline that is no longer accessible results in pretty poor driving behaviour, so it's critical to determine whether the current spline is still valid for each bot at very regular intervals, without burning too much CPU time. The sooner we establish that the current spline is no longer valid, the quicker we can have the bot switch to a better one, giving correspondingly more intelligent behaviour - there's nothing worse than seeing a bot driving blindly up against a wall like a horny dog, trying to follow a spline that is no longer accessible.
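A sketch of what that periodic check might look like - the thresholds and helper names here are assumptions, not GRIP's code:

// Called a few times a second, staggered across the bots to spread the CPU cost.
void ABotVehicleSketch::UpdateSplineValidity()
{
    FVector location = GetActorLocation();

    float key = CurrentSpline->FindInputKeyClosestToWorldLocation(location);
    FVector nearest = CurrentSpline->GetLocationAtSplineInputKey(key, ESplineCoordinateSpace::World);

    // If we've drifted too far from the spline (crashed off the track, or been
    // flung onto another route) then it's no longer a useful thing to follow.
    const float maxValidDistance = 5000.0f; // 50m, in centimetres - an invented figure

    if (FVector::Dist(location, nearest) > maxValidDistance)
    {
        CurrentSpline = FindNearestUsableSpline(location); // hypothetical helper
    }
}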
So we constantly evaluate each bot with respect to its current spline and determine its position along it, follow a point ahead of the bot along the spline, add in a weaving sideways offset from the spline, perform route decision making at junctions, and regularly check that the current spline is still valid, establishing a more suitable spline if not. And yet, this is just the essence of having a vehicle perform the most simple task of driving blindly around a track, and there's still much, much more to do.
I feel the need, the need for speed
Knowing how fast to travel around the track is just as important as knowing where to drive. Take a corner too fast and you're off the track and in trouble. Take it too slow and players will pass you while you do so. Both are undesirable. So we specify the optimum speed at different points around each track in the spline data that the bots are following. At any given point, from this data they determine how fast they should be travelling and aim for that speed accordingly. Finding that optimum speed was one of the most taxing problems we faced during the game's development, as it's this property that we were continually tweaking until the bots were staying on track, though often only barely.
Of course, different vehicles have different handling characteristics and just like sizes, one speed does not fit all. The optimum speeds were generally set with our most skittish of vehicles, such as the Tempest. Once this vehicle was just about staying on track we knew we had all vehicles covered and they would all behave themselves. But a lot of those other vehicles may well be able to take corners faster and there was unrealised potential there for them. So to this preset optimum speed we would add in yet more speed depending on the grip and the mass properties of each vehicle. Lighter, grippier vehicles are permitted to travel around these corners faster than the skid-happy Tempest as they just have the grit for it.
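A back-of-the-envelope version of that speed bonus - the scaling factors here are invented for the example, not GRIP's tuning values:

// The spline data gives an optimum speed tuned against the most skittish vehicle;
// grippier, lighter vehicles are allowed to carry extra speed through the corner.
float CalculateCorneringSpeed(float optimumSpeedKPH, float gripScale, float massScale)
{
    // gripScale and massScale are 1.0 for the reference vehicle, higher for
    // grippier vehicles and lower for lighter ones respectively.
    float bonus = FMath::Clamp((gripScale / massScale) - 1.0f, 0.0f, 0.5f);

    return optimumSpeedKPH * (1.0f + bonus * 0.5f); // up to roughly 25% extra speed
}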
As a slight diversion it's interesting to note how a vehicle actually tries to match a given speed. If you consider that the bots drive their vehicles in the same way that a human player does, by offering controller inputs, then driving at a particular speed really isn't that simple. Except, we did manage to develop a relatively simple solution for it that works remarkably well. In a change of pace, and in a rare show of code from the game, I've included a source snippet below.
/*********************************************************************************
Calculate the throttle required to achieve the desired speed.
*********************************************************************************/

float AFlippableVehicle::AICalculateControlInputsForSpeed(const FVector& xdirection, float targetSpeed)
{
    // Perform all calculations in 1m units, over 1 second of time.

    auto throttle = 1.0f;
    auto mass = Physics.CurrentMass;
    auto velocityDirection = GetVelocityOrFacingDirection();
    auto gravity = FVector(0.0f, 0.0f, -FMathEx::CentimetresToMetres(GravityStrength)) * (1.0f / mass);
    auto drag = GetDragForceFor(velocityDirection * targetSpeed);
    auto resistance = FVector::ZeroVector;

    if (IsGrounded() == true)
    {
        resistance = GetRollingResistanceForceFor(velocityDirection * targetSpeed, velocityDirection);
    }

    // Now we have all the main forces that degrade speed (engine power), so sum
    // them against the velocity vector of the vehicle.

    auto total = drag + gravity + resistance;
    auto totalNormalised = total; totalNormalised.Normalize();

    total *= -FVector::DotProduct(totalNormalised, velocityDirection);

    // total is now the force required simply to counteract the other forces to
    // maintain the target speed, assuming we were at it already.
    // Calculate the throttle position required to achieve that engine power.

    auto minSpeed = FMath::Max(0.0f, targetSpeed - FMathEx::KilometresPerHourToMetresPerSecond(50.0f));
    auto maxSpeed = targetSpeed;

    // Get the total engine power here, piston and jet engine.

    auto enginePower = FMathEx::CentimetresToMetres(GetEnginePower(xdirection));

    // Hopefully, the engine power will exceed the total forces acting against it.
    // If it doesn't, it means we're asking for more speed than the vehicle is
    // capable of.

    auto optimumThrottle = (total.Size() / enginePower);

    // Clamp the throttle in case maximum speed is exceeded.

    optimumThrottle = FMath::Min(optimumThrottle, 1.0f);

    if (GetSpeedMPS() > maxSpeed)
    {
        // If we're already faster than the maximum speed then set the throttle
        // level to that required to maintain optimum speed and it will pretty
        // quickly come down to meet it (due to drag at higher speeds).

        throttle = optimumThrottle;
    }
    else if (GetSpeedMPS() < minSpeed)
    {
        // We're quite a way under the target, so full throttle.

        throttle = 1.0f;
    }
    else
    {
        // We're heading up towards the target, so calculate a ratio between full
        // and optimum throttle. The ratio is cubed (because drag is squared)
        // and we end up getting there quickly while slowing up acceleration
        // towards the end.

        auto ratio = (GetSpeedMPS() - minSpeed) / (maxSpeed - minSpeed);

        throttle = FMath::Lerp(1.0f, optimumThrottle, ratio * ratio * ratio);
    }

    return throttle;
}
The nature of attraction
Bots aren't just concerned with driving forwards around a track and blindly following splines; that is just their most basic function. All the while they are driving they are also searching for opportunities to take advantage of what each track has to offer. Speed-pads and pickup-pads are the primary means of competing in events and the bots of course need to take maximum advantage of them, just as the player does. Generally, these kinds of objects are classed as attractable, in that they attract the bots to their location. Of course, this attraction is constrained so that bots don't crazily veer off-course in order to collect them - they have to fall inside the general navigation goals that each bot has at any given moment.
When a bot detects a nearby attractable on its way around the track, the point towards which the bot drives is changed from the spline target to the attractable target, using transitional interpolation to smooth out the change of course. Once the attractable has been reached, focus returns to the spline it is following. All the time you are watching the bots drive around the track they will be following splines while taking regular diversions to hit these attractable pads - this is by far their most common driving behaviour.
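The transition itself can be as simple as easing a blend factor between the two targets - this is just a sketch of the idea, not the game's code, and the blend speed is a made-up value:

FVector BlendNavigationTarget(const FVector& splineTarget, const FVector& attractableTarget,
    float& blendAlpha, bool attractableActive, float deltaSeconds)
{
    // Ease the blend towards the attractable while one is being chased, and back
    // towards the spline once it has been collected or passed.
    const float blendSpeed = 2.0f; // full transition in about half a second

    float targetAlpha = (attractableActive == true) ? 1.0f : 0.0f;

    blendAlpha = FMath::FInterpConstantTo(blendAlpha, targetAlpha, deltaSeconds, blendSpeed);

    return FMath::Lerp(splineTarget, attractableTarget, blendAlpha);
}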
Only a single bot will aim towards any given attractable at any given moment. We wanted to avoid circumstances where multiple bots tried to take a pickup and collided with one another, as this would be an obvious and very common error. This simple technique avoided many potential collisions, though it is far from the end of the matter when it comes to collision or obstacle avoidance. For that, we had to write a whole new sub-system which helped to prevent the bots from continually slamming into each other, and the player.
Social distancing
Having vehicles avoid collisions with other vehicles, and indeed with problematic elements of the scenery, is a difficult task. They have to factor in a sometimes complex vista of objects to avoid while still meeting their navigation goals in a very dynamic driving environment. It was a daunting prospect. Nevertheless, we arrived at a relatively simple solution that met the design needs of the bots while admittedly making occasional mistakes - though much as a human might in the same situations, so in some sense that made it all the more realistic. Indeed, so lightweight is this code that its core is less than a thousand lines in length.
So each vehicle is itself an obstacle to be avoided. There are also several obstacles laid out around our race tracks that identify areas to be avoided at all cost. These obstacles we boiled down to the most basic construct possible - the ubiquitous sphere. Mathematically, working with these guys makes everything simple and we didn't really find any obstacle that couldn't be described accurately enough by a spherical shape. The mobile and static obstacles are handled a little differently to one another, but for the most part the collision avoidance algorithm is common to both.
Essentially, each vehicle continually looks for obstacles in its path that, at the current closing speed, have a chance of being hit within the next couple of seconds. Those obstacles are then sorted on estimated collision time, with the shortest times first. We take the first obstacle in that list and make it the focus to avoid. We look at the current turn rate of the bot, the radius of our respective spherical volumes, and the distance and time left before impact, and decide whether we need to make any corrective steering. If so, we decide which side of the obstacle's sphere we should take, making an intelligent decision based on such things as where the track is and the direction we're currently steering. Once we have a side to aim for, this new navigation target at the side of the obstacle temporarily overrides the normal track navigation.
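To pin the idea down, here's a sketch of the threat gathering and the side decision - the data layout, thresholds and names are illustrative only, not lifted from GRIP:

struct FObstacleThreat
{
    FVector Location = FVector::ZeroVector;
    float Radius = 0.0f;       // spherical bounds of the obstacle
    float TimeToImpact = 0.0f; // estimated from the current closing speed
};

// Keep only threats within a couple of seconds of impact, soonest first.
void SortThreats(TArray<FObstacleThreat>& threats)
{
    threats.RemoveAll([](const FObstacleThreat& t) { return t.TimeToImpact > 2.0f; });
    threats.Sort([](const FObstacleThreat& a, const FObstacleThreat& b) { return a.TimeToImpact < b.TimeToImpact; });
}

// Decide which side of an obstacle to pass, preferring the side with more track
// available and the direction we're already steering towards.
float ChooseAvoidanceSide(const FObstacleThreat& threat, const FVector& vehicleLocation,
    const FVector& trackRight, float currentSteering)
{
    float side = (FVector::DotProduct(threat.Location - vehicleLocation, trackRight) > 0.0f) ? -1.0f : 1.0f;

    // If we're already steering hard one way then keep going that way rather than flip-flopping.
    if (FMath::Abs(currentSteering) > 0.5f)
    {
        side = FMath::Sign(currentSteering);
    }

    return side; // -1 pass on the left, +1 pass on the right
}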
We then do the same for the second obstacle in the list, making the same determinations about whether any action needs to be taken to avoid it, and if so going through the same decision-making process again, but this time ensuring we still avoid the first obstacle that we already identified. And that's it. We only consider the two highest-priority obstacles in deciding how we should avoid collisions. We limit it to two simply because the code is not engineered in a way that would easily support more without becoming disagreeably convoluted. And in this respect, of all the areas of the AI system, this is perhaps the only one I would like to revisit and implement a more robust solution for - if only because I've been carrying such a solution around in my head but it just never became a high enough priority to get implemented.
One day, two or three years ago now, I was pondering how to process this list of obstacles in a more intelligent way without a lot of conditional code. Thoughts turned to the deep, dark past when we were still writing software renderers. In fact, they turned to one of the very first software renderers, the one used in Quake. Quake didn't render triangles directly; instead it sliced them into horizontal spans, then merged those into a central list of spans for each scanline of the screen. The spans were only actually rendered once all of the overlaps between triangles in screen depth had been eradicated, leaving just a list of non-overlapping horizontal spans of triangles. This meant that each pixel was only ever drawn once, and the time-consuming task of rendering pixels was performed with no overdraw at all, which is a big part of the reason why Quake managed to outperform other 3D games of the era. Not that there was much competition; Quake had no peers at the time of release. A similar zero-overdraw approach also ended up in hardware, in the tile-based PowerVR GPUs, most famously in the Sega Dreamcast.
The thought occurred to me, though, that the problem of collision avoidance would be better solved if we had just a single list of non-overlapping obstacle fragments in the horizontal plane of the vehicle being driven - an infinitely thin, infinite plane that sits horizontally across the centre of the bot vehicle under consideration. With that list of obstacle fragments, bolstered with information about the gaps between them, collision avoidance would become simpler and far more robust, as all the obstacles could be taken into account and not just one or two. One day perhaps.
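Purely as a thought experiment - this was never implemented in GRIP - the core of that idea is just interval merging, much like Quake merging spans on a scanline:

struct FBlockedArc
{
    float MinAngle = 0.0f; // radians, relative to the vehicle's heading
    float MaxAngle = 0.0f;
};

// Merge the angular intervals covered by obstacles in the vehicle's horizontal
// plane; whatever isn't covered afterwards is a gap the vehicle could aim for.
TArray<FBlockedArc> MergeBlockedArcs(TArray<FBlockedArc> arcs)
{
    arcs.Sort([](const FBlockedArc& a, const FBlockedArc& b) { return a.MinAngle < b.MinAngle; });

    TArray<FBlockedArc> merged;

    for (const FBlockedArc& arc : arcs)
    {
        if (merged.Num() > 0 && arc.MinAngle <= merged.Last().MaxAngle)
        {
            // Overlaps the previous arc, so extend it - the Quake span trick.
            merged.Last().MaxAngle = FMath::Max(merged.Last().MaxAngle, arc.MaxAngle);
        }
        else
        {
            merged.Add(arc);
        }
    }

    return merged;
}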
What's going on?
What we've described so far covers just the most basic of bot behaviour - where to go, what to hit and what to avoid. But sometimes bots have other concerns. Sometimes they need to collect themselves, or consider some other kind of maneuvering. In general, this is called state control and is very common in AI systems.
Bots can exist in one of several states, and some of these are outlined below.
- General maneuvering - this is where we do the common going-forwards, spline-following kind of driving
- Recovering control - a vehicle has lost control, is facing the wrong direction or is spinning wildly, and here we attempt to recover control before deciding what to do next
- Reversing to reorient - a vehicle has recently recovered control or hit a static forward obstacle and needs to reverse in order to do a two-point turn to head off in the correct direction
- Launch to reorient - a vehicle is facing the wrong direction and is launching in order to turn around
- J-turn to reorient - a vehicle is facing the wrong direction and is performing a J-turn in order to turn around
In each of these vehicle states we have specific code to detect that state, and other code that governs the bot's behaviour during it. All it ever really boils down to is: how am I steering, and how fast should I be going? Sensing the current state can be tricky given our limited understanding of the environment, but even a limited understanding results in good decisions the vast majority of the time. When you do occasionally see bots make mistakes, it's likely because they simply can't see what you can see.
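As a final sketch, the state handling boils down to something like the following - the actual states in GRIP's code will be named and structured differently:

enum class EBotState : uint8
{
    GeneralManeuvering,
    RecoveringControl,
    ReversingToReorient,
    LaunchToReorient,
    JTurnToReorient
};

void TickBotState(EBotState& state /*, plus whatever sensing data the vehicle has */)
{
    switch (state)
    {
    case EBotState::GeneralManeuvering:
        // Normal spline following, attraction and avoidance as described above;
        // detection code watches for loss of control or a blocked nose and
        // switches state when it sees either.
        break;

    case EBotState::RecoveringControl:
        // Damp the inputs until the vehicle settles, then decide whether a
        // reverse, launch or J-turn is needed to point the right way again.
        break;

    case EBotState::ReversingToReorient:
    case EBotState::LaunchToReorient:
    case EBotState::JTurnToReorient:
        // Each of these ultimately resolves to the same two questions:
        // how am I steering, and how fast should I be going?
        break;
    }
}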
So what have we learned?
Just the fundamentals. None of this stuff is particularly exciting in that the goals are pretty basic, but it's not that easy either, and you need a considerable amount of code to produce even the most rudimentary behaviour. You need to know where to navigate around the track, how fast you're supposed to be going, what to aim for, what to avoid, how to detect states such as having crashed or gone off course, and what to do about it. All of this in concert, and not in conflict.
Much of what we do here is about utilising splines and spherical volumes to keep things as simple as possible. And all we really do is figure out what controller inputs for steering and speed control to offer the vehicle, just as if it were controlled by a human. For such a simple aim, a lot of information has to be gathered and analysed, quickly, to decide exactly what those inputs should be.
But of course, once you have this bedrock in place, that's when the real fun begins. Next time, we'll take a look at some of the more interesting facets of bot behaviour and take a deeper dive into one or two. Not just about how to drive, but about how to fight as well.
[Posted 05/25/2020]