
Introduction to Procedural Animation

In this series of articles, we will introduce inverse kinematics in video games. Before starting our journey, I will talk about several games that use procedural animations, and how they differ from traditional, asset-based animations.

The series will consist of the following parts:
- Part 1. Introduction to procedural animation
- Part 2. The mathematics of forward kinematics
- Part 3. Implementation of forward kinematics
- Part 4. Introduction to gradient descent
- Part 5. Inverse kinematics for a robotic arm
- Part 6. Inverse kinematics for tentacles
- Part 7. Inverse kinematics for spider legs
Part 1. Introduction to procedural animation
In most games, character animations are "static". When a character moves on screen, an artist authors that specific movement, either by hand or with motion capture. Animations in this case are pre-created assets: when a character needs to perform a different action, a different animation is required. This approach to character movement is standard in the gaming industry. There are large, detailed collections of animations covering the most common behaviors, such as walking, jumping, and shooting. Traditional animations dominate the gaming industry, but they have a worthy alternative. I want to introduce you to the concept of procedural animations.
The main idea here is that the states of the character can be generated procedurally. One of the most common techniques for generating procedural animations uses a physics simulation, which is why it is often called physical animation (Wikipedia). A typical example is water: you can animate it by hand, or use an animation driven by fluid dynamics.
Below we will discuss a very specific subspecies of physical animation that uses rigid body simulation. The same type of simulation is commonly used in game engines such as Unity and Unreal. Let's see how this simple principle is used in games to create physical animations.
Ragdoll Physics
At the very core of physical animation is the idea that the movement of characters can be simulated. By recreating the processes and constraints that govern the human body, you can come close to creating realistic behaviors. One of the simplest but most effective ways to create procedural animations is ragdoll physics (Wikipedia). The idea is to take a humanoid body and connect all of its limbs with joints that recreate the degrees of freedom of the real prototype. Using nothing but rigid body physics and joint constraints, you can simulate a human body falling. This not only saves money on "death animations"; it also produces characters that fall and interact with the environment realistically. Such a task is almost impossible to solve with a pre-made set of animations, no matter how detailed.
A major drawback of ragdolls is their enormous unpredictability, which often leads to very funny behaviors.

Today, ragdolls are commonplace in games. Unity has a simple Ragdoll Wizard tool that lets you quickly turn a humanoid model into a ragdoll.
Rigid Body Simulations
The main problem with ragdolls is the lack of control over the motion. If you simply connect body parts with joints, the character can neither walk nor jump; it will only fall. However, there are situations in which a hybrid approach can be used.
In the article How Grow Home Uses Maths To Generate Personality, game journalist Alex Wiltshire talks with Ubisoft about Grow Home. One of the game's main features is the way its protagonist, BUD, moves. The game has no ready-made animations, at least not in the traditional sense. When the player moves, the positions of the legs and arms are driven by code. The body parts are subject to the same constraints as a ragdoll, which makes the resulting animations convincing.
A similar principle is also actively used in Rain World. Each animal in the game has a body made of several colliders; some are driven by code, others by joints. This can be seen in the animation below: the end points of the bird of prey's wings are moved programmatically, while the remaining bones are connected by joints. Driving only the end points automatically creates smooth animation that would otherwise be impossible.

Both Grow Home and Rain World use procedural animations to make their characters more believable, but their character controllers do not rely on these animations. Gang Beasts develops this concept even further: the game fully embraces the sloppy movements that ragdoll physics produces. The result is comical characters with unpredictable movements.

Inverse kinematics
Rigid body simulations make it easy to create animations: we indicate where BUD's arms and legs should be, and the physics engine does the rest. This very simple approach works for simple characters, but it often lacks realism. A rigid body simulation only takes into account parameters such as gravity and mass; it has no contextual knowledge. In many cases, we need to create movement that is driven by more than gravity and joint constraints.
The next step in creating procedural animations is known as inverse kinematics. Given a ragdoll of any kind, inverse kinematics calculates how it needs to move in order to reach a desired target. In Grow Home and Rain World, physics alone determines how the gravity-affected joints move; inverse kinematics makes them move with a purpose.
One of the first indie games to actively use this concept was The Majesty of Colors by Future Proof Games. In it, the player controls the tentacle of a sea creature. Unlike the bird wing in Rain World, this tentacle is not simply jointed: each segment is rotated so that the tip of the tentacle reaches the desired point. If this animation used only rigid body simulation, the tentacle would appear to be "pinned" to that point, like a piece of rope.

Inverse kinematics can be used to solve many problems. The most common is moving humanoid characters naturally toward specific objects. Instead of using predefined animations, developers simply specify the target the hand should reach; inverse kinematics does the rest, finding the most natural way to move the joints of the arm. If only a rigid body simulation were used, the movement would look jerky, as if body parts were simply being dragged around.
Unity's animation system, Mecanim, has a tool (Unity manual) that allows developers to use inverse kinematics for humanoid characters.

In the remainder of this series of articles, I will focus on solving the problem of inverse kinematics. We will figure out how to control a robotic arm or the tentacle of a monster.

Part 2. The mathematics of forward kinematics
Now we begin our journey into the world of inverse kinematics. There are many ways to solve this problem, but they all start with forward kinematics.
Inverse kinematics takes a point in space and tells us how to move the arm to reach it. Forward kinematics solves the dual problem: knowing how we move the arm, it tells us which point in space the arm reaches.
Robotic arm
Inverse kinematics was originally used to control robotic manipulators, so the assumptions and terminology of robotics will be used in this series of articles. However, this does not limit the possible applications of inverse kinematics: you can use it for human arms, spider legs, and tentacles.
First, let's start by demonstrating what we mean by the term “robot manipulator”:

The image above shows a typical robotic manipulator made of "limbs" connected by "joints". Since the robot shown has five independent joints, it is said to have five degrees of freedom. Each joint is driven by a motor that rotates the attached limb to a given angle.
To consider a more general example, we can draw a diagram of the joints. In this article, we will assume that each joint can rotate around only one axis.

The tool attached to the end of the robotic arm is called the end effector. Depending on the context, it may or may not be counted as a degree of freedom. In this article, the end effector will not be counted, because we will focus only on reaching a desired point.
Direct kinematics
In this example, each joint can rotate around one axis, so the state of each joint is measured as an angle. By rotating each joint to a certain angle, we allow the end effector to reach different points in space. Determining where the end effector is, given the joint angles, is called forward kinematics.
Forward kinematics is an "easy" problem: for each set of angles there is exactly one result, which can be computed without any ambiguity. Understanding how the robotic arm moves in response to the values we feed it is a necessary step toward the inverse problem of inverse kinematics.
Geometric interpretation
Before we start writing code, we need to understand the mathematical constructs behind direct kinematics. But first of all, we need to understand what it means spatially and geometrically.
Since visualizing rotations in 3D is not easy, let's start with a simple manipulator in two-dimensional space. The robotic arm has a "starting position": the configuration in which all of its joints are rotated to their "zero angle".

The diagram above shows a manipulator with three degrees of freedom. Each joint is rotated to its zero angle, that is, the robot is in its starting position. We can see how this configuration changes when the first joint is rotated by an angle α₀:
It is important to note that the motors attached to the other joints have not moved. Each joint contributes a local rotation to the chain of limbs that follows it. The diagram below shows the change in configuration when the second joint is rotated by an angle α₁:

The position of each joint after the second now depends on both rotations: the global rotation of a joint is the sum of the rotations of all the joints before it.
Maths
From the diagrams above it is clear that, to solve the forward kinematics problem, we need to calculate how nested (child) objects move when their parents rotate.
Let's see how to calculate this for two joints. Having found a solution for two elements, we can repeat the process to solve chains of any length.
Let's start with the simple case, when the first joint is in its starting position. This means that its angle is zero:

α₀ = 0

It means that the next joint stays where it started:

P′₁ = P₁

When α₀ ≠ 0, the first joint rotates and drags the rest of the chain with it. The point P₁ orbits around P₀, with the offset D₁ = P₁ − P₀ rotated by α₀. Mathematically, this can be written as:

P′₁ = P₀ + rotate(D₁, α₀)

where rotate(v, α) denotes the rotation of the vector v by the angle α.

Below we will learn how to implement the function rotate using AngleAxis (Unity documentation) without fussing with trigonometry. Reproducing the same logic, we can obtain the equation for the next joint:

P′₂ = P′₁ + rotate(D₂, α₀ + α₁)

And finally, the general equation:

Pᵢ = Pᵢ₋₁ + rotate(Dᵢ, α₀ + α₁ + … + αᵢ₋₁)
In the next part of the article we will see how to implement this equation conveniently in C# code.
Forward kinematics in 2D
If you are familiar with rotations in 2D, this can be done trigonometrically:

rotate(v, α) = (x·cos α − y·sin α, x·sin α + y·cos α), where v = (x, y)

The derivation of this equation can be found in my article A Gentle Primer on 2D Rotations.
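To make the equations above concrete, here is a minimal numeric sketch of 2D forward kinematics (written in Python purely for illustration; the function names are my own, not part of any library):

```python
import math

def rotate(v, angle):
    """Rotate a 2D vector v = (x, y) by `angle` radians around the origin."""
    x, y = v
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

def forward_kinematics(p0, offsets, angles):
    """Apply P_i = P_(i-1) + rotate(D_i, a_0 + ... + a_(i-1)) link by link."""
    point = p0
    total = 0.0
    for d, a in zip(offsets, angles):
        total += a  # the global angle is the sum of all previous joint angles
        r = rotate(d, total)
        point = (point[0] + r[0], point[1] + r[1])
    return point

# Two unit links along X, both joints bent by 90 degrees:
# the chain first goes up, then folds back along -X.
end = forward_kinematics((0.0, 0.0), [(1.0, 0.0), (1.0, 0.0)],
                         [math.pi / 2, math.pi / 2])
```

With both joints at 90°, the end effector lands at (−1, 1), matching what the general equation predicts.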


What about Denavit-Hartenberg matrices?
If you have an engineering background, you may have solved this problem differently. The problems of forward and inverse kinematics are widely known, and several standardized approaches to them exist. One of them attaches four parameters to each joint, called the Denavit-Hartenberg parameters (Wikipedia). They are convenient to work with in matrix form, and they are well suited to solving the inverse kinematics problem analytically.

However, we will not use them in this article. Working with Denavit-Hartenberg matrices requires more mathematics than many programmers will want to digest. The approach I have chosen instead uses gradient descent, a more general optimization algorithm.

Part 3. Implementation of forward kinematics
In this part, we continue with the problem of forward kinematics. Having found a mathematical solution in the previous part, we will now learn how to implement it in C# code for Unity. In the next part, "Introduction to Gradient Descent", we will finally lay the theoretical groundwork for solving the inverse kinematics problem.
Introduction
In the previous part, we formalized the movement of the robotic arm. We started with a simple example of three joints. In their initial positions, they have the configuration shown below:

Different values of the angles α₀, α₁ and α₂ produce different configurations. When the joints are rotated, we observe the following picture:
The behavior of this system can be summarized by the following statements:
- Rotation. The global rotation of the i-th joint is the sum of the rotations of all previous joints: α₀ + α₁ + … + αᵢ.
- Position. The global position of the i-th joint is defined as: Pᵢ = Pᵢ₋₁ + rotate(Dᵢ, α₀ + α₁ + … + αᵢ₋₁).
Given all of the above, we can start thinking about how to implement these behaviors in Unity.
GameObject Hierarchy
Unity already has a way to implement all of the above requirements: its parenting system. If you make a game object a child of another, the child automatically inherits its parent's position, rotation, and scale.

If you are familiar with rigging, this will not surprise you: the bones that act as the joints of a humanoid character also form a parent hierarchy in which rotations and movements are inherited. The image from Michael Erbnetn's Unity Animation 3: Character Setup shows a clear example of this.

When creating the joint hierarchy, you need to make sure that when all local Euler angles are zero, the robotic arm is in its starting position. For a humanoid character, this is usually the standard T-pose shown in the image above.
Implementation
Unity's ability to nest game objects de facto solves the forward kinematics problem. Unfortunately, that is not enough. In the next part of the series, we will see that we actually need a way to query the position of the end effector without moving the robotic arm. This forces us to reimplement this basic Unity feature ourselves.
The first step is to store information about each joint of the robotic arm. This can be done with a script such as the RobotJoint below:

using UnityEngine;

public class RobotJoint : MonoBehaviour
{
    public Vector3 Axis;
    public Vector3 StartOffset;

    void Awake()
    {
        StartOffset = transform.localPosition;
    }
}
To simplify the calculations, we assume that each joint can rotate around only one of its local axes: X, Y or Z. We encode this in the variable Axis, which holds 1 in the coordinate corresponding to the rotation axis. If a joint rotates around its Y axis, Axis will be (0, 1, 0). We will see how this lets us avoid if statements.
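The same idea can be sketched outside of Unity. Below is a small illustration (in Python, with names of my own invention) of why storing the rotation axis as a vector removes the need for branching: a single axis-angle formula (Rodrigues' rotation formula) covers X, Y and Z rotations alike:

```python
import math

def rotate_axis_angle(v, axis, degrees):
    """Rodrigues' formula: rotate vector v around the unit vector `axis`.
    The axis is data, not a code path, so no if statements are needed."""
    a = math.radians(degrees)
    ax, ay, az = axis
    vx, vy, vz = v
    # cross = axis x v
    cx = ay * vz - az * vy
    cy = az * vx - ax * vz
    cz = ax * vy - ay * vx
    dot = ax * vx + ay * vy + az * vz
    cos_a, sin_a = math.cos(a), math.sin(a)
    return (vx * cos_a + cx * sin_a + ax * dot * (1 - cos_a),
            vy * cos_a + cy * sin_a + ay * dot * (1 - cos_a),
            vz * cos_a + cz * sin_a + az * dot * (1 - cos_a))

# A joint with axis (0, 1, 0) rotates around Y; in a right-handed
# convention, rotating (1, 0, 0) by 90 degrees gives (0, 0, -1).
p = rotate_axis_angle((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 90.0)
```

The same function handles any axis vector, which is exactly the convenience the Axis field gives us in Unity.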
Let's create a function ForwardKinematics. It takes an array angles of float values; the name speaks for itself: angles[i] contains the local rotation of the i-th joint. The function returns the position of the end effector in global coordinates:

public Vector3 ForwardKinematics(float[] angles)
{
    ...
}
The code is a straightforward C# implementation of the position equation shown above. The rotations are implemented with the convenient function Quaternion.AngleAxis:

Vector3 prevPoint = Joints[0].transform.position;
Quaternion rotation = Quaternion.identity;

for (int i = 1; i < Joints.Length; i++)
{
    // Rotates around the axis of the previous joint
    rotation *= Quaternion.AngleAxis(angles[i - 1], Joints[i - 1].Axis);
    Vector3 nextPoint = prevPoint + rotation * Joints[i].StartOffset;
    prevPoint = nextPoint;
}

return prevPoint;
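As a sanity check of the loop's logic, here is a transcription into Python (not Unity code; all names are mine), where 2D rotations stand in for quaternions and complex multiplication stands in for quaternion multiplication, mirroring rotation *= Quaternion.AngleAxis(...):

```python
import cmath
import math

def forward_kinematics_2d(start, offsets, angles_deg):
    """Mirror of the C# loop: the accumulated rotation is updated by
    multiplication, then applied to the next joint's start offset."""
    prev_point = start
    rotation = 1 + 0j  # identity rotation, like Quaternion.identity
    for offset, angle in zip(offsets, angles_deg):
        # multiplying unit complex numbers composes 2D rotations
        rotation *= cmath.exp(1j * math.radians(angle))
        prev_point = prev_point + rotation * offset
    return prev_point

# Three unit links, each joint bent by 90 degrees:
end = forward_kinematics_2d(0 + 0j, [1 + 0j, 1 + 0j, 1 + 0j], [90, 90, 90])
```

The chain goes up, then left, then down, ending at −1 + 0j, exactly what the summation formula from the previous part predicts.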
Need help with quaternions?
Rotations in Unity are often described with Euler angles: three numbers corresponding to the object's rotation around the X, Y and Z axes. Euler angles represent the roll, pitch and yaw of an object in space. However, from a mathematical point of view, using Euler angles can lead to rather unpleasant problems.
It is more convenient to work with rotations using quaternions. Quaternions are mathematical objects that can be used to describe rotations. While Euler angles describe an orientation, a quaternion describes the path to take from one orientation to another. Technically this is an oversimplification, but for this article it is more than enough.
Rotations ⇔ quaternions
A quaternion can be thought of as a rotation. Rotating an object in space is, mathematically, the same as multiplying its position by a quaternion. To create a rotation around a fixed axis in Unity, you can use the function Quaternion.AngleAxis. The line Quaternion.AngleAxis(angle, axis); creates a quaternion describing a rotation around the axis axis by angle degrees. In our context, Axis can be (1,0,0), (0,1,0) or (0,0,1), meaning X, Y or Z respectively. This explains why we created the Axis variable in the RobotJoint class.
Adding rotations ⇔ multiplying quaternions
When two quaternions are multiplied, the result is a new quaternion that combines both rotations. At each iteration of the for loop, the variable rotation is multiplied by the quaternion of the current joint. This means it accumulates the rotations of all the joints.
Quaternion * vector = rotated vector
Finally, quaternions are applied on this line:
Vector3 nextPoint = prevPoint + rotation * Joints[i].StartOffset;
It corresponds exactly to the position equation from the previous part: the product of a quaternion and a vector applies the rotation to the vector.
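To see these quaternion facts in action (composing rotations by multiplication, and rotating a vector by a quaternion), here is a self-contained sketch with a hand-rolled quaternion (Python for illustration; these helpers are mine, not Unity's API):

```python
import math

def angle_axis(degrees, axis):
    """Quaternion (w, x, y, z) for a rotation of `degrees` around unit `axis`."""
    h = math.radians(degrees) / 2.0
    s = math.sin(h)
    return (math.cos(h), axis[0] * s, axis[1] * s, axis[2] * s)

def q_mul(a, b):
    """Hamilton product: multiplying quaternions combines their rotations."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * conjugate(q)."""
    w, x, y, z = q
    r = q_mul(q_mul(q, (0.0, v[0], v[1], v[2])), (w, -x, -y, -z))
    return (r[1], r[2], r[3])

# Two 45-degree turns around Z multiply into a single 90-degree turn,
# which (in a right-handed convention) sends (1, 0, 0) to (0, 1, 0).
q = q_mul(angle_axis(45, (0, 0, 1)), angle_axis(45, (0, 0, 1)))
p = q_rotate(q, (1.0, 0.0, 0.0))
```

This is the same structure as the C# loop: rotations accumulate through quaternion multiplication, and a quaternion times a vector yields the rotated vector.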
[Ending follows. In the second article, we look at inverse kinematics.]