Approaches to Coding Animation

Animating a 3D Blender model in Codea, part 1

Fluidly animated characters are an important part of a 3D game, but getting the animated model out of your modelling software and into your code can be a challenge.

A Blender animation playing in Codea. C90 Robot model by Awesome Studios

In this series of posts I’ll be discussing a method for getting 3D animations out of Blender and into Codea, and explaining the process that produced the video above. If you know what Blender and Codea are, you can skip ahead two paragraphs.

For anyone unfamiliar with either of these platforms: Blender is a free, cross-platform (Windows, Mac, Linux) package for 3D modelling and animating (and a lot more besides; it even has an entire game development platform included). Codea is perhaps less well known. It is an iPad app, an active community, and a platform for developing applications for iOS.

At Codea’s heart is the programming language Lua (currently version 5.3), but it adds a load of incredible extra features that allow you to harness the power of iOS, write clean code, or just have a lot of fun. These extras include object-oriented features such as classes and class inheritance (subclassing); APIs such as Box2D, the physics engine that powers a lot of App Store titles; tools for accessing iOS technologies such as multitouch; and a mesh API that gives us access to the iPad’s graphics power in the form of OpenGL ES 2.0, and allows us to write custom shaders in the OpenGL Shading Language (GLSL).

It is these shaders that are going to allow us to reproduce complex animations and mesh deformations, by manipulating tens of thousands of vertices on the GPU. If you are unfamiliar with shaders, or 3D, or indeed anything Codea-related, then I highly recommend the series of blog posts and free e-books created by Ignatz at his site Cool Codea. These are an invaluable resource for learning Codea.

A quick overview of keyframe interpolation

So far I’ve been stressing that this is for 3D animation, but there’s no reason why the method I’m going to outline here could not be adapted for 2D animation. Indeed, Blender can be used to rig 2D characters (although it is not as fully featured in this regard as a dedicated 2D animation rigging system such as Spine).

In this series of posts, I’m going to be describing what I think is the simplest (though not necessarily the most flexible) method for getting an animation into Codea: keyframe interpolation. Here, in summary, are the steps involved in implementing keyframe interpolation in Codea. We will be going over each of these in more detail in this series of posts:

  • Animate your model using Blender’s animation editor. Animation in Blender works by setting up a set of keyframes; Blender then interpolates between the keyframes to create the inbetween (or “tween”) frames that create the illusion of smooth movement. A walk cycle, for instance, might consist of 4 keyframes representing the 4 extremes of the motion, evenly spaced throughout the action (i.e. 2 frames where each leg is at its fullest extent, plus 2 passing frames where the head is at its highest point). The animation editor is actually my favourite part of the Blender application. Compared to doing almost anything else in Blender, it feels very fluid and quick.

  • Export these keyframes.

  • Import the files into Codea. We are going to be using a modified version of Ignatz’s .obj importer.

  • Instead of building a separate mesh for each imported frame, the data for all the keyframes is loaded into a set of custom attribute arrays on a single mesh, making a single, deformable model (see the first sketch after this list).

  • At runtime, we pass the shader a pair of integers identifying the frames we want to interpolate between, and a fractional number between 0.0 and 1.0 representing how far between the two frames we want to mix. By incrementally increasing this value, we can create a smoothly flowing animation (the second sketch after this list shows such a draw loop).

  • The shader (and the GPU) now does all of the heavy lifting, moving tens of thousands of vertices between the various keyframes. The most straightforward way of interpolating between the keyframes is a linear mix, but for motion that more accurately reproduces the tweening that Blender performs, a spline is needed. We will look at implementing a Catmull-Rom spline on the shader (the third sketch below shows the linear version, with the spline formula noted in a comment).
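
First, a minimal sketch of the single-mesh step. It assumes frames is a table of keyframe vertex tables (all the same length) produced by the modified importer, and animShader is the interpolating shader sketched further below; both names are my own, not fixed parts of the method:

```lua
-- Pour every imported keyframe into one deformable mesh.
function buildAnimatedMesh(frames)
    local m = mesh()
    m.vertices = frames[1]      -- keyframe 1 uses the built-in position attribute
    m.shader = animShader       -- assign the shader first, so its custom attributes exist
    -- each remaining keyframe goes into a custom attribute buffer on the same mesh
    for i = 2, #frames do
        local buf = m:buffer("position"..i)  -- matches "attribute vec3 position2" etc.
        buf:resize(#frames[i])
        for j, v in ipairs(frames[i]) do
            buf[j] = v          -- v is a vec3 in model space
        end
    end
    return m
end
```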
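
Next, a hypothetical draw loop for the frame-mixing step. The names model and numFrames, and the playback rate of 12 keyframes per second, are all assumptions:

```lua
function draw()
    background(40)
    perspective()
    camera(0, 10, 30, 0, 5, 0)
    -- advance a fractional frame counter each tick
    counter = (counter or 1) + DeltaTime * 12
    if counter >= numFrames + 1 then counter = counter - numFrames end
    local frameA = math.floor(counter)
    model.shader.frameA = frameA                  -- the keyframe we are leaving
    model.shader.frameB = frameA % numFrames + 1  -- the keyframe we are approaching (wraps)
    model.shader.mixAmount = counter - frameA     -- fractional part, 0.0 .. 1.0
    model:draw()
end
```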
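
Finally, a sketch of the shader itself, trimmed to three keyframes to keep it short. It performs the simple linear mix; the Catmull-Rom alternative, which we will cover later in the series, is noted in a comment:

```lua
animShader = shader([[
uniform mat4 modelViewProjection;
uniform int frameA;          // keyframe we are leaving
uniform int frameB;          // keyframe we are approaching
uniform float mixAmount;     // 0.0 .. 1.0, how far between them

attribute vec4 position;     // keyframe 1, Codea's built-in position slot
attribute vec3 position2;    // keyframe 2, custom attribute buffer
attribute vec3 position3;    // keyframe 3, custom attribute buffer

vec3 keyframe(int i)
{
    // GLSL ES 2.0 cannot index attributes dynamically, hence the branching
    if (i == 2) return position2;
    if (i == 3) return position3;
    return position.xyz;
}

void main()
{
    // linear interpolation; a Catmull-Rom version would blend four keyframes:
    // p = 0.5 * ((2.0*p1) + (-p0 + p2)*t
    //      + (2.0*p0 - 5.0*p1 + 4.0*p2 - p3)*t*t
    //      + (-p0 + 3.0*p1 - 3.0*p2 + p3)*t*t*t)
    vec3 p = mix(keyframe(frameA), keyframe(frameB), mixAmount);
    gl_Position = modelViewProjection * vec4(p, 1.0);
}
]], [[
precision highp float;

void main()
{
    gl_FragColor = vec4(1.0);   // flat white; lighting is omitted here
}
]])
```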

Comparison with a full rigged animation system

Before we go further into these details, I’d like to briefly set out the advantages and disadvantages of this method, and compare it to other approaches that we could take. The relative simplicity of keyframe interpolation is its main advantage and disadvantage. Because we are not importing the actual “rig” (in other words the bone structure, bone weighting information, and inverse kinematics handles that allow us to easily pose, deform, and animate the model in Blender), we do not have to write an importer for one of the animation export formats, or a runtime that is able to interpret the information about bone movements and weightings in order to accurately reproduce the animation in Codea.

With keyframe interpolation, we do not need to worry about bone hierarchies, skinning, and bone weightings. We do not need an animation importer as such (we will use a modified version of Ignatz’s .obj importer), and we don’t need a runtime capable of interpreting the animation data at sixty frames per second. Of course, this greatly simplified approach comes with a considerable loss of flexibility. We cannot procedurally create animations on the fly within our code. If we needed a character to gesture accurately towards an arbitrary point in space with one of their limbs, that would require a different approach and some creative thinking. If we wanted to procedurally combine two animations, so that our dinosaur roars at random points whilst she is running, that would also be tricky: to achieve that level of flexibility, you would need to create a separate set of keyframes, or perhaps split your model into two meshes, each with its own set of keyframes.

Keyframe interpolation, while probably the fastest method of animating (it requires the fewest calculations), is also the least memory efficient per character: every keyframe stores a full copy of the model’s vertices, so a 10,000-vertex model with 20 keyframes, for instance, holds 200,000 positions. Particularly if you had a lot of different models with different animations, a skinned-rig approach would be a lot more memory efficient, and might even allow you to recycle animations across different models (if the bone structure is the same), allowing, for instance, the same walk cycle to be reused. On the plus side of the ledger for the keyframe interpolation technique, the code base is going to be leaner, because you don’t have to take care of cascading bone transforms, weightings, and so on.

Finally, regardless of whether we’re using a keyframe interpolation technique or a full skeletal animation approach, there are downsides to using the shader to perform animation. The first is that getting anything other than graphics out of the GPU is difficult (if not impossible, in Codea). If you want your character to be holding an object in their hand as they run, how do you know where, in 3D space, their hand is at any given point in time? Some creative thinking would be needed here.1 That being said, I think using a shader is the only viable way of producing this kind of animation at speed in Codea. My tests suggest that 100,000 vertices can be animated at 60fps on an iPad Air, enough for many indie-level games. Doing all of those vertex calculations in the main code and then uploading the vertices to the GPU every frame would kill performance.
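
As one taste of that creative thinking, here is a minimal sketch of the contact-points idea from the footnote at the end of this post: a hypothetical handPoints table (the name and values are invented for illustration), interpolated on the CPU with the same linear mix the vertex shader performs.

```lua
-- hypothetical per-keyframe positions of the character's hand, in model space
local handPoints = { vec3(2, 5, 0), vec3(2.5, 5.5, 1), vec3(2, 6, 0), vec3(1.5, 5.5, -1) }

-- the same linear interpolation the vertex shader's mix() performs
local function mixVec3(a, b, t)
    return a + (b - a) * t
end

-- where is the hand right now? Feed in the same values the shader's uniforms get
function handPosition(frameA, frameB, mixAmount)
    return mixVec3(handPoints[frameA], handPoints[frameB], mixAmount)
end
```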

A second downside to implementing the animation on the shader is that we lose the ability to swap between different shaders. Shaders can be used for all sorts of different tasks, but Apple’s developer guidelines recommend that, rather than creating one large shader-to-rule-them-all, with boolean flags and branching if statements representing its different states (I hate long, branching if/else structures anyway), we should, for performance reasons, create a range of shaders designed for specific localised tasks, and then switch between them on the fly. If, for instance, you want a model to start glowing, you swap your default lighting shader for a glow shader. If, however, we have loaded our mesh with tens of thousands of custom attribute values describing how each vertex deforms when animated, not only do we have no means of passing this deformation data to the new shader, but the data is actually erased when we switch shaders, even if the shader we are switching to declares the same custom attributes as our main animation shader. So again, compromise is necessary. If the effect is essential, perhaps a one-shader-to-rule-them-all approach could be taken. Or, instead of rendering the model directly to the screen, it could be drawn to an intermediary image, and the glow effect applied to that buffer image.
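
To make that last suggestion concrete, here is a minimal sketch of the render-to-image route, assuming a hypothetical glowShader post-effect; everything else is Codea’s standard setContext and mesh machinery.

```lua
function setup()
    buffer = image(WIDTH, HEIGHT)     -- offscreen target for the model
    screenQuad = mesh()               -- full-screen quad for the second pass
    screenQuad:addRect(WIDTH/2, HEIGHT/2, WIDTH, HEIGHT)
    screenQuad.texture = buffer
    screenQuad.shader = glowShader    -- assumption: a separate post-effect shader
end

function draw()
    -- pass 1: draw the animated model into the offscreen image;
    -- the animation shader and its custom attributes are left untouched
    setContext(buffer)
    background(0, 0, 0, 255)
    perspective()
    camera(0, 10, 30, 0, 5, 0)
    model:draw()
    setContext()

    -- pass 2: draw the buffered image to the screen through the glow shader
    ortho()
    viewMatrix(matrix())
    screenQuad:draw()
end
```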

Okay, with these caveats out of the way, in the next post we’ll start looking at some code, specifically I/O: getting the animation out of Blender and into Codea.

  1. Have the character’s “sword arm” on a separate mesh that you manipulate directly? Or keep a parallel table of local “contact points” for each keyframe, describing points on the model where you might want to swap various accessories in and out, and then interpolate that table with the same algorithm the vertex shader performs?
