Time for some crystal ball gazing with my wish-list/predictions for a future version of Swift Playgrounds. The top five are things that would change the scope of what a playground could be quite dramatically; the rest are more minor. A couple of these items might be unlikely to happen, but most are feasible, I think.
Being able to compile code on an iPad with Swift Playgrounds in the beta of iOS 10 is a complete joy. With certain coding tasks, however, the performance of the playground code isn’t all that great. It might run a little sluggishly, or it might refuse to run altogether as you increase the number of objects in your code. I noticed this while playing with SceneKit: I found I could only add around 40 cubes to a scene before the playground refused to run. Yet I knew that Apple’s Learn to Code playground books had significantly more complex scenes running in their live views, and that these were powered by Swift/SceneKit code compiled live on-device. So why wasn’t my playground performing better?
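For what it’s worth, the sort of test that tripped the limit was nothing exotic — roughly along these lines (the grid size and box dimensions here are illustrative, not my exact code):

```swift
import SceneKit

// A simple stress test: a grid of 64 cubes — already past the point
// where my playground gave up.
let scene = SCNScene()
for x in 0..<8 {
    for z in 0..<8 {
        let cube = SCNNode(geometry: SCNBox(width: 0.8, height: 0.8,
                                            length: 0.8, chamferRadius: 0.05))
        cube.position = SCNVector3(Float(x), 0, Float(z))
        scene.rootNode.addChildNode(cube)
    }
}
```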
In the Platforms State of the Union session at WWDC, it was announced that the Playgrounds app would place “the entire iOS SDK at your fingertips.” In theory, that makes it an incredibly powerful tool. At present, the only other iOS app that allows you to “code native” (i.e. using Apple’s extensive API library) is the insanely great Continuous, a C#/F# IDE for iOS that lets you use Apple’s APIs via the cross-platform Xamarin environment (recently purchased by Microsoft). Xamarin offers C# wrappers for the whole of the Apple SDK, and even manages to keep these in sync with Apple’s own release cycles (Xamarin support for iOS 9 shipped the same day iOS 9 did). Continuous was released just a few weeks ago, not long after the beta of iOS 10 launched. No sooner had I got used to coding native on my iPad with the new Swift Playgrounds than a second app came along with a similar feature set.
In this post I’m going to further explore the new iPad Swift Playgrounds app, currently available with the beta of iOS 10. As with the previous post, I’m focussing on what iPad Playgrounds is like as a tool for developing code, starting from the in-app blank playground template and building a responsive UIKit interface. At the end I’ll touch on creating playgrounds on the Mac in the new Playground Book format, and bringing those onto the iPad.
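As a taste of what that involves, here’s a minimal sketch of a blank playground driving a UIKit live view — the label and its constraints are just placeholder content:

```swift
import UIKit
import PlaygroundSupport

// A minimal live-view sketch: one label, centred with Auto Layout.
let root = UIViewController()
root.view.backgroundColor = .white

let label = UILabel()
label.text = "Hello from UIKit"
label.translatesAutoresizingMaskIntoConstraints = false
root.view.addSubview(label)
NSLayoutConstraint.activate([
    label.centerXAnchor.constraint(equalTo: root.view.centerXAnchor),
    label.centerYAnchor.constraint(equalTo: root.view.centerYAnchor)
])

// Hand the view controller to the playground's live view pane.
PlaygroundPage.current.liveView = root
```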
One of the most exciting announcements at this year’s Worldwide Developers Conference was that Swift Playgrounds, a feature of Xcode on the Mac since version 6, is coming to iPad. Although presented at the WWDC keynote as primarily an educational portal, with a storefront where users can download content along the lines of iTunes U or the interactive textbooks of iBooks, Playgrounds on iPad is, as the Playgrounds session video makes clear, an immensely versatile app. It promises to blur the boundary between developer and consumer, and as such there are at least three ways it can be used. First, there is the aforementioned downloadable content. Second, users can create their own playgrounds in-app, starting from a blank page or from a number of templates. Finally, Xcode developers can author content in the new Playground Book format, and will be able to distribute it via the storefront. In this series of posts I’ll be looking only at the second of these use cases: using iPad Playgrounds as a stand-alone development tool to create playgrounds from scratch, in-app. Throughout the series I’ll focus on the question: is iPad Playgrounds a serious development tool?
So you’ve exported your rigged model from Blender as an Xcode-friendly
.dae as described in the previous post. In this post we’re going to look at bringing the
.dae into a Swift SceneKit project in Xcode, and programmatically triggering the animations.
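In outline, the approach looks something like this — the file name, animation identifier, and node name below are placeholders, not necessarily what your Blender export will produce:

```swift
import SceneKit

// Sketch of the approach; "character.dae", "runCycle", and "Armature" are
// placeholder names — check your own export for the real identifiers.
let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 400, height: 400))
sceneView.scene = SCNScene(named: "character.dae")
sceneView.allowsCameraControl = true

// The animations live in the scene source, keyed by identifier.
if let url = Bundle.main.url(forResource: "character", withExtension: "dae"),
   let source = SCNSceneSource(url: url, options: nil),
   let run = source.entryWithIdentifier("runCycle", withClass: CAAnimation.self) {
    run.repeatCount = .greatestFiniteMagnitude
    sceneView.scene?.rootNode
        .childNode(withName: "Armature", recursively: true)?
        .addAnimation(run, forKey: "run")
}
```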
SceneKit is Apple’s high-level 3D API that first appeared in OS X Mountain Lion and made the jump to iPhone and iPad (and now Apple TV) with iOS 8. Having spent a lot of time with Banjax working on lighting shaders and implementing a very basic 2.5D physics engine, it’s something of a relief to use an API that has 3D physics built in and allows complex lighting effects to be achieved with just a line of code. It’s great that it can automatically choose an OpenGL ES or Metal rendering engine depending on the hardware and OS it’s running on. It’s also really exciting to use an API with built-in support for animating models with a rig/armature. But how quickly could I get a rigged character with a run-cycle animation out of Blender and into Xcode and SceneKit?
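To give a flavour of that brevity, here’s a minimal sketch (the scene contents are illustrative): a dynamic physics body and shadow-casting lighting each come down to a property or two, rather than a hand-rolled engine.

```swift
import SceneKit

let scene = SCNScene()

// One line turns a node into a dynamic physics body that falls under gravity.
let ball = SCNNode(geometry: SCNSphere(radius: 0.5))
ball.position = SCNVector3(0, 5, 0)
ball.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
scene.rootNode.addChildNode(ball)

// A static floor for it to land on.
let floor = SCNNode(geometry: SCNFloor())
floor.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
scene.rootNode.addChildNode(floor)

// Shadow-casting spot lighting is a couple of properties, not a shader.
let light = SCNLight()
light.type = .spot
light.castsShadow = true
let lightNode = SCNNode()
lightNode.light = light
lightNode.position = SCNVector3(0, 10, 5)
scene.rootNode.addChildNode(lightNode)
```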
Salt Pig’s first release, Beware the Banjax, is now available on the App Store for iPhone, iPad, iPad Pro, and iPod Touch.
This was my entry for the Codea Talk 2015 Christmas Competition, which had a Star Wars theme to celebrate the launch of The Force Awakens. It only took a day or two to put together, as I already had the .OBJ importer code and the wireframe shader ready, and the models all came from BlendSwap (please see credits below), and had fairly minimal processing in Blender before exporting them as .OBJ files.
Over on the Codea Talk forum, we began wondering whether it might be possible to do the toon shader in just a single pass. If you can halve the number of
draw calls you make, you will significantly speed up your code. I went back to my original inspiration for the shader, this GLSL Programming e-Book, and found that after the description of the toon shader that ships with Unity (which I adapted in the previous post), they do indeed describe a single-pass method.
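The single-pass trick is to fold the outline test into the same fragment pass as the banded lighting: silhouette fragments are the ones whose normal is nearly perpendicular to the view direction, so they can simply be painted black before the diffuse bands are applied. Sketched here in plain Swift rather than GLSL (the thresholds and band values are illustrative, not the book’s):

```swift
import simd

// Single-pass toon logic, sketched on the CPU — the real thing lives in a
// GLSL fragment shader. Thresholds and band values are illustrative.
func toonColor(normal: SIMD3<Float>, toLight: SIMD3<Float>,
               toEye: SIMD3<Float>, base: SIMD3<Float>) -> SIMD3<Float> {
    let n = simd_normalize(normal)
    // Outline test: a normal nearly perpendicular to the view direction
    // means this fragment sits on the silhouette — paint it black.
    if simd_dot(n, simd_normalize(toEye)) < 0.2 {
        return SIMD3<Float>(0, 0, 0)
    }
    // Otherwise quantise the diffuse term into flat bands.
    let diffuse = max(simd_dot(n, simd_normalize(toLight)), 0)
    let band: Float = diffuse > 0.66 ? 1.0 : (diffuse > 0.33 ? 0.6 : 0.3)
    return base * band
}
```

One `if` and one quantisation in a single fragment pass, instead of a second geometry pass for the outlines — which is exactly where the draw-call saving comes from.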
Who’s up for a ’toon shader? 3D graphics on a computer screen is, of course, an illusion of depth on a flat surface. Sometimes, though, it’s fun, or useful, to lay that illusion bare: to prod the user by pointing out that the painstakingly rendered 3D scene they’re looking at is, in fact, a depthless image. Or perhaps you want to integrate 3D elements with hand-drawn, 2D-style elements, without the former looking too out of place. Whatever the reason, we see many, many 3D visualisations that eschew photorealistic rendering in favour of something a little more, well, flat.
In the previous posts in this series, we’ve looked at getting an animated model out of Blender, and then recreating that animation in Codea by interpolating between a set of keyframes with a Catmull-Rom spline in the vertex shader. We ended up with code that loaded a model and animated a continuous walk-cycle loop. Chances are, though, that for most of the objects in a game we won’t just want them animating in a continuous loop; rather, we’ll want them to react to the various inputs and triggers around them, and respond accordingly.
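For reference, the Catmull-Rom blend at the heart of those earlier posts looks like this for a single component — written here in Swift rather than in the shader:

```swift
// Catmull-Rom interpolation for one component, as used to blend between
// keyframe positions. p1 and p2 are the keyframes being blended; p0 and p3
// are their neighbours; t runs from 0 (at p1) to 1 (at p2).
func catmullRom(_ p0: Float, _ p1: Float, _ p2: Float,
                _ p3: Float, _ t: Float) -> Float {
    let t2 = t * t
    let t3 = t2 * t
    return 0.5 * ((2 * p1)
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)
}
```

The neighbouring keyframes p0 and p3 shape the tangents, which is what gives the motion its smooth ease-in/ease-out rather than the robotic feel of plain linear blending.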
In this post, we’ll look at how the keyframe interpolation technique actually works in the vertex shader set up in Codea.
In this second post in the series, we’ll focus on getting our animation out of Blender and into Codea.
Fluidly animated characters are an important part of a 3D game, but getting the animated model out of your modelling software and into your code can be a challenge.