Monday, 31 December 2012

Procedural Skybox WIP

I've been learning JavaScript. I thought that a great way to get started would be to try some 3D graphics in the browser using WebGL.

It has been great fun, thanks to Three.js, a lightweight 3D JavaScript graphics library. The examples of it in use are pretty cool, it's clearly going places, and it's incredibly easy to pick up: the vast number of examples means you can dive straight in and figure out what's going on.

So, I had a shot at doing an atmosphere simulation with sun and clouds. I hooked it into date and sun position libraries and have a realistic day-night cycle going thanks to that. The aim is for it all to be procedurally generated and realtime. On my computer (with a Radeon 5750) it runs at a steady 60fps at 1920x1080 resolution.

The atmosphere is an attempt to be fairly realistic and account for atmospheric scattering. This walkthrough by Florian Boesch made the maths really clear.

I am going for a wispy oriental look for the clouds, like how I would imagine clouds in a game like Okami would look, but with a 3D appearance. It turns out that it's relatively easy to create the effect using turbulent noise, rather than generating realistic clouds.
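As a rough illustration (not the actual shader code from this demo), turbulence just sums octaves of a basic noise function and takes absolute values; the hash-based value noise below is a cheap stand-in for a proper Perlin or simplex noise:

```javascript
// Pseudo-random value in [0, 1) from a 2D lattice coordinate
function hash(x, y) {
  const s = Math.sin(x * 127.1 + y * 311.7) * 43758.5453;
  return s - Math.floor(s);
}

// Bilinearly interpolated value noise with a smoothstep fade
function valueNoise(x, y) {
  const xi = Math.floor(x), yi = Math.floor(y);
  const xf = x - xi, yf = y - yi;
  const u = xf * xf * (3 - 2 * xf);
  const v = yf * yf * (3 - 2 * yf);
  const lerp = (a, b, t) => a + (b - a) * t;
  return lerp(
    lerp(hash(xi, yi),     hash(xi + 1, yi),     u),
    lerp(hash(xi, yi + 1), hash(xi + 1, yi + 1), u),
    v
  );
}

// "Turbulence": sum octaves of noise, taking absolute values so the
// result gets the billowy, wispy character rather than smooth hills
function turbulence(x, y, octaves) {
  let sum = 0, amp = 0.5, freq = 1;
  for (let i = 0; i < octaves; i++) {
    sum += amp * Math.abs(valueNoise(x * freq, y * freq) * 2 - 1);
    amp *= 0.5;
    freq *= 2;
  }
  return sum; // roughly in [0, 1]
}
```

Feeding something like `turbulence` with sky-dome texture coordinates and thresholding the result is enough to get wispy cloud shapes without any attempt at physical realism.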

Here's what it looks like now:

I'm currently adding some extra bits like JSON models and a minimap, plus collision detection using Physijs.

Eventually I will add proper lighting for objects in the world and add some nicer models, and then release the code. May try to get it added as a Three.js example, if it turns out well.

On a related note, wow, check these guys out: Reset Atmosphere Tech WIP

Sunday, 30 December 2012

Blind Crossword 3D Again

Back in August I posted about the 3D Crosswords for the Blind software I was working on. Now it's time for a proper release and the code is up on github, with build instructions, documentation and binaries. Windows (XP/Vista/7/8) Version 1.0 (December 2012) here: download.

The program is completely voiced (using SAPI on Windows) and is written in portable C++. It is licensed under the GPLv3.

Here are a couple of screenshots of it in action:

The coloured highlight on this grid marks an anagram

Solvers with poor vision can zoom and pan about the graphical crossword grid to get a good view.
Update 23/01/2013 - Mac version (work in progress):

The OSX version uses the NSSpeechSynthesizer text-to-speech interface for voicing.

Solvers can check answers, sort and filter clues, and more. They can also print their answers, or email them off to be appraised by Eric Westbrook, the sponsor of this project and a partially blind crossword setter!

There are lots of features that could be added or improved! I am looking at getting the software to load several 2D crossword formats, as well as adding a number of other features over the coming year.

If anybody reading this would like to contribute or collaborate on this project, send me a message - it'd be great to hear from you.

Tuesday, 20 November 2012

Nottingham Uni Jam 2012

I went to the Nottingham DevSoc game jam with a couple of friends. It was a great experience, really well organized! It was a 24 hour competition, and we were only allowed to use publicly available resources and stuff we made during that timeframe.

The theme for the jam was these four words: genesis, explosions, flood, evil. Our game was therefore about two players jumping on platforms to escape rising water, throwing and jumping on crates of dynamite, and going up beyond the clouds, with evil-sounding bible quotes about flooding appearing in the background.

Getting a boost off a crate.

For most of the jam our progress went extremely well, but we hadn't counted on how much tiredness would impair us later on. In the end there was a huge rush, and we had no time to prepare before demonstrating to the judges. We plan to come back and make a full game out of it.

Aiming for Worms-like destructible terrain.

Here's a clip of it close to the end of the competition:

When it came to judging, we won a prize for graphics, which was thanks to our team member Joe Williamson and his speedy pixel art prowess.

Friday, 5 October 2012

Signed Distance Field Font Rendering

Signed Distance Field (SDF) font rendering is a really nice technique for rendering crisp, clear fonts. It was developed by Valve and used in Team Fortress 2, and presented at SIGGRAPH 2007 in this white paper.

I wanted to make a game that used a lot of text in a 3D environment, and it quickly became apparent that regular texture fonts weren't up to the task. Without using massive textures for the fonts things looked bad, and the fuzziness was just unavoidable.

Paul Houx's excellent explanation and code posted in this thread on the Cinder forums give a great overview of how SDF font rendering works, and it really helped me try it out.
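The core of the technique is surprisingly small. Here is a JavaScript sketch of what the fragment shader does per pixel (the names `sdfAlpha` and `smoothing` are mine, for illustration): sample the distance texture, where a value of 0.5 sits exactly on the glyph outline, and map the distance to coverage with a narrow smoothstep:

```javascript
// Standard GLSL-style smoothstep: 0 below edge0, 1 above edge1,
// with a smooth hermite ramp in between
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// `distance` is the sampled distance-field texel (0..1, 0.5 = outline).
// `smoothing` controls the width of the antialiased edge; it shrinks
// as the glyph is drawn larger so the edge stays crisp.
function sdfAlpha(distance, smoothing) {
  return smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
}
```

Because the distance field degrades gracefully under bilinear filtering, the same small texture gives sharp edges at almost any scale.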

The text scales beautifully, and the results speak for themselves:

Later, I dropped an OpenGL ES 2 implementation into a game prototype:

Signed Distance Field font rendering.

Monday, 13 August 2012

3D Crosswords for the Blind

I'm working on a crossword puzzle game for the visually impaired. The aim is to enable a blind solver to complete three-dimensional crosswords without needing guidance from elsewhere. To that end, it's fully voiced using computer-generated speech, and everything is accessible through keyboard shortcuts.

The main task ahead is to improve usability. Luckily I have blind friends who are willing to test drive the software. There will be a text tutorial using accessible section headers, supported by a list of controls and audio/video quick starts.

The software itself is written in C++, using Qt. I found that different screen readers all tended to behave differently, probably because their underlying implementations are proprietary and non-standard, so I opted to use SAPI for voice support, which is appropriate for such a text-oriented game.

A few popular two dimensional crossword formats are supported as well.

Qt is a pleasure to work with: the verbose API makes code easy to follow, and the Qt Creator IDE and UI Designer are really well integrated. Using Qt also helps make the program cross-platform. Qt 5 is coming out soon, with QML playing a bigger role, which is definitely a sign of the times, and yet another good reason to pick up JavaScript.

In other news, I came third out of third year student applicants in the Search for a Star competition, in the 'Rising Star' category. So I'm pleased about that... but hopefully it'll go even better next year!

Thursday, 3 May 2012

Search for a Star

I've been participating in Aardvark Swift's Search for a Star games development competition. One of the competition rounds involves fixing a supplied broken game and then improving on it over the course of a week.

It was coded in C++. I used SDL for audio and the DirectX fixed-function pipeline for graphics; use of other third-party libraries and tools was discouraged.

My aim was to make a short platform game based around avoiding creeps and getting to the goal (a star) at the end of each level. It was themed on the 1989 NES game Ironsword: Wizards & Warriors II.

The levels, of which there were to be five, were at least named after the spells in that game. Unfortunately not much else was, because I ran out of time. Left on the to-do list were integration of a tilemap and level editor for creating levels of a respectable size, plus the fixing of several bugs. It's still a laugh to play though.

I take full responsibility for the grotty 8-by-8 pixel art.
One thing that Ironsword really did well was graphics. Ste Pickford did a miraculous job with the massive, detailed sprites for bosses and NPCs, particularly the Dragon King. They really made the game feel epic compared to other titles around at the time:

Get your own crown.
You can grab a zip of my submitted game here. I'll put up a link to a repo with the source on it once the competition is complete.

Update: As advertised, here is the repository with source and development log:

Friday, 30 March 2012

Time-Attack Shooter

Recently I've been working on a time-attack shoot'em up game. I'm using the libGDX games library, including Box2D physics, and a bunch of custom OpenGL Shading Language (GLSL) shaders.

The player is that blob in the middle
Currently the basic game prototype is there. The player whizzes around the area and blows up enemies within time limits, picking up upgrades along the way.

The gameplay will eventually be somewhere between Bubbletanks and Geometry Wars. The time-attack element is motivated by the boss battles in the Touhou bullet hell games, from which I am borrowing some placeholder sprites!

It uses OpenGL ES 2 and runs in the browser. The GLSL shaders include screen distortion, bloom, scanlines, blurring and more. The game will lean heavily on particle effects: a large part of what makes games like these enjoyable for me is the impressive explosions. The particle approach also saves me from trashing the visuals with programmer pixel art.

What happens when a texture with a circular image is mapped to the player (white quad is the Box2D physics polygon)
One thing I thought was clever was using Catmull-Rom splines to dynamically create triangle-fan meshes for actors, which works well for morphing blob-type shapes. It's good enough for now, but I haven't gotten around to doing satisfactory texture coordinate generation yet.
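For the curious, the spline evaluation itself is only a few lines. This is a generic Catmull-Rom sketch (in JavaScript for brevity, not my actual game code):

```javascript
// Catmull-Rom spline: interpolates smoothly through control points p1 and p2,
// using p0 and p3 to shape the tangents. t ranges over [0, 1] between p1 and p2.
function catmullRom(p0, p1, p2, p3, t) {
  const t2 = t * t, t3 = t2 * t;
  return 0.5 * (
    2 * p1 +
    (-p0 + p2) * t +
    (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
    (-p0 + 3 * p1 - 3 * p2 + p3) * t3
  );
}

// Interpolate each coordinate separately for 2D outline points
function catmullRom2D(a, b, c, d, t) {
  return [
    catmullRom(a[0], b[0], c[0], d[0], t),
    catmullRom(a[1], b[1], c[1], d[1], t),
  ];
}
```

Sampling t across [0, 1] between each consecutive pair of control points yields a smooth outline passing through all of them, which can then be fanned into triangles around a centre point to make the blob mesh.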

Wednesday, 8 February 2012

Wizard Conflict

Last Summer I made a minigame called Wizard Conflict for iOS. It's based on a game called Wizard Battle made by Foppy.

Wizard Battle is a "one switch" game - the controls consist of just one button, making it playable for people with physical impairments. The downside is that it tires your fingers out in competitive play. I figured a touchscreen variant of the game would be an improvement for a mobile device.

The gameplay is simple - you spawn units from your side of the screen. The starting objective is to get your units past your enemy for bonus points. To win you need to get a wizard unit past the opponent, usually through cunning or by leveraging a points advantage.

It's funny how many arrows units can take before they die...

The game was written mostly in C++ using a simple engine I made. SDL and CocosDenshion were used for context creation and audio, and OpenGL for graphics. Most of the assets have a Creative Commons license, and the rest I made using Blender and masterful programmer artistry.

It has a few good reviews and one bad one from somebody who found it too tricky. When I update it I'll add a tutorial level, plus some new units and drag-and-drop spells. It also needs releasing for iPhone - that comes as soon as I can get my hands on one to test it on.

Unreal Development Kit

The Unreal Development Kit (UDK) is the development framework for the popular Unreal Engine.

I'm trying to familiarize myself with UDK because I am going to make a game with a similar feel to Which by Mike Inel:

Making a game like this without an engine is too tough a proposition, at least if I want it done in a reasonable time.

UDK includes an application for creating particle systems, which is fast and painless compared to defining them programmatically. It also has a versatile node-based material editor. To get started I put together this particle system of spinning circles, which follows the player around the default scene:

The vision for the game is currently a mix of ideas from Silent Hill 4: The Room, Which, and the dream sequences in Max Payne.

The game should only have about fifteen minutes of playtime in it, but could take a while to make, depending on what design decisions are made down the line... I'll post more as I get some milestones ticked off. Hopefully it'll be done before the year is out!

26/01/2013 - Ah well, I didn't pull this one off. It was too ambitious. One day, maybe.

Tuesday, 17 January 2012

Edge Detection

Edge detection is a type of image processing that identifies discontinuities in brightness or colour in images.

Edge detection techniques can be useful in video games for visual effects. Many games achieve them with a postprocessing effect that uses one or more convolution matrices. By postprocessing, I mean that the effect is applied after the scene has been rendered normally: the shader modifies that result in a second pass.

In image processing, a convolution matrix is just a grid of numbers you use to manipulate an image. Certain combinations of these numbers can produce different effects, such as blur, embossing, edge enhancement, and so on.

In general, the following is done. For each pixel of the first-pass render, the convolution matrix is centred on that pixel. Each cell of the matrix is then multiplied with the brightness value of the pixel it overlaps, and the products are summed. That sum is the brightness value for the corresponding pixel in the postprocessed image.
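The steps above can be sketched in a few lines of JavaScript, using a hypothetical greyscale image stored as a 2D array of brightness values:

```javascript
// Apply a 3x3 convolution matrix at pixel (x, y) of a greyscale image.
// The kernel is centred on the pixel, each cell is multiplied by the
// brightness it overlaps, and the products are summed.
function convolve3x3(image, x, y, kernel) {
  let sum = 0;
  for (let ky = -1; ky <= 1; ky++) {
    for (let kx = -1; kx <= 1; kx++) {
      sum += kernel[ky + 1][kx + 1] * image[y + ky][x + kx];
    }
  }
  return sum;
}

// 4-neighbour Laplacian: zero on flat regions, large where brightness
// changes abruptly - which is what makes it useful for edge detection
const laplacian = [
  [ 0, -1,  0],
  [-1,  4, -1],
  [ 0, -1,  0],
];
```

A flat region gives a sum of zero with this kernel, while any abrupt change in brightness produces a large response; thresholding that response marks the edge pixels.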

The images below show the result of a simple edge detector convolution matrix being applied to a scene:

First pass.
Second pass.
Incidentally, it is very easy to produce a toon shader from here. Instead of returning a single colour (i.e. the grey background like above) when non-edge pixels are found, you can return any range of colours based on the brightness of the pixel in the original image to produce something like this:

Edge detection with toon-shading.
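The banding itself is tiny. Here is a JavaScript sketch of the quantisation step (the function name and `levels` parameter are mine, for illustration; in the shader the same arithmetic runs per fragment):

```javascript
// Toon-shading sketch: instead of a flat grey for non-edge pixels,
// snap the original pixel's brightness down to one of `levels` discrete
// bands. Dividing by `levels` keeps the brightest band below pure white,
// which reads well next to dark edge lines.
function toonShade(brightness, levels) {
  const band = Math.min(Math.floor(brightness * levels), levels - 1);
  return band / levels;
}
```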
Many modern games use more advanced methods such as silhouette extraction, or combinations of several convolution matrices to produce a particular style. Okami and Jet Set Radio come to mind:

Here is the fragment shader code I used for the edge detection shader, building once again on a two-pass shader example from the OpenGL 4.0 Shading Language Cookbook.

#version 400

in vec3 Position;
in vec3 Normal;
in vec2 TexCoord;

uniform sampler2D RenderTex;
uniform float EdgeThreshold;
uniform int Width;
uniform int Height;

subroutine vec4 RenderPassType();
subroutine uniform RenderPassType RenderPass;

struct LightInfo
{
    vec4 Position;  // Light position in eye coords.
    vec3 Intensity; // A,D,S intensity
};
uniform LightInfo Light;

struct MaterialInfo
{
    vec3 Ka;         // Ambient reflectivity
    vec3 Kd;         // Diffuse reflectivity
    vec3 Ks;         // Specular reflectivity
    float Shininess; // Specular shininess factor
};
uniform MaterialInfo Material;

layout( location = 0 ) out vec4 FragColor;

// Shade a fragment with the Phong reflection model (used by the first pass)
vec3 phongModel( vec3 pos, vec3 norm )
{
    vec3 s = normalize( vec3(Light.Position) - pos );
    vec3 v = normalize( -pos ); // View direction in eye coordinates
    vec3 r = reflect( -s, norm );
    vec3 ambient = Light.Intensity * Material.Ka;
    float sDotN = max( dot( s, norm ), 0.0 );
    vec3 diffuse = Light.Intensity * Material.Kd * sDotN;
    vec3 spec = vec3( 0.0 );
    if( sDotN > 0.0 )
        spec = Light.Intensity * Material.Ks *
               pow( max( dot( r, v ), 0.0 ), Material.Shininess );
    return ambient + diffuse + spec;
}

// First pass: render the scene with regular lighting
subroutine( RenderPassType )
vec4 pass1()
{
    return vec4( phongModel( Position, Normal ), 1.0 );
}

// Get the brightness of a pixel (Rec. 709 luma weights)
float luminance( vec3 color )
{
    return 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
}

// Second pass: perform edge detection on the first-pass result
subroutine( RenderPassType )
vec4 pass2()
{
    float dx = 1.0 / float( Width );
    float dy = 1.0 / float( Height );

    // Fetch the brightness values of the pixels that the convolution matrix currently overlaps
    float s00 = luminance( texture( RenderTex, TexCoord + vec2( -dx,  dy ) ).rgb );
    float s10 = luminance( texture( RenderTex, TexCoord + vec2( -dx, 0.0 ) ).rgb );
    float s20 = luminance( texture( RenderTex, TexCoord + vec2( -dx, -dy ) ).rgb );
    float s01 = luminance( texture( RenderTex, TexCoord + vec2( 0.0,  dy ) ).rgb );
    float s11 = luminance( texture( RenderTex, TexCoord + vec2( 0.0, 0.0 ) ).rgb );
    float s21 = luminance( texture( RenderTex, TexCoord + vec2( 0.0, -dy ) ).rgb );
    float s02 = luminance( texture( RenderTex, TexCoord + vec2(  dx,  dy ) ).rgb );
    float s12 = luminance( texture( RenderTex, TexCoord + vec2(  dx, 0.0 ) ).rgb );
    float s22 = luminance( texture( RenderTex, TexCoord + vec2(  dx, -dy ) ).rgb );

    // Apply the 4-neighbour and 8-neighbour Laplacian convolution matrices
    float lapx = s11 * 4.0 - ( s01 + s21 + s10 + s12 );
    float lapy = s11 * 8.0 - ( s00 + s01 + s02 + s10 + s12 + s20 + s21 + s22 );
    float dist = lapx * lapx + lapy * lapy;

    if( dist > EdgeThreshold )
        return vec4( 0.0, 0.0, 0.0, 1.0 ); // Edge pixel: draw it dark
    return vec4( 0.5, 0.5, 0.5, 1.0 );     // Non-edge pixel: flat grey background
}

void main()
{
    // Calls either pass1() or pass2(), depending on which subroutine is bound
    FragColor = RenderPass();
}