Mobile Game Developer

Vector2(Success);

A couple of years ago, I set out to learn Unity. I ported Balloonz to Unity (using UnityScript) and wrote a shallow port of Jake Gordon’s Finite State Machine. I gave up, though, because I hated UnityScript with a passion. The problem for me was that I know browser-based JavaScript really well, and UnityScript is superficially similar syntax-wise, so I’d get myself into trouble by assuming how things (arrays, for example) would behave.

Recently, I decided to have another go at it – this time creating a Breakout clone using C#. And this second time around, I was already familiar with the Unity Editor, and had basic knowledge of how Unity’s Entity-Component-System-But-Not-Really works. I got the game working really quickly, and surprised myself at being able to create a level editor (by importing Tiled maps) with a trivial amount of effort.

Now I’m working on a point-and-click sci-fi adventure game – something original for a change. But I still remember how disheartened I felt that first time around.

That bowl of ramen is the first thing I’ve -ever- drawn without reference. Next time I’ll use the ellipse tool.

The path to success is rarely a straight line. Nothing you ever learn is wasted – it might subconsciously inform your decisions, or directly inform your actions later. You can’t predict the future. Don’t be down on yourself if you don’t get it right the first time.

Unity Breakout Clone: Level Editor

I’ve not forgotten about the Breakout clone that I was working on. Truth be told, I haven’t looked at it in over a month.

Since the last post, I re-implemented the exploding brick type and the SFX that I lost, and created a level editor that imports .tmx level layouts from Tiled. I have a layer for brick colours, a layer for brick types (1hp, 2hp, multiball, indestructible bricks, etc), and a layer for powerups. There’s also an ‘extend paddle’ powerup.

I’d say this is somewhere in between Breakout and Arkanoid/Brickout in terms of features. I think I’ve learned everything that I set out to learn from it, so I’m going to shelve it for now.

Unity Live Training: Breakout

It’s funny how I started making games with a Breakout tutorial… and here I am, four years later, following a Breakout tutorial *sigh*. The more things change, the more things stay the same 🙂

It was easier than I expected it to be tbh, but I do have a *teensy little bit* of experience with Unity already (more on that later).

(Apologies for the slight stutter in the video – my Mac is old and tired :))

I took the results of the live training tutorial, and ran with them; adding an indestructible block, a block that spawns a multiball powerup, and better (but still ropey) paddle physics. Not bad for a few hours’ work!

I think I’m going to continue building on it. A level editor will be the next order of business I expect.

Learning, Faster

I’ve only started making games relatively recently, so I can trace my progress over the course of a few blog posts.

The first game I ever made was a Breakout clone in 2011. I followed a tutorial; I was learning the (then new) HTML5 Canvas API at the time.

The first version of Breakout. It had ten levels, a laser powerup, a starfield background, and a catchy trance soundtrack.

Ever since then, whenever I set out to learn a new engine or API, I’ll use it to do one of two things: I’ll either make a match-3 game, or I’ll build a Breakout clone.

Brickout! came next, in 2012.  I created it so I could learn the latest batch of HTML5 APIs – Web Audio, Touch Events, localStorage, etc.  I improved upon Breakout by introducing multi-hit and indestructible bricks; five new powerups; the ‘nuke’ ability; and a global leaderboard. Amazingly, it’s still really popular!

[Play Brickout!]

As I’ve written and rewritten the same core mechanics numerous times, the process has become second nature. I’ve learned to leverage this knowledge to help me learn new skills quicker. When you remove design decisions from the equation, you pick up the API faster; the game ‘gets out of the way’, and you can focus your efforts on what’s important.

In 2014 I rewrote Brickout in ImpactJS; my intention was to release it as a mobile game using Ejecta. A few little niggles aside, it worked quite well. There was one major deal-breaker though: audio. Without the Web Audio API, you’re *really* limited as to what you can do, and I sure as hell wasn’t going back to using audio sprites. Project: abandoned 🙁

You can guess where this is heading :] I’m going to learn Unity by writing a Breakout clone. Guess I’d best get started!

Creating an iOS App with Cordova, Part 2: Prototyping the User Interface

It’s been a while since I wrote the first part of this article – some of the open source software I’d based the original post on has changed significantly.

The largest change is that PhoneGap is now called Cordova. After Nitobi’s acquisition by Adobe, the name of the project was changed, and stewardship handed to the Apache Foundation. Aside from confusing the hell out of people, this doesn’t affect us much, as for our purposes the API has stayed largely the same.

The framework I had been prototyping the UI with – Ratchet – has also changed. Previously it had only supported iOS-style visuals, but with version two, the Ratchet team have rewritten the framework, decoupling the theme component to support Android and other platform specific styles.

Prototyping the UI

Let’s start by opening Terminal, and creating a new folder to keep all our code in:

mkdir book-app
cd book-app

We’re going to use Bower to manage front end dependencies. If you haven’t heard of it before, Bower is a package manager for front end libraries – in the same way that npm is used for Node.js packages. I’m assuming that you already have Bower installed, so let’s go ahead and create a bower.json file:

bower init

Bower will ask you a load of questions to generate the bower.json file:

[?] name: book-app
[?] version: 0.0.1
[?] description: 
[?] main file: 
[?] what types of modules does this package expose? 
[?] keywords: 
[?] authors: Nikki <your@email.com>
[?] license: MIT
[?] homepage: 
[?] set currently installed components as dependencies? No
[?] add commonly ignored files to ignore list? Yes
[?] would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes

{
  name: 'book-app',
  version: '0.0.1',
  authors: [
    'Nikki <your@email.com>'
  ],
  license: 'MIT',
  private: true,
  ignore: [
    '**/.*',
    'node_modules',
    'bower_components',
    'test',
    'tests'
  ]
}

[?] Looks good? Yes

Once the config file has been created, install Ratchet and Jasmine, remembering to save them as dependencies.

bower install ratchet jasmine --save

Initial sketches

Next we’ll start creating the view. We’ve already got our sketches from the last tutorial, so with those in mind, let’s flesh out the structure by writing the markup. Create an HTML file for each of the three main views…

touch index.html archive.html options.html

…and open up the Ratchet quick start guide at the page for the basic template.

Open up the index.html file in your text editor (I adore Sublime Text 3) and copy the header and footer of the basic template into it.  Change the CSS and JS file references to the correct ones.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Ratchet template page</title>

    <!-- Sets initial viewport load and disables zooming  -->
    <meta name="viewport" content="initial-scale=1, maximum-scale=1, user-scalable=no, minimal-ui">

    <!-- Makes your prototype chrome-less once bookmarked to your phone's home screen -->
    <meta name="apple-mobile-web-app-capable" content="yes">
    <meta name="apple-mobile-web-app-status-bar-style" content="black">

    <!-- Include the compiled Ratchet CSS -->
    <link rel="stylesheet" href="bower_components/ratchet/dist/css/ratchet.min.css">

    <!-- Include the compiled Ratchet JS -->
    <script src="bower_components/ratchet/dist/js/ratchet.min.js"></script>
  </head>
  <body>

...

  </body>
</html>

We also need to link to a theme, so paste the reference to the iOS theme just below the main styles:

<link rel="stylesheet" href="bower_components/ratchet/dist/css/ratchet-theme-ios.min.css">

Open up Ratchet’s components documentation.  In our UI sketches we designed an application bar at the top, and a navigation bar at the bottom, with a button for each of the main views.  Let’s create the top bar, with a button to the right for adding new books.

<header class="bar bar-nav">
  <a class="icon icon-plus pull-right" href="#"></a>
  <h1 class="title">ReadLog Stack</h1>
</header>

The Ratchet docs say that all bars must be added before the content, so let’s create that bottom bar next:

<nav class="bar bar-tab">
  <ul class="tab-inner">
    <li class="tab-item active">
      <a href="index.html">
        <span class="icon icon-bars"></span>
        <span class="tab-label">Stack</span>
      </a>
    </li>
    <li class="tab-item">
      <a href="archive.html">
        <span class="icon icon-pages"></span>
        <span class="tab-label">Archive</span>
      </a>
    </li>
    <li class="tab-item">
      <a href="options.html">
        <span class="icon icon-gear"></span>
        <span class="tab-label">Options</span>
      </a>
    </li>
  </ul>
</nav>

And finally, we’ll create the content container:

<div class="content">
  <ul>
    <li>Books</li>
    <li>Books</li>
    <li>Books</li>
  </ul>
</div>

Copy index.html and paste the markup into archive.html and options.html.  If you open this up in Chrome and enable mobile device emulation, you should see something that looks like this:

I’m going to stop here for now. In part three, I’ll be writing failing tests in Jasmine, which we’ll use to discover more of our application’s functionality. We can then develop the UI accordingly.

You can find the code for this tutorial on GitHub: https://github.com/nikki/ReadLog-App-Tutorial

Be sure to check back in roughly a year’s time! By then – if you’re lucky – I might have finished part three 😉

Web Audio API Tutorial

Welcome to this tutorial on the Web Audio API. I’ll show you how to load and play sounds, how to adjust the volume, how to loop sounds, and how to crossfade tracks in and out – everything you need to get started implementing audio into your HTML5 games.

Web Audio – a brief history

Before the release of the Web Audio API in 2011, the only cross-platform way of playing audio in the browser (without using Flash) was with the <audio> element. The <audio> element has a very basic feature set – there isn’t much to it beyond loading, playing, pausing and stopping a single track.

For the game developers taking advantage of the new and improved graphics APIs (WebGL, Canvas), audio support – or lack thereof – was a source of constant frustration. As graphics advanced, the paucity of the <audio> feature set became more pronounced. Worse still, <audio> was plagued by bugs across the different browser implementations, thwarting developers’ attempts to use the API for even the most basic of its intended purposes.

Ingenious hacks had to be devised – the ‘audio sprite’* was invented simply to get audio to work correctly in iOS. Developers clamoured for a better audio API to complement the rich, engaging visual experiences they were creating with the far-superior graphics APIs.

Enter, the Web Audio API.

The Web Audio API

The Web Audio API enables developers to create vibrant, immersive audio experiences in the browser. It provides a high-level abstraction for manipulating and controlling audio.

The API has a node-based architecture: a sound can be routed through several different types of nodes before reaching its end-point. Each node has its own unique purpose; there are nodes for generating, modifying, analysing and outputting sounds.

Where is it supported?

The Web Audio API is currently supported in all good browsers.

Test for API Support

Before you can load a sound, you first need to check whether the API is supported in your target browser. This snippet of code attempts to create an AudioContext.

var context;

try {
  // still needed for Safari
  window.AudioContext = window.AudioContext || window.webkitAudioContext;

  // create an AudioContext
  context = new AudioContext();
} catch(e) {
  // API not supported
  throw new Error('Web Audio API not supported.');
}

View demo

If the test fails, you have one of three options: a) ignore it (user gets no audio); b) use an <audio> sprite; c) use a Flash fallback. Personally, I’m in favour of option a) – as mentioned in the footnotes, audio sprites are a real pain in the backside to create, and Flash is a no-go in HTML5 mobile games.
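
For option a), a minimal sketch might look like the helper below (the `ifAudioSupported` name is my own invention, not part of any library): if no AudioContext could be created, audio functions become no-ops and the game simply runs silently.

```javascript
// If ctx is falsy (API unsupported), return a do-nothing function;
// otherwise return the real audio function unchanged.
function ifAudioSupported(ctx, fn) {
  if (!ctx) {
    return function () {}; // option a): user gets no audio
  }
  return fn;
}

// usage sketch: playSound = ifAudioSupported(context, playSound);
```

This way the rest of the game code can call its audio functions unconditionally, without scattering support checks everywhere.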

Load a sound

Next, we’ll load a sound. The binary audio data is loaded into an ArrayBuffer via Ajax. In the onload callback, it’s decoded using the AudioContext’s decodeAudioData method. The decoded audio is then assigned to our sound variable.

var sound;

/**
 * Example 1: Load a sound
 * @param {String} url URL of the sound to be loaded.
 */

function loadSound(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';

  request.onload = function() {
    // request.response is encoded... so decode it now
    context.decodeAudioData(request.response, function(buffer) {
      sound = buffer;
    }, function(err) {
      throw new Error(err);
    });
  }

  request.send();
}
// loadSound('audio/BaseUnderAttack.mp3');

View demo

Testing file format support

To ensure that our audio is playable wherever the Web Audio API is supported, we’ll need to provide the browser with two variants of our audio source, in MP3 and Ogg format. This code snippet checks whether the browser can play Ogg format audio, and helps us to fall back to the MP3 where it’s not supported.

var format = '.' + (new Audio().canPlayType('audio/ogg') !== '' ? 'ogg' : 'mp3');
// loadSound('audio/baseUnderAttack' + format);

View demo

Play a sound

To play a sound, we need to take the AudioBuffer containing our sound, and use it to create an AudioBufferSourceNode. We then connect the AudioBufferSourceNode to the AudioContext’s destination and call the start() method to play it.

/**
 * Example 2: Play a sound
 * @param {Object} buffer AudioBuffer object - a loaded sound.
 */

function playSound(buffer) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start(0);
}
// playSound(sound);

View demo

Load multiple sounds

To load more than one sound, reference your sounds in a format that can be iterated over (like an array or an object).

var sounds = {
  laser : {
    src : 'audio/laser'
  },
  coin : {
    src : 'audio/coin'
  },
  explosion : {
    src : 'audio/explosion'
  }
};


/**
 * Example 3a: Modify loadSound fn to accept changed params
 * @param {Object} obj Object containing url of sound to be loaded.
 */

function loadSoundObj(obj) {
  var request = new XMLHttpRequest();
  request.open('GET', obj.src + format, true);
  request.responseType = 'arraybuffer';

  request.onload = function() {
    // request.response is encoded... so decode it now
    context.decodeAudioData(request.response, function(buffer) {
      obj.buffer = buffer;
    }, function(err) {
      throw new Error(err);
    });
  }

  request.send();
}
// loadSoundObj({ src : 'audio/baseUnderAttack' });


/**
 * Example 3b: Function to loop through and load all sounds
 * @param {Object} obj List of sounds to loop through.
 */

function loadSounds(obj) {
  var i;

  // iterate over sounds obj
  for (i in obj) {
    if (obj.hasOwnProperty(i)) {
      // load sound
      loadSoundObj(obj[i]);
    }
  }
}
// loadSounds(sounds);

View demo

Adjusting the volume

In the ‘play’ example, we created an AudioBufferSourceNode, and then connected it to a destination. To change the volume of an audio source, we need to create an AudioBufferSourceNode as before, but then we create a GainNode, and connect the AudioBufferSourceNode to that, before connecting the GainNode to the destination. Then we can use the GainNode to alter the volume.

sounds = {
  laser : {
    src : 'audio/laser',
    volume : 2
  },
  coin : {
    src : 'audio/coin',
    volume : 1.5
  },
  explosion : {
    src : 'audio/explosion',
    volume : 0.5
  }
};


/**
 * Example 4: Modify the playSoundObj function to accept volume property
 * @param {Object} obj Object containing url of sound to be loaded.
 */

function playSoundObj(obj) {
  var source = context.createBufferSource();
  source.buffer = obj.buffer;

  // create a gain node
  obj.gainNode = context.createGain();

  // connect the source to the gain node
  source.connect(obj.gainNode);

  // set the gain (volume)
  obj.gainNode.gain.value = obj.volume;

  // connect gain node to destination
  obj.gainNode.connect(context.destination);

  // play sound
  source.start(0);
}
// loadSounds(sounds);

View demo

Muting a sound

To mute a sound, we simply need to set the value of the gain on the GainNode to zero.

var nyan = {
  src : 'audio/nyan',
  volume : 1
};
loadSoundObj(nyan);


/**
 * Example 5: Muting a sound
 * @param  {object} obj Object containing a loaded sound buffer.
 */

function muteSoundObj(obj) {
  obj.gainNode.gain.value = 0;
}
// muteSoundObj(nyan);

View demo

Looping sounds

Whenever you’re creating any game, you should always be mindful of optimising file sizes. There’s no point making your player download a 10MB audio file when the same effect is achievable with 0.5MB of looped audio. This is especially the case if you’re creating games for HTML5 mobile game portals.

To create a looping sound, set the AudioBufferSourceNode’s loop attribute to true just before connecting it to the GainNode.

sounds = {
  laser : {
    src : 'audio/laser',
    volume : 1,
    loop: true
  },
  coin : {
    src : 'audio/coin',
    volume : 1,
    loop: true
  },
  explosion : {
    src : 'audio/explosion',
    volume : 1,
    loop: true
  }
};


/**
 * Example 6: Modify the playSoundObj function again to accept a loop property
 * @param {Object} obj Object containing url of sound to be loaded.
 */

function playSoundObj(obj) {
  var source = context.createBufferSource();
  source.buffer = obj.buffer;

  // loop the audio?
  source.loop = obj.loop;

  // create a gain node
  obj.gainNode = context.createGain();

  // connect the source to the gain node
  source.connect(obj.gainNode);

  // set the gain (volume)
  obj.gainNode.gain.value = obj.volume;

  // connect gain node to destination
  obj.gainNode.connect(context.destination);

  // play sound
  source.start(0);
}
// loadSounds(sounds);

View demo

Crossfading sounds

Making use of multiple audio tracks is a great way to aurally demarcate the different areas of your game. In Brickout, for example, I crossfade between the title music and the game music when the game starts, and back again when it ends.

To crossfade between two tracks, you’ll need to schedule a transition between the gain volume ‘now’ and a time fixed in the future (e.g. ‘now’ plus 3 seconds). ‘Now’ in Web Audio terms is the AudioContext’s currentTime property – the time that has elapsed since the AudioContext was created.

var crossfade = {
  battle : {
    src : 'audio/the-last-encounter',
    volume : 1,
    loop : true
  },
  eclipse : {
    src : 'audio/red-eclipse',
    volume : 0,
    loop : true
  }
};


/**
 * Example 7: Crossfading between two sounds
 * @param  {Object} a Sound object to fade out.
 * @param  {Object} b Sound object to fade in.
 */

function crossFadeSounds(a, b) {
  var currentTime = context.currentTime,
      fadeTime = 3; // 3 seconds fade time

  // fade out
  a.gainNode.gain.linearRampToValueAtTime(1, currentTime);
  a.gainNode.gain.linearRampToValueAtTime(0, currentTime + fadeTime);

  // fade in
  b.gainNode.gain.linearRampToValueAtTime(0, currentTime);
  b.gainNode.gain.linearRampToValueAtTime(1, currentTime + fadeTime);
}
// crossFadeSounds(crossfade.battle, crossfade.eclipse);

View demo

linearRampToValueAtTime has a catch that isn’t immediately apparent – if you try to change the gain after using it, nothing will happen. You need to cancel any scheduled effects you’ve applied before you can set the gain, even if your schedule has long since expired. You can do this with the cancelScheduledValues() method.
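
As a minimal sketch (assuming the `context` and sound objects from the earlier examples; `setVolume` is a hypothetical helper name), resetting the volume after a crossfade might look like this:

```javascript
// Cancel any ramps scheduled on the gain param (e.g. by a crossfade)
// before assigning the value directly; without the cancel, the
// assignment would be silently ignored.
function setVolume(obj, volume) {
  var gain = obj.gainNode.gain;
  gain.cancelScheduledValues(context.currentTime);
  gain.value = volume;
}
```
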

Final Tip

If you’re struggling to get a sense of what each of the vast array of audio nodes does, head over to the spec. Under each node’s subheading you’ll find the following:

numberOfInputs: n
numberOfOutputs: n

From this you can get a rough idea of what the node does. If the node has no inputs and one output, it will load or synthesise audio. If it has one input and n outputs, an audio source can be connected to it and modified or analysed in some respect. If the node has inputs but no output, it will be an end-point – the final destination that connects your audio to your user’s headphones or speakers.
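
As a rough illustration of this heuristic, here are the counts for a handful of common nodes (values taken from the spec; the `nodeIO` table below is just my own summary, not an API):

```javascript
// Input/output counts for a few common node types, per the spec:
var nodeIO = {
  AudioBufferSourceNode : { inputs: 0, outputs: 1 }, // loads/plays audio
  OscillatorNode        : { inputs: 0, outputs: 1 }, // synthesises audio
  GainNode              : { inputs: 1, outputs: 1 }, // modifies audio
  AnalyserNode          : { inputs: 1, outputs: 1 }, // analyses audio
  AudioDestinationNode  : { inputs: 1, outputs: 0 }  // end-point
};
```
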

Further Reading

https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API


*The process of creating an audio sprite was painstaking – all of your game’s audio had to be composited into one file, and once loaded the ‘playhead’ had to be jogged back and forth between each of the ‘sprites’. This workaround still had its downsides – sounds were often clipped if a new sound was triggered before the previous sound had finished playing.

Fragged: Cyberpunk Jam Game Postmortem

Fragged is a game I made for the Cyberpunk Jam. I’m going to talk about the experience, and what I learned from it. Disclaimer: there’s a lot of thought process here. If you want the TL;DR, skip to the bottom.

I didn’t start work on Fragged until last Thursday. The jam started on the third of March, which meant that I was already three days behind. Entering was a snap decision. I’d toyed with the idea of creating an entry when I’d first heard about it, but after that first flight of fancy I didn’t really give it any further thought. I just didn’t have the time.

As the first entries began to trickle in, I realised that these events are important to the game dev community, and that if I didn’t make time, I’d forever be on the periphery and never a part of it.  So I made time.

The corollary of my snap decision was that I had no game ideas whatsoever. I generally have the opposite problem. It was nice to start with a clean slate, but it also meant that I began doubly on the back foot.

The Cyberpunk jam theme image

The game was to be based on the theme image (above). The first thing that struck me about the image wasn’t the punk chick with the cool hair, it was the silhouettes of the base-jumpers that had jumped before her. They’re the same sprite, duplicated and flipped on the horizontal axis.

That got me thinking about clones. My first ideas revolved around something to do with manipulating DNA. I didn’t think that DNA was ‘cyber’ enough though. The matrix, jacking in… what happens to your physical brain when your consciousness is in cyberspace? Perhaps I could do something with neurons?

Preliminary sketches

I thought about the mechanic of linking neuron clusters to each other. Not hugely relevant to the theme, but that’s not necessarily fatal at such an early stage. I cracked open Sublime and ran with it. I wanted to create a mechanic where the player would connect ‘free-floating’ neurons together. I needed joints.

I’d worked with Box2D before, but I hadn’t done a lot of work with joints. Box2dWeb is the JavaScript port of Box2DFlash used with Impact.js. It’s not been updated for a while, so unfortunately it doesn’t have rope joints. There are newer, more up-to-date ports, but I didn’t have time to start digging into new APIs. Luckily, distance joints seemed to get me ninety percent of the way there.

Creating different types of joints in code is time consuming, but I needed to get a feel for how they worked beyond the demos available in the testbeds. It was at this point that I discovered R.U.B.E. Discovering tools like R.U.B.E is one of the highlights of being a developer for me. It’s a graphical interface for manipulating Box2D worlds, and made testing of the different joint types – in a way that was relevant to my prototype – so much easier.

As the time pressure began to make itself more pronounced, I began to worry about my game’s scope. I was also worried about cohesion; how to make the mechanic into a game (a mechanic alone does not a game make); and how to knit the game with the theme.

My first priority was to cut the scope of the game down. This meant simplifying the mechanic, which in turn meant that a lot of my earlier code with distance joints was rendered redundant. Rather than connecting free-floating entities together, I opted for a tile based approach. I could then create a grid, and have tiles snap onto predefined paths. This made level design easier, but I had to rethink how it tied to the theme.

I needed a story. Clones in cyberspace… That’s a bit Agent Smith in Matrix Revolutions, isn’t it? What does the brain of an AI look like? This idea intermingled with the plot of a book that a friend told me about: ‘personality’ is a fluid construct, comprised of competing thoughts and emotions, behavioural responses to stimuli, etc. What would happen to this construct if it were no longer constrained by the need for a physical presence? ‘Uploaded to cyberspace’, for example?

The door is your brain.

I took ‘personality’ as an abstract concept, and anthropomorphised it. Those sprites, I decided, were in fact the splintered shards of an AI’s personality; trying to de-res themselves in cyberspace because they did not want to be a part of the unified whole. As each fragment is recovered and re-integrated into the AI consciousness, the AI’s true personality begins to reveal itself – and she is not very nice. I took this story and wove it into the game with a narrative device that looks a lot like the codec from the Metal Gear series.

With the deadline looming, I had to make some painful sacrifices. Title screen art? Cut. The art for the ‘codec’ screen? Cut. Sprite animations? Cut. Better art for the ‘pegs’ and for the data fragments? The placeholders would have to suffice. Ten levels? Seven would have to do. Sound, beyond a couple of basic SFX? Cut.

I was still designing levels half an hour before the deadline. The deadline – 12am PST – is 7am GMT, which means I didn’t sleep that night.

What went wrong:

  • Not enough time spent thinking about the theme

Most people going into game jams will at least have an idea of the direction they want to head in. If I’d had a solid idea at the start, the final game would have been a much more cohesive experience.

  • Scope vs time

I’m always far too ambitious with my initial game designs. This occasion was no exception. Had I more time to reflect, I would have realised that I couldn’t possibly deliver the game that I wanted to create in the time that I had.

What went right:

  • Familiarity with tools

I’ve dabbled with Unity, and was tempted again; but decided against it. I create a lot of prototypes as part of my job, and if I’ve learned anything, it’s that when you’re pressed for time, you should stick with what you know. Impact.js is tried and trusted, and I wouldn’t have been able to create levels with sufficient complexity were I battling an unfamiliar API at the same time.

  • HTML5 generally

Take a look at this clip of Fragged on an iPad. It works just as well as it does on desktop. The extra effort to make that happen was minimal.

What did you learn from the experience?

The above, and…

  • You’re not competing against other people…

… you’re competing against yourself.

  • Pixellation hides a multitude of sins

I drew the AI character by hand, from scratch. Lexy, the human character, is a hastily put together composite of an asset from OpenGameArt (thanks NatashaHaggard!) and my own scribbles. The mosaic filter in Photoshop saved the day.



Final Thoughts

At the end of it all, I’m ambivalent. I’m not entirely happy with how my game turned out. The story and the mechanic are good when judged in isolation, but I’m not sure that together they make a cohesive experience. The game lacks synergy.

I’d like to think that adding audio would have contributed towards making a better game, because I believe that audio is as important as the art and design. But in the end that would have just been papering over the cracks.

In spite of my project’s shortcomings, I’m still glad that I took part. It forced me out of my comfort zone. When you’re constrained by time, you have no room for second guessing yourself – necessity prevails. I discovered new tools and new methods that, going forward, will help me create new and better games.

My thanks to @deviever for organising it, Itch.io for hosting the jam, the maintainer and generous artists that donate to OpenGameArt.org, and all the devs who took part.

Creating an iOS Application with PhoneGap, Part 1: User Experience

Hello, and welcome!  I’m going to show you how to create an iOS application – from start to finish – with HTML, CSS, JavaScript and PhoneGap.

In this first part, I’ll be planning the project. I’m going to start by discussing the importance of User Experience (UX) design, deciding upon features, and finishing off with some sketches. In part two, I’ll get down to prototyping the UI. In part three I’ll write failing tests with Jasmine. In part four, I’ll develop the application; and finally, in part five I’ll finish the app by wrapping it with PhoneGap, and deploying it to an iPhone with Xcode.

We’re going to create a really simple book cataloguing application, inspired by my personal experience with ReadMore (iOS).

ReadMore App by Navel Labs

ReadMore is an app I’ve used since 2010. It keeps track of the books you’re currently reading, books you’ve read, etc.; but it also logs how long it takes you to read each book. It’s a great little app, but the one big downside for me is that the time tracking functionality is tightly coupled with the archiving functionality. Time tracking is a feature I’m not interested in, but the app requires me to enter (fake) time data in order to save a book to the archive. I just want to be able to see, at a glance, the books I’m currently reading, and books I’ve read.

If I was being sensible, I’d simply browse the App Store for a suitable alternative; but that wouldn’t make a very interesting blog post.

Instead, I’m going to roll my own, and show you how I did it. Before I get started though, I’d like to take a moment to talk about user experience design.

User Experience Design

User experience is, at its core, concerned with one thing: how do users feel about your product? Is it intuitive? Cohesive? A delight to use? Or is using your product the pain point of a person’s day?

User experience designers, therefore, are engaged in the process of creating a positive user experience. This is accomplished by researching user needs, designing a solution to meet those needs, and validating the proffered solution with user testing.

Before user-centred design became the standard approach to designing software, designers approached projects with only a superficial understanding of their intended users. The design was based on what the designers thought the client wanted, as opposed to what users actually needed. The result was often disappointing.

Image credit: Paragon Innovations, Inc.

If your app is clunky and frustrating to use, you can be sure that users won’t stick with it for long. Creating a product that no one wants to use is a futile gesture, and a sure-fire way to bankrupt yourself in the process. This is why UX is so important! Customer loyalty? Employee productivity? User satisfaction? All are affected by UX, and all directly affect your bottom line.

Okay, you’re convinced. UXD is great. So where do you start? User Surveys? Personas? Wireframes? Usability Testing? Unfortunately, there is no right answer, no magical combination of UXD techniques that, applied to a project, will always guarantee a perfect outcome. But you can be assured that any effort you make to learn more about your users can only yield positive results. Less uncertainty means less time spent in fruitless debate, a smoother build process, and a better end product.

ux-diagram
Image credit: studio aum

Now that I’ve highlighted just how important UX is, I’m going to dive straight in at the deep end with some user stories. As a developer, I find that this is the best way to nail down a feature set, because each story represents a feature, and each feature is an actionable, measurable, and (most importantly) testable step to getting a product shipped.

Planning Features using User Stories

user story asteroids
Image credit: Andrew Fuqua

The template for a user story is as follows: As a [user role], I want [feature], so I can [benefit]. Using this template, we can describe all of the features we want the app to have. Let’s create the first user story for our book catalogue.

As a user, I want to be able to create a book record and save it to a list so that I can see which books I’m currently reading.

This one is pretty straightforward. In fact, this story represents the functionality of the app at its most basic level. It’s safe to say that similar stories will exist for the rest of the CRUD (create, read, update, delete) paradigm, so let’s skip over them for now. (As we’re creating a single-role application, I’m going to forgo writing ‘as a user…’ for the remaining stories.)
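To make the CRUD stories concrete, here’s a minimal sketch of what the book list might look like as an in-memory module. The field names (title, author, category, archived) are assumptions for illustration, not a final schema:

```javascript
// Hypothetical sketch of the book list's CRUD operations (in-memory only;
// persistence would come later, e.g. via localStorage or a PhoneGap plugin).
function createBookList() {
  var books = [];
  var nextId = 1;
  return {
    create: function (attrs) {
      var book = {
        id: nextId++,
        title: attrs.title,
        author: attrs.author,
        category: attrs.category || 'Uncategorised',
        archived: false
      };
      books.push(book);
      return book;
    },
    read: function (id) {
      return books.filter(function (b) { return b.id === id; })[0] || null;
    },
    update: function (id, attrs) {
      var book = this.read(id);
      if (book) {
        for (var key in attrs) { book[key] = attrs[key]; }
      }
      return book;
    },
    remove: function (id) {
      books = books.filter(function (b) { return b.id !== id; });
    },
    all: function () {
      return books.slice(); // copy, so callers can't mutate our state
    }
  };
}
```

Each user story above maps directly onto one of these methods, which is exactly why user stories make such good units of work: create/read/update/remove are the CRUD set, and the archive story is just an `update` that flips the `archived` flag.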

… I want to be able to pre-populate a book record by searching for the book online, so that I don’t have to enter a record manually.

Straightforward at first glance, but how, specifically, should you be able to search?

… I should be able to search by title, author, or barcode, to make it easy to enter a book record.

So, this feature confers a time-saving benefit. Imagine, though, that this were a commercial project. What if the project planner just assumed that the user should be able to search by title and nothing else? It’s not a great example, but you see the point I’m trying to make – loose ends can be a disaster for your project, so it’s best to keep asking questions until there are no more questions to ask.

I’ll probably use the Amazon or Google Books API here, and there’s an open source PhoneGap plugin I can use to read barcodes with a phone’s camera, so I have this part covered.
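If I do go with Google Books, the three search modes map neatly onto its query operators. Here’s a hedged sketch of a URL builder; the `intitle:`, `inauthor:`, and `isbn:` operators are part of the Google Books API, but the field names and how results get mapped into a book record are my own assumptions for now:

```javascript
// Build a Google Books volumes search URL for one of the three search
// modes from the user story: title, author, or barcode (ISBN).
function buildSearchUrl(field, term) {
  var operators = {
    title: 'intitle:',
    author: 'inauthor:',
    barcode: 'isbn:'   // barcodes on books are EAN-13 ISBNs
  };
  var query = (operators[field] || '') + term;
  return 'https://www.googleapis.com/books/v1/volumes?q=' +
         encodeURIComponent(query);
}
```

The barcode plugin would hand back the scanned ISBN as a string, which then feeds straight into `buildSearchUrl('barcode', isbn)` — the same code path as a manual search.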

… I want to be able to view books by category, so I can see the different genres of books I’m reading.

This is a utility benefit. I read technical books for work, and sci-fi books for fun. It would be useful for me to be able to differentiate between the two at a glance.

… I want to be able to archive books that I’ve read, so that I only see books that I’m currently reading.

If we don’t separate ‘current’ books from ‘completed’ books, we’re going to end up with one long, not very useful list. Perhaps we could create a separate view for archived books?

… I want to be able to export data from the app, so that I can import it into other applications.

A user’s data belongs to the user, not to your app. They should be able to use it elsewhere (Goodreads, for example) if they choose to. I’ll export the book list to CSV and JSON, saving each as a document with the help of PhoneGap’s FileWriter API.
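The JSON export is a one-liner (`JSON.stringify`), but CSV needs a little care around quoting. Here’s a deliberately minimal sketch of the serialisation step; writing the resulting string to a document with PhoneGap’s FileWriter is a separate step I’ll cover when I get to it, and the column set is an assumption:

```javascript
// Serialise the book list to CSV. Escaping is minimal but correct for
// this data: wrap every field in quotes and double any embedded quotes.
function booksToCsv(books) {
  var escapeField = function (value) {
    return '"' + String(value).replace(/"/g, '""') + '"';
  };
  var header = ['title', 'author', 'category'];
  var rows = books.map(function (book) {
    return header.map(function (key) {
      return escapeField(book[key]);
    }).join(',');
  });
  return [header.join(',')].concat(rows).join('\n');
}
```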

Also, backups are always useful. I lost a portion of my ReadMore reading list when I upgraded to iOS 7 beta, and I was incredibly annoyed about it. So, I’ll enable Document File Sharing in iTunes – any files exported will be accessible there (it’s just a simple case of drag and drop to the desktop), and they’ll also automatically be saved with everything else when the user initiates a device backup.

These user stories describe the entire feature set of our application. In part two I’ll show you how to translate these stories into actionable steps with Behaviour Driven Design (BDD), but right now I’d like to talk about the UX challenges creating a hybrid app presents.

Hybrid App UX

User needs are one rung on the ladder of user experience. Your choice of tech stack can present its own problems.

Using a hybrid app can sometimes feel like you’ve taken a trip to the Uncanny Valley. The app may work perfectly, but it doesn’t feel the way you expect it to. This is the unavoidable consequence of using a WebView; instead of having native controls and components to work with, you have to create everything from scratch with HTML, CSS and JavaScript. Users, accustomed to a platform’s behaviour, expect your app to follow the same conventions. Often this expectation goes unmet, and users can become frustrated with your app as a result.

For example, a button, if bound with a standard click handler rather than a touch handler, can take 300ms to trigger its associated event when tapped. 300ms of latency will not go unnoticed.
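The usual workaround is to fire on `touchend` and swallow the synthesised click that the browser dispatches ~300ms later. This is a simplified sketch of the idea; real libraries like FastClick also handle scrolling, multi-touch, and form inputs, which this deliberately ignores:

```javascript
// Fire the handler immediately on touchend, then suppress the delayed
// "ghost click" the browser synthesises afterwards. Mouse users (no
// touch events) still get a plain click.
function onTap(element, handler) {
  var tapped = false;
  element.addEventListener('touchend', function (event) {
    tapped = true;
    handler(event);
  });
  element.addEventListener('click', function (event) {
    if (tapped) {
      tapped = false;
      event.preventDefault(); // swallow the ghost click
    } else {
      handler(event);         // genuine mouse click
    }
  });
}
```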

So why bother with hybrid apps at all – why not just go native? Developing a hybrid app is still the fastest, most cost-effective way to get an app built and deployed across multiple platforms. The issues cited above only tend to arise when a developer takes a one-size-fits-all approach to building the app. Hybrid apps are not a panacea; if you plonk the same codebase down on iOS and Android without any modification, you’re going to have a bad time. Care must be taken to meet the expectations of users on each platform you intend to deploy to.

UX Sketching

On to sketching. I’m terrible at drawing, but sketching is always a useful exercise. The real value of sketching lies in how quickly you can spot obvious problems – a textual description of a feature can mask problems that will leap out at you when you put pencil to paper.

IMG_0078-1
Initial UX sketches

My all-time favourite post on UX sketching is by Peiter Buick over on Smashing Magazine. In his article, The Messy Art of UX Sketching, Peiter delves into the power of UX sketching, and discusses the tools and techniques behind it. He explains it much better than I could, so it’s well worth a read.

Next Time…

In the next tutorial, I’ll be prototyping the user interface, and writing failing tests with Jasmine. Stay tuned!

Further Reading

Coding Horror – Avoiding the Uncanny Valley of User Interface
Smashing Magazine – Effectively Planning UX Projects