In my last post, at the end, I mentioned promises.  Promises are one of three key ways (in JavaScript) of handling asynchronous operations.  Generally speaking, any operation which performs IO, or waits for user interaction, should be considered asynchronous.  I’m going to briefly cover the other two methods of doing asynchronous operations and point out their problems, then I’ll talk about promises and the Q implementation.


Events are used in many languages to handle user input, amongst other things.  If we want to wait for user input, we don’t want to lock up the whole application by spinning round in a tight loop saying “has the user clicked anything” again and again until it’s true, so instead we say “tell me when the user clicks something”.  The following example attaches an event handler to the click event of a button, using jQuery.

[gist file=events.js]

I don’t want to complain too much about events.  For user input they’re perfect, and for real-time information being pushed from a server, they’re great.  Where they aren’t great is when you have a single request to fetch a page from a server, or a record from a database.  A better alternative might be callbacks.


There are a number of reasons why events aren’t well suited to handling one-off asynchronous operations.  One is that they make no guarantee that a handler will be called only once; another is that a handler might never be called at all, if the event fires before we start listening for it.

For web requests or database queries, where we almost always want to make use of the result once and only once, it makes sense to use callbacks (at least it seems to until you see the problems).  Here is an example of where callbacks work really well (reading one record from a database).

[gist file=callbacksGood.js]
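The same shape can be sketched with a hypothetical in-memory “database” standing in for the real one (all names here are mine, not a real driver’s):

```javascript
// Hypothetical stand-in for a database lookup.
const users = { 1: { id: 1, name: "Ada" } };

function findUserById(id, callback) {
  // A real driver would do IO here; we call back directly for illustration.
  const user = users[id];
  if (!user) return callback(new Error("not found"));
  callback(null, user); // node convention: error first, result second
}

let result;
findUserById(1, function (err, user) {
  if (err) throw err; // or retry
  result = user.name;
});
console.log(result); // "Ada"
```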

So, what’s good about this?  Well, for starters, it’s a lot shorter than it would be if we had to separately attach an event handler and then query for the id.  It’s clean and easy to read: we request the user, then we get called back with the user.  I’ve used the node.js convention of making the first argument an error object.  If it’s not null, we can either retry or just throw the error.

So, what’s bad about this?  Well, nothing really, providing you stick to such simple examples.  Now let’s imagine that the user object has two asynchronous methods, getRoles and getFriends, which both take a callback function and return the user’s roles and friends respectively.  Now let’s see how it looks when we use them (warning, this won’t be pretty):

[gist file=callbacksBad.js]
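To make the pain concrete, here’s roughly the shape such code has to take if we want both calls to run in parallel (a sketch with hypothetical stubs; the gist may differ in detail):

```javascript
// Hypothetical user object; the stubs call back immediately here.
function makeUser(id) {
  return {
    id,
    getRoles(cb) { cb(null, ["admin"]); },
    getFriends(cb) { cb(null, ["bob"]); },
  };
}
function findUserById(id, cb) { cb(null, makeUser(id)); }

function getAllUserDetails(id, callback) {
  findUserById(id, function (err, user) {
    if (err) return callback(err);
    let roles, friends, remaining = 2;
    function next(err) {
      if (err) return callback(err); // bug: two errors => callback called twice
      if (--remaining === 0) {
        callback(null, { user: user, roles: roles, friends: friends });
      }
    }
    user.getRoles(function (err, r) { roles = r; next(err); });
    user.getFriends(function (err, f) { friends = f; next(err); });
  });
}

getAllUserDetails(1, function (err, details) {
  console.log(details.roles, details.friends);
});
```

Every new field means another variable, another assignment, and another branch in the bookkeeping; and as the comment notes, the error path is already subtly wrong.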

The main takeaway from that example is that it’s going to get really nasty really quickly.  Imagine I now need to get a list of the user’s enemies as well.  I have to add a variable for the result, add to the check in next, and add to the object returned; in short, it’s a nightmare.  There are also all sorts of other things that could go wrong:

  • What if both calls return an error?  Suddenly we’ve called the callback twice!
  • What if we throw an error inside one of the callbacks?  How could we handle that when calling getAllUserDetails?
  • What if we wanted to provide a falsy value for roles or friends?  We’d never call our callback!

That’s a lot of bad, and don’t even get me started on what the stack traces look like (they’re worse than useless).  This problem grows exponentially as we compose more asynchronous operations.

Just to be clear, doing getFriends inside the getRoles callback isn’t a solution for two reasons:

  1. It increases the depth of the callbacks, which makes code harder to understand.
  2. It means that the two operations won’t execute in parallel, which is a vast performance hit (it probably adds 50% to the time it takes to run getAllUserDetails).

For anyone who’s reading this and thinking about libraries like step and seq: I am aware of them, and they do help, but they have a steep learning curve, are very difficult to read when you’re not used to the specific implementation, and they won’t be able to make use of advanced features like yield (when it eventually becomes supported by browsers and node).  They’re also no good for trying to support inside a templating library.


Hopefully you’re still with me, and aren’t too exhausted by that introduction, because it’s about to get great.

Imagine if we could re-write our getAllUserDetails function as:

[gist file=yield.js]

Well, with Firefox set to super-futuristic mode, you can (providing your database returns promises, rather than taking callbacks).

It looks like synchronous code, which should make it fairly easy to understand, but the Q.async and the yields mean that it’s not synchronous.  First we call findUserByID, which returns a promise.  We immediately “yield” on this promise (it’s just like await, if you’ve used the new versions of C#).  This effectively pauses our function, and resumes it once it has a value for user.

To ensure that we still execute getRoles and getFriends concurrently, we run both, and store the promises returned.  At that point, both operations are running.  We then yield on our promises one after the other, to get the results.  We can then simply return our results almost exactly like we would in a synchronous function.
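For readers on a modern runtime: async/await, which grew out of exactly this yield technique, lets you write the same flow today without Q.async.  This is a sketch with hypothetical stubs, not the post’s actual database API:

```javascript
// Hypothetical promise-returning database and methods (names are made up).
function findUserById(id) {
  return Promise.resolve({
    id,
    getRoles: () => Promise.resolve(["admin"]),
    getFriends: () => Promise.resolve(["bob"]),
  });
}

async function getAllUserDetails(id) {
  const user = await findUserById(id); // pause until the promise settles

  // Start both operations before awaiting either, so they run concurrently.
  const rolesPromise = user.getRoles();
  const friendsPromise = user.getFriends();

  const roles = await rolesPromise;
  const friends = await friendsPromise;

  return { user, roles, friends }; // just like a synchronous return
}

getAllUserDetails(1).then((details) => console.log(details.roles)); // ["admin"]
```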

Note that there’s no error handling code here.  That’s because errors (complete with advanced stack traces to make debugging easier) are automatically propagated up the call chain with promises.  That means that the calling function can effortlessly handle all errors produced by this function when it asks for the result.

There’s not actually any magic going on here, so we could write something using the promises of today that is semantically equivalent, but not quite as pretty:

[gist file=noyield.js]

Now, I accept that it’s not the easiest thing to read, but it’s in the same order as our synchronous-looking example using yield.  It also works exactly the same, with the same propagation of errors and stack traces.
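For reference, a then-chain of that shape looks roughly like this with today’s built-in promises (hypothetical stubs again; Q’s then behaves the same way here):

```javascript
// Hypothetical promise-returning database (names are made up).
function findUserById(id) {
  return Promise.resolve({
    id,
    getRoles: () => Promise.resolve(["admin"]),
    getFriends: () => Promise.resolve(["bob"]),
  });
}

// The same control flow written with plain .then chaining:
function getAllUserDetails(id) {
  return findUserById(id).then(function (user) {
    // Start both before waiting on either, so they still run concurrently.
    const rolesPromise = user.getRoles();
    const friendsPromise = user.getFriends();
    return rolesPromise.then(function (roles) {
      return friendsPromise.then(function (friends) {
        return { user: user, roles: roles, friends: friends };
      });
    });
  });
}
```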

The Q promise library actually provides us with some additional helpers we can use to make this a little easier.

[gist file=promised.js]

All we’re doing there is combining both promises into one promise for an array of values.  We then spread the array over our function.  This is shorter, probably easier to read, and can be used with an arbitrarily long array.  It’s not even essential to know whether items in the array are promises or not.
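With built-in promises, Promise.all plays the role of Q.all, and array destructuring stands in for Q’s spread (again a sketch with made-up stubs):

```javascript
// Hypothetical promise-returning database (names are made up).
function findUserById(id) {
  return Promise.resolve({
    id,
    getRoles: () => Promise.resolve(["admin"]),
    getFriends: () => Promise.resolve(["bob"]),
  });
}

function getAllUserDetails(id) {
  return findUserById(id).then((user) =>
    // Combine both promises into one promise for an array of values,
    // then "spread" that array by destructuring it.
    Promise.all([user.getRoles(), user.getFriends()]).then(
      ([roles, friends]) => ({ user, roles, friends })
    )
  );
}
```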

To sum up, I just want to show you how you’d go about calling the getAllUserDetails function.  All three of the promised versions are called in exactly the same way and should have identical behaviour.

[gist file=callpromised.js]

The point about calling end in there is really important, so I’ll repeat it and add some detail.  The biggest problem with promises, at the moment, is that they completely swallow all unhandled errors.  To avoid this, whenever you have a promise and you aren’t going to return it to your caller, you have to call end.  That ends the chain and effectively says “I’m done with this”.  If there are still errors at the point you say you’re done with it, Q knows you aren’t going to handle them, so it throws them, along with a stack trace, so you can handle them however you want.
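Native promises never grew an end; the closest modern habit is to finish any chain you don’t return with a catch, so rejections can’t be silently swallowed.  A sketch, with a stub standing in for getAllUserDetails:

```javascript
// Stub standing in for the real function, so the call site below runs.
const getAllUserDetails = (id) =>
  id > 0
    ? Promise.resolve({ roles: ["admin"] })
    : Promise.reject(new Error("bad id"));

getAllUserDetails(1)
  .then((details) => {
    console.log(details.roles); // use the result
  })
  .catch((err) => {
    // The end of the chain: anything thrown anywhere above lands here,
    // instead of being silently swallowed.
    console.error(err.message);
  });
```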

Finally, if you wanted to convert a promised function to a callback function:

[gist file=promisedtocb.js]

Fortunately, that functionality can be easily turned into a re-usable function (because JavaScript is a beautiful functional language).

[gist file=promisedtocbgeneral.js]
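The reusable adapter is only a few lines; this sketch is my own take on it, not necessarily what the gist contains (the names are mine):

```javascript
// "Promised function" -> "callback function" adapter.
function callbackify(promiseFn) {
  return function (...args) {
    const callback = args.pop(); // node style: the callback comes last
    promiseFn(...args).then(
      (value) => callback(null, value),
      (error) => callback(error)
    );
  };
}

// Usage with a hypothetical promise-returning function:
const getNamePromised = (id) => Promise.resolve("user" + id);
const getName = callbackify(getNamePromised);
getName(7, (err, name) => console.log(err, name)); // null "user7"
```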

And to go the other way, you can use `Q.nbind(fn)`, provided that the callback is a node-style callback and takes err as the first argument.
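The idea behind Q.nbind can be sketched by hand in a few lines (Node’s util.promisify does the same job today):

```javascript
// "Callback function" -> "promised function" adapter, assuming a
// node-style callback (err first).
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (err, value) => (err ? reject(err) : resolve(value)));
    });
}

// Usage with a hypothetical node-style function:
const getUser = (id, cb) => cb(null, { id, name: "Ada" });
const getUserPromised = promisify(getUser);
getUserPromised(1).then((user) => console.log(user.name)); // "Ada"
```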

Callbacks are still great 🙂

It’s time to admit it, if you haven’t realised by now: I have a bit of an agenda.  We should all use promises more and callbacks less.  However, some people prefer callbacks, and there are plenty of times when I’m happy to forgo heavyweight promises in favour of super-lightweight callbacks.  What I propose, then, is that wherever possible when writing libraries, we should give people a choice.  If you are writing a library that returns a result asynchronously, wrap everything you can in the following function:

[gist file=maybepromised.js]

Simply pass it your base function, and true if your base function returns a promise or false if your base function expects a callback.  Then this function will make sure that you get what you want and whoever calls your function gets what they want.  Let’s build the future 🙂
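Here’s my own sketch of what such a wrapper could look like: if the caller passes a callback as the last argument they get node-style callbacks, otherwise they get a promise (all names are hypothetical):

```javascript
// base: your base function.  basePromises: true if base returns a promise,
// false if it expects a node-style callback.
function maybePromised(base, basePromises) {
  // Normalise base so it always returns a promise internally.
  const promised = basePromises
    ? base
    : (...args) =>
        new Promise((resolve, reject) => {
          base(...args, (err, value) => (err ? reject(err) : resolve(value)));
        });

  return function (...args) {
    if (typeof args[args.length - 1] === "function") {
      const callback = args.pop(); // caller wants a callback
      promised(...args).then((v) => callback(null, v), (e) => callback(e));
      return undefined;
    }
    return promised(...args); // caller wants a promise
  };
}

// A hypothetical base function in callback style, wrapped once:
const double = maybePromised((n, cb) => cb(null, n * 2), false);

double(2).then((v) => console.log(v)); // promise style: 4
double(2, (err, v) => console.log(err, v)); // callback style: null 4
```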