Unit Testing JavaScript

by Jon Bretman on Monday, 3 Mar 2014

There are many ways to implement unit testing for JavaScript code and lots of frameworks to choose from. This post describes the setup we use at Lyst as well as why we chose the particular libraries / tools we did.

An example project using all of the methods discussed in this post can be found here.

Our Requirements

Our requirements for unit tests were as follows:

  • Work with AMD modules (we use RequireJS)
  • Can be run as a Grunt task as part of our build
  • Can be run in a browser
  • Provide coverage information
  • Provide a way to mock/stub things

To meet these requirements we used a mix of different libraries and tied them together with some custom Grunt tasks.

Getting Started

The example project mentioned at the beginning of this post shows exactly how everything is structured, but the basic directory structure is:

- src
    - js
- test
    - spec
- Gruntfile.js

All our JavaScript code lives inside src, including any 3rd party libraries, and all tests live in test/spec.

Writing Tests

We use Mocha as a test runner as it provides a number of different output options, a number of different interfaces for writing tests, and plays nicely with AMD modules. We like the BDD (Behaviour Driven Development) interface for writing tests and we use Expect.js as an assertion library.

Both of these things can be installed with npm:

npm install --save-dev mocha
npm install --save-dev expect.js

Expect.js uses BDD-style assertions, so you end up with very human-readable assertion statements. It also has built-in support for asserting types and for loose equality of things like arrays and objects.

// assert based on type
expect(5).to.be.a('number');

// assert based on an array containing a value
expect(['a', 'b', 'c']).to.contain('b');

// assert based on loose equality
expect(['a', 'b', 'c']).to.eql(['a', 'b', 'c']);
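Put together, a spec file loads the module under test via AMD and exercises it. The module and function names below are made up for illustration:

```javascript
// test/spec/mathUtils.spec.js -- 'mathUtils' and 'add' are hypothetical
define(['mathUtils'], function (mathUtils) {

    describe('mathUtils', function () {

        describe('#add()', function () {

            it('adds two numbers', function () {
                expect(mathUtils.add(1, 2)).to.be(3);
            });

        });

    });

});
```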

One of our requirements was the ability to mock / stub things in tests. Mocking is a very useful tool when writing unit tests as it allows you to test a unit in isolation, and to create very specific situations to ensure your code can handle errors. A good example of where mocking is useful is network requests: if you have a module that makes AJAX calls, you can mock that module and control its behaviour without actually making any network requests during tests.
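To make the idea concrete, here is a minimal hand-rolled sketch of the technique in plain JavaScript; the `api` module and its `fetchUser` method are hypothetical, and a real test would use a mocking library rather than swapping methods by hand:

```javascript
// A hypothetical module that would normally make a network request.
var api = {
    fetchUser: function (id, callback) {
        // the real implementation would make an AJAX call here
        throw new Error('no network access in unit tests');
    }
};

// In a test, swap the real method for a canned implementation so the
// code under test runs without touching the network.
var original = api.fetchUser;
api.fetchUser = function (id, callback) {
    callback(null, { id: id, name: 'Test User' });
};

// code under test calls api.fetchUser as usual
var result;
api.fetchUser(42, function (err, user) {
    result = user;
});
console.log(result.name); // 'Test User'

// restore the real method after the test
api.fetchUser = original;
```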

The mocking framework we use is Sinon.js, which has a lot of very powerful functionality for mocking or stubbing objects and functions. Sinon can also be installed with npm:

npm install --save-dev sinon

The Sinon documentation explains the functionality in great detail but a fairly simple example would be testing a function that takes a function as a parameter and asserting it calls that function the correct number of times with the correct arguments. For example:

// create a 'spy' function
var spy = sinon.spy();

// use it
[1, 2, 3].forEach(spy);

// assert it was called the correct
// number of times with the correct arguments
expect(spy.callCount).to.be(3);
expect(spy.getCall(0).args).to.eql([1, 0, [1, 2, 3]]);
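For illustration, the core of what a spy does can be sketched in a few lines of plain JavaScript; this is only the idea, and `sinon.spy()` records far more detail (return values, `this` bindings, thrown errors, and so on):

```javascript
// A minimal spy sketch: records every call so a test can assert on it.
function makeSpy() {
    function spy() {
        spy.calls.push(Array.prototype.slice.call(arguments));
    }
    spy.calls = [];
    return spy;
}

var spy = makeSpy();
[1, 2, 3].forEach(spy);

console.log(spy.calls.length); // 3
console.log(spy.calls[0]);     // [ 1, 0, [ 1, 2, 3 ] ]
```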

Mocking Modules

The benefit of using AMD modules and RequireJS is that we can leverage RequireJS's configuration options to mock a module's dependencies. For example, if module A depends on module B, we can use the map option to tell RequireJS to load a mocked version of B when A asks for it.


require.config({
    map: {
        'A': {
            'B': '../mocks/B'
        }
    }
});

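A mock just needs to expose the same interface as the module it replaces. A hypothetical test/mocks/B.js (the method name here is made up for illustration) could be as simple as:

```javascript
// test/mocks/B.js -- exposes the same interface as the real B,
// but with canned, controllable behaviour for tests
define(function () {
    return {
        getData: function () {
            return 'mocked data';
        }
    };
});
```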
Running Tests

There are two ways we want to run tests: as part of our Grunt build process, and manually in a browser. As it turns out, the latter comes for free once the former is working. We use a forked version of grunt-mocha to run the tests as part of our Grunt build; the reason for the fork is to do with generating coverage reports, which I will get to later on. This plugin uses PhantomJS to run the unit tests and has a fairly simple config.

grunt-mocha can be installed with npm:

npm install --save-dev grunt-mocha

Initially the Gruntfile looks pretty simple: just a basic configuration for the grunt-mocha task. The run option is set to false as we are using AMD modules that will be loaded asynchronously, meaning we will need to call mocha.run() ourselves when all our tests have loaded.

module.exports = function (grunt) {

    grunt.initConfig({
        mocha: {
            test: {
                src: 'test/index.html',
                options: {
                    reporter: 'Spec',
                    run: false,
                    log: true
                }
            }
        }
    });

};


The test/index.html file either has to be kept up to date with the tests you want to run or generated dynamically. I prefer to automate anything that can feasibly be automated, so we have a simple custom Grunt task that generates the index.html file from a template.

The Template

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="../node_modules/mocha/mocha.css">
    <script src="../node_modules/mocha/mocha.js"></script>
    <script src="../node_modules/expect.js/index.js"></script>
    <script src="../node_modules/sinon/pkg/sinon.js"></script>
    <script src="../node_modules/requirejs/require.js"></script>
</head>
<body>

<div id="mocha"></div>

<script>
    mocha.setup('bdd');

    require.config({

        baseUrl: 'src',

        paths: {
            'underscore': 'lib/underscore'
        },

        map: {
            'helpers': {
                'underscore': '../mocks/underscore'
            }
        },

        // this will be populated with all the tests we want to run
        deps: {{ tests }},

        callback: mocha.run

    });
</script>

</body>
</html>




Custom ‘test’ Task

grunt.registerTask('test', 'Run JS Unit tests', function () {

    var options = this.options();

    var tests = grunt.file.expand(options.files).map(function (file) {
        return '../' + file;
    });

    // build the template
    var template = grunt.file.read(options.template)
        .replace('{{ tests }}', JSON.stringify(tests));

    // write template to the tests directory and run the tasks
    grunt.file.write(options.runner, template);
    grunt.task.run('coverage:instrument', 'mocha', 'coverage:report');

});

The test task uses the grunt.file.expand() method to get a list of all the test files and then generates an index.html file from the template. We are using RequireJS's deps and callback options to load the test files and then call mocha.run when they are all loaded. Notice that we don't need to specify the source files to load, as each test loads the files it needs to test.

Notice too that the baseUrl is set to src and not ../src as you might expect. This is because we are going to copy instrumented versions of source code into the test directory so we can get coverage information.
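For reference, the configuration these custom tasks read from might look something like the sketch below. The exact paths and glob patterns are assumptions; the option names mirror how the tasks read this.data and this.options() in the snippets shown in this post:

```javascript
// A sketch of the custom task configuration; paths are hypothetical.
grunt.initConfig({

    coverage: {
        instrument: {
            ignore: ['src/lib/**/*.js'], // 3rd party code, copied as-is
            files: [{
                expand: true,
                src: 'src/**/*.js',
                dest: 'test'
            }]
        },
        report: {
            reports: ['html', 'text-summary'],
            dest: 'test/coverage'
        }
    },

    test: {
        options: {
            files: 'test/spec/**/*.js',
            template: 'test/index.template.html',
            runner: 'test/index.html'
        }
    }

});
```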

Coverage

We use Istanbul to generate coverage reports after running the unit tests. It is not the friendliest tool I have ever used, but it generates accurate coverage reports in a number of different formats.

There are two steps to getting coverage information: the first is instrumentation, which modifies the code to collect information about which branches, statements, and functions ran; the second is generating a report from the information collected. The instrumentation code looks like this:

var istanbul = require('istanbul');

var ignore = this.data.ignore || [];
var instrumenter = new istanbul.Instrumenter();

this.files.forEach(function (file) {

    var src = file.src[0];
    var instrumented = grunt.file.read(src);

    // only instrument this file if it is not in the ignore list
    if (!grunt.file.isMatch(ignore, src)) {
        instrumented = instrumenter.instrumentSync(instrumented, src);
    }

    // write file to destination
    grunt.file.write(file.dest, instrumented);

});
All that is happening here is that we go over every file in this.files and, unless it is matched by the ignore array, instrument it using the instrumentSync method and write it to the destination. The instrumented files will look very ugly; don't be alarmed, as this is just how the coverage information is collected. The reason we ignore some files is that we only want coverage information for our own code, not 3rd party libraries. Any file matched by the ignore array is simply copied as-is.
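Conceptually, instrumentation wraps the original code with bookkeeping. This hand-written sketch shows the idea only; Istanbul's real output is machine-generated and records branches and functions as well as statements:

```javascript
// Hand-written sketch of what instrumentation does conceptually.
var coverage = { statements: {} };

function hit(id) {
    coverage.statements[id] = (coverage.statements[id] || 0) + 1;
}

// original: function double(x) { return x * 2; }
function double(x) {
    hit('s1'); // statement counter inserted by the instrumenter
    return x * 2;
}

double(21);
double(21);

console.log(coverage.statements.s1); // 2
```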

Now the reason for using a forked version of grunt-mocha becomes apparent: we need to get the coverage information collected in the page running in PhantomJS back to our coverage task somehow. This required some very small changes to the plugin, which can be seen here, so that a coverage event is fired with the coverage data as the event data. The data can be saved by listening for the event:

grunt.event.on('coverage', function (data) {
    grunt.config('coverage.coverage', data);
});

Generating the report is probably the most awkward bit:


var Report = istanbul.Report;
var Collector = istanbul.Collector;
var collector = new Collector();

// this will be an array like ['html', 'text-summary']
var reporters = this.data.reports;
var dest = this.data.dest;

// fetch the coverage object we saved earlier
collector.add(grunt.config('coverage.coverage'));

reporters.forEach(function (reporter) {

    Report.create(reporter, {
        dir: dest + '/' + reporter
    }).writeReport(collector, true);

});


All done! Tests can be run on the command line with grunt test, and in the browser by opening the generated test/index.html file. Have a look at the full Gruntfile to see how all these things tie together.
