An example project using all of the methods discussed in this post can be found here.
Our requirements for unit tests were as follows:
- Work with AMD modules (we use RequireJS)
- Can be run as a Grunt task as part of our build
- Can be run in a browser
- Provide coverage information
- Provide a way to mock/stub things
To meet these requirements we used a mix of different libraries and tied them together with some custom Grunt tasks.
The example project mentioned at the beginning of this post shows exactly how everything is structured, but the basic directory structure is:
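A plausible layout, based on the tools described in this post (the exact directory and file names are assumptions, not taken from the example project):

```
project/
├── src/                  AMD source modules
├── test/
│   ├── specs/            one spec file per module
│   ├── lib/              mocha, expect.js, sinon, require.js
│   └── index.html        generated test page
├── Gruntfile.js
└── package.json
```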
We use Mocha as a test runner as it provides a number of different output options, a number of different interfaces for writing tests, and plays nicely with AMD modules. We like the BDD (Behaviour Driven Development) interface for writing tests and we use Expect.js as an assertion library.
Both of these things can be installed with npm:
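For example (package names as published on npm):

```shell
npm install mocha expect.js --save-dev
```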
Expect.js also uses BDD-style assertions, so you end up with very human-readable assertion statements. It also has built-in support for asserting types, and loose equality of things like arrays and objects.
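A few assertions in that chainable style might look like this (a sketch using the Expect.js API):

```javascript
expect(1 + 1).to.be(2);                  // strict equality
expect([1, 2, 3]).to.eql([1, 2, 3]);     // loose/deep equality
expect('hello').to.be.a('string');       // type assertion
expect({ a: 1 }).to.have.property('a');  // property assertion
```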
One of our requirements was the ability to mock/stub things in the tests. Mocking is a very useful tool when writing unit tests, as it allows you to test a unit in isolation and to create very specific situations to ensure your code is able to handle errors. A good example of when mocking is useful is network requests: if you have a module that makes AJAX calls, you can mock that module and control its behaviour without actually making any network requests during tests.
The mocking framework we use is Sinon.js, which has a lot of very powerful functionality for mocking or stubbing objects and functions. Sinon can also be installed with npm:
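```shell
npm install sinon --save-dev
```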
The Sinon documentation explains the functionality in great detail but a fairly simple example would be testing a function that takes a function as a parameter and asserting it calls that function the correct number of times with the correct arguments. For example:
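A sketch of such a test, assuming the Mocha BDD interface and Expect.js assertions described above:

```javascript
describe('Array#forEach', function () {
    it('calls the callback once per element, with the element', function () {
        // A spy records every call made to it along with its arguments.
        var callback = sinon.spy();

        [1, 2, 3].forEach(callback);

        expect(callback.calledThrice).to.be(true);
        expect(callback.firstCall.args[0]).to.be(1);
        expect(callback.thirdCall.args[0]).to.be(3);
    });
});
```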
The benefit of using AMD modules and RequireJS is that we can leverage the configuration options of RequireJS to mock a module's dependencies. For example, if module A depends on module B, we can use the `map` option to tell RequireJS to load a mocked version of B when A asks for it.
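A minimal sketch of that configuration (module IDs here are hypothetical):

```javascript
require.config({
    map: {
        // When module 'a' requires 'b', hand it 'mocks/b' instead.
        'a': {
            'b': 'mocks/b'
        }
    }
});
```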
There are two ways we want to run tests: as part of our Grunt build process, and manually in a browser. As it turns out, the latter comes for free once the former is working. We use a forked version of grunt-mocha to run the tests as part of our Grunt build; the reason for the fork is to do with generating coverage reports, which I will get to later on. This plugin uses PhantomJS to run the unit tests and has a fairly simple config.
grunt-mocha can be installed with npm:
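The stock plugin installs like this (the fork described above would typically be installed from its git URL instead):

```shell
npm install grunt-mocha --save-dev
```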
Initially the Gruntfile looks pretty simple: just a basic configuration for the grunt-mocha task. The `run` option is set to `false` as we are using AMD modules that will be loaded asynchronously, meaning we will need to call `mocha.run()` ourselves when all our tests have loaded.
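A minimal Gruntfile sketch for that configuration (file paths are assumptions):

```javascript
module.exports = function (grunt) {
    grunt.initConfig({
        mocha: {
            test: {
                src: ['test/index.html'],
                options: {
                    // Don't auto-run: the page calls mocha.run() itself
                    // once the AMD test modules have loaded.
                    run: false
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-mocha');
};
```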
The test/index.html file either has to be kept up to date with the tests you want to run or generated dynamically. I prefer to automate anything that can feasibly be automated, so we have a simple custom Grunt task that generates the index.html file from a template.
Custom ‘test’ Task
The test task basically uses the `grunt.file.expand()` method to get a list of all the test files and then generates an index.html file using the template. We use RequireJS's `callback` option to load the test files and then call `mocha.run()` when they are all loaded. Notice that we don't need to specify the source files that will be loaded, as each test loads the files it needs to test.
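A sketch of such a task (the task name, file paths, and template variable are assumptions):

```javascript
grunt.registerTask('test', 'Generate the test page and run the specs', function () {
    // Find every spec file and turn its path into a module ID that
    // can be passed straight to require().
    var specs = grunt.file.expand('test/specs/**/*.js').map(function (file) {
        return file.replace('test/', '').replace('.js', '');
    });

    // Render the index.html template with the spec list injected.
    var template = grunt.file.read('test/index.template.html');
    grunt.file.write('test/index.html', grunt.template.process(template, {
        data: { specs: JSON.stringify(specs) }
    }));

    // Hand off to the grunt-mocha task configured earlier.
    grunt.task.run('mocha');
});
```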
Notice too that the `baseUrl` is set to `src` and not `../src` as you might expect. This is because we are going to copy instrumented versions of the source code into the test directory so we can get coverage information.
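The generated page might look roughly like this (a sketch; the script paths and injected spec list are assumptions):

```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="lib/mocha.css">
</head>
<body>
    <div id="mocha"></div>
    <script src="lib/mocha.js"></script>
    <script src="lib/expect.js"></script>
    <script src="lib/sinon.js"></script>
    <script>mocha.setup('bdd');</script>
    <script src="lib/require.js"></script>
    <script>
        require.config({
            // 'src', not '../src': instrumented copies of the source
            // are placed inside the test directory.
            baseUrl: 'src',
            // The spec module IDs injected by the template.
            deps: [/* e.g. '../specs/example.spec' */],
            // Fires once every spec module has loaded.
            callback: function () {
                mocha.run();
            }
        });
    </script>
</body>
</html>
```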
We use Istanbul to generate coverage reports after running the unit tests. This is not the friendliest tool I have ever used but it generates accurate coverage reports in a number of different formats.
There are two steps to getting coverage information, the first is instrumentation which modifies the code to collect information about which branches / statements / functions ran. The second is generating a report based on the information collected. The code for generating the instrumented code looks like this:
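A sketch of that instrumentation step as a Grunt multi task (task and option names are assumptions; the Istanbul calls are from its programmatic API):

```javascript
var istanbul = require('istanbul');

grunt.registerMultiTask('instrument', 'Instrument source files', function () {
    var instrumenter = new istanbul.Instrumenter();
    var ignored = this.options().ignored || [];

    this.files.forEach(function (file) {
        var src = file.src[0];

        if (grunt.file.isMatch(ignored, src)) {
            // Third-party code: copy through untouched.
            grunt.file.copy(src, file.dest);
        } else {
            // Rewrite the file so it records branch/statement/function hits.
            var instrumented = instrumenter.instrumentSync(
                grunt.file.read(src), src
            );
            grunt.file.write(file.dest, instrumented);
        }
    });
});
```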
All that is happening here is that we go over every file in `this.files` and, unless it is matched by the `ignored` array, instrument it using the `instrumentSync` method and write it to the destination. The instrumented files will look very ugly; don't be alarmed, this is just how the coverage information is collected. The reason we specify some files to be ignored is that we only want code coverage information for our own code, not third-party libraries. Any file that is matched by the `ignored` array will just be copied as-is.
Now the reason for using a forked version of grunt-mocha becomes apparent: we need to get the coverage information collected from the page running in PhantomJS back to our coverage task somehow. This required some very small changes to the plugin, which can be seen here, resulting in a `coverage` event being fired with the coverage data as the event data. The data can be saved by just listening for the event:
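A sketch of that listener, assuming the fork emits the event through `grunt.event` (the output path is an assumption):

```javascript
grunt.event.on('coverage', function (coverage) {
    // Persist the raw coverage object so the report step can pick it up.
    grunt.file.write('coverage/coverage.json', JSON.stringify(coverage));
});
```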
Generating the report is probably the most awkward bit:
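A sketch using Istanbul's programmatic API (the file paths are assumptions):

```javascript
var istanbul = require('istanbul');

// Collect the saved coverage data.
var collector = new istanbul.Collector();
collector.add(grunt.file.readJSON('coverage/coverage.json'));

// Write an HTML report into coverage/reports.
var reporter = istanbul.Report.create('html', {
    dir: 'coverage/reports'
});
reporter.writeReport(collector, true);
```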
All done! Tests can be run on the command line with `grunt test`, and in the browser by opening the generated test/index.html file. Have a look at the full Gruntfile to see how all these things tie together; the output should look like this: