There are different strategies to handle that:
- Run your tests in parallel. Don't even try to run them as a chain. This means you have to isolate every single test environment from the others (DB, …). That is possible, but the overhead might be too big in some cases.
- Active polling and waiting. This strategy is used by Jasmine or jasmine-nodejs. Basically, a function is run periodically to check whether the test is done or not. I don't like this approach, because active polling is wasteful. Sometimes purity is not the right approach, and maybe this is one of those times… but it just doesn't seem right to me. Plus, it feels to me like the waitsFor stuff is just too much overhead.
Example of asynchronous test using jasmine-nodejs:
- Signal the end of the test. This approach is used by QUnit, the jQuery test suite, but also node-async-testing. Basically, you call a function in your last callback. Once this function is called, your test is considered complete. This approach introduces the danger of having parts of the tests never run. That is why QUnit lets you specify the number of expected assertions (optional): if fewer or more than expected are run, then your test is probably wrong.
Example of asynchronous test using QUnit:
Another caveat of this method is that it is sometimes hard to know which callback will finish first, and therefore where to call the start() function.
- Count the number of assertions you are expecting. When this number is reached, you can start the next test. If an assertion fails, you can also catch it and continue with the next tests (or just stop the whole suite). This strategy is inspired by the QUnit one, except this time we rely on the count of assertions to know whether a test is finished or not. I have implemented this strategy in a small Node.js library: nodetk, which includes a test runner. The library just wraps the assert functions provided with Node to count how many times they are called.
Example of test using this strategy:
I like this strategy because the overhead it introduces is small, but one might find it painful to count the number of assertions each test is going to make. My argument is that when you write tests, you really should know what is expected to happen anyway.
And you, what strategy do you prefer? We are not considering here the style of tests (RSpec-like, based on assertions…), but the strategies for handling asynchronous calls within the tests. I'm really interested in your experiences and feedback, so please don't hesitate to comment on your strategy or the ones presented here!