Instant Feedback in IDE

After I hit Ctrl-S in my IDE, my patience for test results is about two seconds before I start to get annoyed. What this means in practice is that you cannot wait for the browser to launch and run the tests; the browser needs to be already running. Hitting refresh in your browser manually is very expensive, since the browser needs to reload all of the JavaScript code and re-parse it. If you have one HTML file for each TestCase and you have hundreds of these TestCases, the browser may be busy for several minutes while it reloads and re-parses the exact same production code once for each TestCase. There is no way you can fit that into the patience of the average developer after hitting Ctrl-S.

Introducing JsTestDriver

Jeremie Lenfant-engelmann and I set out to build a JavaScript test runner that solves exactly these issues, so that Ctrl-S causes all of my JavaScript tests to execute in under a second on all browsers. Here is how Jeremie made this seemingly impossible dream a reality. On startup, JsTestDriver captures any number of browsers from any number of platforms and turns them into slaves. As a slave, the browser has your production code loaded along with all of your test code. As you edit your code and hit Ctrl-S, JsTestDriver reloads only the files you have modified into the captured browser slaves. This greatly reduces the amount of network traffic and the amount of JavaScript re-parsing the browser has to do, and therefore greatly improves test execution time. JsTestDriver then runs all of your tests in parallel on all captured browsers. Because JavaScript APIs are non-blocking, it is almost impossible for your tests to run slowly: there is nothing to block on, no network traffic, and no re-parsing of the JavaScript code. As a result, JsTestDriver can easily run hundreds of TestCases per second. Once the tests execute, the results are sent over the network to the command that triggered them, either on the command line, ready to be shown in your IDE, or in your continuous build.
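
To make the workflow concrete, here is a minimal sketch of what a JsTestDriver test looks like (the Greeter example and the file names are invented for illustration). Both files are loaded once into every captured browser slave; after that, saving a file pushes only that file back to the slaves before the whole suite re-runs.

    // greeter.js -- production code (file name is just for this example)
    function Greeter() {}
    Greeter.prototype.greet = function(name) {
      return "Hello, " + name + "!";
    };

    // greeter_test.js -- test code, loaded into every captured browser slave
    GreeterTest = TestCase("GreeterTest");

    GreeterTest.prototype.testGreet = function() {
      var greeter = new Greeter();
      // assertEquals(expected, actual) comes from JsTestDriver's assertion library
      assertEquals("Hello, World!", greeter.greet("World"));
    };

With the browsers already captured, a save triggers a run of the whole suite, and the pass/fail results stream back to the command line or IDE, typically well within that two-second budget of patience.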

Web app acceptance test survival techniques, Part 3: Musings

Wednesday, May 20, 2009

By Julian Harty

Part 1 and Part 2 of this series provided how-tos and usefulness tips for creating acceptance tests for Web apps. This final post reflects on some of the broader topics for our acceptance tests.

Aims and drivers of our tests

In my experience, and that of my colleagues, there are a few clear drivers and aims for acceptance tests. They should act as ‘safety rails’, similar to the crash barriers at the sides of roads, that keep us from straying too far off course. Our tests need to ensure development doesn’t break essential functionality. The tests must also provide early warning, preferably within minutes of the relevant changes being made to the code.

My advice for developing acceptance tests for Web applications: start simple, keep them simple, and find ways to build and establish trust in your automation code. One of the maxims I use when assessing the value of a test is to think of ways to fool the test into giving erroneous results. Then I decide whether the test is good enough or whether we need to add safeguards to the test code to make it harder to fool. I’m pragmatic and realise that all my tests are imperfect; I prefer to make tests ‘good enough’ to be useful, with essential preconditions embedded into the test. Preconditions should include checks for anything that would invalidate the test’s assumptions (for example, that the logged-in account really does have the administrative rights it is assumed to have) and checks for the appropriate system state (for example, confirming that the user is starting from the correct homepage and has several items in the shopping basket).
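
As a sketch of what embedding such preconditions can look like, the outline below fails fast when its assumptions do not hold; the driver object and its methods (currentUser, basketItemCount, and so on) are hypothetical placeholders for whichever browser-driving library your project already uses.

    // Hypothetical acceptance test; every helper named here is illustrative only.
    function testAdminCanEmptyShoppingBasket(driver) {
      // Precondition: the logged-in account must actually have admin rights,
      // otherwise the outcome of the test tells us nothing useful.
      if (!driver.currentUser().isAdmin()) {
        throw new Error("Precondition failed: test account lacks admin rights");
      }
      // Precondition: start from the expected homepage with items in the basket.
      if (driver.currentPage() !== "homepage" || driver.basketItemCount() < 1) {
        throw new Error("Precondition failed: wrong start page or empty basket");
      }
      // Only now exercise the behaviour under test.
      driver.openBasket();
      driver.emptyBasket();
      if (driver.basketItemCount() !== 0) {
        throw new Error("Basket was not emptied");
      }
    }

When a precondition throws, the failure message makes clear that the test could not run, rather than suggesting that the application is broken, which helps keep false alarms from eroding trust in the suite.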

The value of the tests, and their ability to act as safety rails, is directly related to how often failing tests are a "false positive." Too many false positives, and a team loses trust in their acceptance tests entirely.

Acceptance tests aren’t a ‘silver bullet.’ They don’t solve all our problems or provide complete confidence in the system being tested (real life usage generates plenty of humbling experiences). They should be backed up by comprehensive automated unit tests and tests for quality attributes such as performance and security. Typically, unit tests should comprise 70% of our functional tests, integration tests 20%, and acceptance tests the remaining 10%.

We need to be able to justify the benefits of the automated tests and understand both the return on investment (ROI) and the opportunity cost: the time we spend on creating the automated tests is not available for other things, so we need to ask whether we could spend our time better. Here the intent is to consider the effects and costs rather than to provide detailed calculations; I typically spend a few minutes thinking about these factors as I decide whether to create or modify an automated test. As code spends the vast majority of its time in maintenance mode, living on long after active development has ceased, I recommend assessing most costs and benefits over the life of the software. However, opportunity cost must be considered within the period I’m actively working on the project, as that’s all the time I have available.

