For the past year or so, I have been researching how software tests fail and the ways in which developers fix the failures. There are many interesting problems within this general theme, but I have most recently focused on the following familiar scenario:
Alice is a developer at a large software company. She works on the company's flagship product and spends over half of her time writing unit tests to verify her code and document her assumptions. She is not alone in this respect; the company requires that functional changes and bugfixes have corresponding unit tests to prevent regressions. As a result, the product's unit test suite achieves exceptionally high coverage.
One day, the project manager informs Alice that a key requirement has changed. The changed requirement violates many assumptions encoded in the test suite, so several dozen tests fail after Alice modifies the software. Now Alice has a choice: should she remove the failing tests, since they no longer reflect the correct behavior of the software, or should she attempt to repair them, which would require tedious and time-consuming manual editing?
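To make Alice's predicament concrete, here is a small JUnit sketch. The User class and the name-formatting requirement are invented for illustration: the test encodes the old requirement that names display as "First Last", so once the software changes to "Last, First", the assertion fails even though the new behavior is correct.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DisplayNameTest {
    // Hypothetical class under test. Under the old requirement it returned
    // "First Last"; after the requirement change it returns "Last, First".
    static class User {
        private final String first, last;
        User(String first, String last) { this.first = first; this.last = last; }
        String displayName() { return last + ", " + first; }
    }

    // Written against the old requirement, so it now fails:
    // expected "Alice Smith", but the new behavior produces "Smith, Alice".
    @Test
    public void displaysFullName() {
        assertEquals("Alice Smith", new User("Alice", "Smith").displayName());
    }
}
```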
Developers often face a similar choice. When tests fail due to problems with the test code rather than the system under test, it is undoubtedly beneficial to fix the broken tests, since removing tests reduces a test suite's ability to detect regressions. However, developers may not take the time to fix them. For example, while working on this research, I have come across failing tests that were never repaired at all, but simply short-circuited with a statement like "if (true) return;".
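For anyone who has not seen that trick: Java's unreachable-code rules treat if (true) specially, so the early return compiles fine and makes the test method exit before any assertions run. The test "passes" while verifying nothing. A minimal sketch, with an Invoice class invented for illustration:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceTest {
    // Hypothetical class under test.
    static class Invoice {
        private double total;
        void add(String item, int quantity, double price) { total += quantity * price; }
        double total() { return total; }
    }

    // The guard below short-circuits the test instead of repairing it:
    // the method returns immediately, the assertion never runs, and the
    // test is reported as passing.
    @Test
    public void totalsLineItems() {
        if (true) return; // disabled rather than fixed
        Invoice invoice = new Invoice();
        invoice.add("widget", 2, 3.50);
        assertEquals(7.00, invoice.total(), 0.001);
    }
}
```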
To solve this problem, I have been exploring ways of reducing the effort required to fix broken unit tests. Doing so would make developers less fearful of "deep" changes, allow them to write more detailed tests, and, most importantly, free their time for more valuable work.
As a first step toward this goal, I developed a tool called ReAssert that automatically suggests changes to test code that are sufficient to make tests pass. Earlier this week I released a public beta, and I welcome anyone reading this to download it from the project's homepage.
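To give a flavor of the kind of suggestion I mean, here is a hypothetical before-and-after fragment, reusing the invented User example from above. Updating an expected literal to match the software's new output is one simple kind of repair; whether it is the right repair is still the developer's call, which is why the changes are suggestions rather than silent edits.

```java
// Before: the test still encodes the old "First Last" requirement and fails.
assertEquals("Alice Smith", user.displayName());

// After a suggested repair: the expected value is updated to the new
// observed behavior. The developer reviews the change and accepts it only
// if the new behavior is in fact intended.
assertEquals("Smith, Alice", user.displayName());
```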