My current project is a J2EE project that uses JDOs for persistence. Of late there has been a strong emphasis on running our unit and integration tests inside the app container. With no IoC or DI, the code behaves differently inside the container than outside it. Given that this is a reality we cannot change, how do we test drive our code?
So we started off by using JUnitEE to run unit tests inside the container and FitNesse for acceptance tests at the service level. FitNesse does not run inside the container, so remote debugging, EJB refs, and other container-provided facilities could not be used. To address this I started a new open source project called Patang, which gives you the infrastructure to run Fit/FitNesse inside the container through a FitServlet.
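Wiring a servlet like this into a web app is standard deployment-descriptor work. As a rough sketch, registering the FitServlet in web.xml might look like the fragment below; the package name and URL pattern are assumptions for illustration, not Patang's actual values:

```xml
<!-- Hypothetical web.xml fragment. The servlet-class package and the
     url-pattern are made up; check the Patang docs for the real names. -->
<servlet>
    <servlet-name>fit</servlet-name>
    <servlet-class>org.patang.FitServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>fit</servlet-name>
    <url-pattern>/fit/*</url-pattern>
</servlet-mapping>
```

Once mapped, the Fit tests execute in the same JVM and classloader as the deployed application, which is what makes container facilities like EJB refs available to them.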
Now we had JUnitEE tests testing individual layers of our services, while FitNesse tests did more of what we call end-to-end service testing. We found the following problems with this approach:
1. There seemed to be a lot of duplication of test coverage and test data between these 2 types of tests.
2. JUnitEE tests are as slow to give feedback as FitNesse tests. In both cases we need to redeploy our code to the container to test it.
3. Any refactoring meant changes in 2 places, so maintaining the tests became a costly affair.
Because of these problems, we started writing only Fit tests and driving our development with FitNesse tests. It seems to be working fine so far.
I feel this approach gives us the following advantages:
1. Fit is pretty good at expressing intent. It helps to clearly state what needs to be done: what the inputs are and what the outputs should be.
2. It is very good at separating setup data from the actual test itself, so it's much easier to understand.
3. We are using just one tool for in-container testing, which decreases the learning curve for new team members.
4. Much less test maintenance. It's all in one place.
5. Our tests cater to a broader audience. They are no longer just a developer tool. Our testers can take these Fit tests and enhance them with different scenarios. This helps them keep pace with development, and sometimes even drive it to some extent.
6. xUnit is not very good at setting context and explaining the tests. With Fit we can make our tests read more like stories and hence bring them closer to our acceptance tests.
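To illustrate how a Fit test separates intent and data from plumbing, consider a column-style fixture. The domain (order discounts), the fixture name, and the business rule below are all invented for the example; the class is shown as plain Java so it compiles standalone, with the fit.ColumnFixture superclass it would normally extend noted in a comment:

```java
// Hypothetical fixture for a Fit table such as:
//
//   | com.example.DiscountFixture |
//   | order amount | discount()   |
//   | 50.00        | 0.00         |
//   | 200.00       | 10.00        |
//
// In a real project this class would extend fit.ColumnFixture; Fit then
// maps each table column onto the public field and method below, filling
// the field from input cells and comparing the method's result with the
// expected cell.
public class DiscountFixture /* extends fit.ColumnFixture */ {

    // Input column: Fit would set this field from each row's "order amount" cell.
    public double orderAmount;

    // Output column: Fit would call this and compare the result with the cell.
    public double discount() {
        // Assumed business rule for the example: 5% discount on orders over 100.
        return orderAmount > 100 ? orderAmount * 0.05 : 0.0;
    }

    public static void main(String[] args) {
        DiscountFixture f = new DiscountFixture();
        f.orderAmount = 200.00;
        System.out.println(f.discount()); // prints 10.0
    }
}
```

The table carries the intent and the data; the fixture is a thin adapter onto the production code, which is what keeps the tests readable for non-developers.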
Side effects of this approach:
1. It is much harder to refactor your code, because now you have the Fit test and its associated fixture. When we rename a method or class or change a method signature, the fixture gets refactored by the IDE, but the Fit test itself can be difficult to refactor. Maintaining the tests is even harder if the test pages are not inside the project structure.
2. The feedback cycle is longer, but that is true of any in-container testing framework.
3. Writing and maintaining the Fit pages is a very difficult task. They can get very cryptic, and there is no decent IDE for the job. [Maybe my next open source project]
Some myths associated with this approach:
1. We are moving tests away from the code. I don't think so. The fixtures are still organized the same way we organize our tests.
2. More maintenance. It is much better than applying changes in 3 different places. [the code, then JUnitEE, and then the acceptance tests]
3. No IDE support. One can easily use the FitRunner class to run Fit tests from the IDE.
4. Cannot be made part of the build process. The Patang project provides a ServletInvoker class that can run all your Fit tests inside the container and publish the results.
This seems to be working fine for us so far. In the end, that's what it boils down to.