
 
Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports
     
Archive for the ‘Testing’ Category

Example of Test Driving Code with Events

Wednesday, July 6th, 2011

Following is a stripped down version of the actual code.

Start with a simple Test:

private static void ContinueMethod(object sender, EventArgs arg)
{
    ((Eventful)sender).ContinueExecutionFor("ContinueMethod");
}
 
[Test]
public void EventHandlerCanRequestFurtherCodeExecution()
{
    var eventful = new Eventful();
    eventful.ContinuationHandler += ContinueMethod;
    eventful.Execute();
    Assert.AreEqual("ContinueMethod wants to execute more logic", eventful.State);
}

and then write the production code:

public class Eventful
{
    public string State;
    public event EventHandler<EventArgs> ContinuationHandler;
 
    public void Execute()
    {
        // logic to get ready
 
        var args = new EventArgs();
        //stuff args with some data
        ContinuationHandler(this, args); 
    }
 
    public void ContinueExecutionFor(string delegateName)
    {
        State = delegateName + " wants to execute more logic";
        // some logic
    }
}

Then I wrote the next test:

private static void StopMethod(object sender, EventArgs arg)
{
    // no call back
}
 
[Test]
public void EventHandlerCanStopFurtherCodeExecution()
{
    var eventful = new Eventful();
    eventful.ContinuationHandler += StopMethod;
    eventful.Execute();
    Assert.IsNull(eventful.State);
}

and this went Green.

Then I wondered what would happen if there were no registered event handlers. So I wrote this test.

[Test]
public void NoOneIsInterestedInThisEvent()
{
    var eventful = new Eventful();
    eventful.Execute();
    Assert.IsNull(eventful.State);
}

This resulted in:

System.NullReferenceException : Object reference not set to an instance of an object.
at Eventful.Execute() in Eventful.cs: line 17
at EventTest.NoOneIsInterestedInThisEvent() in EventTest.cs: line 46

Then I updated the Execute() method in the Eventful class:

public void Execute()
{
    // logic to get ready
 
    if (ContinuationHandler != null)
    {
        var args = new EventArgs();
        //stuff args with some data
        ContinuationHandler(this, args);
    }
}

This made all three tests go Green!
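
An aside, not part of the original walkthrough: in C# 6 and later, the explicit null check can be replaced with the null-conditional operator when raising the event. A minimal sketch of the same Execute() method using it:

public void Execute()
{
    // logic to get ready
 
    var args = new EventArgs();
    //stuff args with some data
 
    // Raises the event only if at least one handler is registered (C# 6+)
    ContinuationHandler?.Invoke(this, args);
}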

Stop Over-Relying on your Beautifully Elegant Automated Tests

Tuesday, June 21st, 2011

Time and again, I find developers (including myself) over-relying on our automated tests, especially unit tests, which run fast and reliably.

In the urge to save time today, we want to automate everything (which is good), but then we become blinded by this desire, only to realize later that we’ve spent far more time debugging issues our automated tests did not catch (leaving aside the embarrassment this causes).

I’ve come to believe:

A little bit of manual, sanity testing by the developer before checking in code can save hours of debugging and embarrassment.

Again this is contextual and needs personal judgement based on the nature of the change one makes to the code.

In addition to quickly doing a manual sanity test on your machine before checking in, it can be extremely important to do some exploratory testing as well. However, we can’t always test things locally. In those cases, we can test on a staging environment or on a live server. But it’s important that you discover the issue well before your users encounter it.

P.S.: Recently we faced an Error CS0234: The type or namespace name ‘Specialized’ does not exist in the namespace ‘System.Collections’ issue, which prompted me to write this blog post.

Preemptively Branching a Release Candidate and Splitting Teams Considered Harmful

Monday, April 18th, 2011

Building on top of my previous blog entry: Version Control Branching (extensively) Considered Harmful

I always discourage teams from preemptively branching a release candidate and then splitting the team, so that part of it hardens the release while the rest continues working on the next release’s features.

My reasoning:

  • Increases the work-in-progress and creates a lot of planning, management, version-control, testing, etc. overheads.
  • In the grand scheme of things, we are focusing on resource utilization, but the throughput of the overall system is actually reducing.
  • During development, teams get very focused on churning out features. Subconsciously they know there will be a hardening/optimization phase at the end, so they tend to cut corners for short-term speed gains. This attitude has a snowball effect and overall encourages a “not-my-problem” attitude towards quality, performance and overall usability.
  • The team (developers, testers and managers) responsible for hardening the release has to work extremely hard under high pressure, causing them to burn out (and possibly introduce more problems into the system). They have to suffer for mistakes others have made. That does not seem like a fair system.
  • Because the team is under high pressure to deliver the release, even though they know something really needs to be redesigned/refactored, they just patch it up. Constantly doing this creates a big ball of complex mud that only a few people understand.
  • It creates a “Knowledge/Skill divide” between the developers and testers on the team. Generally the best (most trusted and knowledgeable) members are picked to work on release hardening and performance optimization. They learn many interesting things while doing this, but this newly acquired knowledge does not get effectively communicated back to the other team members (mostly developers). The others continue doing what they used to do (potentially the wrong things, which the hardening team has to fix later).
  • As releases pass by, there are fewer and fewer people who understand the overall system and only they are able to effectively harden the project. This is a huge project risk.
  • Over a period of time, every new release needs more hardening time due to the points highlighted above. This approach does not seem like a good strategy for getting out of the problem.

If something hurts, do it all the time to reduce the pain and get better at it.

Hence we should build release hardening, as much as possible, into routine everyday work. If you still need hardening at the end, then instead of splitting the team, we should let the whole team swarm on making the release.

Also, I usually notice that if only a subset of the team can effectively do the hardening, it’s a good indication that the team is over-staffed, and that might be one of the reasons for many of the problems in the first place. It might be worth considering downsizing your team to see if some of those problems can be addressed.

Big Upfront Test Creation in Legacy Code is a Bad Idea

Wednesday, March 9th, 2011

When confronted with Legacy code, we usually run into the Test-Refactor dilemma: to refactor code we need tests; to write tests, we need to refactor the code.

Some people might advise you to invest time upfront to create a whole set of tests. Instead I recommend that every time you touch a piece of legacy code (either to fix a bug or to enhance the functionality), you perform a couple of safe refactorings to enable you to create a few scaffolding tests, then clean up the code (maybe even test drive the new code) and then get rid of the scaffolding tests.
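
To make this concrete, here is a minimal sketch (my illustration, not code from the post) of what such a throwaway scaffolding (characterization) test could look like, written in C# with NUnit to match the earlier examples on this blog. InvoiceCalculator and its Total() method are hypothetical stand-ins for whatever legacy code you are about to touch; the expected value is simply whatever the code produces today.

using NUnit.Framework;
 
[TestFixture]
public class InvoiceCalculatorScaffoldingTest
{
    // Pins down the CURRENT behaviour of the legacy code so we can refactor safely.
    // Once the cleaned-up code has proper, intention-revealing tests, this test is deleted.
    [Test]
    public void CapturesTodaysTotalForAKnownInput()
    {
        var calculator = new InvoiceCalculator();
 
        var total = calculator.Total(quantity: 3, unitPrice: 9.99m);
 
        // 29.97 is not a spec; it is just what the code currently returns.
        Assert.AreEqual(29.97m, total);
    }
}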

Even though this approach might appear to be slower, why does it work better?

  • You start seeing some immediate returns.
  • On any given system, some parts are more fragile and need more attention than others. There are parts of the code we actively touch and others we rarely touch. When we have limited time, it does not make sense to invest effort in creating tests for areas that are fairly stable or rarely changed. Big upfront test creation might not take this into account, so you might not get the biggest bang for your buck.
  • The first few tests we write are usually fragile tests. But we won’t get this feedback, nor the opportunity to improve the quality of our tests, until it’s too late.
  • When we get into test-creation mode, everyone focuses on creating more and more tests. Finally, when we start using the tests we’ve created, a small change in production code might break a whole bunch of tests. The first few times, developers wonder what happened, but if this generates a lot of false negatives (which it usually does), developers start ignoring or deleting those tests. So the investment does not really pay for itself.
  • Also, when we have a whole lot of tests prematurely written, they start getting in the way of refactoring and genuinely improving the design of the code. (This defeats the whole point of creating tests upfront so we can refactor.)
  • People get too attached to the tests they have written (it was a big investment) and somehow want to make the tests work. They fail to realize that those fragile tests are slowing them down and getting in their way.
  • Unless the team gets into the habit of gradually building better test coverage, they will always be playing catch-up, with requirements constantly changing. (Remember, we are chasing a moving target.)
  • It’s usually hard to take a fixed (usually long) block of time off from CRs and bug fixes. People will be forced to multi-task.

I encourage you to share your own experience.

    Technical Debt

    Wednesday, February 16th, 2011

    Technical Debt is any technical issue slowing down the project due to hasty (short-sighted) decisions made at an earlier point.

    All of us make bad decisions, but not fixing them and just deferring them leads to bigger problems, as these issues have a snowball effect.

    Technical debt can be introduced at various levels:

    • Code smells are the most obvious one,
    • But things like lack of (or poor) automation,
    • poor choice of tools,
    • fragility in the development environment
    • and so on

    can also contribute to technical debt.

    Inverting the Testing Pyramid

    Tuesday, February 1st, 2011

    As more and more companies are moving to the Cloud, they want their latest, greatest software features to be available to their users as quickly as they are built. However, there are several issues blocking them from moving ahead.

    One key issue is the massive amount of time it takes for someone to certify that the new feature is indeed working as expected and to assure that the rest of the features will continue to work. In spite of this long waiting cycle, we still cannot assure that our software will not have any issues. In fact, many times our assumptions about the user’s needs or behavior might themselves be wrong. But this long testing cycle only helps us validate that our assumptions work as assumed.

    How can we break out of this rut & get thin slices of our features in front of our users to validate our assumptions early?

    Most software organizations today suffer from what I call the “Inverted Testing Pyramid” problem. They spend maximum time and effort manually checking software. Some invest in automation, but mostly in building slow, complex, fragile end-to-end GUI tests. Very little effort is spent on building a solid foundation of unit & acceptance tests.

    This over-investment in end-to-end tests is a slippery slope. Once you start on this path, you end up investing even more time & effort on testing which gives you diminishing returns.

    They end up with the majority (80-90%) of their tests being end-to-end GUI tests. Some effort is spent on writing so-called “integration tests” (typically 5-15%), resulting in a shocking 1-5% of their tests being unit/micro tests.

    Why is this a problem?

    • The base of the pyramid is constructed from end-to-end GUI tests, which are famous for their fragility and complexity. A small pixel change in the location of a UI component can result in a test failure. GUI tests are also very timing-sensitive, sometimes resulting in random failures (false negatives).
    • To make matters worse, most teams struggle to automate their end-to-end tests early on, which results in a huge amount of time spent on manual regression testing. It’s quite common to find test teams struggling to catch up with development. This lag causes many other hard development problems.
    • The number of end-to-end tests required to get good coverage is much higher, and the tests more complex, than the combination of unit tests plus a few selected end-to-end tests. (BEWARE: Don’t be Seduced by Code Coverage Numbers)
    • Maintaining a large number of end-to-end tests is quite a nightmare for teams. Following are some core issues with end-to-end tests:
      • It requires deep domain knowledge and high technical skills to write quality end-to-end tests.
      • They take a lot of time to execute.
      • They are relatively resource intensive.
      • Testing negative paths in end-to-end tests is very difficult (or impossible) compared to lower level tests.
      • When an end-to-end test fails, we don’t get pin-pointed feedback about what went wrong.
      • They are more tightly coupled to the environment and have external dependencies, hence they are fragile. Slight changes to the environment can cause the tests to fail (false negatives).
      • From a refactoring point of view, they don’t give developers the same level of comfort that unit tests can.

    Again don’t get me wrong. I’m not suggesting end-to-end integration tests are a scam. I certainly think they have a place and time.

    Imagine an automobile company building an automobile without testing/checking the bolts and nuts all the way up to the engine, transmission, brakes, etc., and then just assembling the whole thing somehow and asking you to drive it. Would you test drive that automobile? Yet you will see many software companies using this approach to build software.

    What I propose and help many organizations achieve is the right balance of end-to-end tests, acceptance tests and unit tests. I call this “Inverting the Testing Pyramid.” [Inspired by Jonathan Wilson’s book called Inverting The Pyramid: The History Of Football Tactics].

    Inverting the Testing Pyramid

    In a later blog post I can quickly highlight various tactics used to invert the pyramid.

    Update: I recently came across Alister Scott’s blog on Introducing the software testing ice-cream cone (anti-pattern). Strongly suggest you read it.

    Simple Design and Testing Conference: London, UK 12-13th March 2011

    Thursday, January 20th, 2011

    Simple Design and Testing Conference is an all open space conference providing software practitioners a platform to meet face-to-face and discuss/demonstrate simple design & testing principles/approaches.

    At this conference you’ll meet real, hands-on practitioners interested in peer-to-peer learning and exploration. We strive hard to avoid fluffy marketing talks and other nonsense.

    • What: Open Space Conference on Simple Design & Testing practices
    • Where: Skills Matter eXchange, London, UK
    • When: 12th-13th Mar 2011
    • Who: Software Practitioners (Developers, Testers, UX Designer…)
    • Cost: £50.00 (a position paper is also required!)

    SDT Conf 2011 is our 6th annual conference and the first time we are in Europe. Check out the details of the past conferences: SDT Conf 2006, 2007, 2008, 2009 and 2010.

    Register now…

    Single Assert Per Unit Test: Myth

    Sunday, November 14th, 2010

    You are telling me that if I were unit testing a List class and wrote tests like the following, with 2 assert statements in them, then I’m violating xUnit testing “best practices”?

    @Test
    public void addingElementIncreasesSizeOfTheList() {
    	assertEquals(0, list.size()); 
           //or assertListSizeIs(0); - define your custom assert statement
    	list.add("Element");
    	assertEquals(1, list.size()); // assertListSizeIs(1);
    }

    If so, how and why?

    I understand that if I wrote a test like the following, it’s not very communicative:

    @Test
    public void removeElementBasedOnItsIndexAndReduceSizeByOne() {
    	list.add("Element");
    	assertEquals("Element", list.remove(0));
    	assertEquals(0, list.size());
    }

    in this case, it might be better to write 2 tests instead:

    @Test
    public void removeElementBasedOnItsIndex() {
    	list.add("Element");
    	assertEquals("Element", list.remove(0));
    }
     
    @Test
    public void reducesSizeByOneOnRemovingAnElement() {
    	list.add("Element");
    	assertEquals(1, list.size());
    	list.remove(0);
    	assertEquals(0, list.size());
    }

    Notice that our goal was better communication, and one of the tests we (incidentally) ended up with had just one assert statement in it (which is better than the previous test, which was asserting 2 different things).

    Our goal is not one assert statement per test. Our goal is better communication/documentation. Using the one-assert-per-test heuristic helps to achieve better communication. But splitting tests into more tests or deleting an important assert statement from a test just to avoid multiple assert statements in a test is mindless dogma.

    Conclusion: One-Assert-Per-Test really means test one aspect/behavior in each test (and communicate that clearly in your test name). It does not literally mean one test should have only one assert statement in it. Test one aspect/behavior in one test; a related aspect/behavior can be tested in another test. Sometimes we need more than one assert statement to assert one aspect/behavior, and it helps to add those assert statements to the test for better communication.

    So don’t be afraid; go ahead and add that second or third assert statement to your test. Just be careful not to fill your test with too many asserts, as it becomes difficult to understand and debug.

    UPDATE:
    Folks, I’m sorry for using a naive code example. I agree the example does not do justice to the point I’m trying to make.

    @Robert, I certainly like your test. Much better.

    Here is another naive example, looking forward to brickbats 😉

    Let’s assume I’m building a RomanDigit class and I need to make sure that RomanDigits are map friendly, i.e. they work correctly when added to maps. Following is a test for the same:

    @Test
    public void isMapFriendly() throws Exception {
    	RomanDigit one = new RomanDigit('I', 1);
    	RomanDigit anotherOne = new RomanDigit('I', 1);
    	Map romanDigits = new HashMap();
    	romanDigits.put(one, "One");
    	romanDigits.put(anotherOne, "AnotherOne");
    	assertEquals(1, romanDigits.size());
    	assertEquals("AnotherOne", romanDigits.get(one));
    	assertEquals("AnotherOne", romanDigits.get(anotherOne));
    }

    Another example: When subtracting RomanDigits, only powers of ten can repeat.

    @Test
    public void onlyPowerOfTenCanRepeat() throws Exception {
    	assertTrue("I is a power of Ten and can repeat", romanDigit('I').canRepeat());
    	assertFalse("V is not a power of Ten and should not repeat", romanDigit('V').canRepeat());
    }

    Is there a better way to write these tests so that I only need one assert statement per test?

    5th Annual Simple Design and Testing Conference, Columbus, Ohio, USA, Oct 29th to 31st 2010

    Tuesday, October 12th, 2010

    I’m proud to announce the 5th Annual SDTConf. This year we plan to hold the conference in Otterbein University Campus Center, OH, USA.

    We plan to keep a max cap of 100 participants for this conference.

    As you might be aware SDTConf is a free conference and we use the concept of position papers as the price for admission. This helps us ensure the quality of the participants is really high. You can add your position papers for the conference on our wiki. Making the position papers public helps other participants gauge in advance what they can expect from the conference.

    Last but not least, since this is a community-run, non-profit event, we rely on sponsorship in kind to make this event possible. Here is a list of items that you or your company can sponsor to support this conference.

    P.S: Please blog about this conference and/or send an email to your friends and colleagues. Word of mouth is the only way we market this event.

    Test Automation Dilemma

    Thursday, June 24th, 2010

    I regularly practice Test Driven Development. I truly believe in its benefits and have also been able to influence a huge number of developers to practice it.

    However, there are situations in which I ask myself questions like:

    • Is it important to automate this particular check (test) ?
    • Will it be worth the effort/investment or is it YAGNI?
    • What is the possibility of this particular scenario breaking (risk)?
    • And so on…

    Yesterday I was faced with a similar situation where I decided to skip the automated test.

    Context: I was working on a website where users can use their credit card while shopping online. Basically, the website had a simple text input box to accept the credit card number. It turns out that most browsers (except Safari) cache the input data and store it in plain text somewhere on the computer.

    So we wanted to turn off the auto-complete and caching feature on this specific input field. There is a very easy way to do this: just set autocomplete="off" in the input tag. For example:

    <input type="TEXT" name="creditcard" autocomplete="off" />

    It’s an easy fix and quite widely used. So that’s great. But how do I write an automated test for this?

    If we think a little hard, there are ways to write an automated test (a sketch follows below). But then you ask yourself: is it worth it?
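
    For what it’s worth, here is a minimal sketch (my addition, not from the original post) of one relatively cheap option: asserting on the rendered autocomplete attribute with Selenium WebDriver, rather than trying to verify actual browser caching behaviour. The staging URL is a hypothetical placeholder; the input name matches the example above.

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;
     
    [TestFixture]
    public class CreditCardFieldTest
    {
        [Test]
        public void CreditCardInputHasAutocompleteTurnedOff()
        {
            // Hypothetical checkout page on a staging environment
            using (IWebDriver driver = new FirefoxDriver())
            {
                driver.Navigate().GoToUrl("https://staging.example.com/checkout");
                var creditCardField = driver.FindElement(By.Name("creditcard"));
     
                // This only verifies the attribute is rendered; it does not prove
                // that every browser actually skips caching the value.
                Assert.AreEqual("off", creditCardField.GetAttribute("autocomplete"));
            }
        }
    }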

    The site had lived without this feature for so long that I did not think it was crucial to its users. Even if this feature stops working, it won’t bring the site down. (They certainly had a good battery of automated tests covering the core functionality of the product.) So I chose to skip the automated test. I manually tested it with different browsers and made sure it was working as expected.

    If this comes back and bites me, I’ll certainly invest in some form of safety net, but for now, time to hack some more code.

    What would you choose to do?
