Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports
     
Avatars of Test Driven Development (TDD)

Tuesday, March 19th, 2013

It’s easy to speak of test-driven development as if it were a single method, but there are several ways to approach it. In my experience, different approaches lead to quite different solutions.

In this hands-on workshop, with the help of some concrete examples, I'll demonstrate the different styles and, more importantly, what goes into the moment of decision when a test is written, and why TDDers make the choices they make. The objective of the session is not to decide which approach is best, but rather to highlight the various approaches/styles of practicing test-driven development.

By the end of this session, you will understand how TDDers break down a problem before trying to solve it. You'll also be exposed to various strategies and techniques TDDers use to write the first few tests.
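To give a flavour of what "different styles" means in practice, here is a minimal sketch (my own illustration, not from the workshop) contrasting a state-based (classicist) first test with an interaction-based (mockist) first test. The `ShoppingCart` kata and all names are hypothetical:

```python
import unittest
from unittest import mock


class ShoppingCart:
    """Hypothetical kata code: just enough to make both first tests pass."""

    def __init__(self, gateway=None):
        self._prices = []
        self._gateway = gateway

    def add_item(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

    def checkout(self):
        self._gateway.charge(self.total())


class ClassicistFirstTest(unittest.TestCase):
    # State-based style: assert on the observable state of the object.
    def test_new_cart_has_zero_total(self):
        self.assertEqual(ShoppingCart().total(), 0)


class MockistFirstTest(unittest.TestCase):
    # Interaction-based style: assert on the conversation with a collaborator.
    def test_checkout_charges_the_payment_gateway(self):
        gateway = mock.Mock()
        cart = ShoppingCart(gateway=gateway)
        cart.add_item(price=100)
        cart.checkout()
        gateway.charge.assert_called_once_with(100)


if __name__ == "__main__":
    unittest.main()
```

Note how the classicist test needs no test double, while the mockist test drives out the discovery of the `gateway` collaborator and its `charge` protocol; that difference in the very first test already pushes the design in different directions.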

Outside In – Behaviour Driven Development (BDD)

Tuesday, March 19th, 2013

Recently, at the Agile India 2013 Conference, I ran an introductory workshop on Behavior Driven Development. The workshop offered a comprehensive, hands-on introduction to behavior-driven development via an interactive demo.

Over the past decade, eXtreme Programming practices like Test-Driven Development (TDD) and Behaviour Driven Development (BDD) have fundamentally changed software development processes and, with them, how engineers work. Practitioners claim these practices have helped them significantly improve their collaboration with business, their development speed, their design & code quality, and their responsiveness to changing requirements. Software professionals across the board, from Internet startups to medical device companies to space research organizations, have embraced these practices.

This workshop explores the foundations of TDD & BDD with the help of various patterns, strategies, tools and techniques.
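As a small taste of the outside-in style, here is a sketch (my own, not from the workshop) of a BDD-flavoured test in plain Python, with the Given/When/Then structure spelled out in comments; the `Account` domain is a hypothetical example:

```python
import unittest


class Account:
    """Hypothetical domain object driven out by the scenarios below."""

    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class WithdrawCash(unittest.TestCase):
    # Feature: account holder withdraws cash
    def test_withdraw_within_balance(self):
        # Given an account with a balance of 100
        account = Account(balance=100)
        # When the account holder withdraws 30
        account.withdraw(30)
        # Then the balance should be 70
        self.assertEqual(account.balance, 70)

    def test_withdraw_beyond_balance_is_rejected(self):
        # Given an account with a balance of 100
        account = Account(balance=100)
        # When the account holder withdraws 150, then it is rejected
        with self.assertRaises(ValueError):
            account.withdraw(150)


if __name__ == "__main__":
    unittest.main()
```

Starting from the externally visible behaviour (the scenario) and working inwards to the domain object is the "outside in" part; dedicated BDD tools add a business-readable layer on top of the same idea.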

Inverting the Testing Pyramid

Tuesday, March 19th, 2013

As more and more companies move to the Cloud, they want their latest, greatest software features to be available to their users as soon as they are built. However, several issues block them from moving ahead.

One key issue is the massive amount of time it takes for someone to certify that a new feature is indeed working as expected, and to assure that the rest of the features continue to work. In spite of this long waiting cycle, we still cannot assure that our software will be free of issues. In fact, many times our assumptions about the users' needs or behavior might themselves be wrong, and this long testing cycle only helps us validate that our assumptions work as we assumed.

How can we break out of this rut & get thin slices of our features in front of our users to validate our assumptions early?

Most software organizations today suffer from what I call the "Inverted Testing Pyramid" problem. They spend most of their time and effort manually checking software. Some invest in automation, but mostly in building slow, complex, fragile end-to-end GUI tests. Very little effort is spent on building a solid foundation of unit & acceptance tests.

This over-investment in end-to-end tests is a slippery slope. Once you start down this path, you end up investing ever more time & effort in testing, for diminishing returns.

In this session, Naresh Jain will explain the key misconceptions that have led to the widespread adoption of the inverted testing pyramid, the main drawbacks of this approach, and how to turn your organization around to achieve the right testing pyramid.

The Ever-Expanding Agile and Lean Software Terminology

Sunday, July 8th, 2012
A Acceptance Criteria/Test, Automation, A/B Testing, Adaptive Planning, Appreciative inquiry
B Backlog, Business Value, Burndown, Big Visible Charts, Behavior Driven Development, Bugs, Build Monkey, Big Design Up Front (BDUF)
C Continuous Integration, Continuous Deployment, Continuous Improvement, Celebration, Capacity Planning, Code Smells, Customer Development, Customer Collaboration, Code Coverage, Cyclomatic Complexity, Cycle Time, Collective Ownership, Cross functional Team, C3 (Complexity, Coverage and Churn), Critical Chain
D Definition of Done (DoD)/Doneness Criteria, Done Done, Daily Scrum, Deliverables, Dojos, Drum Buffer Rope
E Epic, Evolutionary Design, Energized Work, Exploratory Testing
F Flow, Fail-Fast, Feature Teams, Five Whys
G Grooming (Backlog) Meeting, Gemba
H Hungover Story
I Impediment, Iteration, Inspect and Adapt, Informative Workspace, Information radiator, Immunization test, IKIWISI (I’ll Know It When I See It)
J Just-in-time
K Kanban, Kaizen, Knowledge Workers
L Last responsible moment, Lead time, Lean Thinking
M Minimum Viable Product (MVP), Minimum Marketable Features, Mock Objects, Mistake Proofing, MoSCoW Priority, Mindfulness, Muda
N Non-functional Requirements, Non-value add
O Onsite customer, Opportunity Backlog, Organizational Transformation, Osmotic Communication
P Pivot, Product Discovery, Product Owner, Pair Programming, Planning Game, Potentially shippable product, Pull-based-planning, Predictability Paradox
Q Quality First, Queuing theory
R Refactoring, Retrospective, Reviews, Release Roadmap, Risk log, Root cause analysis
S Simplicity, Sprint, Story Points, Standup Meeting, Scrum Master, Sprint Backlog, Self-Organized Teams, Story Map, Sashimi, Sustainable pace, Set-based development, Service time, Spike, Stakeholder, Stop-the-line, Sprint Termination, Single Click Deploy, Systems Thinking, Single Minute Setup, Safe Fail Experimentation
T Technical Debt, Test Driven Development, Ten minute build, Theme, Tracer bullet, Task Board, Theory of Constraints, Throughput, Timeboxing, Testing Pyramid, Three-Sixty Review
U User Story, Unit Tests, Ubiquitous Language, User Centered Design
V Velocity, Value Stream Mapping, Vision Statement, Vanity metrics, Voice of the Customer, Visual controls
W Work in Progress (WIP), Whole Team, Working Software, War Room, Waste Elimination
X xUnit
Y YAGNI (You Aren’t Gonna Need It)
Z Zero Downtime Deployment, Zen Mind

Unit Testing Dilemma: Should I Invest or Not?

Tuesday, November 1st, 2011

Every single line of code must be unit tested!

This sound advice seems rather extreme to me. IMHO, a skilled programmer pragmatically decides when to invest in unit testing.

After practicing (automated) unit testing for over a decade, I'm a strong believer in and proponent of automated unit testing. See my take on why developers should care about Unit Testing and TDD.

However, over the years I've realized that automated unit tests do have four very important costs associated with them:

  • Cost of writing the unit tests in the first place
  • Cost of running the unit tests regularly to get feedback
  • Cost of maintaining and updating the unit tests as and when required
  • Cost of understanding others' unit tests
One also starts to recognize some other, subtler costs associated with unit testing:
  • Illusion of safety: While unit tests give you a great safety net, at times they can also create an illusion of safety, leading developers to rely too heavily on unit tests alone (possibly doing more harm than good).
  • Opportunity cost: If I did not invest in this test, what else could I have done in that time? The flip side of this argument is the opportunity cost of repetitive manual testing, or worse, not testing at all.
  • Getting in the way: While unit tests help you drive your design, at times they get in the way of refactoring. Many times I've refrained from refactoring code because I was intimidated by the sheer effort of refactoring/rewriting a large number of my tests as well. (I've learned many patterns to reduce this pain over the years, but the pain still exists; see the sketch after this list.)
  • Obscures a simpler design: Many times I find myself so engrossed in my tests and the design they lead to that I become blind to a better, simpler design. Sometimes, even if I realize halfway through that there might be an alternative design, it's harder to throw away the code because I've already invested in a solution (plus all its tests). In retrospect, this always seems like a bad choice.
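To make the "getting in the way" cost concrete, here is a contrived sketch (my own, with hypothetical names): the first test pins down how the price is computed, so a pure refactoring such as inlining the tax rule breaks it even though behaviour is unchanged; the second test checks only behaviour and survives:

```python
from unittest import mock


class PriceCalculator:
    def __init__(self, tax_rule):
        self._tax_rule = tax_rule

    def price_with_tax(self, net):
        # Internal detail: delegates the tax computation to a collaborator.
        return net + self._tax_rule.tax_for(net)


def test_pinned_to_the_interaction():
    # Couples the test to HOW the price is computed;
    # inlining the tax rule would break this test.
    tax_rule = mock.Mock()
    tax_rule.tax_for.return_value = 5
    assert PriceCalculator(tax_rule).price_with_tax(100) == 105
    tax_rule.tax_for.assert_called_once_with(100)


def test_checks_only_behaviour():
    # Survives the same refactoring untouched.
    class FlatTax:
        def tax_for(self, net):
            return 5

    assert PriceCalculator(FlatTax()).price_with_tax(100) == 105


if __name__ == "__main__":
    test_pinned_to_the_interaction()
    test_checks_only_behaviour()
    print("ok")
```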
If we consider all these factors, would you agree with me that:
Automated unit testing is extremely important, but each developer has to make a conscious, pragmatic decision about when to invest in unit testing.
It's easy to say "always write unit tests," but it takes years of first-hand experience to judge where to draw the line.

Importance of Unit Testing and Test Driven Development (TDD)

Tuesday, November 1st, 2011

Why should developers care about automated unit tests?

  • Keeps you out of the (time hungry) debugger!
  • Reduces bugs in new features and in existing features
  • Reduces the cost of change
  • Improves design
  • Encourages refactoring
  • Builds a safety net to defend against other programmers
  • Is fun
  • Forces you to slow down and think
  • Speeds up development by eliminating waste
  • Reduces fear

And how does TDD take it to the next level?

  • Improves productivity by
    • minimizing time spent debugging
    • reducing the need for manual (monkey) checking by developers and testers
    • helping developers maintain focus
    • reducing waste: hand-offs
  • Improves communication by
    • creating a living, up-to-date specification
    • communicating design decisions
  • Learning: listen to your code
  • Baby steps: slow down and think
  • Improves confidence through
    • testable code by design + a safety net
    • loosely-coupled design
    • refactoring
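As a reminder of what the rhythm behind "baby steps" looks like, here is a minimal red-green sketch (my own choice of kata, not from the post) using the classic FizzBuzz exercise; each test was notionally written first and watched fail before the code above it was extended:

```python
import unittest


def fizzbuzz(n):
    # Green: the simplest code that passes the tests written so far.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


class FizzBuzzTest(unittest.TestCase):
    # Red: each test below was written first and watched fail.
    def test_plain_numbers_are_echoed(self):
        self.assertEqual(fizzbuzz(1), "1")

    def test_multiples_of_three_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five_buzz(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both_fizzbuzz(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")


if __name__ == "__main__":
    unittest.main()
```

The refactor step (tidying the code while the suite stays green) closes each loop; over time the suite becomes the living specification mentioned above.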

Inverting the Testing Pyramid

Tuesday, February 1st, 2011

As more and more companies move to the Cloud, they want their latest, greatest software features to be available to their users as soon as they are built. However, several issues block them from moving ahead.

One key issue is the massive amount of time it takes for someone to certify that a new feature is indeed working as expected, and to assure that the rest of the features continue to work. In spite of this long waiting cycle, we still cannot assure that our software will be free of issues. In fact, many times our assumptions about the users' needs or behavior might themselves be wrong, and this long testing cycle only helps us validate that our assumptions work as we assumed.

How can we break out of this rut & get thin slices of our features in front of our users to validate our assumptions early?

Most software organizations today suffer from what I call the "Inverted Testing Pyramid" problem. They spend most of their time and effort manually checking software. Some invest in automation, but mostly in building slow, complex, fragile end-to-end GUI tests. Very little effort is spent on building a solid foundation of unit & acceptance tests.

This over-investment in end-to-end tests is a slippery slope. Once you start down this path, you end up investing ever more time & effort in testing, for diminishing returns.

They end up with a majority (80-90%) of their tests being end-to-end GUI tests, some effort spent on so-called "integration tests" (typically 5-15%), and a shocking 1-5% of their tests being unit/micro tests.

Why is this a problem?

  • The base of the pyramid is constructed from end-to-end GUI tests, which are famous for their fragility and complexity. A small pixel-level change in the location of a UI component can result in a test failure. GUI tests are also very timing-sensitive, sometimes resulting in random failures (false negatives).
  • To make matters worse, most teams struggle to automate their end-to-end tests early on, which results in a huge amount of time spent on manual regression testing. It's quite common to find test teams struggling to catch up with development, and this lag causes many other hard development problems.
  • The number of end-to-end tests required to get good coverage is much higher, and the tests themselves more complex, than a combination of unit tests plus a few selected end-to-end tests. (BEWARE: Don't be Seduced by Code Coverage Numbers)
  • Maintaining a large number of end-to-end tests is quite a nightmare for teams. Some core issues with end-to-end tests:
    • It requires deep domain knowledge and high technical skills to write quality end-to-end tests.
    • They take a lot of time to execute.
    • They are relatively resource intensive.
    • Testing negative paths with end-to-end tests is very difficult (or impossible) compared to lower-level tests; see the sketch after this list.
    • When an end-to-end test fails, we don’t get pin-pointed feedback about what went wrong.
    • They are more tightly coupled to the environment and have external dependencies, and hence are fragile; slight changes to the environment can cause the tests to fail (false negatives).
    • From a refactoring point of view, they don't give developers the same comfort that unit tests do.
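To make the negative-path and pin-pointed-feedback points concrete, here is a sketch (my own illustration, with hypothetical names): a scenario such as "the payment gateway times out" is trivial to exercise at the unit level with a stubbed collaborator, and when it fails the traceback points at the exact method, whereas driving the same scenario through a GUI would require making a live gateway time out on cue:

```python
from unittest import mock


class PaymentError(Exception):
    pass


class CheckoutService:
    def __init__(self, gateway):
        self._gateway = gateway

    def pay(self, amount):
        try:
            return self._gateway.charge(amount)
        except TimeoutError:
            # Negative path: translate a low-level timeout into a domain error.
            raise PaymentError("payment gateway timed out")


def test_gateway_timeout_is_reported_as_payment_error():
    # The stub makes the failure mode trivial to trigger on demand.
    gateway = mock.Mock()
    gateway.charge.side_effect = TimeoutError()
    try:
        CheckoutService(gateway).pay(100)
        assert False, "expected PaymentError"
    except PaymentError:
        pass


if __name__ == "__main__":
    test_gateway_timeout_is_reported_as_payment_error()
    print("ok")
```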

Again don’t get me wrong. I’m not suggesting end-to-end integration tests are a scam. I certainly think they have a place and time.

Imagine an automobile company building an automobile without testing/checking the bolts and nuts, all the way up to the engine, transmission, brakes, etc., and then just assembling the whole thing somehow and asking you to drive it. Would you test-drive that automobile? Yet you will see many software companies using this approach to build software.

What I propose, and help many organizations achieve, is the right balance of end-to-end tests, acceptance tests and unit tests. I call this "Inverting the Testing Pyramid." [Inspired by Jonathan Wilson's book Inverting The Pyramid: The History Of Football Tactics.]

[Image: Inverting the Testing Pyramid]

In a later blog post I'll highlight various tactics used to invert the pyramid.
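One such tactic, sketched here purely as my own illustration (assuming pytest and a custom `e2e` marker registered in pytest.ini), is to tag the few end-to-end tests so the large, fast unit suite runs on every commit while the small end-to-end suite runs on a slower cadence:

```python
import pytest


def test_total_is_sum_of_item_prices():
    # Fast micro test: runs on every commit and pinpoints failures.
    assert sum([10, 20]) == 30


@pytest.mark.e2e  # registered in pytest.ini: "markers = e2e: slow end-to-end tests"
def test_full_checkout_through_the_browser():
    # Slow end-to-end test: run by a separate, less frequent CI job.
    pass
```

With this split, `pytest -m "not e2e"` gives per-commit feedback in seconds, while `pytest -m e2e` can run nightly or before a release.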

Update: I recently came across Alister Scott's blog post, Introducing the software testing ice-cream cone (anti-pattern). I strongly suggest you read it.

Who needs a separate QA Team?

Wednesday, January 14th, 2009

Have you come across developers who think that having a separate Quality Assurance (QA) team, which tests (manually or auto-magically) their code/software at the end of an iteration/release, will really help them? Personally, I think this style of software development is not just dangerous but also harmful to developers' growth.

Having a QA team that tests (inspects) the software after it's built gives me the impression that you can slap inspection onto the end of any process and thereby improve the quality of your product. Unfortunately, things don't work that way. What you want to do is build quality into the process rather than inspect (check) at the end of it to assure quality.

Let me give you an example of what I mean by "building quality into the process".

Back in the good old days, it was typical for a cloth manufacturer to have 10-15 power looms. They would set up the looms at the beginning of the day and let them run all day. At the end of the day, they would take all the cloth the looms had produced and hand it over to another team (a separate QA team), who would check each piece of cloth for defects.

There were multiple sources of defects. At times a thread would break, creating a defect in the cloth. At times insects would sit on the thread and get woven into the cloth, also creating a defect. And so on. Checking the cloth at the end of the day was turning out to be very expensive for the cloth manufacturers. Essentially, they were trying to create quality products by inspecting the cloth at the end of the process. This is similar to the QA process in a waterfall project.

Since this was not working out, they hired a lot of people to watch the looms; in the best case, one person per loom, watching for defects. As soon as a thread broke, they would stop the loom, fix the thread, and continue. This certainly helped reduce defects, but it was not an optimal solution, for several reasons:

  • It was turning out to be quite expensive to have one person per loom.
  • People at the looms would take breaks during the day and would either stop the loom during their break (hurting production) or take the risk of letting some defects slip through.
  • It became very dependent on how closely these folks watched the loom. In other words, the quality of the cloth depended heavily on the capability (good eyesight and keen attention) of the person inspecting the loom.
  • and so on

As you can see, what they were trying to do here was to move quality assurance upstream: to build quality into the manufacturing process. This is similar to the traditional Agile process, where you have a couple of dedicated QAs on each team who check for defects during or at the end of the iteration.

The next step, which really helped fix this issue to a great extent, was a groundbreaking innovation by Toyoda Looms. As early as 1885, Sakichi Toyoda was working on improving looms.

[Image: Toyoda Loom]

One of his initial innovations was to introduce a small lever on each thread. As soon as a thread broke, the lever would trip and jam the loom. He went on to introduce noteworthy inventions such as automatic thread replenishment without any drop in weaving speed, a non-stop shuttle-change motion, and so on. Nowadays you can find looms with sensors that detect insects or other dirt on the threads, and more.

Basically, what happened in the loom industry is that various small mechanisms were made part of the loom itself to prevent defects from being introduced in the first place. In other words, as and when they found issues with the process, they mistake-proofed it by stopping the defect at its source. They built quality into the process by shifting their focus from Quality Assurance to Quality Control. This is what you see in some really good product companies that don't have a separate QA team: they focus on how to eliminate or reduce the chances of introducing defects rather than on how to detect them (which is wasteful).
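To translate the loom analogy into code (my own sketch, with hypothetical names): quality control means a defect cannot be produced or flow downstream, like the thread-break lever, whereas quality assurance inspects finished output after the fact:

```python
class Email:
    """Mistake-proofing at the source: an invalid Email can never be constructed."""

    def __init__(self, address: str):
        if "@" not in address:
            # Stop the line immediately, like the loom's thread-break lever.
            raise ValueError(f"invalid email address: {address!r}")
        self.address = address


def quality_assurance_style(raw_addresses):
    # Inspection at the end: defects are produced first, then filtered out.
    produced = list(raw_addresses)
    return [a for a in produced if "@" in a]


def quality_control_style(raw_addresses):
    # Defects cannot flow downstream; a bad input fails loudly at its source.
    return [Email(a) for a in raw_addresses]


if __name__ == "__main__":
    print(quality_assurance_style(["a@example.com", "oops"]))  # silently drops "oops"
    try:
        quality_control_style(["a@example.com", "oops"])
    except ValueError as e:
        print("stopped at source:", e)  # the defect never reaches downstream
```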

Hence it's important that we focus on Quality Control rather than Quality Assurance. The terms "quality assurance" and "quality control" are often used interchangeably to refer to ways of ensuring the quality of a service or product. The terms, however, have different meanings.

Assurance: The act of giving confidence, the state of being certain or the act of making certain.

Quality assurance: The planned and systematic activities implemented in a quality system so that quality requirements for a product or service will be fulfilled.

Control: An evaluation to indicate needed corrective responses; the act of guiding a process in which variability is attributable to a constant system of chance causes.

Quality control: The observation techniques and activities used to fulfill requirements for quality.

So think about it: do you really need a separate QA team? What are you doing along the lines of Quality Control?

IMHO, in the late '90s, eXtreme Programming really pushed the envelope on this front. With wonderful practices like Automated Acceptance Testing, Test-Driven Development, Pair Programming and Continuous Integration, I think we are finally getting closer. Having continuous/frequent working sessions with your customers/users is another great way of building quality into the process.

Lean Startup practices like Continuous Deployment and A/B Testing take this one step further and are really effective in tightening the feedback cycle for measuring user behavior in real context.

As more and more companies embrace these methods, it's becoming clear that we can do away with the concept of a separate QA team or an independent testing team.

Richard Sharpe conducted a great interview with Jean Tabaka and Bob Martin on the lean concept of "ceasing inspections". In this 7-minute video, Jean and Bob support the idea of preventing defects upfront rather than catching them at the end: Quality Assurance vs Quality Control.

Licensed under a Creative Commons License.