Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports
     
The Ever-Expanding Agile and Lean Software Terminology

Sunday, July 8th, 2012
A Acceptance Criteria/Test, Automation, A/B Testing, Adaptive Planning, Appreciative inquiry
B Backlog, Business Value, Burndown, Big Visible Charts, Behavior Driven Development, Bugs, Build Monkey, Big Design Up Front (BDUF)
C Continuous Integration, Continuous Deployment, Continuous Improvement, Celebration, Capacity Planning, Code Smells, Customer Development, Customer Collaboration, Code Coverage, Cyclomatic Complexity, Cycle Time, Collective Ownership, Cross functional Team, C3 (Complexity, Coverage and Churn), Critical Chain
D Definition of Done (DoD)/Doneness Criteria, Done Done, Daily Scrum, Deliverables, Dojos, Drum Buffer Rope
E Epic, Evolutionary Design, Energized Work, Exploratory Testing
F Flow, Fail-Fast, Feature Teams, Five Whys
G Grooming (Backlog) Meeting, Gemba
H Hungover Story
I Impediment, Iteration, Inspect and Adapt, Informative Workspace, Information radiator, Immunization test, IKIWISI (I’ll Know It When I See It)
J Just-in-time
K Kanban, Kaizen, Knowledge Workers
L Last responsible moment, Lead time, Lean Thinking
M Minimum Viable Product (MVP), Minimum Marketable Features, Mock Objects, Mistake Proofing, MOSCOW Priority, Mindfulness, Muda
N Non-functional Requirements, Non-value add
O Onsite customer, Opportunity Backlog, Organizational Transformation, Osmotic Communication
P Pivot, Product Discovery, Product Owner, Pair Programming, Planning Game, Potentially shippable product, Pull-based-planning, Predictability Paradox
Q Quality First, Queuing theory
R Refactoring, Retrospective, Reviews, Release Roadmap, Risk log, Root cause analysis
S Simplicity, Sprint, Story Points, Standup Meeting, Scrum Master, Sprint Backlog, Self-Organized Teams, Story Map, Sashimi, Sustainable pace, Set-based development, Service time, Spike, Stakeholder, Stop-the-line, Sprint Termination, Single Click Deploy, Systems Thinking, Single Minute Setup, Safe Fail Experimentation
T Technical Debt, Test Driven Development, Ten minute build, Theme, Tracer bullet, Task Board, Theory of Constraints, Throughput, Timeboxing, Testing Pyramid, Three-Sixty Review
U User Story, Unit Tests, Ubiquitous Language, User Centered Design
V Velocity, Value Stream Mapping, Vision Statement, Vanity metrics, Voice of the Customer, Visual controls
W Work in Progress (WIP), Whole Team, Working Software, War Room, Waste Elimination
X xUnit
Y YAGNI (You Aren’t Gonna Need It)
Z Zero Downtime Deployment, Zen Mind

Is Code Coverage, Cyclomatic Complexity or Defect Density a good Measure of Quality?

Sunday, October 9th, 2011

In software, Quality is one of those badly abused terms; it is getting harder and harder to define what it really means. I think we all have a sense of quality. When we see something in a specific context, we can say it's high quality or low quality, but it's hard to define (and hence measure) what absolute quality really is.

You can measure some things about quality, but don't fool yourself into believing that what you measure IS quality.

Quality is subjective, relative and contextual.

Some might say things like code coverage, cyclomatic complexity and defect density are good measures of quality. I would argue that those are attributes/aspects of quality, but not quality itself (symptoms, not the disease). It's a classic case of the Fundamental Attribution Error. (If you go to France and the first 50 Frenchmen you see wear glasses, you cannot conclude that all Frenchmen wear glasses. Nor can you conclude that if I wear glasses, I'll also be French.)

BTW, people already differentiate between Internal/Intrinsic Quality and External/Extrinsic Quality. As if this weren't enough to complicate things, evangelists would like to further slice and dice quality along different parameters (structural, functional, UX, etc.).

Some anecdotes:

Impact of Continuous Integration on Team Culture

Sunday, April 17th, 2011

Better productivity and collaboration via improved feedback and high-quality information.

(Figure: Continuous Integration)

Impact of Continuous Integration on Team Culture:

  • Encourages an Evolutionary Design and Continuous Improvement culture
  • On complex projects, forces a nicely decoupled design such that each module can be independently tested. Also ensures that in production you can support different versions of each module.
  • Team takes shared ownership of their development and build process
  • The source control trunk is in an always-working state (avoids multiple-branch issues)
    • No developer is blocked because they can’t get stable code
  • Developers break down work into small, end-to-end, testable slices and check in multiple times a day
    • Developers are up-to-date with other developers' changes
    • Team catches issues at the source and avoids last minute integration nightmares
    • Developers get rapid feedback once they check-in their code
      • Builds are optimized and parallelized for speed
        • Builds are incremental in nature (not big bang over-night builds)
      • Builds run all the automated tests (may be staged) to give realistic feedback (see the test-staging sketch after this list)
        • Captures and visualizes build results and logs very effectively
      • Displays trends for various source code quality metrics
        • Code coverage, cyclomatic complexity, coding convention violations, version control activity, bug counts, etc.
  • Influences the right behavior in the team by acting as an Information Radiator in the team area
    • Provide clear visual feedback about the build status
  • Developers ask for an easy way to run and debug builds locally (or remotely)
  • Broken builds are rare. However, when a build does break, developers fix it rapidly
    • Build results are intelligently archived
    • Easy navigation between various build versions
      • Easy visualization and comparison of the change sets
  • Large monolithic builds are broken into smaller, self-contained builds with a clear build promotion process
  • Complete traceability exists
    • Version Control, Project & Requirements Management, Bug Tracking and Build systems are completely integrated.
  • CI page becomes the project dashboard for everyone (devs, testers, managers, etc.).
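
As an aside, the "staged" test runs mentioned above can be wired up quite simply. Below is a minimal sketch, assuming JUnit 4; the class and interface names (StagedBuildExample, FastTests, SlowTests, PriceCalculatorTest) are hypothetical. The commit build runs only the fast, in-memory tests for rapid feedback, while a later pipeline stage runs everything.

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    public class StagedBuildExample {

        public interface FastTests { }  // marker: commit-stage tests
        public interface SlowTests { }  // marker: later-stage tests

        public static class PriceCalculatorTest {
            @Test
            @Category(FastTests.class)
            public void appliesDiscount() { /* quick, in-memory check */ }

            @Test
            @Category(SlowTests.class)
            public void persistsOrderEndToEnd() { /* slower, hits a real database */ }
        }

        // The commit build runs only this suite; a secondary build runs all tests.
        @RunWith(Categories.class)
        @Categories.IncludeCategory(FastTests.class)
        @Suite.SuiteClasses(PriceCalculatorTest.class)
        public static class CommitStageSuite { }
    }

The CI server can then run CommitStageSuite on every check-in and the full suite in a later stage, keeping the feedback loop short without dropping the slower tests.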

Any other impact you think is worth highlighting?

Don’t be Seduced by Code Coverage Numbers

Tuesday, February 1st, 2011

I once worked for a manager (a.k.a. Scrum Master) who insisted that we have at least 85% code coverage on our project. If we did not meet the targeted coverage numbers in a sprint, the whole team would face the consequences during the upcoming performance appraisals. In spite of our trying to talk her out of it, she insisted. I would like to believe that the manager had good intent and was trying to encourage developers to write clean, tested code.

However, the ground reality was that my colleagues were not convinced about automated developer testing, they lacked the basic knowledge and skills to do effective developer testing, and the manager always kept them under high pressure. (Otherwise people would slack, she said.)

Now what do you expect to happen when managers push a (poorly understood) metric down developers' throats?

Humans are very good at gaming systems, and we did a wonderful job of it.

  • Some developers wrote automated tests with ZERO assert statements
  • Others wrote complex, fragile test code which was an absolute maintenance nightmare. Their coverage numbers looked impressive.
  • My pair and I wrote a small test which would simply crawl the entire code base; with it we achieved 99% code coverage. (1% was intentionally left out, so we didn't look suspicious. See the sketch after this list.)
  • And so on…
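
For the curious, the coverage-crawling "test" was conceptually as simple as the sketch below (a hedged reconstruction, not the actual code; class names are hypothetical): reflectively instantiate classes and invoke their no-argument methods, swallowing every exception, so thousands of lines execute while not a single behavior is verified.

    import java.lang.reflect.Method;
    import org.junit.Test;

    public class CoverageCrawlerTest {

        // Hypothetical list; the real version scanned the whole classpath.
        private static final Class<?>[] PRODUCTION_CLASSES = {
                // com.example.Order.class, com.example.PriceCalculator.class, ...
        };

        @Test
        public void crawlEverything() {
            for (Class<?> clazz : PRODUCTION_CLASSES) {
                try {
                    Object instance = clazz.getDeclaredConstructor().newInstance();
                    for (Method method : clazz.getMethods()) {
                        if (method.getParameterTypes().length == 0) {
                            method.invoke(instance); // execute lines, assert nothing
                        }
                    }
                } catch (Exception ignored) {
                    // swallow everything: this "test" can never fail
                }
            }
            // note the total absence of assert statements
        }
    }

High coverage, zero verification: exactly the gap between what the metric measures and what the manager thought it measured.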

The manager was very happy and so was the team. This went on for a few sprints. One could find our manager showing off to her colleagues about how much the team respected her word and what command she had over us.

Alas, her happiness lasted only a few sprints. One fine sprint demo day, I convinced the team to showcase our dirty little secret to all the stakeholders and management.

The event did shake up the management. It also sent out a clear message.

Code coverage measures which lines of code got executed (intentionally or accidentally) when you run the program, or the tests against the program. Code coverage tells you nothing about the quality of your tests, and hence nothing about the quality of the code. Code coverage, along with Cyclomatic Complexity, can be used by the team to guide itself, but not by management to judge the team.

Code Analysis Tools for C/C++

Tuesday, August 31st, 2010

What tools do you use for Code Analysis of C/C++ projects?

This is a common question a lot of teams ask when we discuss Continuous Integration in C/C++.

I would recommend the following tools:

UPDATE: I strongly recommend looking at CppDepend (commercial), a one-stop solution for all kinds of metrics. It has some very cool/useful features like its Code Query Language, custom build reporting, build comparison, and great visualization diagrams for dependencies, treemaps, etc.

The Wikipedia page on Static Code Analysis Tools has a list of many more tools.

Refactoring Teaser IV Solution Summary Report

Sunday, September 27th, 2009
Topic: Project Size

Before (Production Code):
  • Packages = 4
  • Classes = 30
  • Methods = 90 (average 3/class)
  • LOC = 480 (average 5.33/method and 16/class)
  • Average Cyclomatic Complexity/Method = 1.89

Before (Test Code):
  • Packages = 3
  • Classes = 19
  • Methods = 106
  • LOC = 1379

After (Production Code):
  • Packages = 2
  • Classes = 7
  • Methods = 24 (average 3.43/class)
  • LOC = 168 (average 6.42/method and 18.56/class)
  • Average Cyclomatic Complexity/Method = 1.83

After (Test Code):
  • Packages = 1
  • Classes = 4
  • Methods = 53
  • LOC = 243

Topic: Code Coverage

Before:
  • Line Coverage: 88%
  • Block Coverage: 89%

After:
  • Line Coverage: 95%
  • Block Coverage: 94%

Topic: Cyclomatic Complexity
(Treemap images: Cyclomatic Complexity before and after)

Topic: Coding Convention Violations
  • Before = 85
  • After = 0

Signs of a Healthy Codebase

Tuesday, August 11th, 2009

I've become a big fan of displaying metrics using treemaps. Julias Shaw's Panopticode is a great tool for producing useful design metrics in treemap format for your Java project.

In the past, I've used these graphs to show before-and-after snapshots of various projects following a small refactoring effort. In this blog I want to show you a healthy project's codebase and highlight some things that make me feel comfortable about it. (Actually, there is not much to say; a picture is worth a thousand words.)

Following is the code coverage report from a project:

(Code coverage treemap: papu_codecoverage)

A couple of quick observations:

  • The majority of the code has coverage over 75%. (Our goal is not to have every single class at 100% code coverage; code coverage says nothing about the quality of your tests.)
  • There is a decent distribution of code across packages, classes and methods. (No large boxes standing out.)
  • You don't see large black patches (the ones you see are classes that were mocked out for testing).

Let's look at the complexity graph:

(Cyclomatic complexity treemap: papu_cyclomatic_complexity)
  • Except for a couple of methods, most have Cyclomatic Complexity under 5 (see the sketch after this list for how that number is counted).
  • You don’t see large red or black boxes which are clear indicators of complex code.
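
To make the threshold concrete, here is a hedged illustration (hypothetical code, not from this project) of how Cyclomatic Complexity is counted: one base path, plus one for each decision point (if, loop, case, &&, ||). The method below sits exactly at complexity 5.

    public class ShippingCost {

        // Base path = 1; decision points: if, ||, for, if = +4.
        // Cyclomatic Complexity = 5.
        public static int costInCents(int[] itemWeights, boolean expedited) {
            if (itemWeights == null || itemWeights.length == 0) { // +2 (if and ||)
                return 0;
            }
            int total = 0;
            for (int weight : itemWeights) {                      // +1
                total += weight * 3;
            }
            if (expedited) {                                      // +1
                total *= 2;
            }
            return total;
        }
    }

Anything much above that usually means the method is juggling too many concerns and is a candidate for extraction.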

Panopticode combined with Checkstyle, FindBugs and JDepend can give you a lot more info to check the real pulse of your codebase.

Another Project Rescue Report

Monday, February 9th, 2009

Some time back, I spent one week helping a project (a server written in Java) clear its Technical Debt. The codebase is tiny because it leverages a lot of existing server frameworks to do its job. This server handles extremely high volumes of data & requests and is a very important part of our server infrastructure. Here are some results:

Topic: Project Size

Before (Production Code):
  • Packages = 1
  • Classes = 4
  • Methods = 15 (average 3.75/class)
  • LOC = 172 (average 11.47/method and 43/class)
  • Average Cyclomatic Complexity/Method = 3.27

Before (Test Code):
  • Packages = 0
  • Classes = 0
  • Methods = 0
  • LOC = 0

After (Production Code):
  • Packages = 4
  • Classes = 13
  • Methods = 68 (average 5.23/class)
  • LOC = 394 (average 5.79/method and 30.31/class)
  • Average Cyclomatic Complexity/Method = 1.58

After (Test Code):
  • Packages = 6
  • Classes = 11
  • Methods = 90
  • LOC = 458

Topic: Code Coverage

Before (old code coverage report):
  • Line Coverage: 0%
  • Block Coverage: 0%

After (new code coverage report):
  • Line Coverage: 96%
  • Block Coverage: 97%

Topic: Cyclomatic Complexity
(Treemap images: Cyclomatic Complexity before and after refactoring)

Topic: Obvious Dead Code

Before, the following public methods:
  • class DatabaseLayer: releasePool()
  (Total: 1 method in 1 class)

After, the following public methods:
  • class DFService: overloaded constructor
  (Total: 1 method in 1 class)
  Note: This method is required by the tests.

Topic: Automation (Version Control Usage)

Before:
  • Average Commits Per Day = 0
  • Average # of Files Changed Per Commit = 12

After:
  • Average Commits Per Day = 7
  • Average # of Files Changed Per Commit = 4

Topic: Coding Convention Violations
  • Before = 96
  • After = 0

Another similar report.

Project Rescue Report

Monday, February 2nd, 2009

Recently I spent 2 weeks helping a project clear its Technical Debt. Here are some results:

Topic: Project Size

Before (Production Code):
  • Packages = 7
  • Classes = 23
  • Methods = 104 (average 4.52/class)
  • LOC = 912 (average 8.77/method and 39.65/class)
  • Average Cyclomatic Complexity/Method = 2.04

Before (Test Code):
  • Packages = 1
  • Classes = 10
  • Methods = 92
  • LOC = 410

After (Production Code):
  • Packages = 4
  • Classes = 20
  • Methods = 89 (average 4.45/class)
  • LOC = 627 (average 7.04/method and 31.35/class)
  • Average Cyclomatic Complexity/Method = 1.79

After (Test Code):
  • Packages = 4
  • Classes = 18
  • Methods = 120
  • LOC = 771

Topic: Code Coverage

Before refactoring:
  • Line Coverage: 46%
  • Block Coverage: 43%

After refactoring:
  • Line Coverage: 94%
  • Block Coverage: 96%

Topic: Cyclomatic Complexity
(Treemap images: Cyclomatic Complexity before and after refactoring)

Topic: Obvious Dead Code

Before, the following public methods:
  • class CryptoUtils: String getSHA1HashOfString(String), String encryptString(String), String decryptString(String)
  • class DbLogger: writeToTable(String, String)
  • class DebugUtils: String convertListToString(java.util.List), String convertStrArrayToString(String)
  • class FileSystem: int getNumLinesInFile(String)
  (Total: 7 methods in 4 classes)

After, the following public methods:
  • class BackgroundDBWriter: stop()
  (Total: 1 method in 1 class)
  Note: This method is required by the tests.

Topic: Automation (Version Control Usage)

Before:
  • Average Commits Per Day = 1
  • Average # of Files Changed Per Commit = 2

After:
  • Average Commits Per Day = 4
  • Average # of Files Changed Per Commit = 9

Note: Since we were heavily refactoring, many files were touched in each commit. But the commit frequency was fairly high, which ensured we were not taking big leaps.

Topic: Coding Convention Violations
  • Before = 976
  • After = 0

Something interesting to watch is how the production code becomes more crisp (fewer packages, classes and LOC) and how the amount of test code grows to exceed the production code.

Another similar report.

Cobertura Gotchas

Friday, October 3rd, 2008

Today I spent a good 3 hours troubleshooting code coverage issues on a legacy project. I've been using Cobertura for a good 3 years now, hence I decided to use it to help a team working on a legacy project. What looked like a simple 10-minute job ended up taking forever.

To start off, the team did not have an automated build, so I quickly put an Ant script together. Over the previous 2 days I had helped them write some acceptance tests using FitNesse, so I set up the build file to run the FitNesse tests as part of the build. Once I had the testing in place, I wanted to see what kind of code coverage we had, so we started adding the Cobertura Ant tasks to the build. This was a fairly trivial task. But surprisingly, cobertura-report kept displaying N/A for Line and Branch coverage. Clearly something was wrong.

After spending a good amount of time, I discovered that you need to compile the source code with the debug=true option, else Cobertura does not generate any coverage numbers. What's amazing is that it does not complain about this (in the normal mode), nor does Cobertura's documentation talk about it. Only when I ran the Ant build in verbose mode did it show the following warning:

[cobertura-instrument] WARN visitEnd, No line number information found for class

That's when it occurred to me that I needed to recompile my source code with debug information.

Once I solved this problem, I hit the next roadblock. Even though I had fixed the debug=true issue, cobertura-report was still displaying N/A for Line and Branch coverage. I quickly wrote a dummy unit test and that seemed to work; the Cobertura report started showing some numbers. But I was not able to generate any coverage numbers from the main Java application (a server) that I was trying to test.

I suspected a problem with the shutdown hook not being executed correctly to flush the coverage numbers to the .ser file. It turned out that if you have a server started as a daemon process in a forked JVM in the build, you need a way to shut that server down gracefully. So I ended up writing a shutdown command for the server which would basically do a System.exit(0). One needs to explicitly call the shutdown script before calling the cobertura-report target.
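
Here is a minimal sketch of the idea (the class name and port are hypothetical, not the actual code): the forked server listens on a side port, and the first connection makes it call System.exit(0). A normal exit lets the JVM run its shutdown hooks, including the one Cobertura registers to write out the coverage data.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Hypothetical shutdown listener started inside the forked server JVM.
    public class ShutdownListener implements Runnable {

        private static final int SHUTDOWN_PORT = 9999; // made up for illustration

        public static void install() {
            Thread listener = new Thread(new ShutdownListener(), "shutdown-listener");
            listener.setDaemon(true);
            listener.start();
        }

        @Override
        public void run() {
            try (ServerSocket serverSocket = new ServerSocket(SHUTDOWN_PORT)) {
                // Block until anything connects to the shutdown port...
                try (Socket ignored = serverSocket.accept()) {
                    // ...then exit normally (unlike kill -9), so the JVM runs its
                    // shutdown hooks and Cobertura flushes cobertura.ser to disk.
                    System.exit(0);
                }
            } catch (IOException e) {
                throw new RuntimeException("Shutdown listener failed", e);
            }
        }
    }

The build's "shutdown script" then just opens a connection to that port before the cobertura-report target runs.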

Finally, after 3 hours, there was a ray of hope in my life when I saw a good 86% code coverage. This meant I could now go and refactor the code till I dropped dead.

Licensed under a Creative Commons License