Thursday, October 10th, 2013
At the Agile Goa conference, I ran a small workshop to help participants understand the core techniques one should master to effectively practice evolutionary design while solving real-world problems.
Key takeaways:
- Eliminate Noise – Distill down the crux of the problem
- Divide and Conquer – Focus on one scenario at a time and incrementally build your solution
- Add Constraints – Add constraints to further simplify the problem
- Simple Design – Find the simplest possible solution to solve the current scenario at hand
- Refactor – Pause and look for a much simpler alternative
- Be ready to throw away your solution and start again
Sunday, July 8th, 2012
- Acceptance Criteria/Test, Automation, A/B Testing, Adaptive Planning, Appreciative Inquiry
- Backlog, Business Value, Burndown, Big Visible Charts, Behavior Driven Development, Bugs, Build Monkey, Big Design Up Front (BDUF)
- Continuous Integration, Continuous Deployment, Continuous Improvement, Celebration, Capacity Planning, Code Smells, Customer Development, Customer Collaboration, Code Coverage, Cyclomatic Complexity, Cycle Time, Collective Ownership, Cross-functional Team, C3 (Complexity, Coverage and Churn), Critical Chain
- Definition of Done (DoD)/Doneness Criteria, Done Done, Daily Scrum, Deliverables, Dojos, Drum Buffer Rope
- Epic, Evolutionary Design, Energized Work, Exploratory Testing
- Flow, Fail-Fast, Feature Teams, Five Whys
- Grooming (Backlog) Meeting, Gemba
- Impediment, Iteration, Inspect and Adapt, Informative Workspace, Information Radiator, Immunization Test, IKIWISI (I’ll Know It When I See It)
- Kanban, Kaizen, Knowledge Workers
- Last Responsible Moment, Lead Time, Lean Thinking
- Minimum Viable Product (MVP), Minimum Marketable Features, Mock Objects, Mistake Proofing, MOSCOW Priority, Mindfulness, Muda
- Non-functional Requirements, Non-value Add
- Onsite Customer, Opportunity Backlog, Organizational Transformation, Osmotic Communication
- Pivot, Product Discovery, Product Owner, Pair Programming, Planning Game, Potentially Shippable Product, Pull-based Planning, Predictability Paradox
- Quality First, Queuing Theory
- Refactoring, Retrospective, Reviews, Release Roadmap, Risk Log, Root Cause Analysis
- Simplicity, Sprint, Story Points, Standup Meeting, Scrum Master, Sprint Backlog, Self-Organized Teams, Story Map, Sashimi, Sustainable Pace, Set-based Development, Service Time, Spike, Stakeholder, Stop-the-line, Sprint Termination, Single Click Deploy, Systems Thinking, Single Minute Setup, Safe Fail Experimentation
- Technical Debt, Test Driven Development, Ten Minute Build, Theme, Tracer Bullet, Task Board, Theory of Constraints, Throughput, Timeboxing, Testing Pyramid, Three-Sixty Review
- User Story, Unit Tests, Ubiquitous Language, User Centered Design
- Velocity, Value Stream Mapping, Vision Statement, Vanity Metrics, Voice of the Customer, Visual Controls
- Work in Progress (WIP), Whole Team, Working Software, War Room, Waste Elimination
- YAGNI (You Aren’t Gonna Need It)
- Zero Downtime Deployment, Zen Mind
Sunday, April 17th, 2011
Better productivity and collaboration via improved feedback and high-quality information.
- Encourages an Evolutionary Design and Continuous Improvement culture
- On complex projects, forces a nicely decoupled design such that each module can be independently tested. Also ensures that in production you can support different versions of each module.
- Team takes shared ownership of their development and build process
- The source control trunk is in an always-working-state (avoid multiple branch issues)
- No developer is blocked because they can’t get stable code
- Developers break work down into small, end-to-end, testable slices and check in multiple times a day
- Developers stay up to date with other developers’ changes
- The team catches issues at the source and avoids last-minute integration nightmares
- Developers get rapid feedback as soon as they check in their code
- Builds are optimized and parallelized for speed
- Builds are incremental in nature (not big bang over-night builds)
- Builds run all the automated tests (may be staged) to give realistic feedback
- Captures and visualizes build results and logs very effectively
- Displays trends for various source-code quality metrics
- Code coverage, cyclomatic complexity, coding-convention violations, version-control activity, bug counts, etc.
- Influences the right behavior in the team by acting as an Information Radiator in the team area
- Provides clear visual feedback about the build status
- Developers ask for an easy way to run and debug builds locally (or remotely)
- Broken builds are rare; when they do happen, developers fix them rapidly
- Build results are intelligently archived
- Easy navigation between various build versions
- Easy visualization and comparison of change sets
- Large monolithic builds are broken into smaller, self-contained builds with a clear build-promotion process
- Complete traceability exists
- Version Control, Project & Requirements Management tools, Bug Tracking and the Build system are completely integrated.
- CI page becomes the project dashboard for everyone (devs, testers, managers, etc.).
Any other impact you think is worth highlighting?
Sunday, March 6th, 2011
“I’m going shopping, can you please give me the details of everything you’ll need for the next year?”
What if I asked you this question?
Don’t throw the mouse at me just yet. You look extremely annoyed, but indulge me for a minute. Do you have any idea how much more you might spend because of your lack of planning? I’m sure when you run out of things, you wish you had planned better. After all, good upfront planning is always helpful. It’s industry BEST PRACTICE.
Let’s assume you are convinced by my logical reasoning and well-polished, methodological approach to planning. You start creating a backlog of items you’ll need over the next year, and you start filling out the details for each item in a nifty little template I’ve given you. Of course it’s taking a lot longer than you imagined, but you are discovering (or at least being forced to think about) many things you had never thought about.
By the way, at the end of this exercise you’ll need to hand over a signed list to me, and you can’t change your mind later. We don’t entertain change requests later, as they are more expensive. After all, we need to put some constraints in place to make our planning effective.
What, when, how, how much – all kinds of interesting questions plague your mind, making you realize how unplanned and clueless you were.
Perseverance always wins in the end. Finally, you have a backlog of items you’ll need for the year.
OK. Cut! Lights!
I bet one of two things about your list of items:
- The list was very ambitious (massively grandiose). You fantasized about every possible thing you might ever need, just in case. (After all, what is the guarantee you’ll get everything you asked for?)
- You came up with a very humble list, and since you won’t be able to change it cheaply, you now regret indulging me.
Either way, it’s bad news for you.
This is exactly what happens on many software projects. Right at the beginning of the project, the people who need the software (users or product management) are forced to come up with a detailed spec of everything they need from it, with a higher price tag for late changes. This forces them to fantasize about everything they might remotely need. After all, they are not sure what will really be required once they have the software, a year or two later.
The development team gets a pile of stuff with different priorities and importance, but all mixed up.
The team tries to come up with a grandiose vision and architecture for the project, in the name of extensibility.
Eventually, a couple of years from now, if the team somehow manages to deliver the product:
- It’s bound to be off target.
- Users will force the team to add new, unplanned features which are very critical for the usability of the product.
- A good 80% of the features are rarely used or never used.
- Those 80% of the features contribute to majority of the bugs and complexity in the software
- The overall product feels like a hotchpotch of “stuff”, lacking symmetry and conceptual integrity
What you really have is a prototype that the team is ready to discard and start over again.
I call this phenomenon The Window of Opportunity. One opportunity, at the beginning, to express what you want. Take your best guess.
We can do much better than this. I would prefer to start really small, very focused. Use Agile methods to build the product collaboratively using an iterative AND incremental model. Embrace evolutionary design and architecture.
The Window of Opportunity *might* sound good in theory, but it’s too risky.
Friday, June 11th, 2010
When I start and finish my 3-day TDD workshop, I make it clear to the participants that they will have to deliberately practice, on small pet projects or toy code, in a safe environment (in-the-nets, if you will), every single day, for a few months (if not years), to get really good at TDD. They need to deliberately practice their design, testing, breaking-user-needs-down, pairing, and many other skills using TDD before they can use it for prime time.
However, right after the workshop, the participants are very excited and ignore my humble advice. The next day they go back to work and start applying TDD to their production code. While this might look like a good thing (and it makes managers super happy too), over the years I’ve realized that this pattern is actually destructive. Without the required amount of practice, the excitement soon dies out, and developers start falling back to old habits, many times leaving a total mess behind.
I was surprised to see that very few companies or teams continued to build on these practices; about 80% of them would only use parts and fall back. Every time I went back to a company and saw this, I would be very depressed. So I studied what the 20% did that the 80% did not.
- Were the 20% way smarter or more talented than the 80%?
- Did the 20% have better management support, or less delivery pressure, compared to the 80%?
- Or did the 20% work on simpler projects with no legacy code, and so on?
What I found was the 20% quickly realized that they did not have enough skill to apply TDD on their projects. So they:
1. Watched Screencasts: Watched experts do TDD on real projects. A good starting point: Let’s Play TDD by James Shore, or this StackOverflow answer
2. Open Source Projects: Studied and gradually started contributing small patches to open source projects, which used TDD. JUnit, Cucumber, JBehave, Fit, FitNesse, etc. are all great examples.
3. Small Pet Projects: Started TDDing small pet projects. They gradually practiced in a safer environment and, once they had acquired enough skill and confidence, started applying TDD to their production projects.
In addition to practicing on their pet projects, on their own, they also took 2 hrs every week (on a fixed day of the week), as a whole team, to practice Test-Driving (TDDing) the same problem. During these social-learning sessions, they practiced Pair programming too.
Logistics: Before the meeting, one of the team members sent out the problem to everyone. On practice day, the whole team gathered in a conference room/team area, picked their pairs, and set aside 90 minutes to TDD the problem. After 90 minutes, a few pairs did a quick code walk-through and explained their solution, along with the important decisions they made during the session, followed by an open discussion. [I also recommend everyone checks in their code to some common repository, so it can be used as a reference by others.]
Sample Problems: The next big question is, what problems do we use for test-driving?
Usually I recommend starting with simple programs like:
- Games – Tic-Tac-Toe, Snakes and Ladders, or any other board game.
- Utility Programs – Convert Roman Numerals to Decimals, Diff 2 files, IP to Country Mapping, etc
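The Roman-numerals kata, for instance, is small enough to test-drive in one sitting. Here is a minimal sketch in Java, with plain assertions standing in for JUnit to keep it self-contained:

```java
// A tiny practice kata, test-driven one scenario at a time:
// convert Roman numerals to decimal numbers.
class RomanNumerals {

    static int toDecimal(String roman) {
        int total = 0;
        int previous = 0;
        // Walk right to left: subtract a digit when a larger one follows it (IV = 4).
        for (int i = roman.length() - 1; i >= 0; i--) {
            int value = valueOf(roman.charAt(i));
            total += (value < previous) ? -value : value;
            previous = value;
        }
        return total;
    }

    private static int valueOf(char c) {
        switch (c) {
            case 'I': return 1;
            case 'V': return 5;
            case 'X': return 10;
            case 'L': return 50;
            case 'C': return 100;
            case 'D': return 500;
            case 'M': return 1000;
            default: throw new IllegalArgumentException("Not a Roman digit: " + c);
        }
    }

    public static void main(String[] args) {
        // These scenarios were added one at a time, test-first.
        assert toDecimal("I") == 1;
        assert toDecimal("III") == 3;
        assert toDecimal("IV") == 4;
        assert toDecimal("XIV") == 14;
        assert toDecimal("MCMXCIV") == 1994;
    }
}
```

The point of the kata is not the final code but the rhythm: each assertion was added one at a time, with just enough production code to make it pass.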
Once they had practiced enough with simple programs, they would take some large design problems from DesignFest and try to TDD them. The beauty of these problems is that they are quite big and can take up to a week to finish completely. Now we are talking about something semi-real-world, where:
- We have limited time to solve/program a complex problem, which needs to be simplified.
- Try to find a relevant System Metaphor that can help you visualize the design
- Do a quick and dirty, low-fi Interaction Design to understand how the user will use this
- Identify and prioritize the crux/essence of the problem, figure out what is the most important, what will give us fastest feedback
- Further thin-slice the identified piece of functionality and
- Then try TDDing it.
This truly helped them get better at these skills.
4. Code Smells Poster: Created a big poster with all the code smells listed on it and pasted it in their team area. Every time anyone on the team found a code smell in their project, that person got up and added a dot next to that smell. This made everyone more sensitive to these smells and increased awareness by making things visible. (Simple game mechanics.)
5. Refactoring Fest: Picked one of the pungent code smells from their code and organized a RefactoringFest. Met as a group (once a month); developers paired with each other, and everyone tried to refactor the same code on their project to eliminate the specific code smell. [Make it clear that the code will not be checked in after refactoring. It’s a learning exercise, and we need a safe environment where people don’t fear touching the code. Also, if you need some real-world code snippets to try refactoring, check out my refactoring teasers.]
6. Blog/Diary: Started writing a blog/diary to capture their learnings and the list of issues that got in the way of applying TDD on their project. Writing things down really helped them internalize their learning. [Many times, when I’m stuck on a problem, I start writing things down, and the answer becomes obvious to me.]
7. Form a Book Club: As a group, they picked any TDD related book of their choice. Then they decided on a chapter and met over lunch once every week. Everyone came to the meeting after having read that chapter. They used the meeting time to highlight the key-takeaways, debate on the subject and if possible demonstrate it via code.
8. Hands-on Geek Conference: They participated in geek conferences like Code Retreat, CITCon or the Simple Design and Test Conference, where they got to meet other practitioners and experts, pair with them, learn from their experience, and share their own. [Stop wasting your time on stupid marketing conferences.]
9. Teach: They taught a course on Unit Testing or Design Principles at an induction program for new employees or at a local user group. [Teaching is the best way of learning.]
It takes deliberate practice like this, over a few years, to really appreciate the depth and benefits of TDD.
Friday, May 14th, 2010
Just because you know the syntax of a programming language does not mean you know programming.
Just because a toddler can make some sounds does not mean she can speak.
Similarly, just because you write tests before you write code does not mean you know TDD.
TDD is a lot more than test-first. IMHO, the following concepts let you truly experience TDD:
- Evolutionary Design,
- Acceptance Criteria,
- Simple Design,
- System Metaphor,
- Thin Slicing,
- Walking Skeleton/Tracer Bullet and
- Interaction Design
How long does it take a dev to be well-versed with TDD?
It depends on the dev, but at least a couple of years of deliberate practice on projects.
What can we do to become TDD Practitioners?
Start with deliberate practice in a safe environment. Then gradually start on your project.
Monday, October 26th, 2009
Over the last year, I’ve been helping (part-time) Freeset build their ecommerce website. David Hussman introduced me to folks from Freeset.
Following is a list of random topics (most of them are Agile/XP practices) about this project:
- Project Inception: We started off with a couple of meetings with folks from Freeset to understand their needs. David quickly created an initial vision document with User Personas and their use cases (about 2 pages long, on Google Docs). Naomi and John from Freeset quickly created some screen mock-ups in Photoshop to show user interaction. I don’t think we spent more than a week on all of this. It helped us get started.
- Technology Choice: When we started, we had to decide what platform we were going to use to build the site. We had to choose between a custom site using Rails vs. using a CMS. I think David was leaning towards RoR. I talked to folks at Directi (Sandeep, Jinesh, Latesh, etc.) and we thought that instead of building a custom website from scratch, we should use a CMS. After a bit of research, we settled on CMS Made Simple, for the following reasons:
- We needed different templates for different pages on the site.
- PHP: Easiest to set up a PHP site with MySQL on any Shared Host Service Provider
- Planning: We started off with hour-long, bi-weekly planning meetings (conference calls on Skype) every Saturday morning (India time). We had a massively distributed team: John was in New Zealand; David and Deborah (from BestBuy) were in the US; Kerry was in the UK for a short while; Naomi, Kelsea and others were in Kolkata; and I was based out of Mumbai. Because of the time-zone differences, and because we were all working on this part-time, the whole bi-weekly planning meeting felt awkward and heavyweight. So after about 3 such meetings we abandoned it. We created a spreadsheet on Google Docs, added all the items that had high priority, and started signing up for tasks. Whenever anyone updated an item on the sheet, everyone would be notified of the change.
- User Stories: We started off with User Personas and Stories, but soon fell back to simple tasks on a shared spreadsheet. We had quite a few user-related tasks, but a one-liner in the spreadsheet was more than sufficient. We used this spreadsheet as a pseudo-backlog. (By no means did we have the rigor to try and build a proper backlog.)
- Short Releases: We were only working on the production environment; every change made by a developer was immediately live. Only recently did we create a development environment (a replica of production), on which we do all our development. (I asked John from Freeset if this change helped him; he had mixed feelings. Recently he did a large website restructuring (added some new sections and moved some pages around), and he found the development environment useful for that. But for other things, when he wants to make some small changes, he finds it overkill to make changes to dev and then sync them up with production. There are also things like news, which make sense to do on the production server; now he has to do them in both places.) So I’m thinking maybe we move back to just the production environment, and re-create a dev environment on demand when we plan to make big changes.
- Testing: Originally we had plans of at least recording or scripting some Selenium tests to make sure the site was behaving the way we expected. This kind of took a back seat and never really became an issue. Recently we had a slight setback when we moved a whole bunch of pages around and links to them from other parts of the site were broken. Other than that, so far, it’s just been fine.
- Usability: We still have lots of usability and optimization issues on our site. Since we don’t have an expert with us and we can’t afford one, we are doing the best we can with what we have on hand. We are hoping we’ll find a volunteer some day soon to help us on this front.
- Versioning: We explored various options for versioning, but as of today we don’t have any repository under which we version our site (content and code). This is a drawback of using an online CMS. Having said that, so far (it’s been over a year), we have not really found the need for versioning. As of now we have 4 people working on this site and it just seems to work fine. Reminds me of YAGNI. (Maybe in the future, when we have more collaborators, we might need this.)
- Continuous Integration: Without versioning and testing, CI is out of the question.
- Automated Deployment: Until recently we only had one server (production), so there was no need for deployment. Since we now have a dev and a prod environment, Devdas and I quickly hacked together a simple shell script (with mysqldump & rsync) that does automated deployment. It can’t get simpler than this.
- Hosting: We talked about hosting the site on its own slice vs. using an existing shared-host account. We can always move the site to another location when our existing, cheap hosting option no longer suits our needs. So as of today, I’m hosting the site under one of my shared-host accounts.
- Rich Media Content: We debated serving & hosting rich media content like videos from our own site vs. using YouTube to host them. We went with YouTube for the following reasons:
- We wanted to redirect any possible traffic to other sites which are more tuned to catering high bandwidth content
- We wanted to use YouTube’s existing customer base to attract traffic to our site
- Since we knew we’d be moving to another hosting service, we did not want to keep all those videos on a server that would then have to be moved to the new one
- Customer Feedback: So far we have received great feedback from users of this site. We’ve also seen huge growth in traffic, currently hovering around 1500 hits per day. Other than getting feedback from users, we also look at Google Analytics to see how users are responding to the changes we’ve made, and so on.
- We don’t really have/need a System Metaphor and we are not paying as much attention to refactoring. We have some light conventions but we don’t really have any coding standards. Nor do we have the luxury to pair program.
- Distributed/Virtual Team: Since all of us are distributed and traveling, we don’t really have the concept of a site; forget an on-site customer or product owner.
- Since all of this is voluntary work, Sustainable pace takes a very different meaning. Sometimes what we do is not sustainable, but that’s the need of the hour. However all of us really like and want to work on this project. We have a sense of ownership. (collective ownership)
- We’ve never really sat down and done a retrospective. Maybe once in a while we ask a couple of questions about how something went.
Overall, I’ve been extremely happy with the choices we’ve made. I’m not suggesting every project should be run this way. I’m trying to highlight an example of what being agile really means.
Saturday, June 13th, 2009
I don’t like the I<something> naming convention for interfaces for various reasons.
Let me explain with an example. Let’s say we have an IAccount interface. Why is this bad?
- All I care about is that I’m talking to an Account; it really does not matter whether it’s an interface, an abstract class, or a concrete class. So the I in the name is noise to me.
- It might turn out that I only ever have one type of Account. Why create an interface then? It’s the Speculative Generality code smell, and it violates the YAGNI principle. If someday I need multiple types of Accounts, I’ll create the interface then. It will probably take as much effort to create it then as it would now, minus all the maintenance overhead.
- Let’s say we have multiple types of Accounts. Instead of calling the interface IAccount and the child classes AccountImpl1 or SavingAccountImpl, I would rather call it Account and the different types SavingAccount or CreditAccount. It might also turn out that there is common behavior between the two types of Account. At that point, instead of keeping IAccount and creating another abstract class called AbstractAccount, I would just change the Account interface into an abstract class. I don’t want to lock myself into an interface.
Personally, I think the whole I<something> convention is a hangover from C++ and its precursors.
Some people also argue that the I<something> convention makes it easy for them to know whether something is an interface (especially because they don’t want to spend time typing new Account() and then realizing it’s an interface).
The way I look at it, good IDEs will show the appropriate icon, and that can help you avoid this issue to some extent. But even if they did not, to me it’s not a big deal. Code is read far more often than it is written. Maintaining one extra, poorly named interface is far more expensive than the minuscule time saved in typing.
P.S.: I’m certainly not discouraging people from creating interfaces. What I don’t like is having only one class inheriting from an interface. Maybe if you are exposing an API, you might still be able to convince me. But in most cases people use this convention throughout their code base, not just at the boundaries of their system.
In a lot of cases I find myself starting off with an interface, because when I’m developing some class, I don’t want to start building its collaborator. But then, when I start TDDing that class, I’ll change the interface to a concrete class. Later, my tests might drive different types of subclasses, and I might introduce an interface or an abstract class as suitable. And maybe sometime in the future I’ll break the hierarchy and use composition instead. Guess what: we are dealing with evolutionary design.
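To make the naming concrete, here is a tiny Java sketch of the style argued for above (the class names are purely illustrative):

```java
// Callers depend only on "Account"; they never need to know
// whether it is an interface, an abstract class, or a concrete class.
interface Account {
    long balance();
    void deposit(long amount);
}

// Concrete variants get domain names, not "AccountImpl" suffixes.
class SavingAccount implements Account {
    private long balance;
    public long balance() { return balance; }
    public void deposit(long amount) { balance += amount; }
}

class CreditAccount implements Account {
    private long balance;
    private long creditUsed;
    public long balance() { return balance - creditUsed; }
    public void deposit(long amount) { balance += amount; }
}
```

If the second concrete type never materializes, Account can simply start life as a concrete class and be extracted into an interface or abstract class later, when the tests drive it there.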
Sunday, May 3rd, 2009
On the Protest project when I was building the “integration with Ant” feature, I adopted the same thin slice principle. Following are the thin slices I came up with:
- Create an Ant task which can call Protest, it simply returns the tests in the same order as given to it. (Essentially was a copy of the JUnit Ant task)
- Add support for a voter (happened to be Dependency Voter), so that we can actually prioritize the tests based on the dependency algorithm. At this point we went ahead and released this task
- Add support for multiple voters. By now we had created 3 different voters and we wanted to use all the voters
- Provide a way to specify a weightage for each voter, so some voters can influence the prioritized list of tests more than others
- Once we have a prioritized list of tests, provide a way to specify what top percentage of that list should be executed. This provides the user tighter control over how much feedback they need depending on the type of change they have just made
- And so on…
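As an illustration of slices 2 to 5, here is a rough Java sketch of how weighted voters might combine to prioritize tests, and how a top percentage of the list could be selected. All names here are hypothetical; Protest’s actual API may look quite different:

```java
import java.util.*;

// Hypothetical sketch: each voter scores a test, weights combine the
// scores, and only the top slice of the prioritized list is executed.
interface Voter {
    double score(String testName);
}

class Prioritizer {
    private final Map<Voter, Double> weightedVoters = new LinkedHashMap<>();

    void addVoter(Voter voter, double weight) {
        weightedVoters.put(voter, weight);
    }

    // Slices 2-4: combine all registered voters, each scaled by its weight.
    List<String> prioritize(List<String> tests) {
        List<String> ordered = new ArrayList<>(tests);
        ordered.sort(Comparator.<String>comparingDouble(this::combinedScore).reversed());
        return ordered;
    }

    // Slice 5: execute only the top percentage of the prioritized list.
    List<String> topSlice(List<String> tests, int percent) {
        List<String> ordered = prioritize(tests);
        int count = Math.max(1, ordered.size() * percent / 100);
        return ordered.subList(0, count);
    }

    private double combinedScore(String test) {
        double total = 0;
        for (Map.Entry<Voter, Double> e : weightedVoters.entrySet()) {
            total += e.getValue() * e.getKey().score(test);
        }
        return total;
    }
}
```

Note how each slice maps onto one small addition: a single voter, then many voters, then weights, then the percentage cut-off.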
Now, we could have sat down and first designed how the Ant task should look, then written the task, and then integrated it with Protest. But the problem with this approach is:
- We won’t have anything functional and usable until we finish all the tasks. Too scary from a feedback standpoint
- We won’t be able to test anything for real until we finish all the tasks. There is a huge risk involved in this approach
- Essentially, each of those tasks builds an inventory that is not used immediately. The turnaround time for a feature is high
- It requires a lot of upfront thinking, which I’m generally not good at. At the least, we’d have to think through all the inputs each voter would need, and so on. Right now we don’t even have all the voters in place, and this forces us to think about them now, or introduces an unwanted dependency now
- Lots of people argue that the evolutionary approach is less efficient (more expensive and time-consuming) because it gives an impression of thrashing and rework. In my experience, big upfront design generally leads to more rework. It creates an illusion of a streamlined process, but in reality it is a lot more work and also leads to a rigid, over-engineered design
- We can also add all the disadvantages of big upfront design here
I hope this example demonstrates the technique of thin slicing and its advantages over its alternatives.
Sunday, May 3rd, 2009
For a given feature, we can come up with multiple thin-slices which can be incrementally (sequentially) built.
Thin Slice is the simplest possible functional, usable, end to end slice of functionality.
Thin-slicing a feature is not a new concept. Generally, development teams consider the simple happy-path scenario as the first thin slice, and the alternative flows as subsequent slices. Today, when I think of a thin slice, it’s slightly more sophisticated than just happy path and alternative paths.
Let me explain with an example: let’s consider we are building a web app and need a feature that allows users to upload a profile picture.
We can come up with the following thin slices:
- User can upload any photo (from a predefined set of image formats) – it won’t look good, because the image size can vary, and hence wherever it’s displayed it won’t align well. It might not have AJAX support (depends on what is simple and quick to do). All the bells and whistles are pushed to later.
- Build an image scaling facility so that we can reduce the image resolution and hence its size
- Provide an image cropping facility so that users can crop their profile images
- Instead of uploading an image from my disk, provide a facility to pull it from the web. The user provides the URL
- and so on….
Each slice is functional (end to end; we are not just doing the UI or the backend bit). This is great for internal feedback, though it might not be good enough for a public release, especially with just the first slice. In some sense, we are incrementally adding more power to the feature with each thin slice. From another perspective, it feels like we are iterating over the feature and refining it as we go along. Either way, we build the feature by adding thin slices one by one, until we feel we can release it. Post-release, we can continue to add more slices.
It is also possible to come up with thin slices that are perfectly releasable. Let’s take the same example and see how we could come up with thin slices that are releasable.
- User can upload a photo (predefined set of image formats + a clear message showing the expected size and resolution). Any image that does not meet these criteria is rejected.
- Knock off the resolution constraint and build image scaling facility so that we can reduce the image resolution and hence its size
- Knock off the image size constraint and provide an image cropping facility so that users can crop their images to fit the right size
- And so on…
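The first releasable slice above boils down to a simple validation rule; later slices knock the constraints off one at a time. A small Java sketch of that first slice (the accepted formats and size limits below are made-up placeholders):

```java
import java.util.Set;

// Slice 1 sketch: accept only photos that already meet the constraints.
// Later slices (scaling, cropping) relax these checks one by one.
class PhotoUploadPolicy {
    private static final Set<String> FORMATS = Set.of("jpg", "png", "gif");
    private static final int MAX_WIDTH = 200, MAX_HEIGHT = 200;

    // Returns null when the photo is accepted, or a user-facing
    // rejection message explaining which constraint was violated.
    static String validate(String format, int width, int height) {
        if (!FORMATS.contains(format)) {
            return "Unsupported format; please upload jpg, png or gif";
        }
        if (width > MAX_WIDTH || height > MAX_HEIGHT) {
            return "Please resize your photo to at most 200x200 before uploading";
        }
        return null; // accepted
    }
}
```

Slice 2 would delete the size check and add scaling; slice 3 would delete the cropping constraint; the policy shrinks as the feature grows.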
Check out another example of Thin Slicing from the Protest Project.
Please note that the concept of a thin slice is applicable at a much broader scope than just the feature level. For example: on a given project, we use the same concept to plan our small & frequent releases. At a portfolio level, we apply the same principle to break projects into smaller, loosely coupled projects and then prioritize them.