- How to apply the Theory of Constraints (ToC) to identify the bottlenecks or issues teams face during their agile adoption?
- Once we identify the bottleneck, how do we deliver knowledge and experience to the teams, just in time to apply that knowledge and eliminate the bottleneck, using the Just-In-Time concept?
As a coach, my goal is to help people evolve their thinking to be more “agile/adaptive”. Building great software is only a part of it. Agile/Lean thinking applies and shows in lots of ways even outside software. And in lots of cases I’ve seen that when people start applying these values in other parts of their daily lives, they get a much broader and deeper understanding of this thinking process.
A few years ago, when I was a consultant at ThoughtWorks, people used to ask me what my job was like. I would respond, “My job is to set up small ‘safe-fail’ experiments for my team.” Learning from one’s mistakes in a controlled, safe environment is, to date, the best form of learning I know of. It has a much longer-lasting impact. Almost always, this really helped me evolve my understanding of how agility can manifest itself.
When it comes to coaching, there are different styles. Which style you can use depends on:
- Your strengths, weaknesses and personality.
- The team & individuals you are coaching.
- Whether you are going in as a coach or they (the team) are coming to you for coaching.
- The short-term and long-term needs of the team.
- And so on….
Above all, to be effective at coaching, one needs to win the trust of the team. Trust is very important. If the team does not feel safe in your presence, you can’t be effective at coaching. I strongly emphasize building trust and gaining respect quickly. This, in fact, would be my first goal as a coach.
Following are the code quality criteria I came up with and asked the teams to rate themselves against:
This list will only make sense if you’ve read the context from the previous post.
- Conceptual Integrity: Does the design of the various components feel seamless? Or do you feel the application is built out of random, incongruent components that have just been plumbed together?
  - Do you have an odd-ball solution smell between various components?
  - Are consistent error messages used throughout the application?
  - Does navigating from one part of the application to another feel like one smooth flow?
- Appropriateness: How appropriate is the current choice of technology stack? Are we using the right technology to solve the right problem, and are we using the technology appropriately? How well is the software using the supporting framework and underlying building blocks?
  - If it’s using an RDBMS, is your data really relational?
  - If you are using a framework like Wicket, are you actually using component-based design for web pages?
- Understandability: Is the code and the concept behind it easy to understand? Is the purpose of the product clear? The purpose of the product goes further than just a statement of purpose – all of the design must be clear, so that it is easily understandable.
  - What do you think of the code’s complexity? (Are branch decisions too complex?)
  - Are variable names descriptive and representative of the right entities?
  - How much documentation is required to understand the code and its design?
- User Experience: Do the users enjoy using the product? Is it convenient and practical to use? This is affected by the human-computer interface, but goes far beyond just the GUI.
  - Is the user interface intuitive?
  - Is it easy to perform any operation?
  - Is the complexity of difficult operations hidden away from the user?
  - Does the software give sensible error messages?
  - Do the widgets behave as expected?
  - Is the user interface self-explanatory/self-documenting?
  - Is the user interface responsive, or too slow?
- Conciseness: Is there excessive (redundant) information present? In other words, does it follow the DRY (Don’t Repeat Yourself) principle?
  - Is the code easy to understand, with an optimal amount of code (small, well-defined classes) and configuration?
  - Does the code have a Dead Code smell (is all code reachable)?
  - Is any code redundant (Duplicate Code smell)?
- Adaptability: Is the code nicely decoupled? Does it have any irreversible design decisions? A more generic solution does not necessarily mean a more adaptable one.
  - Can we use a different database?
  - Can we use a different third-party library or framework?
- Consistency: Conceptually and semantically, is the code uniform? Does it have consistent notation, symbology and terminology within itself?
  - Is one variable name used to represent different physical entities in the program?
  - Are functionally similar arithmetic expressions similarly constructed?
  - Does the code have an odd-ball solution smell?
  - Is a consistent scheme for indentation used?
- Maintainability: Does the code facilitate enhancing existing features or adding new features to satisfy new requirements? Are you spending a lot of time fixing bugs in the existing code?
  - Is the design cohesive, i.e., does each module have recognizable functionality?
  - Does the software allow for a change in internal data structures (encapsulation)?
  - Is it well modularized?
  - Does it have a Solution Sprawl smell?
- Testability: Is it possible to establish automatable acceptance criteria and evaluate its performance? Complex design leads to poor testability.
  - Are complex structures employed in the code?
  - Higher test coverage does not always mean a testable design.
- Reliability: Can it perform its intended functions satisfactorily, every single time, fast enough? This is the extent to which a program can be expected to perform its intended function with the required precision. It implies a time factor, in that a reliable product is expected to perform correctly over a period of time. It also encompasses environmental considerations, in that the product is required to perform correctly in whatever conditions it finds itself – this is sometimes termed robustness.
  - Are boundary conditions properly handled?
  - Is exception handling provided?
- Structuredness: Are all the building blocks architected well? Does the right behavior exist in the right places?
  - Is the business logic bleeding into the UI?
  - Is the code tightly coupled to and dependent on external sub-systems, like the type of database?
- Efficiency: Can it fulfill its purpose without unnecessarily wasting resources (memory, CPU, external sub-systems, filesystem, etc.)? A performant application is not necessarily efficient. For example, one might throw a lot of hardware at the app or add a lot of caching to make it perform better, when there may have been a simpler, more efficient way. Often, premature optimization leads to less efficient, more complex solutions.
  - Are we using connection pools efficiently?
  - Is your use-case flow logical, or logically inefficient?
- Security: Is the data protected against unauthorized access, and does it withstand malicious interference during its operations? Besides the presence of appropriate security mechanisms such as authentication, access control and encryption, security also implies reliability in the face of malicious, intelligent and adaptive attackers.
  - Does it allow its operator to enforce security policies?
  - Can the software withstand attacks that must be expected in its intended environment?
  - Is the software free of errors that would make it possible to circumvent its security mechanisms?
  - Does the architecture limit the impact of as-yet-unknown errors?
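To make the Testability criterion above more concrete, here is a small, hypothetical Python sketch (none of it comes from any team's code-base): logic tangled with I/O is hard to test, while extracting the pure decision makes it trivially testable.

```python
from datetime import datetime

# Hard to test: the decision is tangled with I/O (the system clock and
# printing), so a test can neither control the inputs nor observe the result.
def alert_if_stale(last_update):
    if (datetime.now() - last_update).days > 7:
        print("ALERT: data is stale")

# Testable: the decision is a pure function; the clock and the printing
# stay at the edges of the application.
def is_stale(last_update, now, max_age_days=7):
    """Pure decision: no clock access, no printing, easy to assert on."""
    return (now - last_update).days > max_age_days

def alert_if_stale_v2(last_update):
    if is_stale(last_update, datetime.now()):
        print("ALERT: data is stale")
```

Note that adding more tests around the first version would not fix its design; this is the sense in which higher test coverage does not necessarily mean a testable design.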
I always find it very difficult to rate something on an absolute scale. (Not sure if this is a 1 or a 2.) Personally, I prefer a relative rating scale. Hence I came up with the following scale for the teams to use while rating themselves on the various criteria:
| Rating | Meaning |
|--------|---------|
| 1 | Needs immediate attention |
| 2 | Needs attention, but other issues seem to be bigger |
| 3 | Is manageable, but there is scope for improvement |
| 4 | It’s working fine; some more work can push this into the elegant space |
| 5 | Is elegant, and I’ll be proud to demonstrate this as an example to other teams |
Recently, at a client, I was trying to figure out which product team needed immediate help/attention. Since I had limited time available at the client location, it would be best to come up with a plan for how I would go around the various teams and help them. Trying to fix all the issues of all the teams is impractical. Once a team is identified, I can use the Theory of Constraints (ToC) to figure out its biggest bottleneck and work my way up from there. Similarly, I can use ToC to identify the biggest bottleneck team (the team which is doing critical work and is slowing other teams down). To come up with an initial plan, I wanted to get a feel for how each team works, and for each team’s code-base and its design.
I started off by spending about 2-3 hours with each team. During this visit I was playing “health-inspector”:
- I asked the team to explain the product they were working on.
- After understanding the product, I took a few relevant scenarios and asked the team to help me build a Value Stream Map.
- I asked the team to share with me what was causing most pain for them and what they thought was working well.
- I asked the team to walk me through their code-base by focusing on one specific feature.
At the end of this meeting we really did not come up with any action items (that would be premature at this point). All we had was a list of observations, along with the value stream map, on a wiki page. This list can be considered an initial debt list. It gave me a decent feel for how the team was working and the kind of issues they were facing.
(One thing I’ve learnt in my previous consulting life is to never take personal notes while people are talking to you about sensitive topics. If taking notes is important because you have been bombarded with information, open a simple text editor or wiki and ask them to summarize after they have explained a point.)
After a series of meetings with the various teams, I talked to the business sponsor and shared my concerns about the code quality. Trying to explain the details with off-hand, random examples from the code was getting difficult. Reading through each team’s observations on the wiki page was helpful, but the results were very subjective. It was not clear which team needed immediate attention. So the business sponsor asked me if I could come up with some code evaluation criteria and rate the various teams’ code-bases.
While this made sense, I was concerned that it would be difficult to come up with objective criteria that could then be used to identify which team needed immediate attention. Urgency of attention is, to a great extent, influenced by soft factors that cannot easily be externalized in a metric. Even if we came up with some criteria, the value assigned to a criterion is only relevant in its own context. Each team might have a different context, and comparing teams in different contexts using the same criteria can be error-prone.
Anyway, I decided to give it a shot. Worst case, we would at least have a baseline for each team that they could refer back to as we started resolving issues. But the question was: who should rate the teams against these criteria? Personally, I’m very uncomfortable rating others. I also think it’s not effective; it encourages the wrong behavior in teams. So I decided to have each team rate itself through an open discussion within the team. At the very least, this ensures everyone in the team is on the same page. And if there was any difference between what the team rated themselves and what I thought the rating should be, based on my short meeting with them, it could easily be resolved with an open discussion. In some cases I didn’t have as much context as the team had.
Refer to How to Rate a Product (its code and design) for more details on the quality criteria and their rating scale I used.
Once I identified a rating system, I created a page with a simple table on the company wiki. Each quality criterion was listed as a row heading and each team’s name as a column heading. Each team was requested to rate themselves. After the teams rated themselves, in a couple of cases I had a discussion with them about their rating on specific quality criteria and we came up with mutually agreed numbers.
Something very interesting (and unexpected) happened once all the rating was done. Seeing all the teams’ ratings on one page helped me identify trends across the company. For example, Testability was an issue on most teams. This helped me come up with some basic training on testability which most teams could attend.
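The trend-spotting step can be sketched in a few lines of Python. The team names, the subset of criteria and the ratings below are all made up for illustration; only the shape of the wiki table (criteria as rows, teams as columns) comes from the text.

```python
# A toy model of the wiki page: quality criteria as rows, teams as columns.
# Team names, criteria subset and ratings are hypothetical.
ratings = {
    "Testability":  {"Team A": 1, "Team B": 2, "Team C": 1},
    "Adaptability": {"Team A": 3, "Team B": 4, "Team C": 3},
    "Reliability":  {"Team A": 4, "Team B": 3, "Team C": 4},
}

def company_wide_weak_spots(ratings, threshold=2.5):
    """Criteria whose average across all teams is low: candidates for
    organization-wide training rather than per-team coaching."""
    weak = []
    for criterion, by_team in ratings.items():
        avg = sum(by_team.values()) / len(by_team)
        if avg <= threshold:
            weak.append((criterion, avg))
    return sorted(weak, key=lambda pair: pair[1])

def teams_most_in_need(ratings):
    """Teams ranked by total rating, lowest (most in need of attention) first."""
    totals = {}
    for by_team in ratings.values():
        for team, score in by_team.items():
            totals[team] = totals.get(team, 0) + score
    return sorted(totals, key=totals.get)
```

With the made-up numbers above, Testability falls out as the company-wide weak spot, mirroring what the one-page view showed. A low total alone still doesn't decide where to coach first; the critical-path question below remains a human judgment.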
It is not fair to compare one team’s rating with another’s. But certainly a team which has a lot of 1s is in more need of attention than a team which has 3s and 4s. Again, just because a team is in need of attention does not mean one needs to jump straight into that team. We need to figure out where on the critical path of the organization the team lies. If it is not on the critical path, maybe investing time in another team would be more fruitful (at least in the short run).
This approach really helped me come up with a two-pronged strategy.
- Define general training requirements for the whole organization which will help each team improve.
- Identify the team that needs immediate attention (bottleneck) and coach that team.
Why are companies so afraid to Fail?
When I meet folks from companies, most of them want to implement Agile, but they want ready-made solutions from experienced folks. Basically, what they are looking for is a Dummy’s Guide to Agile Transition. They want a complete, tested and proven approach (Best Practices) to adopting Agile. They want to make sure there is no room for failure.
Well, I don’t really understand how one can learn new techniques/approaches without failing a couple of times. Isn’t failure an implicit part of learning? To really learn something, you need to understand its boundaries and test the waters yourself. Babies don’t learn to walk without falling and hurting themselves. IMHO, if you want a babysitter (coach) all your life, you will never be able to appreciate and learn new concepts. I’m afraid you will just learn how to do waterfall in a different way.
You could get a coach to help you avoid big failures and give a helping hand after a failure, but it’s unrealistic to expect that a coach will help you transition successfully without any failure. Hard-learnt lessons stick around longer.