Tuesday, November 1st, 2011
Many product companies struggle with a big challenge: how to identify a Minimum Viable Product that will let them quickly validate their product hypothesis.
Teams that share the product vision and agree on priorities for features are able to move faster and more effectively.
During this workshop, we’ll take a hypothetical product and coach you on how to effectively come up with an evolutionary roadmap for your product.
This day-long workshop teaches you how to collaborate on the vision of the product and create a Product Backlog, a User Story Map, and a pragmatic Release Plan.
Detailed Activity Breakup
- PART 1: UNDERSTAND PRODUCT CONTEXT
- Define Product Vision
- Identify Users That Matter
- Create User Personas
- Define User Goals
- A Day In The Life Of Each Persona
- PART 2: BUILD INITIAL STORY MAP FROM ACTIVITY MODEL
- Prioritize Personas
- Break Down Activities And Tasks From User Goals
- Lay Out Goals, Activities, And Tasks
- Walk Through And Refine Activity Model
- PART 3: CREATE FIRST-CUT PRODUCT ROAD MAP
- Prioritize High Level Tasks
- Define Themes
- Refine Tasks
- Define Minimum Viable Product
- Identify Internal And External Release Milestones
- PART 4: WRITE USER STORIES FOR THE FIRST RELEASE
- Define User Task Level Acceptance Criteria
- Break Down User Tasks To User Stories Based On Acceptance Criteria
- Refine Acceptance Criteria For Each Story
- Find Ways To Further Thin-Slice User Stories
- Capture Assumptions And Non-Functional Requirements
- PART 5: REFINE FIRST INTERNAL RELEASE BASED ON ESTIMATES
- Define Relative Size Of User Stories
- Refine Internal Release Milestones For First-Release Based On Estimates
- Define Goals For Each Release
- Refine Product And Project Risks
- Present And Commit To The Plan
- PART 6: RETROSPECTIVE
Each part will take roughly 30 mins.
I’ve facilitated this workshop for many organizations (from small startups to large enterprises).
More details: Product Discovery Workshop from Industrial Logic
Workshop Style
Focused Break-Out Sessions, Group Activities, Interactive Dialogues, Presentations, Heated Debates/Discussions and Some Fun Games
Target Audience
- Product Owner
- Release/Project Manager
- Subject Matter Expert, Domain Expert, or Business Analyst
- User Experience team
- Architect/Tech Lead
- Core Development Team (including developers, testers, DBAs, etc.)
This tutorial can take a maximum of 30 people (3 teams of 10 people each).
Required: working knowledge of Agile (iterative and incremental software delivery models)
Required: working knowledge of personas, user stories, backlogs, acceptance criteria, etc.
“I come away from this workshop having learned a great deal about the process and equally about many strategies and nuances of facilitating it. Invaluable!
Naresh Jain clearly has extensive experience with the Product Discovery Workshop. He conveyed the principles and practices underlying the process very well, with examples from past experience and application to the actual project addressed in the workshop. His ability to quickly relate to the project and team members, and to focus on the specific details for the decomposition of this project at the various levels (goals/roles, activities, tasks), is remarkable and a good example for those learning to facilitate the workshop.
Key take-aways for me include the technique of acceptance criteria driven decomposition, and the point that it is useful to map existing software to provide a baseline framework for future additions.”
Doug Brophy, Agile Expert, GE Energy
- Understand the thought process and steps involved during a typical product discovery and release planning session
- Using various User-Centered Design techniques, learn how to create a User Story Map to help you visualize your product
- Understand various prioritization techniques that work at the Business-Goal and User-Persona Level
- Learn how to decompose User Activities into User Tasks and then into User Stories
- Apply an Acceptance Criteria-Driven Discovery approach to flush out thin slices of functionality that cut across the system
- Identify various techniques to narrow the scope of your releases, without reducing the value delivered to the users
- Improve confidence and collaboration between the business and engineering teams
- Practice key techniques to work in short cycles to get rapid feedback and reduce risk
Saturday, December 4th, 2010
Every day I hear horror stories of how developers are harassed by managers and customers for not having predictable/stable velocity. Developers are penalized when their estimates don’t match their actuals.
If I understand correctly, the reason we moved to story points was to avoid this public humiliation of developers by their managers and customers.
It’s probably helped some teams, but the vast majority of teams today are no better off than before, except that now they have one extra level of indirection because of story points and then velocity.
We can certainly blame the developers and managers for not understanding story points in the first place. But will that really solve the problem teams are faced with today?
Please consider reading my blog on Story Points are Relative Complexity Estimation techniques. It will help you understand what story points are.
Assuming you know what story point estimates are, let’s consider that we have some user stories with different story points, which give us a relative complexity estimate.
Then we pick up the most important stories (with different relative complexities) and try to do those stories in our next iteration/sprint.
Let’s say we end up finishing 6 user stories at the end of this iteration/sprint. We add up all the story points for each user story which was completed and we say that’s our velocity.
Next iteration/sprint, we say we can pick up roughly the same total number of story points, based on our velocity. And we plan our iterations/sprints this way. We find the velocity oscillates each iteration/sprint, which in theory should normalize over a period of time.
But do you see a fundamental error in this approach?
First we said that a 2-point story is not 2 times bigger than a 1-point story. Implementing a 1-point story might take 6 hrs, while implementing a 2-point story takes 9 hrs. That is why we assigned arbitrary, non-linear numbers (the Fibonacci series) to story points in the first place. But then we go and add them all up.
If you still don’t get it, let me explain with an example.
In the nth iteration/sprint, we implemented 6 stories:
- Two 1-point stories
- Two 3-point stories
- One 5-point story
- One 8-point story
So our total velocity is ( 2*1 + 2*3 + 5 + 8 ) = 21 points. In 2 weeks we got 21 points done, hence our velocity is 21.
Next iteration/sprint, we’ll take:
- Twenty-one 1-point stories
Take a wild guess what would happen?
Yeah, I know: hence we don’t take just one iteration/sprint’s velocity, we take an average across many iterations/sprints.
But it’s a real stretch to take something that was inherently not meant to be mathematical or statistical in nature and calculate velocity based on it.
If velocity anyway averages out over a period of time, then why not just count the number of stories and use them as your velocity instead of doing story-points?
Over a period of time, stories will roughly be broken down into similar-sized stories, and even if they aren’t, they will average out.
Isn’t that much simpler (with about the same amount of error) than doing all the story point business?
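To make the arithmetic concrete, here is a quick Python sketch (the iteration data is made up, not from a real team) comparing point-based velocity with simply counting stories:

```python
# Made-up iteration history: each inner list holds the story-point
# values of the stories finished in one iteration/sprint.
iterations = [
    [1, 1, 3, 3, 5, 8],  # the example iteration above: 21 points, 6 stories
    [1, 2, 3, 5, 5],     # 16 points, 5 stories
    [2, 3, 5, 8],        # 18 points, 4 stories
]

# Point-based velocity: add up the points of the completed stories.
point_velocity = [sum(sprint) for sprint in iterations]

# Count-based velocity: just count the completed stories.
count_velocity = [len(sprint) for sprint in iterations]

print(point_velocity)  # [21, 16, 18]
print(count_velocity)  # [6, 5, 4]

# Both numbers oscillate, and both average out over time; since neither
# is meaningful arithmetic on complexity, the simpler one may do.
avg_points = sum(point_velocity) / len(point_velocity)
avg_count = sum(count_velocity) / len(count_velocity)
```

Both series wobble iteration to iteration and smooth out under averaging, which is the whole argument for preferring the simpler count.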
I used this approach for a few years and certainly benefited from it. No doubt it’s better than upfront effort estimation. But is this the best we can do?
I know many teams who don’t do effort estimation or relative complexity estimation at all; they moved to a flow model instead of trying to fit things into a time-box.
Consider reading my blog on Estimations Considered Harmful.
Saturday, December 4th, 2010
If we have 2 user stories (A and B), I can say A is smaller than B; hence, A gets fewer story points than B.
But what does “smaller” mean?
- Less Complex to Understand
- Smaller set of acceptance criteria
- Have prior experience doing something similar to A compared to B
- Have a rough (better/clearer) idea of what needs to be done to implement A compared to B
- A is less volatile and vague compared to B
- and so on…
So, A gets fewer story points than B. But clearly we don’t know how much longer it’s going to take to implement A or B.
Hence we don’t know how much more effort and time will be required to implement B compared to A. All we know at this point is that A is smaller than B.
It is important to understand that story points are a relative complexity estimation technique, NOT an effort estimation (how many people? how long will it take?) technique.
Now, if we had 5 stories (A, B, C, D and E) and applied the same thinking, we could come up with different buckets in which to put these stories.
Small, Medium, Large, XL, XXL and so on….
And then we can say all stories in the small bucket are 1 story point, all stories in the medium bucket are 3 story points, large are 5 story points, XL are 8 story points, and so on…
Why do we give these arbitrary numbers instead of sequential numbers (1, 2, 3, 4, …)?
Because we don’t want people to get confused into thinking that a medium story with 2 points is 2 times bigger than a small story with 1 point.
We cannot apply mathematics or statistics to story points.
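The bucketing above can be sketched in a few lines of Python (the bucket-to-points mapping follows the post; stories A–E and their buckets are made up for illustration):

```python
# Relative-size buckets mapped to Fibonacci-style story points, as in
# the post: Small = 1, Medium = 3, Large = 5, XL = 8, and so on.
BUCKET_POINTS = {"S": 1, "M": 3, "L": 5, "XL": 8, "XXL": 13}

# Made-up stories placed into buckets purely by relative comparison
# ("A is smaller than B"), not by estimated effort.
stories = {"A": "S", "B": "M", "C": "S", "D": "L", "E": "XL"}

estimates = {name: BUCKET_POINTS[bucket] for name, bucket in stories.items()}
print(estimates)  # {'A': 1, 'B': 3, 'C': 1, 'D': 5, 'E': 8}

# The numbers only order the buckets; B (3 points) is NOT three times
# bigger than A (1 point), so adding or averaging them is meaningless.
```

The lookup table is the whole technique: the numbers carry ordering, not magnitude, which is exactly why arithmetic on them breaks down.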
Saturday, July 10th, 2010
A prioritized user story backlog helps you understand what to do next, but is a difficult tool for understanding what your whole system is intended to do. A user story map arranges user stories into a useful model to help understand the functionality of the system, identify holes and omissions in your backlog, and effectively plan holistic releases that deliver value to users and the business with each release.
Tuesday, April 20th, 2010
At the Agile Coach Camp Goa 2010, we had a small side discussion about the difference between Use Cases and User Stories. More importantly, whether a Use Case contains many User Stories or a User Story contains many Use Cases.
According to Mike Cohn, User Stories are smaller in scope compared to Use Cases.
Even Martin Fowler has the same understanding.
IMHO it does not matter. But it’s important to note that when some people refer to User Stories, they really mean the final stage of the User Story. Hence they always say a Use Case contains many User Stories. In the real world, I see User Stories have a life-cycle. They start out big & vague and gradually are thin-sliced into executable User Stories. Mike Cohn refers to this as Theme > Epic > Story > Task.
I’m particularly influenced by Jeff Patton’s work on this topic. Jeff highlights that User Stories really need to be at a User Goal level rather than an implementation level (at least when you start out); otherwise it would lead to big-upfront-design. Also, most users won’t be able to relate to really granular stories. I highly recommend reading his blog on The Mystery of Shrinking Stories.
To understand the overall approach check out his User Story Mapping Slides.
Monday, October 26th, 2009
Over the last year, I’ve been helping (part-time) Freeset build their ecommerce website. David Hussman introduced me to folks from Freeset.
Following is a list of random topics (most of them are Agile/XP practices) about this project:
- Project Inception: We started off with a couple of meetings with folks from Freeset to understand their needs. David quickly created an initial vision document with User Personas and their use cases (about 2 pages long on Google Docs). Naomi and John from Freeset quickly created some screen mock-ups in Photoshop to show user interaction. I don’t think we spent more than a week on all of this. This helped us get started.
- Technology Choice: When we started, we had to decide what platform we were going to use to build the site: a custom site using Rails vs. a CMS. I think David was leaning towards RoR. I talked to folks at Directi (Sandeep, Jinesh, Latesh, etc.) and we thought that instead of building a custom website from scratch, we should use a CMS. After a bit of research, we settled on CMS Made Simple, for the following reasons:
- We needed different templates for different pages on the site.
- PHP: Easiest to set up a PHP site with MySQL on any Shared Host Service Provider
- Planning: We started off with hour-long, bi-weekly planning meetings (conf calls on Skype) on Saturday mornings (India time). We had a massively distributed team: John was in New Zealand; David and Deborah (from BestBuy) were in the US; Kerry was in the UK for a short while; Naomi, Kelsea and others were in Kolkata; and I was based out of Mumbai. Because of the time zone differences, and because we were all working on this part time, the whole bi-weekly planning meeting felt awkward and heavyweight. So after about 3 such meetings we abandoned it. We created a spreadsheet on Google Docs, added all the items that had high priority, and started signing up for tasks. Whenever anyone updated an item on the sheet, everyone would be notified about the change.
- User Stories: We started off with User Personas and Stories, but soon we fell back to simple tasks on a shared spreadsheet. We had quite a few user-related tasks, but a one-liner in the spreadsheet was more than sufficient. We used this spreadsheet as a pseudo-backlog (by no means did we have the rigor to try and build a proper backlog).
- Short Releases: We were only working on the production environment; every change made by a developer was immediately live. Only recently did we create a development environment (a replica of production), on which we now do all our development. (I asked John from Freeset if this change helped him; he had mixed feelings. Recently he did a large website restructuring (added some new sections and moved some pages around), and he found the development environment useful for that. But for other things, when he wants to make small changes, he finds it overkill to make changes on dev and then sync them up with production. There are also things, like news, that make sense to do on the production server; now he has to do them in both places.) So I’m thinking maybe we move back to just the production environment, and create a dev environment on demand when we plan to make big changes.
- Testing: Originally we had plans to at least record or script some Selenium tests to make sure the site was behaving the way we expected it to. This kind of took a back seat and never really became an issue. Recently we had a slight setback when we moved a whole bunch of pages around and links to them from other parts of the site were broken. Other than that, so far, it’s just been fine.
- Usability: We still have lots of usability and optimization issues on our site. Since we don’t have an expert with us and we can’t afford one, we are doing the best we can with what we have on hand. We are hoping we’ll find a volunteer some day soon to help us on this front.
- Versioning: We explored various options for versioning, but as of today we don’t have any repository under which we version our site (content and code). This is a drawback of using an online CMS. Having said that, so far (it’s been over a year) we have not really found the need for versioning. As of now we have 4 people working on this site and it just seems to work fine. Reminds me of YAGNI. (Maybe in the future, when we have more collaborators, we might need this.)
- Continuous Integration: Without versioning and testing, CI is out of the question.
- Automated Deployment: Until recently we only had one server (production), so there was no need for deployment. Since we now have a dev and a prod environment, Devdas and I quickly hacked together a simple shell script (with mysqldump & rsync) that does automated deployment. It can’t get simpler than this.
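For the curious, here is a rough Python sketch of the shape of such a deployment. It only builds the mysqldump and rsync command lines; every host, path, and database name below is hypothetical, not our actual setup:

```python
# Sketch of a mysqldump + rsync style deployment, modeled loosely on
# the shell script described above. All hosts, paths, and database
# names here are hypothetical placeholders.

def deployment_commands(db, dump_file, src_dir, dest):
    """Return the two commands a dev -> prod sync would run."""
    return [
        # 1. Dump the dev database so it can be restored on prod.
        ["mysqldump", db, "--result-file=" + dump_file],
        # 2. Mirror site files (code, templates, uploads) to prod.
        ["rsync", "-avz", "--delete", src_dir + "/", dest],
    ]

cmds = deployment_commands(
    db="site_dev",                             # hypothetical dev DB
    dump_file="/tmp/site_dev.sql",
    src_dir="/home/dev/site",
    dest="user@prod.example.com:~/public_html",  # hypothetical prod host
)
for cmd in cmds:
    print(" ".join(cmd))
```

The real script would run these with actual credentials; the point is just that the whole deployment is two well-known commands, nothing more.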
- Hosting: We talked about hosting the site on its own slice vs. using an existing shared host account. We can always move the site to another location when our existing, cheap hosting option no longer suits our needs. So as of today, I’m hosting the site under one of my shared host accounts.
- Rich Media Content: We debated serving & hosting rich media content like videos from our own site vs. using YouTube to host them. We went with YouTube for the following reasons:
- We wanted to redirect any possible traffic to other sites that are better tuned to serving high-bandwidth content
- We wanted to use YouTube’s existing customer base to attract traffic to our site
- Since we knew we’d be moving to another hosting service, we did not want to keep all those videos on a server from which they would then have to be moved to the new server
- Customer Feedback: So far we have received great feedback from users of this site. We’ve also seen a huge growth in traffic, currently hovering around 1500 hits per day. Other than getting feedback from users, we also look at Google Analytics to see how users are responding to changes we’ve made, and so on.
- We don’t really have/need a System Metaphor and we are not paying as much attention to refactoring. We have some light conventions but we don’t really have any coding standards. Nor do we have the luxury to pair program.
- Distributed/Virtual Team: Since all of us are distributed and traveling, we don’t really have the concept of a site, forget an on-site customer or product owner.
- Since all of this is voluntary work, sustainable pace takes on a very different meaning. Sometimes what we do is not sustainable, but that’s the need of the hour. However, all of us really like and want to work on this project. We have a sense of ownership (collective ownership).
- We’ve never really sat down and done a retrospective. Maybe once in a while we ask a couple of questions about how things are going.
Overall, I’ve been extremely happy with the choices we’ve made. I’m not suggesting every project should be run this way. I’m trying to highlight an example of what being agile really means.
Monday, May 19th, 2008
We have just finished the Penn State University Project. This project was a way for me to evaluate the reverse sourcing phenomenon and to make a difference to our education system by playing a demanding customer’s role on a University project. Overall I had a great experience doing this, and the students really enjoyed working on this project. (I owe a separate blog post about the overall project, now that it is over.) In this blog, I want to highlight something that I always felt about user stories but was never clearly able to articulate.
The Mystery of Shrinking Stories:
Over the last 5+ years of my experience with User Stories, I’ve seen people constantly trying to reduce the scope (size) of their user stories. (Jeff Patton has a really interesting blog about this.) Watching this trend, I’ve often been frustrated when my customers or team members don’t understand what is happening.
The problem was that the stories were so granular that it was easy to miss the big picture.
Functional Areas:
To avoid this problem we started grouping the stories into functional areas. The Product Owner would identify the highest-priority functional area, and we used that functional area to identify our iteration/sprint stories. The developers would come up with a list of stories and tasks that fit under that functional area (mostly before the planning meeting). In the planning meeting we would have a collaborative session with the Product Owner to decide which stories would go into the iteration/sprint based on the estimates. But we still had a big estimation session, and only then would we identify a list of stories. This clearly simplified our planning meeting, but:
- The team members still did not have 100% clarity during the iteration/sprint on why certain stories made it and some did not.
- Both sides felt like there was a compromise in the identified stories. The Product Owner might want some story, but the developers would say it’s not possible because the estimate for it is high. The developers might want to work on some story because it makes sense from a technical point of view, but the Product Owner did not prioritize that story.
- Another problem we ran into: there were a lot of times when we wanted to pick stories from different functional areas, but in this approach we could pick stories from only one functional area.
Using Themes instead of Functional Areas
We used different approaches to identify Themes.
- First we started off by identifying a list of high-priority stories and then trying to put a theme around them. This clearly did not work well.
- Sometimes we tried to identify a list of high-priority functional stories and use a theme for the technical stories. This worked well when the technical debt was really high; for example, Refactoring as the theme for an iteration/sprint.
- Finally we asked the customer to identify a theme and the developers would then identify stories based on the theme. This was very similar to the functional areas except that now we could identify stories from different areas.
Again, we ran into issues similar to those with the previous approach.
Each Iteration/Sprint has a clear Goal
Finally we tried having a goal per iteration/sprint. The Product Owner would communicate the goal to the team; the team would discuss the goal with the PO and get their clarifications. After that, the developers would come up with a list of stories that would help them achieve the goal. Sometimes they might only be able to achieve part of the goal, and then the goal would be updated accordingly.
This let the business folks clearly communicate what they wanted to see at the end of the iteration, and how it fit into our release goal (each release had a clear goal, but before this we did not have a goal per iteration/sprint). For the developers this meant they could thin-slice the stories to fit into the iterations. Suddenly developers moved from an estimation mindset to a budgeting mindset. Also, from the PO’s perspective the story is really the goal, which is fairly high-level; the developers used user stories as tasks to keep track of progress and to split work amongst themselves. I was never a fan of tasks under stories, and this helped us get rid of them. Technically, user stories (not really given by the user, but written by the development team from the user’s perspective) became our tasks, and the goal became our user story. This way we were able to make our stories quite large in scope (really, it’s one story per iteration/sprint). There is a similar concept in Scrum: they talk about Goals for a sprint, but there are multiple goals per sprint. In our case we had only one goal per sprint, and we had stories under it.
What we did on the Penn State Project:
We started off with me (the Customer, or Product Owner) explaining the overall vision of the project to them, so that all of us had the same mental model. We used a mindmap for this. Next, I started off by setting a goal for the iteration and also giving them a list of high-priority stories (since they were new to the whole concept of stories). After a few iterations, I just gave them goals, asked them to email me the stories the next day, and then in our bi-weekly demos they would demonstrate each story and I would get to rate whether they fulfilled the goal or not.
Some sample Goals are:
- Have at least one conference up and running
- Build versioning into the system (this meant they had to version each page, each artifact under the page, version templates, and version the whole conference; there could be multiple conferences under one site, so version all of them)
- Add template support (this means layout templates, style sheets, include templates, etc.)
This has been my experience of moving away from Granular User Stories to Goals. I would love to hear your experience.