There are still companies arguing about the merits of waterfall… So I’ll approach “being agile” from another angle and try to show that being agile is simply smarter than using waterfall!
First: where is the problem on any project?
From https://dannorth.net/2010/08/30/introducing-deliberate-discovery/:
“Learning is the constraint”
Liz Keogh told me about a thought experiment she came across recently. Think of a recent significant project or piece of work your team completed (ideally over a period of months). How long did it take, end to end, inception to delivery? Now imagine you were to do the same project over again, with the same team, the same organisational constraints, the same everything, except your team would already know everything they learned during the project. How long would it take you the second time, all the way through? Stop now and try it.
It turned out answers in the order of 1/2 to 1/4 the time to repeat the project were not uncommon. This led to the conclusion that “Learning is the constraint”.
Edit: Liz tells Dan North the thought experiment and quote originated with Ashley Johnson of Gemba Systems, and she heard about it via César Idrovo.
If we assume the only difference is that the second time round you have learned about the problem, this would suggest that the biggest impediment to your throughput was what you didn’t know.
So the problem is: the main constraint on a project is lack of knowledge.
So what is the solution?
The solution is to make learning more effective. So how?
- Work on high-value, high-risk requirements first, and don’t waste time yet on low-value, high-risk ones:
- High risk means lack of knowledge matters most (high risk = we don’t know how this is going to work out).
- Then high value, low risk.
- Then low value, low risk.
- Finally, if any budget is left, low value, high risk. Why last? Why would you spend time learning about something that has low value and may not even be needed in the end?
- Ask for feedback as quickly as possible: how do you know what you have learned is correct? How do you know you have understood your customers? Learning is only effective if what you have learnt is correct, and your customers will tell you. Ask for quick feedback. Don’t delay.
- Work as a team:
- One person does not know everything; communicate as often as possible with your team members and customers (or their proxies). You’ll learn a lot from people, not necessarily because they know more, but because, by discussing, they may ask you some very good questions!
- Don’t split design, development and testing between different people: make sure the same people work on all phases. Otherwise team members spend time transferring information instead of sharing it.
- Fail fast, learn fast, improve fast: give the team an environment to try and experiment with new things. Since we want quick feedback, if they fail, they will fail quickly, learn faster and improve faster!
- Be strong on quality: whatever you deliver, accept it only if it meets your quality criteria. I am not saying “get it right first time”; quite the opposite. Go fast, get feedback, and iterate until the quality is right and the product is right. Because you iterate quickly, the team will get it right quickly and will know how to get it right faster next time.
- Foster a “learning” culture: books are amazing; some people have captured a wealth of experience in books. Learn from them so you can speed up learning (from years, for the authors, to weeks, for you!).
- And then, use BDD: BDD is about asking concrete questions to get concrete examples from your customers. The conversations are the critical part. The added benefits are faster learning and understanding of what the right product to build is; a set of acceptance scenarios (tests); driving development (and living documentation); and even an executable spec (when you automate those examples); etc.
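The value/risk ordering above can be sketched as a simple sort. This is an illustrative Python snippet, not part of the original post; the item names and the scoring function are made up for the example:

```python
# Order backlog items by the value/risk heuristic:
# 1) high value / high risk  (learn where lack of knowledge hurts most)
# 2) high value / low risk
# 3) low value / low risk
# 4) low value / high risk   (only if budget is left)

def priority(item):
    value, risk = item["value"], item["risk"]
    if value == "high" and risk == "high":
        return 0
    if value == "high":
        return 1
    if risk == "low":
        return 2
    return 3  # low value, high risk: last

backlog = [
    {"name": "tweak footer colour", "value": "low",  "risk": "high"},
    {"name": "payment integration", "value": "high", "risk": "high"},
    {"name": "CSV export",          "value": "low",  "risk": "low"},
    {"name": "customer login",      "value": "high", "risk": "low"},
]

for item in sorted(backlog, key=priority):
    print(item["name"])
```

Ordering by a single `priority` key keeps the heuristic explicit and easy to argue about with the team.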
If you agree that “lack of knowledge” really is the constraint, then if you focus on learning effectively, you will deliver faster than with waterfall, and you will motivate your teams too (whether you are using Scrum or Kanban)!
So, being agile means learning more effectively, hence reducing the main constraint on your project, hence reducing time-to-market for your product: this is being business friendly… Sooner (less costly), faster (shorter time-to-market), hence higher ROI!
There are lots of prototyping tools out there. I have found Visio to be one of the best, not only for pure UX creation but as an overall tool to be used from gathering requirements through to testing and support.
Benefits of Visio – the tool that goes a long way, depth and width on a project:
- When gathering user stories, you can quickly prototype some workflows to confirm your understanding of the user stories.
- You can put a few screens and their interactions on a single page, giving a fairly complete view of a specific “theme” (a set of related user stories).
- Visio is part of Microsoft Office, so companies often already have licenses for it, and it can be installed easily on people’s computers. In other words, you don’t need to justify buying an expensive UX tool to your manager.
- Visio allows multiple layers: navigation, by linking screens with arrows; business rules, by adding information on those arrows; and details on each UI element (dates, buttons, filters, etc.), giving you a fairly high-fidelity prototype.
- Visio is great for testing: since so much information is on a Visio page, you can simply go through each UI element, business rule and error message to confirm the application matches your Visio spec.
- Visio is great as a reference to your application for support: functional specs are hard to maintain; Visio is quick to update: you just need to look, not to read.
- Visio is visual… People are visual.
- The example below seems complicated because multiple layers are provided… But this example from a real project isn’t complicated. Why? Simply because creating the UX is an iterative and continuous effort. Everybody working on the application is involved one way or the other, including the engineers developing the product.
Having been involved in two data migration projects, a big one and a small one, this is the approach that has worked using Scrum:
- Balance depth and width
- Don’t go too deep, as not all data is as important as other data.
- DEPTH is harder than WIDTH.
- If you cannot measure the quality of the data you have migrated at the end of the sprint, you have gone too deep and/or too wide (do not trade quality for quantity): this is key!
Let’s assume that for a project you have tens of thousands of “customers” to move from a data source made of multiple DBs, where business rules have evolved and not all data is “compliant” with the rules added since (example: password protection now requires special characters). For each customer, lots of data needs to be migrated. You have split the population into small sub-populations.
Sprint 1: Simplicity – migrate a tiny population successfully, using few data fields – go for smaller.
- Select a small population and migrate the minimum amount of data, reducing the size of the population if needed. Example:
- Name, first name and date of birth
- Put automated tests in place (“count” etc)
- Put data quality reporting in place with CLEAR information on why some “customers” were not migrated.
Of course, there isn’t much value yet, because you may not be able to use such data in your product… Still, keep the amount of data to be migrated small: don’t look for depth or width but for quality, especially in the reporting and testing. Migrate to an environment as close as possible to the production environment.
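A sprint-1 migration with automated counts and a clear failure report could be sketched like this. The field names and the “required fields” rule are hypothetical; this is a minimal illustration, not the projects’ actual code:

```python
# Migrate only name, first name and date of birth, and report
# clearly why each skipped customer was not migrated.

def migrate(customers):
    migrated, failures = [], []
    for c in customers:
        missing = [f for f in ("name", "first_name", "dob") if not c.get(f)]
        if missing:
            failures.append({"id": c["id"],
                             "reason": "missing: " + ", ".join(missing)})
        else:
            migrated.append({"id": c["id"], "name": c["name"],
                             "first_name": c["first_name"], "dob": c["dob"]})
    return migrated, failures

source = [
    {"id": 1, "name": "Smith", "first_name": "Anna", "dob": "1980-01-01"},
    {"id": 2, "name": "Jones", "first_name": "", "dob": "1975-06-30"},
]
migrated, failures = migrate(source)

# Automated "count" test: nobody is lost, and every skip has a reason.
assert len(migrated) + len(failures) == len(source)
for f in failures:
    print(f"customer {f['id']} not migrated ({f['reason']})")
```

The point is that the failure report names a reason per customer, so data quality stays measurable from sprint 1.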
Sprint 2 – Go slowly for depth and faster for width
- Based on sprint 1, for the chosen tiny population, add more data to be migrated: DEPTH. For instance, add addresses. DEPTH is difficult, so go slowly.
- Based on sprint 1, because you have automated your testing and reporting, increase the size of your population and continue migrating data based on name, first name and date of birth only: WIDTH. Don’t be too cautious: automation allows you to go a bit faster (if not, review your automation).
Why? DEPTH, because you need some “customers” migrated to test your new application. WIDTH, because of data quality, performance, memory usage, etc., and because it should be easier than going for DEPTH.
Sprint 3: Iterate
Keep focusing on DEPTH and WIDTH: do not add more data to “WIDTH” until you have covered a first small population you would be happy to migrate. Keep focusing on DEPTH too.
From my experience, migration can become a nightmare when the quality of the migrated data is not under control: you go for depth with a big population and struggle to measure anything; worse, you are flooded with poor data. Baby steps are critical here: quality should not be traded off.
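One way to keep quality under control while going for WIDTH is to gate each increase in population size on the measured failure rate. A hypothetical sketch; the doubling rule and the 2% threshold are assumptions for illustration, not from the projects:

```python
# Grow the batch only while the measured failure rate stays below
# a quality threshold; otherwise hold and fix data quality first.

def next_batch_size(current_size, failed, total, threshold=0.02):
    error_rate = failed / total if total else 1.0
    if error_rate <= threshold:
        return current_size * 2  # quality is good: widen
    return current_size          # hold: investigate poor data first

print(next_batch_size(1000, failed=5, total=1000))   # 0.5% errors: widen
print(next_batch_size(1000, failed=50, total=1000))  # 5% errors: hold
```

The gate makes the “do not trade quality for quantity” rule executable rather than a matter of discipline.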
The goal is not to start another war between #NoEstimates and the others – I still like Mike Cohn’s idea: commitment-driven.
The goal is to see how I could replace splitting tasks into hours (#NoEstimates?) while still getting commitment and tracking progress for the benefit of the team.
Option 1: classic commitment driven
Option 2: slicing user stories into “tasks” of less than one day (#NoEstimates?).
Option 1 – Estimating in story points and hours
Step 1: Estimate in story points – using planning poker, where the goal isn’t only to get estimates but to clarify user stories: this is a coaching tool, not an estimation tool.
Step 2: Break down user stories into tasks and estimate them in hours at sprint planning. Use “hours left” to calibrate the “commitment” for the following sprints.
Option 2 – Estimating in story points and user tasks of less than a day
Step 1: same
Step 2: At sprint planning, break down your user stories into smaller “user tasks” (tasks a user can complete): they aren’t “user stories” per se, but instead of splitting them “waterfall” style, you break them into “workflows” that could each add value.
Let’s say you have a 5-point user story about “customer log in”; you could have “user tasks” (as opposed to “development tasks”) such as:
- Customer has an invalid email or password
- Customer is offered to create an account
- Customer can retrieve his password
- Customer can retrieve his email address
- Customer has a valid user name and password
Now we are happy: they can all be implemented within a day (if not, we slice the ones that aren’t small enough again) – this keeps the idea of achieving something every day: “small” means less than a day. Looking closely, these seem like acceptance tests in fact (so I am going in the direction of Ron Jeffries in that context).
So I guess I have not “estimated” tasks in hours, hence I am doing #NoEstimates (from what I have read). So the team can go ahead with their sprint!
For tracking, we could say that in the first sprint we have 50 days of work and commit to 60 “user tasks”, all less than one day each, and see how we perform (calibration). Then I can use cycle time.
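Cycle time for user tasks can be tracked with a few lines. An illustrative sketch with made-up task names and dates, not a prescribed tool:

```python
from datetime import date

# Cycle time = days between starting and finishing a user task.
tasks = [
    {"name": "customer can retrieve password",
     "start": date(2014, 3, 3), "done": date(2014, 3, 4)},
    {"name": "customer has valid credentials",
     "start": date(2014, 3, 4), "done": date(2014, 3, 4)},
    {"name": "customer is offered an account",
     "start": date(2014, 3, 5), "done": date(2014, 3, 7)},
]

cycle_times = [(t["done"] - t["start"]).days for t in tasks]
average = sum(cycle_times) / len(cycle_times)
print(f"average cycle time: {average:.1f} days")  # (1 + 0 + 2) / 3 = 1.0
```

If user tasks are all sliced to under a day, a rising average cycle time is an early warning that the slicing (or the calibration) needs a second look.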
- No need to break user story down until we decide to implement them: “just-in-time”
- We keep velocity as it works and planning poker as a coaching tool.
- We reduce waste, or kill two birds with one stone: we break down user stories into acceptance tests (“user tasks”).
- Less than one day is still a day: let’s assume an engineer starts implementing a user task that should take a couple of hours, but she/he believes there is more work involved and ends up spending the whole day: this is waste. Hold on: this is miscommunication, so let’s put a process in place. Hold on: processes? Not very agile… What is the likelihood of this happening? In fact, probably quite low: there is a design task to clarify things further, and if you waste time once, you may think further next time.
- You miss the simple chat: I still think that, for clarity, since you have already broken down the user stories into tasks, a quick estimate in hours allows double-checking that we are all clear. In fact, the whole point of estimating isn’t getting estimates; it is making sure we are on the same wavelength. JUST TO BE CLEAR: the purpose isn’t to have a list of tasks and hours: it is to arrive at a commitment.
What about classic tasks such as design, unit testing etc?
- Now, some user stories may require some level of thinking in terms of design so why not be explicit about it. You can add a design task.
- What about code review? This is in the DoD, so technically you can include it in the work for the user task.
- What about unit testing? Acceptance tests? Same, since this is part of DoD, this is included in the user task.
So a user task follows the same DoD as a user story. What we have done is break down user stories into small, valuable tasks.
So what is left to “estimate” is the effort for the design, and design should quite likely take less than a day.
You can break down user stories into the design of the user stories plus acceptance tests, and use cycle time.
You can also get quick estimates on each user task, but with the goal in mind of just double-checking that we are all on the same wavelength, and that can’t cost much.
“To make an idea work, you need to build on an idea, not destroy it”
An example of epic progress report in Excel:
File: Epic Progress Report
Here is a list of tips to improve your product backlog – a reminder: it’s still all about conversation!
- Impact mapping: link business with user stories (impactmapping.org)
- It answers the question: what is the end goal of this user story? Increasing sales? Remaining competitive? Catching up with the competition? etc.
- User story mapping: link
- Low-fidelity prototyping: using PowerPoint, drawings, Balsamiq, etc.
- Before trying to pin down UX details, it allows you to test a new idea.
- Using PowerPoint is very easy, and everybody can reuse the work and collaborate.
- Define the UX: drawing on a piece of paper or in Visio, for instance, can be very quick and can help refine some user stories: this is very effective when running a parallel UX development track.
- Have INVEST user stories: the best way to know whether a story is testable is to write some example tests, and the BDD format (Given/When/Then) is great for this.
- The great thing about examples, apart from confirming the user story, is that some examples become user stories. Examples help flush out critical details.
- Make business rules explicit: rules are quite often “embedded” in the user stories. I have found that making them explicit (When “event”, Then “action”) can be helpful, especially when documenting your software product. You can add these rules to your UX design too.
- Example: When the password or username is incorrect, Then the user should try to reset their password or try again (up to 3 times)
- When the user has failed 3 times, Then the user must reset their password
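An explicit rule like this is concrete enough to turn straight into an executable check. A minimal sketch; the function name, messages and attempt counter are hypothetical:

```python
# When the password or username is incorrect, the user may retry
# (up to 3 times); after 3 failures, the user must reset their password.

MAX_ATTEMPTS = 3

def log_in(valid, failed_attempts):
    if failed_attempts >= MAX_ATTEMPTS:
        return "must reset password"
    if not valid:
        return "retry or reset password"
    return "logged in"

print(log_in(valid=False, failed_attempts=2))  # retry or reset password
print(log_in(valid=False, failed_attempts=3))  # must reset password
print(log_in(valid=True, failed_attempts=0))   # logged in
```

Written this way, the When/Then rule and its acceptance test are the same artifact, which is exactly the “living documentation” benefit claimed for BDD above.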
Quite a lot to get on with – but don’t forget this is a collaborative effort, whatever options you use!
Have you ever wondered how those short YouTube videos are made, where you see a hand writing or pictures moving?
And you thought how tricky it must be!
It is not. Tools exist, and they do most of the work for you: you can get your first animation in 15 minutes by simply following their tutorials. Download VideoScribe from videoscribe.co or use PowToon from http://www.powtoon.com/
Here is my first presentation, on creating a Definition of Done.
This is magic: impressive but very easy. The background music is provided, you can record the voice-over at the click of a button, etc.