Monday, July 6, 2015

Braxton LS2

Intro: Explain the build-measure-learn feedback loop.  What is the purpose of this loop?  Why is it a loop?

The build-measure-learn feedback loop is the name for the process by which new products and services are developed and improved. In traditional management, this loop is often broken into parts and distributed job-wise among many people; in a start-up, however, that is not always optimal. Because no one part of the loop has any purpose without the others, the goal should instead be to minimize the total time it takes to get through the loop. The process is a loop because it links back into itself: development should not stop once one pass is finished. Instead, you should analyze the data and learning you have acquired and feed it back into the process, coming up with a new or updated version of the product while staying true to your original vision.

Chapter 7: Going back to some of the leap-of-faith assumptions, what would some minimum viable products (MVPs) look like that could validate these assumptions?  What would we measure with the MVPs?  Think specifically about last season's game.

Some of the MVPs our team could use to test our leap-of-faith assumptions include prototypes, quick CAD models, and constant new part iterations, whether custom designed and built or jerry-rigged. With these MVPs we could measure the effectiveness of different parts, how those parts interacted with the overall design, and how easy each part was to use. For example, we could create prototypes of a back-loading ramp (which we did, at least in some capacity). These ramps could improve iteration by iteration as we evolved our overall design based on observations we made about the ramp. In the end we did not load from the back; with faster iteration we could have learned that even sooner on our protobot, or we could have adapted and chosen to load from the back after all.

Chapter 7: In the Grockit story, Ries talks about Kanban.  With this approach, you cannot start new work unless features that are built are "validated" to actually improve the product (robot).  This would appear to slow us down.  Why might it be worth it based on this case study?

Our team's use of Kanban this past year was hectic, but useful. We used a version of it that did not limit how many items could be in any bucket at one time, which ended up emphasizing speed in completing tasks rather than quality or learning. Nearly every weekend we ran out of jobs, and when we needed people on higher-priority work we had difficulty pulling them off of low-priority filler jobs, which caused delays. Our third and fourth buckets were also different from the book's: ours were simply "needs check" and "verified done," rather than "built" and "in need of validated learning." That was partly due to the experimental nature of our usage, but also because our build time was far less intensive and we already had a lot of validated learning; not that we couldn't use more, but we didn't need as much as we would during a normal build season. Using Kanban in the normal sense would be worth it both because of what we learned from the experience and because the coming season needs more validated learning. Normal Kanban, with its work-in-progress limit, would ensure that we develop a part without too many parts in flight at once, learn from its iteration, and only then move on to a new iteration or a whole new part. It would shift the emphasis to validated learning and quality, rather than speeding through tasks in teams of 2 or 3.
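
To make that work-in-progress limit concrete, here is a minimal sketch of how a board could enforce the rule. This is not our team's actual checklist or anything from the book; the bucket names and the limit of 3 are hypothetical, chosen just to illustrate the idea.

```python
# Minimal Kanban sketch: a task can only advance if the next bucket
# has room, so nothing new starts until something else is validated.
# Bucket names and the WIP limit of 3 are hypothetical examples.

BUCKETS = ["backlog", "in_progress", "built", "validated"]
WIP_LIMIT = 3  # max tasks allowed in any bucket except the backlog

class KanbanBoard:
    def __init__(self):
        self.buckets = {name: [] for name in BUCKETS}

    def add_task(self, task):
        # New ideas always go to the backlog; the backlog has no limit.
        self.buckets["backlog"].append(task)

    def advance(self, task):
        """Move a task to the next bucket, enforcing the WIP limit."""
        current = next(name for name, tasks in self.buckets.items() if task in tasks)
        if current == BUCKETS[-1]:
            return  # already validated; nothing left to advance to
        nxt = BUCKETS[BUCKETS.index(current) + 1]
        if len(self.buckets[nxt]) >= WIP_LIMIT:
            raise RuntimeError(
                f"'{nxt}' is full; validate something before starting new work")
        self.buckets[current].remove(task)
        self.buckets[nxt].append(task)

board = KanbanBoard()
board.add_task("back-loading ramp prototype")
board.advance("back-loading ramp prototype")  # backlog -> in_progress
print(board.buckets)
```

The important part is the error raised in advance(): when the next bucket is full, the only way to make room is to finish validating something already there, which is exactly the rule that would have kept people from piling onto low-priority filler jobs.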

Chapter 7: Ries talks about metrics with the 3 A's: actionable, accessible, and auditable.  Explain what this means and why numbers that don't meet these criteria are vanity metrics.

The 3 A's refer to metrics that are actionable (you can devise a clear course of action from them), accessible (easy to understand), and auditable (verifiable, so they can be trusted). Numbers that fail any of the three criteria, whether actionability, accessibility, auditability, or any combination of the three, are vanity metrics: numbers that can easily create a false sense of security or lead to false conclusions. Vanity metrics usually include gross totals and the like. Metrics that do meet the 3 A's are usually baseline metrics that test your leap-of-faith assumptions, or cohort metrics about customer usage.
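
To show the difference with a small example (the numbers below are invented, not real team data), compare a gross total with a per-week cohort metric: a running total of successful loads keeps climbing no matter what, while the weekly success rate reveals whether a given ramp change actually helped.

```python
# Made-up example data: each week is a "cohort" of loading attempts
# on the protobot. The numbers here are invented for illustration.
weeks = [
    {"week": 1, "attempts": 20, "successes": 8},
    {"week": 2, "attempts": 25, "successes": 11},
    {"week": 3, "attempts": 30, "successes": 12},
]

# Vanity metric: a gross total that always grows, whatever we change.
total_successes = sum(w["successes"] for w in weeks)
print("Total successful loads:", total_successes)  # looks like progress

# Cohort metric: success rate per week, which is actionable and auditable --
# it shows whether the change made in a given week actually helped.
for w in weeks:
    rate = w["successes"] / w["attempts"]
    print(f"Week {w['week']}: {rate:.0%} success rate")
```

The total looks like steady progress, but the weekly rates (40%, 44%, 40%) show the week 3 change did not actually improve loading, which is exactly the kind of false conclusion a vanity metric invites.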

Chapter 8: Explain the words "pivot" and "persevere" in the context of a team's way of doing things.

Pivot, in the context of our team, means to change something in our strategy, say a part on our robot or the subject of a grant application, while still holding on to our overall vision and using what we've learned. Persevere, on the other hand, means sticking to our current strategy and staying the course, while still using what we've learned.

Summary:

This section of the book featured chapters on how to account for a startup, how to test using an MVP, the difference between vanity and actionable metrics, and more. A lot of this part of the book was applicable to our robotics team, and some parts, like Kanban, I recognized as something we have used, at least partially. All in all, this section was informative and partly review, and it has taught me a lot about how to measure our team's success throughout the build season.


3 comments:

  1. I definitely agree with the Kanban issue of speed over anything else. The whole second half of the season was a continuous struggle of taking people off filler jobs and into more important tasks.

  2. I think that we could've learned about the back loading more quickly on our protobot, but we didn't test it quickly enough, due to how many people were stuck in their low-priority jobs.

  3. I agree with the explanations for pivot and persevere you put. It shows which one might be better to use.
