Sunday, July 5, 2015

LS2 Chad


Intro: Explain the build-measure-learn feedback loop.  What is the purpose of this loop?  Why is it a loop?

The loop part of the build-measure-learn feedback loop is the most important part. The loop allows the startup to form an idea about how to serve a customer, build a quick product or product update, develop a measurement to determine whether the update meets the customer's need, and then learn whether it actually does. If the loop is not completed, the team doesn't know whether the change was effective, or doesn't know which aspect of the change was critical to the success or failure of the test. In many cases, teams forfeit the learning opportunity of this loop by just building something, sending it out, and never following up with a measurement to determine success.

Some disciplines effectively use a version of build-measure-learn that, while producing a very reliable product, may actually increase the time to market.

Engineering is one such discipline: it uses build-measure-learn but can delay critical customer feedback by striving for perfection. Often engineers visit job sites to measure the loads going into a machine and learn how a product is used. The engineer then goes back to the office and virtually builds a machine for a virtual test. In simulating the stresses in the structures, there is a lot of hypothesis testing, such as what the stress does when a radius is changed. Once this is done, the product is built and tested again in actual working environments: measure, learn, make more changes.

This all sounds great, and it is part of a robust build-measure-learn feedback loop. However, in many cases the tests are done with the objective of learning about very specific aspects of the design, not whether the customer would buy it. Professional operators are used, and controlled-access test sites are selected. Because of the long cycle, these products accumulate many enhancements, making the measurement of the effect of any one feature very difficult.

Chapter 7: Ries says that traditional accounting doesn't work for a startup.  By analogy, preset milestones during the build season (such as "we will be driving by the end of week 2 of build season" or "we will be able to score 20 points in a match by the week 0 competition") may not work either.  Do you agree?  What SHOULD we measure instead, starting with kickoff Saturday, to know we are actually making progress?

I debated writing on this question because, as I read this section, I kept coming back to how we put measurements on the season, and whether that should cover just the build season or also include the preseason. It is the direct question of what success is and how it should be measured.

At our introduction session for the summer leadership course, the students said that learning and preparation for success in a career is the objective of FRC. The number of points, or whether the robot is driving or not, is a very poor measurement of that success. A structured preseason self-assessment with a quick weekly survey would be a potential measure. One question could be whether the participant (mentors included) completed the self-assessment; this measures engagement. Another question could be whether the participant got an opportunity to improve a key skill this week. To acknowledge that the preseason responses could not have captured all chances to learn, a question could ask whether the participant learned a skill they did not expect to learn at the beginning of the season. These responses could be graphed in a funnel chart.
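To make this concrete, here is a minimal sketch of how the weekly responses could be tallied for a funnel chart. The question keys and response data below are invented for illustration; a real survey would have its own fields.

```python
from collections import Counter

# Hypothetical weekly survey responses, one dict per participant.
# The question keys and answers are invented for illustration.
week_responses = [
    {"completed_assessment": True,  "improved_key_skill": True,  "unexpected_skill": False},
    {"completed_assessment": True,  "improved_key_skill": True,  "unexpected_skill": True},
    {"completed_assessment": True,  "improved_key_skill": False, "unexpected_skill": False},
    {"completed_assessment": False, "improved_key_skill": False, "unexpected_skill": False},
]

def funnel_counts(responses):
    """Count how many participants answered 'yes' to each question."""
    counts = Counter()
    for response in responses:
        for question, answer in response.items():
            counts[question] += bool(answer)
    return counts

counts = funnel_counts(week_responses)
total = len(week_responses)
for question in ("completed_assessment", "improved_key_skill", "unexpected_skill"):
    print(f"{question}: {counts[question]}/{total} ({100 * counts[question] / total:.0f}%)")
```

Run weekly, those three percentages become the layers of the funnel chart and show engagement trends across the preseason.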

The question I struggle with is what the alternative is to "driving by" and "score so many points" style milestones and metrics. Or are point scores even a bad milestone? While sitting in the back yard playing with the dog, Aren and I discussed applying the Kanban system to the robot design. "Line the hall with modular prototypes" was Aren's comment. A possible measure of how the season is going could be a Kanban bucket chart: backlog, in progress, built, and validated. Each design in the Kanban buckets would have a point improvement associated with it, so we would have a way to not lose track of the estimated points. Ries basically explains how the team develops a takt time, and the time each concept spends in the buckets evens out. Takt time is the elapsed time to complete each unit of work. In this case, backlog would be how long the idea has been known, in progress would be how long it takes to produce the build plan, and built and validated are obvious.
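As a rough sketch of what that bucket chart could look like in practice, assume each design concept carries an estimated point improvement and a date it entered its current bucket. All the concept names, dates, and point values below are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

BUCKETS = ("backlog", "in_progress", "built", "validated")

@dataclass
class Concept:
    name: str
    bucket: str           # current Kanban bucket
    entered_bucket: date  # when the concept entered its current bucket
    est_points: float     # estimated match-point improvement

# Invented example concepts for illustration.
concepts = [
    Concept("wide intake",       "validated",   date(2015, 7, 1),  4.0),
    Concept("two-speed drive",   "built",       date(2015, 7, 3),  6.0),
    Concept("auto-align vision", "in_progress", date(2015, 6, 28), 8.0),
    Concept("lighter elevator",  "backlog",     date(2015, 6, 20), 3.0),
]

today = date(2015, 7, 5)
for bucket in BUCKETS:
    group = [c for c in concepts if c.bucket == bucket]
    points = sum(c.est_points for c in group)
    days = [(today - c.entered_bucket).days for c in group]
    avg_days = sum(days) / len(days) if days else 0.0
    print(f"{bucket:12s} {len(group)} concepts, {points:4.1f} est. points, "
          f"avg {avg_days:.1f} days in bucket")
```

The average days-in-bucket column is what would settle toward a steady takt time as the season goes on, and the points column keeps the estimated scoring value visible in every bucket.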

Let's not forget our other disciplines. Outreach can have multiple funnel charts. Funding is contacts, contacts expressing interest, pledged, and funded. For community awareness it might be more difficult to develop a metric that is not a vanity metric.
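For the funding funnel, the useful number is the stage-to-stage conversion rather than the raw counts. A tiny sketch, with invented pipeline counts:

```python
# Hypothetical funding pipeline; all counts are invented for illustration.
stages = [
    ("contacts",           60),
    ("expressed interest", 25),
    ("pledged",            12),
    ("funded",              9),
]

# Stage-to-stage conversion shows where the pipeline leaks,
# which is exactly what a funnel chart makes visible.
prev = stages[0][1]
for name, count in stages:
    print(f"{name:18s} {count:3d}  ({100 * count / prev:.0f}% of previous stage)")
    prev = count
```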

PLEASE COMMENT ON THIS!

Chapter 7: Ries talks about metrics with the 3 A's: actionable, accessible, and auditable.  Explain what this means and why numbers that don't meet these criteria are vanity metrics.

Actionable, accessible, and auditable metrics are metrics that can be easily found and understood by team members (accessible), that measure a clear cause and effect for an action taken or an action that needs to be taken (actionable), and that can withstand credible review (auditable). Metrics that cannot demonstrate clear cause and effect are vanity metrics.

Metrics like the number of design iterations or outreach contacts made are vanity metrics because the effects measured would be correlational at best. Mr. Pethan's suggestion of an outreach metric counting the number of donations under $50 could be a good metric because different outreach strategies could be measured for cause and effect.
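A small sketch of the difference, using an invented outreach log; the strategies, donation amounts, and reach numbers are all made up for illustration.

```python
# Each entry: (strategy, donation amounts received, people reached).
# All data is invented for illustration.
outreach = [
    ("demo at library", [20, 35, 10, 45],  80),
    ("school assembly", [15, 25],         150),
    ("mailed flyers",   [40],             300),
]

# Vanity metric: one big cumulative number with no cause and effect.
total = sum(sum(donations) for _, donations, _ in outreach)
print(f"Total raised: ${total}  <- vanity: says nothing about what worked")

# Actionable metric: small donations (< $50) per 100 people reached,
# split by strategy, so each strategy's cause and effect is visible.
for strategy, donations, reached in outreach:
    small = sum(1 for d in donations if d < 50)
    print(f"{strategy:16s} {100 * small / reached:.1f} small donations per 100 reached")
```

Because the metric is split by strategy (and could be split further by event or by week), it is actionable; because it is a simple count of real donations, it is accessible and auditable.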

Chapter 8: Explain the words "pivot" and "persevere" in the context of a team's way of doing things.

If the team were using a Kanban metric and making incremental progress with each bucket full of ideas, then a persevere strategy might be sound. If the Kanban buckets were empty or very low and progress was stagnant, then a pivot might be required. Without a good build-measure-learn feedback loop, a pivot-versus-persevere decision will be difficult to make.

Chapter 8: Working rapidly to get the first working product (MVP) is seen as a good thing, but has limitations.  How do you know when to work cheap/fast and when to slow it down and make a higher quality product?

If the MVP isn't of sufficient quality to engage your customers, then the measurements may not be accurate, even with early adopters. As the startup gains critical mass and moves from early adopters to more mainstream customers, the mainstream customers may get frustrated with cheap, fast, low-feature products.


End with a summary of what you learned.

My key takeaway is that the mentors need to help the teams develop metrics other than "yeah, it worked" to determine success. I also think that for the robot design these metrics may not be easy to construct without falling back on vanity metrics. Based on my comments, you should also assume that I really like the Kanban approach to concept development.

4 comments:

  1. I was thinking of commenting on your answer before I got to your plea to comment. I completely agree with using some accounting method for things other than those directly related to the robot build. We have some events coming up, and we can try multiple ways to engage the public that attends these events and measure how many people are willing to provide contact information, how many ask for more information, and how many extended conversations we have (and how long they are). Determine which methods (or which person's approach) work best and then refine from there.

  2. I think that reworking our measurement system is a great idea. If we just focus on getting the robot to score "X" points or to be driving by a certain date, we are not really measuring the mission to teach kids about STEM and prepare them for careers. I do, however, believe that these milestones should be included in there somewhere. Deadlines and goals are everywhere in the real world, and students definitely learn about real careers when they adhere to them. Based on this, I think that the best scoring system would include some major "standard" milestones and goals combined with a measure of involvement in the effort to reach them. As long as people put in the effort, they will learn about STEM while preparing for real-world challenges, which is the point of FRC.

  3. This comment has been removed by the author.

  4. Jacob, for sure we need milestones. We should think carefully about the order of the milestones. Maybe the substructure is a low priority, but the material handling or the powertrain is the first milestone. Then build some measurements around achieving the milestones, or even around whether they are the correct milestones.
