Monday, July 6, 2015

Jacob LS2

  • Intro: At what point in the build season did we find out how accurate our assumptions really were?  Was it possible to accelerate this?  Could the assumptions be broken down into small experiments?  If so, how?
A lot of our assumptions were only put to the test during the final push to assemble the robot.  We could have tested most of them earlier if we hadn't gotten bogged down in smaller things.  A notable assumption that followed this pattern was that our bot would be unstable with a standard wheelbase.  We never actually tested this until we had the final robot, since our prototype couldn't hold six totes and some math supported the assumption.  It could have been tested several different ways, including an A/B test with two wooden wheelbases.  Alarmingly, some of our biggest assumptions were not tested until the actual competition.  In particular, our stacking strategy (one on bottom, always lift second) was adopted only after our "tote lands flat" assumption failed.  That assumption was "tested" on our homemade wooden chute, which we knew was not comparable, but we wanted the totes to land flat, so when they did, we were happy and left it at that.
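For illustration, here is a minimal sketch (in Python) of how the data from such an A/B test might be analyzed.  Every number in it is invented purely as an example, not real test data.

    import math

    # Hypothetical A/B test: tip-over counts for two wooden wheelbase
    # prototypes, each loaded with six totes. All numbers below are
    # invented for illustration only.
    trials = 30        # stacking runs per wheelbase
    tipped_a = 10      # tip-overs with the standard wheelbase
    tipped_b = 3       # tip-overs with a widened wheelbase

    p_a = tipped_a / trials
    p_b = tipped_b / trials
    p_pool = (tipped_a + tipped_b) / (2 * trials)

    # Two-proportion z-test: is the difference bigger than chance noise?
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / trials))
    z = (p_a - p_b) / se
    print(f"tip rate A = {p_a:.0%}, B = {p_b:.0%}, z = {z:.2f}")
    # |z| > 1.96 would suggest a real difference at roughly 95% confidence.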
  • Intro: Explain the build-measure-learn feedback loop.  What is the purpose of this loop?  Why is it a loop?
The build-measure-learn feedback loop is designed to let a business rapidly test hypotheses and adjust a product toward its best form as quickly as possible.  The business builds a prototype, measures it against the hypothesis, and then makes changes based on what is learned.  It is a loop because each round of learning produces new hypotheses: the changes you make must themselves be built and measured, so the cycle repeats until the product is where you want it.
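The shape of the loop can be sketched in code.  This is just a schematic, not anything specified in the book; the three functions are hypothetical stand-ins for whatever the build, measure, and learn steps actually involve.

    # Schematic of the build-measure-learn loop. The functions are
    # hypothetical placeholders for real activities.

    def build(hypothesis):
        # Build the cheapest prototype that can test the hypothesis.
        return {"hypothesis": hypothesis, "prototype": "wooden mock-up"}

    def measure(prototype):
        # Run the experiment and record whether the data supported it.
        return {"supported": False, "notes": "totes did not land flat"}

    def learn(hypothesis, results):
        # Decide what to try next based on what the data showed.
        if results["supported"]:
            return hypothesis                # persevere: keep refining
        return "revised: " + hypothesis      # pivot: change course

    hypothesis = "totes land flat off the chute"
    for iteration in range(3):               # in practice, loop until confident
        prototype = build(hypothesis)
        results = measure(prototype)
        hypothesis = learn(hypothesis, results)
        print(f"iteration {iteration}: next hypothesis -> {hypothesis}")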
  • Chapter 7: Ries talks about metrics with the 3 A's: actionable, accessible, and auditable.  Explain what this means and why numbers that don't meet these criteria are vanity metrics.
In order for a report to be considered actionable, it must demonstrate clear cause and effect.  The data has to be tied to a particular change and how that change affected the outcome, rather than just the outcome as a whole.  To be accessible simply means that anyone could look at the data and draw a conclusion: the numbers are presented in an understandable way and their meaning is clear.  In order for something to be auditable, it needs to be provable; if you can check it by actually asking the customers questions, it is probably auditable.  If there is no identifiable cause for a change in the data (or the lack of one), if the data is hard to understand, or if it is something that can't be verified, it has significantly less meaning than data that meets all three requirements.  In other words, it's great to feel like you're doing well based on data, but if you don't know why, and can't test why, the data is essentially pointless.
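One concrete way to see the difference is Ries's cohort analysis: instead of watching one ever-growing total, split the data by when each group arrived and compare the groups.  Below is a small Python sketch; all the numbers are made up just to show the contrast.

    # Invented data: stacking test runs grouped by the week they happened.
    runs_by_week = {
        "week 1": {"attempts": 20, "successes": 8},
        "week 2": {"attempts": 25, "successes": 10},
        "week 3": {"attempts": 30, "successes": 12},
    }

    # Vanity metric: a running total only ever goes up, so it always
    # "looks good" even when nothing is actually improving.
    total = sum(week["successes"] for week in runs_by_week.values())
    print(f"total successful stacks: {total}")

    # Actionable metric: the per-cohort success rate is stuck at 40%
    # every week, so whatever changed between weeks made no difference
    # -- a fact the running total hides.
    for name, week in runs_by_week.items():
        print(f"{name}: {week['successes'] / week['attempts']:.0%} success rate")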
  • Chapter 8: Explain the words "pivot" and "persevere" in the context of a team's way of doing things.
On our team, to pivot is to change an aspect of the design, or of the design process, of our robot.  Pivots are bigger than small tweaks such as changes in cam designs, and generally have to have some data backing them up.  A good example of this is when we threw out the idea of field loading.  To persevere is to simply keep working to improve your current idea.  We like this option, since it makes us feel like our idea is a good one, and the team has both suffered and improved because of this tendency.  A good example of persevering is how we never swayed from using the chute.
  • Chapter 8: Working rapidly to get the first working product (MVP) is seen as a good thing, but has limitations.  How do you know when to work cheap/fast and when to slow it down and make a higher quality product?
MVPs are all about testing hypotheses and figuring out how to improve based on the results.  This means that once you reach a point where you have settled on an overall design and done several iterations of minor changes, you should start making quality parts, since they might actually be used later on.  For our team, quality work should start happening before driving begins, but should become the main focus once driving starts.  Basically, we should be testing constantly right up until time constraints on driver practice force us to make the robot competition-ready.
  • Summary
From this section I learned three key things: get as much testing done as possible, be prepared to change direction if necessary, and develop a way to measure progress that actually represents something.  The continuous testing of hypotheses allowed many of the case studies to get off the ground and reach the point of pivoting.  While pivoting can feel like failure, it is almost always required to improve your business as a whole, and it should always be backed by data with real meaning.  If data can't be tied to a specific cause, then you really don't know what is driving the change and can't make sound business decisions based on it.

2 comments:

  1. Very insightful assessment regarding finding out if our assumptions were accurate. Unfortunately, unless we are willing and able to build field elements to actual competition specs (with proper materials, correct carpet for tote landing, etc.), we will always have to make do with an approximation. I think we should have pre-considered a 'plan B' in the event totes did not land flat, so that we had the 'start with yellow, lift second tote' option already in mind once we saw how a competition field behaved.

  2. I think the assumptions were not tested well. I did not think of the chute example. This could have been fixed not only by more realistic testing, but also by using our solution the first time we found it instead of refining dimensions.
