Monday, June 29, 2015

LS2 Nate D.

  • Intro: At what point in the build season did we find out how accurate our assumptions really were?  Was it possible to accelerate this?  Could the assumptions be broken down into small experiments?  If so, how?
  • We did not find out how accurate our assumptions were until our final bot was in the week zero competition.  We never had a fully functional bot before that point and had hardly driven.  We could have broken this down into a few smaller tests using a drive train and our prototype lifter.  We could have tested how maneuverable our bot was with a drive train and some weights to figure out how lining up the totes would go.  We could have factored in lifting and driving speed to give ourselves a quicker and more accurate estimate of points per round.

  • Intro: Explain the build-measure-learn feedback loop.  What is the purpose of this loop?  Why is it a loop?
  • The build-measure-learn feedback loop is used to iteratively improve a product.  The purpose is to put a product out there, see how it functions, and figure out ways to make it better.  It is a loop because it should be repeated over and over until you have a high-achieving product.  If you settle for "good enough," you will inevitably fall behind those who go the extra mile.

  • Chapter 7: Ries says that traditional accounting doesn't work for a startup.  By analogy, preset milestones during the build season (such as "we will be driving by the end of week 2 of build season" or "we will be able to score 20 points in a match by the week 0 competition") may not work either.  Do you agree?  What SHOULD we measure instead, starting with kickoff Saturday, to know we are actually making progress?
  • We should measure based on progress.  If we started by building a decent robot, we could use the build-measure-learn feedback loop to improve it into a high-scoring bot.  We could start with a bot that scores maybe 10 points by week two and attempt to improve it by 5-10 points per week by adjusting features on the bot.
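The week-by-week target described above can be sketched as a simple progress check.  This is only an illustration: the point values, week range, and match results here are assumptions, not actual team data.

```python
def weekly_targets(start_week=2, start_points=10, gain=5, end_week=6):
    """Return a {week: minimum target points} schedule, starting at
    start_points in start_week and growing by gain each week."""
    targets = {}
    points = start_points
    for week in range(start_week, end_week + 1):
        targets[week] = points
        points += gain
    return targets

def on_track(measured, targets):
    """Compare measured points per week against the minimum targets."""
    return {week: measured.get(week, 0) >= goal
            for week, goal in targets.items()}

targets = weekly_targets()           # {2: 10, 3: 15, 4: 20, 5: 25, 6: 30}
measured = {2: 11, 3: 14, 4: 22}     # made-up match results
print(on_track(measured, targets))
```

Checking a number like this after every practice match is one concrete way to run the build-measure-learn loop during a build season.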

  • Chapter 7: Going back to some of the leap-of-faith assumptions, what would some minimum viable products (MVPs) look like that could validate these assumptions?  What would we measure with the MVPs?  Think specific to last season's game.
  • Some MVPs would be drive trains and lifter ideas for last season's game.  We could prove the ideas to be effective or ineffective and improve on them without disassembling our whole robot.  We could measure things like speed, maneuverability, driver comfort, and precision.

  • Chapter 8: Explain the words "pivot" and "persevere" in the context of a team's way of doing things.
  • The basic definition of a pivot is a turn, and to persevere is to push onward.  In the context of a team, pivoting would involve taking the idea in a new direction: if a product was poorly received, you would want to shake things up to make it more appealing.  To persevere would be to continue with what you have been doing.  This is the attitude of "if it ain't broke, don't fix it," so you continue in a direction that has shown favorable outcomes.

  • End with a summary of what you learned.
  • I learned about the best ways to start and improve a product.  It is best not to pour massive amounts of resources into a perfect product that might turn out to be unwanted.  It is better to start with something low quality and improve it to meet the needs of the customer.  This shows actual growth and which features are positive and which are negative.  This process could be incredibly useful in robotics because we can diagnose a problem right away when we only change one thing at a time.

3 comments:

  1. I agree that the week zero competition could have been used to do a few tests of our bot in its current state, devising tests based on maneuverability and point scoring. Hopefully we'll have a chance to do something similar to this next year.

  2. I agree we should have done specific component testing at the beginning of the season to get accurate judgments of our robot's capabilities. Doing this instead of waiting for a competition would have been more efficient.

  3. I found it funny how true it was that we didn't get driving practice until the end of the season. I also agree with your list of MVPs.
