Monday, July 6, 2015

LS2 John

    Intro: At what point in the build season did we find out how accurate our assumptions really were?  Was it possible to accelerate this?  Could the assumptions be broken down into small experiments?  If so, how?

Not until we had a ‘working’ prototype did we know how accurate we were.   It could perhaps have been accelerated by breaking down the elements of a cycle.  For example, knowing our drive speed would help us determine travel time.  Setting up a prototype tote-loaded wheeled platform and seeing how fast we could move and turn while maintaining control would be helpful.
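As a rough illustration of how drive speed feeds into travel time, here is a small sketch. All the numbers (top speed, acceleration, leg distance) are made-up assumptions for the example, not measured values from our robot.

```python
# Rough cycle-leg time estimate from assumed drive characteristics.
# All numbers below are illustrative assumptions, not measurements.

def travel_time(distance_ft, top_speed_fps, accel_fps2):
    """Time to cover a distance with a simplified trapezoidal profile:
    accelerate to top speed, then cruise (stopping time ignored)."""
    # Distance covered while accelerating up to top speed
    accel_dist = top_speed_fps ** 2 / (2 * accel_fps2)
    if distance_ft <= accel_dist:
        # Never reach top speed: d = (1/2) * a * t^2
        return (2 * distance_ft / accel_fps2) ** 0.5
    accel_time = top_speed_fps / accel_fps2
    cruise_time = (distance_ft - accel_dist) / top_speed_fps
    return accel_time + cruise_time

# Assumed: 12 ft/s top speed, 8 ft/s^2 acceleration, 25 ft leg
print(round(travel_time(25, 12, 8), 2))  # -> 2.83 seconds
```

Even a crude estimate like this, checked against a stopwatch on a real drive base, would tell us early whether our assumed cycle times were plausible.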

Intro: Explain the build-measure-learn feedback loop.  What is the purpose of this loop?  Why is it a loop?

The loop is where you decide what it would be useful to learn, determine what measurement will help you learn it, and then build something that supports making that measurement.  This is working the loop 'backward'; from there you can actually build, measure, and learn.   Then, based on what you have learned, you come up with new ideas, which lead to new things to learn (or validate), and the process repeats.
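The backward-then-forward structure above can be sketched as a toy loop. The hypotheses here are hypothetical examples drawn from this post (tote toppling, driving by camera), not an actual team plan:

```python
# Toy sketch of working the build-measure-learn loop "backward":
# start from what we want to learn, derive the measurement, then
# build only enough to take that measurement. Entries are hypothetical.

hypotheses = [
    {"learn": "Can we move a six-tote stack without toppling it?",
     "measure": "max turn rate before the stack falls",
     "build": "weighted wheeled platform with stacked totes"},
    {"learn": "Can drivers operate with direct vision obscured?",
     "measure": "cycle time using only a camera feed",
     "build": "bare drive base with a mounted camera"},
]

for h in hypotheses:
    # Backward: learn -> measure -> build
    print("To learn:", h["learn"])
    print("  we measure:", h["measure"])
    print("  so we build:", h["build"])
    # Forward: build it, take the measurement, and let what we learn
    # generate the next hypothesis -- which is why it is a loop.
```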

Chapter 7: Ries says that traditional accounting doesn't work for a startup.  By analogy, preset milestones during the build season (such as "we will be driving by the end of week 2 of build season" or "we will be able to score 20 points in a match by the week 0 competition") may not work either.  Do you agree?  What SHOULD we measure instead, starting with kickoff Saturday, to know we are actually making progress?

Not sure I know the answer to this one.   Typically, my FRC experience tells me that we should analyze the game, determine the possible skills our robot could exhibit, and assess which skills are both within our capabilities and of sufficient value.   After that, we need to start nailing down hard-to-change specifics, such as drive type and frame size, while concurrently prototyping several alternative approaches for game-piece manipulators.

Chapter 7: Going back to some of the leap-of-faith assumptions, what would some minimum viable products (MVPs) look like that could validate these assumptions?  What would we measure with the MVPs?  Think specific to last season's game.

A 2015 MVP robot would literally just need to be able to drive and push game pieces around.   If you can push a tote onto a scoring platform, you have a product that can contribute.   We could measure maneuverability and the ability to drive over scoring platforms head-on, diagonally, and lengthwise.   By simply adding a driving camera, we could determine how easily drivers can operate when their direct vision is obscured and how well they can avoid collisions with scored stacks that can't be directly seen.  It should be noted that an MVP to play at a regional is not the same as an early-build-season MVP given to drivers and programmers for learning and experimenting; for the latter, being able to push totes isn't even a requirement.

Chapter 8: Explain the words "pivot" and "persevere" in the context of a team's way of doing things.

An FRC team can persevere or pivot in many ways.   With respect to build season, we can pivot on some fundamental design choices (as long as we have enough runway, which in our case is mostly time).  Another way we could pivot is by expanding the focus of the team into aspects not directly related to build/compete: for example, mentoring FLL and/or FTC teams, or hosting and helping at events that serve other teams, such as MN Splash, GoFirst's Summer Robotics Summit, or the FRC kickoff.   We could also 'pivot' by focusing on raising significantly more funds to support attending a second regional, or even trying to get a slot at the FRC Championship.   Or we could pivot to attract a wider diversity of BHS students, not just those 'naturally' drawn to Robotics, but those with other fields of interest who would help make the team more robust.

Summary: 
  These chapters helped me realize that there are useful things to be done that are seemingly not on the 'direct' path of building the robot.  For example, we could have created a simple wheeled platform, stacked five or six totes on it, and pushed and turned it manually to see how easily the totes toppled.   Then we could have determined ways to avoid toppling that were minimal in weight and easy to maintain, such as the fiberglass rods we eventually used at the top. 
We need to apply more of the scientific method to test and validate early on. 
I realize that one notion in the book doesn't directly apply to robot building (though it does apply to other parts of the team, such as fundraising): we don't have a large and growing customer base to consider.  We have our drivers and the co-competitors we are trying to impress at competition.

1 comment:

  1. The small batches and small experiments will complement each other. It would be extremely helpful to test subsystems as you suggest rather than waiting to pull the cover off an entire robot to start testing.
