upstater1
That seems like a key piece of information, but it's also the only time the gauges seem to be internally inconsistent. Otherwise, the same-gauge variance looks pretty small, less than 0.15 PSI. (I'm getting that because the difference between the gauges is consistently between 0.3 and 0.45 PSI, so the same-gauge error is likely less than the 0.15 PSI spread between those two figures.) I'd like to know more about the conditions behind those three tests, particularly their timing relative to each other and to when the ball was brought inside.
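To make the bounding argument concrete, here's a quick sketch. The difference values are made-up numbers for illustration (not the report's actual table); the point is just that the width of the difference band caps the per-gauge random error.

```python
# If the logo-minus-non-logo difference stays within a narrow band across
# balls, the random error of each individual gauge is bounded by the width
# of that band. These difference values are hypothetical, for illustration.
diffs = [0.30, 0.35, 0.40, 0.45, 0.32, 0.38]  # assumed logo-minus-non-logo differences (PSI)

# Each difference is a fixed calibration offset plus the combined random
# noise of the two gauges, so the band width reflects that combined noise.
spread = max(diffs) - min(diffs)
print(f"spread of inter-gauge differences: {spread:.2f} PSI")

# A single gauge's random error is therefore roughly at most the spread.
same_gauge_error_bound = spread
print(f"implied per-gauge random error: < {same_gauge_error_bound:.2f} PSI")
```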
Honestly, the variance is the only part of the physics that I'm having trouble explaining. This article kind of hand-waves it away: the author explains that variance will be higher because more time passed while the Pats' balls were being measured, during a window when they were further from equilibrium and thus gaining pressure faster, but the numbers still don't add up. The min/max difference in the Pats' halftime readings is 1.35 PSI, which is larger than the effect of the transient warming curve even if you spread the measurements across the entire halftime (which certainly wasn't the case).
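For anyone who wants to check my arithmetic, here's a rough sketch of that warming-curve calculation. The room and field temperatures and the warming time constant are all assumptions on my part (real balls vary with wetness and handling); the ideal-gas part is just Gay-Lussac's law applied to absolute pressure.

```python
import math

ATM = 14.7  # atmospheric pressure, psia

def f_to_k(f):
    """Convert Fahrenheit to Kelvin."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

T_room = f_to_k(71.0)   # pre-game locker room temperature (assumed)
T_field = f_to_k(48.0)  # on-field game temperature (assumed)

# Gay-Lussac's law on absolute pressure: a ball gauged at 12.5 PSI indoors
# drops once it equilibrates to field temperature.
p_start = 12.5 + ATM
p_cold = p_start * T_field / T_room
print(f"gauge pressure at field temp: {p_cold - ATM:.2f} PSI")

# Transient warming back indoors, modeled as Newton's-law relaxation of the
# air temperature toward room temperature (tau is a guess).
tau_min = 15.0

def p_at(minutes):
    """Gauge pressure after the ball has warmed indoors for `minutes`."""
    T = T_room + (T_field - T_room) * math.exp(-minutes / tau_min)
    return p_start * T / T_room - ATM

# First vs. last ball measured during the halftime window.
early, late = p_at(2.0), p_at(13.0)
print(f"warming gain over the measurement window: {late - early:.2f} PSI")
```

Under these assumptions the warming curve buys you roughly half a PSI across most of halftime, which is well short of the 1.35 PSI min/max spread in the readings; that's exactly why the variance bothers me.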
So, yes, I get the logo vs. non-logo gauges, and I get the warming curve, and I get the difference in initial temperature assumptions. I feel safe in saying there are entirely reasonable scenarios where the average pressure of the Pats' balls can be explained perfectly well by physics. That's average, though, and the variance is harder for me to explain. The only thing I can think of that makes that work is that the initial variance was higher, but that directly challenges Anderson's statement that the initial pressure was consistently 12.5 PSI. I don't have a real problem saying that and think it's plausible, particularly since the initial pressures weren't written down; but it's a different type of argument than the rest of the physics corrections since everything else can be explained without challenging someone's testimony, only their assumptions and models.
The high variance of the measurements on the intercepted ball is useful, but it's not entirely consistent with the measurements of the other balls, so I'm tempted to attribute it to something like a few minutes elapsing between tests of the intercepted ball rather than to inaccuracy of the gauges (calibration aside).
Is there a mechanism that's been demonstrated that would account for high variance? Maybe something to do with how long the ball was in play or how wet one ball got compared to the rest?
Too many variables. Some balls were waterlogged, for instance, and used extensively in the field of play; others were drier and stayed in bags.
I'd also venture to say that it's asking a bit much for someone to be so exact on the measurements pre-game. I'm sure he was aiming for 12.50 and thought that 12.35 or 12.65 was just fine. AND, if we had all the Colts balls, your question would be answered more completely. Then we'd have another set for measuring variance.
Here's something else everyone seems to miss.
The report set aside one of the 4 Colts balls because its reading with the non-logo gauge was actually HIGHER than its reading with the logo gauge, by 0.45 PSI, while in every other measurement the logo gauge read higher than the non-logo gauge, for the Colts AND the Patriots balls.
So the 11 Patriots balls were measured against 3 Colts balls.