Hi---I know many already understand this, but I want to summarize it so people can spread the word easily. I talked at length tonight with a Redskins fan who was already sort of on the Pats' side, and once he heard everything he was convinced the case is lame. There are many worthwhile details to note, but here is the CRUX to push when people are making the case---we need to pass this on far and wide. It's late now, and I'll try to give the details from the report in the next day or two, but this is the gist:
1) Okay, it seems clear to me that none of the alleged "probable" evidence matters without the PSI data. Wells claims the data is problematic for the Patriots for two key reasons: a) the Pats' pregame-to-halftime drop was about .3 or .4 more than the science would predict, and b) it was about double the Colts' drop. For the sake of argument, let's say the Pats dropped about 1.4 when it should have been about 1, and the Colts dropped about .6 (THESE NUMBERS SPEAK VOLUMES---keep them in mind)
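For anyone who wants to check the "should have been about 1" figure: it comes from the Ideal Gas Law applied to absolute pressure. Here's a minimal sketch. The ~70°F locker room and ~48°F field temperatures are my own illustrative assumptions (ballpark game-day conditions), not figures taken from the report:

```python
# Expected halftime PSI drop from temperature alone (Ideal Gas Law).
# Assumed conditions (illustrative, not from the Wells report):
# pregame locker room ~70 F, halftime field ~48 F.

ATM = 14.7  # atmospheric pressure in psi; gauges read pressure above this

def f_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

def halftime_psi(pregame_psi, t_pregame_f, t_halftime_f):
    """Gauge pressure at halftime, assuming only the temperature changes."""
    p_abs = pregame_psi + ATM  # convert gauge pressure to absolute
    p_cold = p_abs * f_to_kelvin(t_halftime_f) / f_to_kelvin(t_pregame_f)
    return p_cold - ATM

drop = 12.5 - halftime_psi(12.5, 70, 48)
print(round(drop, 2))  # a bit over 1 psi from weather alone
```

So roughly a 1 PSI drop falls out of the physics before anyone touches a needle.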
2) One gauge consistently measured .3-.45 PSI higher than the other.
3) (HUGE) The refs said they were pretty sure that pre-game they measured with the lower gauge first, the balls were overfilled, and then the second ref used the higher gauge to let some air out and get them to 12.5. (NOTE: If the higher gauge was used second---as the refs said they thought it was---then a 12.5 on that gauge would read more like 12.1 on the lower gauge.) They don't know which gauges were used in what order at halftime.
4) The league then steered this account toward the conclusion that the lower gauge was probably used second pre-game.
5) Return to 1 above. If the higher gauge was used second pre-game, as the refs said was likely until the league steered them the opposite direction, then a ball set to 12.5 pre-game would measure 1.4 lower at halftime on the low gauge (1 for normal weather, .4 for the gauge difference). This by itself would account for the "too much" deflation beyond what the science predicts. Similarly, if the Colts were measured pre-game with the lower gauge and at halftime with the higher, a 1 PSI drop would register as only .6. So the Pats' 1.4 would really be 1, and the Colts' .6 would really be 1. They would be the same, AND the Pats' balls would ALL be within the acceptable pre-game/halftime differential range (4 of the 12 already were on both gauges!)
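The swap arithmetic above is easy to verify. A minimal sketch, assuming a constant .4 PSI offset between the two gauges (the middle of the .3-.45 range) and a true weather-driven drop of 1.0 PSI for both teams---both numbers are the for-the-sake-of-argument figures from point 1, not exact report values:

```python
# How using different gauges pre-game vs. halftime distorts the apparent drop.
# Assumptions (illustrative): the high gauge reads 0.4 psi above the low
# gauge, and the true weather-driven drop is 1.0 psi for everyone.

OFFSET = 0.4     # high gauge reads this much above the low gauge
TRUE_DROP = 1.0  # actual pressure loss from temperature alone

def apparent_drop(pregame_gauge, halftime_gauge):
    """Measured drop when pre-game and halftime use different gauges."""
    pregame_reading = 12.5  # balls set to 12.5 on whichever gauge was used
    # Express everything on the low-gauge scale to expose the distortion.
    true_pregame = pregame_reading - (OFFSET if pregame_gauge == "high" else 0)
    true_halftime = true_pregame - TRUE_DROP
    halftime_reading = true_halftime + (OFFSET if halftime_gauge == "high" else 0)
    return round(pregame_reading - halftime_reading, 2)

pats = apparent_drop("high", "low")   # refs' original pre-game account
colts = apparent_drop("low", "high")
print(pats, colts)  # 1.4 and 0.6: identical true drops, swapped gauges
```

Identical 1.0 PSI drops come out looking like 1.4 and .6---exactly the two numbers Wells calls suspicious.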
Now, the halftime gauge situation was left undetermined (I'll have to check this again), but the switch in the pre-game account is clear cut. Therefore, it is "more probable than not" that the higher reading was given pre-game, and 50-50 that the lower gauge was used at halftime. BUT, the .4 of extra deflation can only be explained by 1) the Pats tampering with the balls after the pre-game measurement, or 2) the scenario described above (high gauge used pre-game, low gauge at halftime). It seems to me that if one scenario involves cheating and the other is explained by ordinary events, it is "more probable than not" that the ordinary-events explanation describes the actual state of affairs (Occam's razor).
So, on the PSI alone, we should say that if the steering had not occurred, it was at best more probable than not that the PSI readings were within range, and at worst it was inconclusive and no judgment could be made in either direction. In other words, the most straightforward, original account by the refs actually supports the opposite of what Wells claims in terms of "probability."
Period, end of story, nothing more need be said (though if one is really bored, many other reasonable rebuttals/interpretations in favor of the Pats could be given---but it's not necessary; the most natural reading of the PSI data already tells the story).