1) That's why the range is so large: to adjust for bad tests and other errors in statistical sampling. But even the low ends of the estimated ranges are many multiples of the official tallies.
2) As a general rule in this type of testing, you are far, far more likely to have false negatives than false positives. The Stanford researchers confirmed this when they did their own validation.
1) Yes, but the assumptions made to adjust for sampling error may render the analysis inaccurate or misleading. For example, they heavily weighted a convenience sample to try to make it representative, but certain substrata may be underpowered within the sample for that kind of adjustment. Still, that's not really that big a deal; some of the implied multipliers are implausible, but the issues with the test kits are the big problem.
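A minimal sketch of that reweighting concern, in Python. The strata, counts, and positive rates below are invented for illustration; the study's actual weighting scheme was more elaborate:

```python
# Post-stratification: reweight a convenience sample so each stratum's share
# matches its population share. All numbers below are made up for illustration.
population_share = {"zip_A": 0.50, "zip_B": 0.30, "zip_C": 0.20}
sample_counts   = {"zip_A": 2000, "zip_B": 1000, "zip_C": 330}   # n = 3330
positives       = {"zip_A": 20,   "zip_B": 10,   "zip_C": 20}

n = sum(sample_counts.values())
raw = sum(positives.values()) / n
weighted = sum(
    population_share[s] * positives[s] / sample_counts[s] for s in population_share
)
print(f"raw prevalence:      {raw:.2%}")       # ~1.50%
print(f"weighted prevalence: {weighted:.2%}")  # ~2.01%

# zip_C carries 20% of the weighted estimate on only 330 respondents, so a
# handful of (possibly false) positives there moves the headline number --
# the "underpowered substrata" worry above.
```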
2) False negatives don't matter much here, but false positives matter a lot, and the issue is not just accuracy, it's specificity. If I take 50 people that I know are infected with covid and give them a serological test for chicken pox antibodies, it's almost certainly going to come back close to 100% positive, since nearly every adult carries varicella antibodies. Would my chicken pox antibody test then be a 100% valid and accurate test for covid antibodies? This is an exaggeration, but it's important because the closeness of the endemic coronaviruses to the novel coronavirus means that you could end up picking up antibodies to the former if the test is not specific enough. This isn't just an academic concern; it was a big problem with SARS serological testing (see "Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period"). And again, these are Chinese-manufactured tests that have not been approved by China's own FDA, so it's kind of the wild west.
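To make the specificity arithmetic concrete, here is the standard identity relating apparent positive rate to true prevalence, sensitivity, and specificity, as a sketch; the prevalence and sensitivity values are assumptions for illustration, not the kit's measured performance:

```python
def apparent_positive_rate(prevalence, sensitivity, specificity):
    """Expected fraction of positive tests: true positives plus false positives."""
    return prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Illustrative numbers only: at 1% true prevalence, a one-point drop in
# specificity (99.5% -> 98.5%) nearly doubles the apparent positive rate,
# because false positives come from the ~99% of people who are NOT infected.
for spec in (0.995, 0.985):
    rate = apparent_positive_rate(prevalence=0.01, sensitivity=0.80, specificity=spec)
    print(f"specificity {spec:.1%}: apparent positive rate {rate:.2%}")
```

When true prevalence is low, the false positive term dominates, which is why cross-reactivity with endemic coronaviruses would be so damaging here.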
The false positive rate claimed by the kit manufacturer was 2 out of 401 (these samples were taken before the novel coronavirus existed, so they must be negative), and that is the figure the Stanford researchers relied on. But that rate is itself a point estimate from a single validation sample, and its confidence interval contains false positive rates high enough to explain away every positive result in the Stanford study.
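As a rough check on that claim, here is a sketch of the one-sided 95% (Clopper-Pearson) upper bound on a 2-out-of-401 false positive rate, using scipy; the sample sizes are the ones quoted above:

```python
from scipy.stats import beta

fp, n_validation = 2, 401   # manufacturer's false positives / known-negative samples
n_tested = 3330             # participants in the Stanford sample

point_estimate = fp / n_validation
# Exact (Clopper-Pearson) one-sided 95% upper bound on the false positive
# rate: the 0.95 quantile of a Beta(fp + 1, n - fp) distribution.
upper_bound = beta.ppf(0.95, fp + 1, n_validation - fp)

print(f"point estimate {point_estimate:.3%} -> ~{point_estimate * n_tested:.0f} expected false positives")
print(f"95% upper bound {upper_bound:.3%} -> ~{upper_bound * n_tested:.0f} expected false positives")
# ~0.50% -> ~17, versus ~1.6% -> ~52: the upper bound alone is enough to
# cover all 50 positives reported in the study.
```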
There's a more comprehensive explanation of the testing error issue here:
Counting rare things is hard
A rate of 50 positives out of 3330 healthy people is high: if true, it would imply COVID-19 was much more common and therefore much less serious than we thought. The researchers used a test that had given 2 positive results out of 401 samples known to be negative (because they were taken before the pandemic started). If the false positive rate was exactly 2/401, you’d get 0.005×3330 false positives on average, or only about 17, leaving 33 true positives. But 2/401 is an estimate, with uncertainty. If we assume the known samples were otherwise perfectly representative, what we can be confident of with 2 positives out of 401 is only that the false positive rate is no greater than 1.5%. But 1.5% of 3330 is 50, so a false positive rate of 1.5% is already enough to explain the results! We don’t even have to worry if, say, the researchers chose this test from a range of competitors because it had the best supporting evidence and thereby introduced a bit of bias.
On top of that, the 3330 people were tested because they responded to Facebook ads. Because infection is rare, you don’t need to assume much self-selection of respondents to bias the prevalence estimate upwards. You might be surprised to see me say this, because yesterday I thought voluntary supermarket surveys were a pretty good idea. They are, but they will still have bias, which could be upwards or downwards. We wouldn’t use the results of a test in a few supermarkets to overturn the other evidence about disease severity; we want to use them to start finding undetected cases — any undetected cases.
The obvious way to test these results is just to do serological testing in New York City. If the magnitudes are really "20 to 30x higher than case estimates," then half the city or more has already been exposed, so testing 1,000 people should get you hundreds of positives, well beyond any uncertainty over the test kits' false positive rates. If it only gets you a few dozen, well...
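A back-of-the-envelope version of that proposed check, reusing the false-positive ceiling computed above; the 80% sensitivity is an assumption for illustration:

```python
from scipy.stats import beta

# False positive ceiling from the 2/401 validation data (one-sided 95% bound).
fp_ceiling = beta.ppf(0.95, 2 + 1, 401 - 2)   # ~1.6%

n = 1000            # hypothetical NYC sample size from the comment above
sensitivity = 0.80  # assumed for illustration

for true_prev in (0.00, 0.05, 0.30, 0.50):
    expected = n * (true_prev * sensitivity + (1 - true_prev) * fp_ceiling)
    print(f"true prevalence {true_prev:>4.0%}: ~{expected:.0f} positives")
# 0% prevalence yields ~16 positives even at the ceiling; 30-50% prevalence
# yields hundreds -- the two outcomes are distinguishable at a glance.
```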