**tl;dr: based on test characteristics, I suspect this study overestimates historical infections**
As the study emphasizes, the bottom line estimates depend a lot on the test characteristics.
The estimated prevalence would plummet with even a **very** small overestimation of the specificity, i.e. a low number of false positives:
>For example, if new estimates indicate test specificity to be less than 97.9%, our SARS-CoV-2 prevalence estimate would change from 2.8% to less than 1%, and the lower uncertainty bound of our estimate would include zero.
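To see why the estimate is so fragile, here is a minimal sketch of the standard Rogan-Gladen correction for imperfect test characteristics. The raw positive rate (1.5%) and sensitivity (80%) used below are illustrative assumptions, not the study's exact inputs; the point is how fast the corrected prevalence collapses as assumed specificity drops.

```python
def corrected_prevalence(raw_rate, sensitivity, specificity):
    """Rogan-Gladen correction: adjust a raw test-positive rate
    for imperfect sensitivity and specificity."""
    est = (raw_rate + specificity - 1) / (sensitivity + specificity - 1)
    return max(est, 0.0)  # a prevalence estimate can't be negative

raw = 0.015   # illustrative crude positive rate (assumption)
sens = 0.80   # assumed sensitivity (assumption)
for spec in (0.995, 0.990, 0.985):
    print(f"specificity {spec:.1%} -> prevalence "
          f"{corrected_prevalence(raw, sens, spec):.2%}")
```

With these assumed inputs, moving specificity from 99.5% to 98.5% drives the corrected prevalence from over 1% to essentially zero, because the false-positive rate starts to swallow the entire raw signal.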
I think that the manufacturer's estimated test specificity (resulting in the lowest estimate of prevalence) should have the most weight, since it's based on the largest sample:
From the manufacturer: 2 false positives out of 371 pre-COVID samples, giving a specificity of ~99.5%.
In the study's own **much smaller** validation, 30/30 pre-COVID samples were negative. But a sample that small can't reliably distinguish between specificities of 98%, 99%, and 100%.
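The point about sample size can be made precise with an exact (Clopper-Pearson) lower confidence bound on specificity, sketched below via bisection on the binomial tail. This is my own illustration, not the study's calculation.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def cp_lower_bound(k, n, alpha=0.05):
    """One-sided 95% Clopper-Pearson lower bound for a binomial
    proportion, found by bisection: the p where P(X >= k) = alpha."""
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection; binom_sf is increasing in p
        mid = (lo + hi) / 2
        if binom_sf(k, n, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

# Study's own validation: 30/30 pre-COVID samples negative
print(f"30/30   -> specificity lower bound {cp_lower_bound(30, 30):.3f}")
# Manufacturer's validation: 369/371 negative
print(f"369/371 -> specificity lower bound {cp_lower_bound(369, 371):.3f}")
```

A perfect 30/30 only rules out specificities below roughly 90%, while the manufacturer's 371 samples pin the lower bound near 98%, which is why the larger sample deserves more weight.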
Complicating things, the study's estimated sensitivity (which governs false negatives) was **much higher** than the manufacturer's.
Manufacturer was 25/25 for IgG (develops later) and