Also, I definitely believe that false positives are related to true positives. From this perspective, a false positive PCR result can occur if the person has had COVID and has residual viral RNA (which lasts for weeks) but is no longer shedding live virus. And the questionably "false" positives, where the sample really is positive in the PCR sense (there is actual COVID RNA) but the person is not sick or infectious (the viral RNA is old fragments of virus, not "live" infectious virus), will only occur if some of the tested population has had COVID in the past. Had Purdue chosen to test all 50,000 students and staff every week, ten times that number would have been reported as testing positive weekly. Robert Hagen, MD, is recently retired from Lafayette Orthopaedic Clinic in Indiana. To first order you might say the probability of a false positive is something like k * pp, where pp is the percentage of true positives and k is a number between, say, 0.003 and 0.1. But if pp = 0, then no matter how big k is, you won't get any false positives. Yes. Many of these classes include practicums, laboratory sessions, and group projects that require some in-person attendance. A classic 1978 article in the New England Journal of Medicine reveals this problem. Base rate fallacy, or base rate neglect, is a cognitive error whereby too little weight is placed on the base, or original, rate of possibility (e.g., the probability of A given B). Of those, about 35 are positive each day, according to the university's dashboard. If we are doing the same kind of test, then that's what we'd expect to be generating EVERY DAY in the US. New York City was the first major urban center of the COVID-19 pandemic in the USA. False positives might also occur due to cross-reactivity with other coronaviruses. Day after day, the positive percentage stays in a tight range of about 0.85–0.99%.
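That proportional model is easy to sketch numerically. The k values below are the illustrative range floated above, not measured quantities:

```python
# Sketch of the commenter's proportional false-positive model:
# false-positive rate ~ k * pp, where pp is the true-positive share of tests
# and k captures mechanisms like cross-contamination from positive samples.
# The k values are illustrative assumptions, not measurements.

def false_positive_rate(pp: float, k: float) -> float:
    """False-positive rate proportional to the true-positive share."""
    return k * pp

for k in (0.003, 0.03, 0.1):
    print(f"k={k}: pp=1% -> FP rate {false_positive_rate(0.01, k):.5f}, "
          f"pp=0 -> FP rate {false_positive_rate(0.0, k):.5f}")
```

The key property is visible immediately: whatever k is, a population with no true positives (pp = 0) produces no contamination-driven false positives, which is consistent with New Zealand's long runs of zero positive tests.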
The original question is why the % positive is so consistent. But isn't it also rather implausible that the *genuine* rate would stay the same? Fig. 2 here gives some sense of how these Ct values vary with different machines and reagents (and also with viral load). And having high test numbers means lab technicians put trays of 96 or 144 samples in a machine, run a preset procedure, and determine a result. (The actual incidence of active COVID-19 in college-age students is not known but is estimated to be less than 0.6% by Indiana University/Fairbanks data.) NZ went a long time with no positive samples; during that period I'd expect very low false positive (or false negative) rates. What we really need is a test to tell us whether a symptomatic person is shedding virus and is therefore infectious. Only 14% gave the correct answer of 2%, with most answering 95%. November 7, 2020. I know there is some rushing with COVID-19, but any diagnostic test should go through validation: a series of experiments to assess its specifications. But that assumes that each daily or weekly "rate of hospitalizations" has a fixed relationship to the underlying population at risk, and the same for cases and deaths. Guess not everyone is prepared to believe the rate in New York is as low as it appears.
The purpose of the random testing was surveillance: to encourage students and staff to maintain proper behavior. Modelers at Imperial College London estimated something closer to 1% in early February. Test results of a population of 2,000, with a virus prevalence of 30% (top) and 3% (bottom), for a test with a 5% false-positive rate. It doesn't look like the variations are too much out of line, but I don't know how they can be reconciled with the false positive rates we've seen in the papers. You can't contaminate a well with a positive sample if you don't have any positive samples. Only one has been hospitalized and none have died. If NZ decides to run only out to, say, 30 cycles, then they won't detect microscopic contamination (10 extra cycles is about ~1,000x extra amplification). When you are doing hundreds of thousands of tests, the variability is swamped by the sample size, perhaps? If you get a positive here in the US, where we're generating 40,000 new cases a day countrywide, no one is going to pay any extra attention to it. Maybe NY is post-pandemic, in the "endemic" phase of the disease, so it's basically constant rather than exponentially growing/declining? Even using a test with only 90% specificity, the number of false positives will be much less significant. And cases are possibly messy because TX is reporting a lot of backlogged old cases not counted in the "new daily" numbers. Hmm. By not reporting these groups separately, we really have no idea what's going on in our town. With those increased numbers of testing, 4% of our Indiana population is now being tested for COVID-19 every week. Or actually, the true positives would be clustered by region but the false positives not so much, so the false positives would remain constant as a % of the number of tests? There are both known positive and negative controls on those trays.
Restaurant occupancy, sporting events, and other large gatherings are again limited at a greater level than state requirements. Two SDs of this would translate to +/- 0.07%. Yes, NY has a significant proportion of false positives; it'd have to at that low level. by Robert Hagen, MD. When these tests return negative, significant confusion occurs. Their lateral flow assay monitoring (known to have a high number of false positives), or the PCR testing, where whole countries like New Zealand can have no cases despite continued testing? That's close to the range stated (0.85–0.99%). Panic happens because the media industry tends to engage in what can be described as a base rate fallacy (Hardman, 2015), which is the idea that people tend to attribute a higher level of risk to a situation when they are not aware of the actual base rates of such phenomena. So you can also decide if you need all, just one, or 2/3 to indicate a positive. (3) Should it attempt to classify patients into groups that quantify the certainty they will get sick (and for how long)? What counts as a "COVID related hospitalization" has changed over time. Since staff and students combined number 50,000 at Purdue University, 5,000 tests are done every week. But I think in early summer cases rose, then hospitalizations, then deaths. Bad decisions can be made because of a misunderstanding of statistics. Those who choose to go home will often have another test by their personal physician. As happens sometimes, I received two related emails on the same day. I haven't run numbers on that, but by eye it looks to have a weekly modulation. Another wrinkle for the measurement problem: both of contagious individuals and of viral load sufficient to be related to death. To prove that the test is sufficiently sensitive and specific, you run the test on several 96-well plates with a known pattern of synthetic positives and synthetic negatives.
One study analysing excess deaths for influenza over four years estimated the number for the 2016–2017 "season" (the highest of the four years) to be 24,981. You definitely don't need an entirely different kind of test as Navigator suggested. But keep in mind you can also do multiple primers (roughly, checking for different viral genes) and see some but not others cross the threshold. Most of us in healthcare have a fairly good understanding of math but are not nuanced in the field of statistics. Remember, if you contaminate 1% of the tests with your positive control, then you'll get a 1% positive rate, and that's easy to do by accident. Let's take a closer look. Something odd is going on right now in TX (and probably other states). Conjunction fallacy – the assumption that an outcome simultaneously satisfying multiple conditions is more probable than … We have been oversold on the base rate fallacy in probabilistic judgment from an empirical, normative, and methodological standpoint. I'm not sure if it's 10% or 50%, but it's undoubtedly more than 5% of the positive tests that are not true positives. > These are not randomized tests; they come through a sparse, clustered set of interactions with a great deal of heterogeneity. Given the possibility of 'stale' PCR tests for weeks or even months after infection, if everyone who is admitted to hospital is tested, could that mess things up if there are relatively few currently symptomatic people but many cases in the recent past?
A systematic review of the accuracy of covid-19 tests reported false negative rates of between 2% and 29% (equating to sensitivity of 71–98%), based on negative RT-PCR tests which were positive on repeat testing.[6] The use of repeat RT-PCR testing as gold standard is likely to underestimate the true rate of false negatives, as not all patients in the included studies received repeat testing and … The actual sensitivity and specificity of COVID-19 tests are unknown, as these tests were okayed by the FDA under Emergency Use Authorization. You then analyze how often the test gives incorrect results. Furthermore, if anyone gets a positive in NZ, probably everyone immediately jumps on it and re-tests the original sample. Data were collected from 177 Zip Code Tabulation Areas (ZCTA) in New … Now, the declines in cases and deaths are not extremely steep. For a positive control, you run the test with known fragments of RNA in it (or with material known to have virus grown in culture in it). COVID deaths in Indiana average about 23 per day, but that too is going up. When the incidence of a disease in a population is low, unless the test used has very high specificity, more false positives will be determined than true positives. Yet those numbers would be representative only of the positivity of mass testing, not the prevalence of infective patients. I think the timing on registration of everything (cases, deaths, tests, maybe even hospitalizations) is so all over the place that it's hard to pin down leading and lagging based on daily or weekly numbers. Well, in designing the test, you run the test adding "nucleotide-free water" instead of sample, and this is your negative control. The difference in the numbers can be quite striking and certainly not inherently understandable.
So the test serves as its own 'post-measure' or gold standard. The last example brings me to what is perhaps the most pervasive reason behind the conjunction fallacy: we tend to ignore base rates. The tests are "good enough" for diagnosing patients with symptoms but not nearly as effective when used for a random testing program. If the true infection rate of those tested is 0.92%, then I get a standard deviation of sqrt(0.0092 * 0.9908 / 90000) = 0.00032. There's nothing like fear to generate abnormal behavior. If presented with related base rate information and specific information, people tend to ignore the base rate in favor of the individuating information, rather than correctly integrating the two. I do not think that assumption is valid anywhere in the USA over any period longer than a few weeks. Yeah, I'm not saying that entirely explains it either. As of a week ago, our two local hospitals, with a combined 350 beds, had 18 patients admitted with a COVID diagnosis. No wonder FP and FN rates are all over the place, then. (E.g., biopsy verified by open surgery to detect FP/FN.) Base rate fallacy/false positive paradox is derived from Bayes' theorem. "The usual diagnostic tests may simply be too sensitive and too slow to contain the spread of the virus." And I would imagine that the positives, and thus false positives, might be clustered by region. We have learned in the past from routine PSA testing and mammograms that a positive test in a screening situation needs to be taken in context. I don't really know what to think about all this, but I'll share it with you. This is the kind of thing you'd see them do when they get a sudden positive after weeks of zero positives in all of New Zealand, for example. Manufacturers' data have not yet been corroborated by the agency. Purdue is a major research university with a strong emphasis on STEM education.
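That back-of-envelope standard deviation checks out, and it also resolves the decimal confusion discussed elsewhere in this thread:

```python
import math

# Binomial sampling noise in the daily positive percentage:
# SD of a proportion p over n independent tests is sqrt(p*(1-p)/n).
p, n = 0.0092, 90_000  # ~0.92% true rate, ~90,000 tests/day (from the comment)

sd = math.sqrt(p * (1 - p) / n)

print(f"SD = {sd:.5f}")                          # ~0.00032 in absolute terms
print(f"as a share of the mean: {sd / p:.1%}")   # ~3.5% of 0.92%
print(f"2-SD band: {p - 2*sd:.4f} to {p + 2*sd:.4f}")
```

So pure sampling variation predicts a daily positivity bouncing around roughly 0.86–0.98%, which is strikingly close to the observed 0.85–0.99% range.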
That would imply that either testing was growing/shrinking in step with the spread/decline of the virus, or that New York was *right at* R=1 for quite a while. Of course it's possible to contaminate with the synthetic positive control, but again, if everyone jumps on the positive result and does a re-test, re-testing will reveal it was spurious. The samples are prepped and analyzed in the order specified by the collectors, and the lab prepping the samples also splits every sample so it can be tested later. >> where whole countries like New Zealand can have no cases despite continued testing? The numbers have caused our county health department to move cautiously. The COVID PCR test just returned a yes-no. "But similar PCR tests for other viruses do offer some sense of how contagious an infected patient may be: the results may include a rough estimate of the amount of virus in the patient's body." The base rate is the actual amount of infection in a known population. Base rate fallacy defined: over half of car accidents occur within five miles of home, according to a report by Progressive Insurance in 2002. But hospitalizations are almost perfectly flat. I expect that under these conditions people are doing better than that, but maybe they're contaminating 0.2% of tests… that'd still mean 10 or 20% of positives are false. Purdue has discussed using a serial testing protocol. The cut-off for a yes/no test is determined based on the validation, typically a number near but below the truncation value.
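For what it's worth, that "rough estimate of the amount of virus" comes from the cycle threshold (Ct). Under the idealized assumption of perfect doubling each cycle, relative starting RNA scales as 2^ΔCt, which is also why running 10 extra cycles buys roughly 1,000x extra amplification:

```python
# Idealized Ct-to-viral-load relationship: each PCR cycle roughly doubles the
# target, so a sample crossing the threshold delta_ct cycles earlier carries
# about 2**delta_ct times more starting RNA.
# (Assumes 100% amplification efficiency, which real assays only approximate.)

def fold_difference(delta_ct: float) -> float:
    return 2.0 ** delta_ct

print(fold_difference(10))  # 10 extra cycles: 2**10 = 1024, i.e. ~1,000x
print(fold_difference(3))   # a 3-cycle difference is already ~8x
```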
Do you think we are at the limits of the test and there may be a significant number of false positives? An elaborate plan was implemented, including a signed pledge from all students to behave properly, wear masks, and maintain social distancing. The Prosecutor's Fallacy can be avoided by making sure the probability answers the right question, by focusing on how the evidence applies to the 'defendant' and not on the 'evidence' alone in the absence of other relevant factors. Could this be the reason for increased hospitalizations? Description: ignoring statistical information in favor of using irrelevant information, which one incorrectly believes to be relevant, to make a judgment. At the empirical level, a thorough examination of the base rate literature (including the famous lawyer–engineer problem) does not support the conventional wisdom that people routinely ignore base rates. So how many false positives has NZ had since the start of the pandemic? It would be a welcome advance to be able to discern, separate, and quantify the concerns here. Testing procedures might be different between countries too. We investigate whether potential socioeconomic factors can explain between-neighborhood variation in the COVID-19 test positivity rate. Which NY State numbers are we talking about? I think "positive tests" means different things to different people. The results the lab sees will look something like this. So if that were why, would we then expect the trend to change soon (i.e., either hospitalizations to drop or cases to rise)? These are not randomized tests; they come through a sparse, clustered set of interactions with a great deal of heterogeneity. Using the same test on patients with COVID-19 symptoms, because their incidence of disease is 50% or greater, the test does not have to be perfect. We must compare apples to apples and oranges to oranges rather than just making fruit salad out of the whole thing.
For Covid-19, we have far more accurate figures from 20 February 2020 to the time of writing: 32,330 deaths. Base rate fallacy – making a probability judgment based on conditional probabilities, without taking into account the effect of prior probabilities. Different places use different primers, equipment, and sample collection, and then different thresholds for what counts as a positive. In the United States, that appears to be between 5 percent and 15 percent. Commingling of data in our county from people tested WITH symptoms together with the randomly tested Purdue students WITHOUT symptoms has occurred. Those 35 students who test positive daily are added to our county totals (many of those county positive tests are done on people with COVID-19 symptoms). I think you misplaced a decimal for the SD. Germany had an effective R near 1 during late May and early June: public health measures/reopening are adapted to the rate of spread and have "R near or below 1" as a target, so steady numbers may simply be a result of politics and behaviour. The Times article, which is not so old (it's from 29 Aug), is entitled, "Your Coronavirus Test Is Positive. I think that would be a reasonable expectation, but there are so many inconsistencies in timing and, as you point out, even in the basic definitions. For example, this happens when scholars like Kahneman and Tversky attribute to their experimental subjects the errors of the so-called conjunction fallacy and base-rate fallacy, and also when it is claimed that someone has committed the gambler's fallacy (Woods, 478–492). Base rate neglect is a specific form of the more general extension neglect. The researchers asked 60 Harvard physicians and medical students a seemingly simple question: if a test to detect a disease with a prevalence of 1/1,000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease?
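The arithmetic behind the correct 2% answer is a one-line application of Bayes' rule; the classic framing implicitly assumes 100% sensitivity:

```python
# The 1978 NEJM question, worked via Bayes' rule.
# Prevalence 1/1,000, false-positive rate 5%, sensitivity assumed 100%
# (as in the classic framing of the problem).
prevalence = 1 / 1000
fp_rate = 0.05

p_positive = prevalence * 1.0 + (1 - prevalence) * fp_rate  # total positive rate
p_disease_given_positive = prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # ~2.0%, not 95%
```

The 95% answer most physicians gave is the base rate fallacy in action: the test's 95% specificity got read as the probability of disease, while the 1-in-1,000 base rate was ignored entirely.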
Hence a 2 SD range of +/- 7% of the mean, which gives the right range. If negative, do nothing. But 0.00032 / 0.0092 is 3.5%, not 0.35%. Cases down / tests up (a leading indicator). The number of tests doesn't seem to be changing that much, so it would still imply an oddly flat curve. Ideally, testing those WITH symptoms would be reported separately from those randomly tested WITHOUT symptoms. The Indiana State Department of Health advised against a random testing program, as it felt overall data accuracy would be difficult. Eight weeks ago, Indiana was performing 20,000 tests per day. Diversity in approach is fine, but when the details of how procedures vary over time and location are unavailable, all the numbers get treated the same. It's kinda like when you find a burnt spot of ground: sure, that area may not be in flames now, but there sure was a fire, so you want to know where it may have spread while it was burning. Yes, and this might be true in some places, but looking at the # of tests performed in NY, it does not seem to be true there. How the swab is performed shouldn't really matter, as those who are shedding will have viral RNA throughout the whole airway: mouth, throat, nose, and nasopharynx. Throw all those four groups in together if you want, but just understand you are not getting a true picture of what is going on. A decision was made to perform random testing on 10% of the students and staff each week. If the only variation in the numbers were from random sampling variation, then the standard deviation would be about 0.35%, based on 90,000 tests per day (test count data from ). Even using a test with 99% specificity, a 1% population incidence generates 10 false positives for every 9 true positives.
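The "10 false positives for every 9 true positives" claim can be checked per 1,000 people tested. Note it only works out if sensitivity is around 90%; the sentence doesn't state one, so that figure is an assumption here:

```python
# Checking "10 false positives for every 9 true positives" per 1,000 tested.
# 99% specificity and 1% incidence are from the text; the 90% sensitivity
# needed to reproduce the 9 true positives is my assumption.
n, incidence = 1000, 0.01
sensitivity, specificity = 0.90, 0.99

true_pos = n * incidence * sensitivity               # 9 true positives
false_pos = n * (1 - incidence) * (1 - specificity)  # 9.9, i.e. ~10 false positives

print(true_pos, false_pos)
```

So even with a 99%-specific test, screening a 1%-prevalence population yields positives that are wrong more often than right, which is exactly the base rate problem the post is about.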
In the US we're doing 700–800k tests a DAY. Tests for the coronavirus range from 90% to 99% specificity. So far, 90% of the students who test positive do not develop symptoms. "In most diagnostic tests, one needs to have a completely different and verifiable way of assessing the presence or absence of something." Sure makes sense. In most diagnostic tests, one needs to have a completely different and verifiable way of assessing the presence or absence of something. I'm pretty sure they do something else, instead of running the same test on the same sample over and over again, without knowledge of whether the specimen is positive or negative. Had this data been commingled with testing of symptomatic individuals, there certainly would have been an outcry by the casual observer to close everything down again. False negatives should not really occur in those with recent-onset symptoms, as viral shedding occurs prior to and for the first week or so of the clinical course. I may have missed it, but what exactly is the gold standard (post-test) used to verify whether a PCR test is indeed a FP? Luckily, Purdue keeps their own dashboard, and with some calculations their data can be extracted from the county data to give us a ballpark guess. And in the age of COVID-19 there's plenty of fear going around (so expect a lot of it). Therefore, of the positive results, only 60/(60+97) ≈ 38% will be correct! Unfortunately, the lack of understanding of the statistical principle of base rate fallacy/false positive paradox has led to some confusing numbers. Antigen tests will be used on the random population, with subsequent confirmatory PCR tests used for anyone who initially tests positive. Contact tracers are telling positive testers who have nowhere to isolate to be evaluated at their hospital emergency room.
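Here is a rough sketch of why the serial antigen-then-PCR protocol helps. All performance numbers below are illustrative assumptions (none are stated in the post), and the calculation optimistically treats the two tests' errors as independent:

```python
# Why confirmatory testing raises the positive predictive value (PPV).
# All sensitivities/specificities below are illustrative assumptions,
# and independence of the two tests' errors is assumed (optimistic).
prevalence = 0.006                        # ~0.6% active infection (IU/Fairbanks estimate)
antigen_sens, antigen_spec = 0.85, 0.95   # assumed antigen performance
pcr_sens, pcr_spec = 0.95, 0.995          # assumed PCR performance

def ppv(prev: float, sens: float, spec: float) -> float:
    true_pos = prev * sens
    false_pos = (1 - prev) * (1 - spec)
    return true_pos / (true_pos + false_pos)

ppv_antigen = ppv(prevalence, antigen_sens, antigen_spec)
# After a positive antigen test, that PPV becomes the prior for the PCR:
ppv_both = ppv(ppv_antigen, pcr_sens, pcr_spec)

print(f"antigen alone: {ppv_antigen:.1%}; antigen + confirmatory PCR: {ppv_both:.1%}")
```

Under these assumptions a lone antigen positive is right less than one time in ten, but an antigen positive confirmed by PCR is right over 90% of the time: the first test raises the effective base rate that the second test sees.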
Hospitalizations ought to lag cases, but lead deaths. There would also be variation in the number of tests performed each day. Now I'm commenting on things I understand poorly, but wouldn't you expect that the contamination rate would be fairly variable, depending on whether some lab tech got a bad night's sleep or was fighting with their partner, etc.? Base rate fallacy/false positive paradox unfortunately becomes ignored when one does this. Purdue University made the decision in late spring to resume in-person classes for its fall session. Contact-traced people identified as being close to a COVID patient WITH symptoms (>10% incidence of testing positive for COVID) would be another category, and those identified by contact tracing who were near a person who tested positive WITHOUT symptoms (>1% incidence of having COVID) would be a fourth. A test with 95% specificity has a 5% false-positive rate. Up to this point, Purdue has done random testing on about 1,000 students per weekday. In mining and metal exploration, all assays are done using the same chemical process, but are checked using duplicates, certified blanks, and certified standards. The incidence of a disease in the population that you are testing is extremely important for accuracy.
What I was referring to was: when you get a positive result that you think might be from contamination of the test, you then rerun the test, going back to the original swab sample, on a different machine, with a different lab tech, at a different time, in duplicate or triplicate, etc. If you get all negatives, you can conclude contamination was the issue. Did only the doctor receive the yes-no, or does the lab test itself only produce a yes-no? That's what contact tracing does. A witness claims the cab was green; however, later tests show that they only correctly … If the lab has the more detailed results, then the information is out there somewhere. Cases are clustered in the city, with certain neighborhoods experiencing more cases than others. So in areas where the base positive rate is higher, the % of positives that are false positives is lower? I also wonder if it could be an issue of defining "COVID related" hospitalizations. It is not implausible that testing is "growing / shrinking in step with the spread / decline of the virus": the more people in my circle being diagnosed as positive, the more likely I am to undergo a test. What counts as a "case" has changed over time. I have worked with PCR data for a long time. As demonstrated with the above-mentioned figures, COVID-19 has still not reached a point where it surpasses other illnesses … In effect, what you're looking for is an expected temporal sequence among what are likely non-comparable tallies. Without knowing the specificity of the test, the number of these positives that are false positives is unknown.
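The logic of re-running the original sample on independent setups is just multiplication of small probabilities. The 1% per-run false-positive rate below is an illustrative assumption:

```python
# Why independent re-runs of the same swab sample resolve suspected
# contamination: if false positives arise from independent per-run events
# (different machine, tech, time) at rate fp, the chance that k re-runs are
# ALL falsely positive falls as fp**k.
# fp = 0.01 is an illustrative assumption, not a measured rate.

fp = 0.01
for k in (1, 2, 3):
    print(f"{k} independent run(s) all falsely positive: {fp**k:.6f}")
```

So a positive that survives a triplicate re-run on independent equipment is very unlikely to be a contamination artifact, while a positive that vanishes on re-run was almost certainly spurious. The catch, as noted above, is that the re-runs must really be independent; re-running on the same contaminated tray proves nothing.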
Typically specificity (one minus the false positive rate) is reported as 99.9%, not 100%, even when no false positives were observed. Stopping an outbreak is always time-sensitive, so you don't really have time to double-check results before you initiate tracing contacts and isolating them. Certainly positivity rates are going up here. Repeat the PCR test multiple times and see whether it comes up negative repeatedly. Are NZ tests the same as US tests? You also do not know if a low virus concentration in the sample really means a low virus concentration; for example, the swabbing may not have been done properly. A few options to consider: (1) Should a positive test only indicate presence or vestige of the virus? A classic explanation for the base rate fallacy involves a scenario in which 85% of cabs in a city are blue and the rest are green. My guess is that most of these are likely unknown. First, contrary to the conventional wisdom, a thorough examination of the literature reveals that base rates are almost always used and that their degree of use depends on task structure and internal task representation. It's worse than that. It's more than sufficient to test for contamination. I'll point out that some of these tests will be repeatedly re-testing the same people, so the sampling variation could be even smaller than that. (4) Should it predict the likelihood that a person can infect another person, and under what conditions? I thought these were standardized for commercial testing equipment and so should give standard output. Maybe.
In the past few months, we've seen that one of these odd behaviors is a significant number of health-news headlines recommending vitamin C to purportedly assist one's immune response to COVID-19. Hmmm, I get a different standard deviation but the same range. I was wondering if you have any comment on the NY State Covid numbers. The NFL contamination case in August is an example of how a high false positive rate played out in a lab. As of today, that Washington Post cases-and-deaths tracker added an increase of 2,732 to their death count for New York state by yet again changing their definition. The Washington Post has reported COVID-19 death rates as high as 5% in the United States. We are dealing with fluorescence measures that can show positive even without contamination.