Since May, COVID-19 diagnostic testing in Pennsylvania has steadily increased, while the percentage of tests that come back positive has steadily fallen. These reassuring trends are even more striking in New Jersey, which was a national hot spot for coronavirus in the spring.
Both states now report that their proportion of positive tests, called the positivity rate, has been below 5% for at least 14 days. That’s the World Health Organization’s benchmark for having infection transmission under control.
Pandemic monitors such as the respected Coronavirus Resource Center at Johns Hopkins University consistently put Pennsylvania’s positivity rate at almost twice the state’s own figure. On Friday, Pennsylvania reported 3.2%, while Hopkins said it was 6.2%. Yet Hopkins and New Jersey agree on that state’s rate, now below 2%.
Even if statistics make your eyes glaze over, keep reading. This positively puzzling situation will become clearer — as will its importance.
“Test positivity is extremely useful, but has also become one of the most commonly misunderstood metrics for monitoring the COVID-19 pandemic,” analysts at another well-respected monitor, the COVID Tracking Project at Atlantic magazine, wrote this week.
Calculating the positivity rate sounds as simple as whipping out a calculator: Divide the number of positive molecular diagnostic tests (numerator) by total tests (denominator) and express the result as a percentage.
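That division really is the whole formula. As a sketch (the numbers below are illustrative, not official state data):

```python
def positivity_rate(positive_tests: int, total_tests: int) -> float:
    """Percentage of diagnostic tests that came back positive."""
    return 100.0 * positive_tests / total_tests

# Illustrative numbers only: 320 positives out of 10,000 tests is a 3.2% rate.
print(positivity_rate(320, 10_000))  # 3.2
```

As the rest of the article shows, the hard part is not the division but deciding what counts in the numerator and the denominator.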
This rate is an important factor in government decisions about opening schools, travel restrictions, and resuming normal activities. However, there is no national standard way to calculate and report the rate in the U.S.
Additionally, there isn’t even clear guidance about mixing diagnostic tests (the by-now familiar swab-up-the-nose test) with less reliable antibody blood tests, which suggest a past infection or possibly a very recent one.
Blending results from different types of tests can skew positivity rates and paint an inaccurate picture of the pandemic.
What’s more, the positivity rate can be lowered by including repeated negative tests of the same people — as both the Keystone State and the Garden State do when they calculate their rates.
Consider that Pennsylvania has more than 2.7 million test results — yet only about 1.9 million of the state’s 13 million residents have been tested.
The state health department’s position is that leaving out the duplicative tests would yield an incomplete picture of testing. (However, if someone is tested more than once in seven days, only the first test is included.)
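Pennsylvania’s seven-day rule, as described above, might look something like this sketch. This is a simplified illustration of one reading of the rule (counting a person’s test only if more than seven days have passed since their last counted test), not the health department’s actual procedure:

```python
from datetime import date, timedelta

def filter_repeat_tests(tests):
    """Keep a person's test only if their previous kept test was more than
    seven days earlier. A simplified, hypothetical reading of the rule."""
    last_kept = {}  # person id -> date of last counted test
    kept = []
    for person, test_date in sorted(tests, key=lambda t: t[1]):
        prev = last_kept.get(person)
        if prev is None or test_date - prev > timedelta(days=7):
            kept.append((person, test_date))
            last_kept[person] = test_date
    return kept

# Two tests five days apart count once; a third, two weeks after the first,
# counts again.
tests = [("A", date(2020, 9, 1)), ("A", date(2020, 9, 6)), ("A", date(2020, 9, 15))]
print(len(filter_repeat_tests(tests)))  # 2
```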
“If we were only to provide unique positive and negative test results, it would not be anywhere near the full scope of testing in the state,” state health department spokesperson Nathan Wardle wrote in an email to the Inquirer.
While there is debate about the best methodology, experts agree that positivity is a valuable indicator of whether enough testing is being done to find and isolate infected individuals quickly, before they spread the disease.
“I think some states are making a bigger deal [of the 5% benchmark] than they should,” said Jennifer Nuzzo, lead epidemiologist for Hopkins’ coronavirus resource center. “It’s not an absolute number. The important thing is the trend.”
The positivity rate is “a management metric,” said Ronald Fricker, a professor of statistics at Virginia Polytechnic Institute and State University. “It’s a way to assess whether we are doing enough testing.”
The fishing analogy
The positivity rate is shaped by two complicated factors: the prevalence of the virus, and testing strategies.
In March, as COVID-19 roared across the U.S., there was no testing strategy because molecular diagnostic tests were practically nonexistent. As testing capacity slowly ramped up, the strategy was to test only people with symptoms. Now, testing is being used for surveillance of asymptomatic people — such as college students, nursing home residents, and health-care workers.
When testing was limited to obviously sick people, the number of positive tests (the numerator) went up relative to total tests (the denominator), and so did states’ positivity rates. Pennsylvania’s rate hovered around 20% in late March; New Jersey’s was even higher, hitting a peak of 64% in mid-April.
Fricker uses a fishing net analogy. If you catch fish every time you send the net down (high positivity), then the water is loaded with fish you haven’t caught (undetected cases that are spreading the virus).
On the other hand, if testing includes periodic surveillance of asymptomatic people who are likely to be infection-free, it drives up the number of tests and reduces the positivity rate. Using the fishing analogy, it’s like sending down a huge net that sometimes comes up empty.
So what is the proper way to treat results from people tested over and over to make sure they continue to be negative? Is it more accurate to exclude those repeated tests?
“I’m not sure we can necessarily call it ‘more accurate,’” Nuzzo at Hopkins said. “It’s a different calculation and potentially answers different questions.”
Nonetheless, Nuzzo and her colleagues prefer to calculate state positivity rates using numbers of unique people tested, and exclude duplicate tests.
Here’s the rub: The states don’t have to share that data, so the Hopkins team can’t always do the preferred calculation.
What the states do share is collected and published daily on the website of Atlantic magazine’s COVID Tracking Project. (Interestingly, the project site states it does not calculate positivity rates “and will not do so until we are confident in our ability to communicate precisely about these complex issues.”)
Pennsylvania provides unique people numbers — which Hopkins uses to calculate positivity. Pennsylvania also provides total test numbers — the basis for the state health department’s math.
New Jersey, in contrast, provides only total tests.
So there it is: Hopkins’ positivity rate for Pennsylvania is higher than the state says it is because they are using different denominators. Hopkins’ rate for New Jersey closely agrees with what that state reports because they are using the same denominators.
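The denominator effect is easy to see with hypothetical numbers. The positive count below is invented for illustration (the article does not report the states’ actual positive totals, and Hopkins’ unique-people calculation also adjusts the numerator); only the two denominators echo Pennsylvania’s roughly 1.9 million people tested versus 2.7 million total test results:

```python
def positivity(positives: int, denominator: int) -> float:
    """Positivity rate as a percentage, rounded to one decimal place."""
    return round(100 * positives / denominator, 1)

HYPOTHETICAL_POSITIVES = 60_000  # invented for illustration

# Same numerator, two denominators: unique people tested vs. total tests.
print(positivity(HYPOTHETICAL_POSITIVES, 1_900_000))  # 3.2
print(positivity(HYPOTHETICAL_POSITIVES, 2_700_000))  # 2.2
```

The larger denominator (total tests, repeats included) always yields the lower rate, which is the direction of the Pennsylvania discrepancy.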
“Fewer than half the states give us the data to exclude duplicate tests,” Nuzzo said.
Early in the pandemic, leaders in both Pennsylvania and New Jersey got bad press for falling woefully short on testing, even shorter than some neighboring states. That has clearly changed.
But there is no room for complacency.
“Even if the prevalence of the virus is low, it’s worth continuing to do lots of testing because, as we see in Europe, the virus can come back,” Fricker said.