Counting Animals Is a Sloppy Business

In 1989, scientists at the United States Fish and Wildlife Service and the National Zoo published a study on migratory songbirds with alarming results. The study relied on 22 years of data from annual surveys of more than 60 neotropical species, birds that breed in North America and overwinter in Central and South America. And the numbers showed that more than 70 percent of these populations, many of which had been stable or growing only a decade earlier, were now plummeting.

Ted Simons was a young wildlife biologist with the National Park Service at the time. To help determine the cause of the apparent declines, his team conducted its own surveys in the Great Smoky Mountains of Tennessee. Their methods were essentially the same as the earlier surveys: groups of trained counters walked along backcountry trails, stopping at prescribed points to count all the birds they saw or heard. However, they also recorded additional data, such as the approximate distance between the observer and each bird counted. This extra information allowed them to more accurately calculate the probability of detecting individual birds—a factor that could influence the final estimate. When they analyzed their data, they found reason to question the magnitude of the declines flagged by the 1989 study. Although follow-up studies confirmed that some species were in fact dwindling, other populations were likely larger than they had originally seemed.
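
The logic behind those extra distance measurements can be sketched in a few lines of code. The snippet below is a simplified illustration of the general idea behind distance sampling, not the model Simons's team actually used: it assumes a half-normal detection function and invented values for the survey width, the detection falloff scale, and the raw count, then inflates the count by the average probability of detection.

```python
# Illustrative sketch: birds farther from the observer are less likely to be
# detected, so a raw count understates how many are really there. The
# half-normal detection function and all numbers below are assumptions for
# demonstration, not values from the actual surveys.

import math

def half_normal_detection(distance_m: float, sigma_m: float) -> float:
    """Probability of detecting a bird at a given distance (half-normal model)."""
    return math.exp(-(distance_m ** 2) / (2 * sigma_m ** 2))

def average_detection_probability(max_distance_m: float, sigma_m: float,
                                  steps: int = 1000) -> float:
    """Average detection probability across the surveyed strip (midpoint-rule integral)."""
    dx = max_distance_m / steps
    total = sum(half_normal_detection((i + 0.5) * dx, sigma_m) for i in range(steps))
    return total * dx / max_distance_m

# Hypothetical survey: 48 birds counted within 100 m of the trail,
# with detectability falling off on a roughly 40 m scale.
raw_count = 48
p_detect = average_detection_probability(max_distance_m=100, sigma_m=40)
estimated_present = raw_count / p_detect

print(f"Average detection probability: {p_detect:.2f}")
print(f"Raw count: {raw_count}, corrected estimate: {estimated_present:.0f}")
```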

Puzzled, Simons ran some tests. In a series of experiments in the mid-2000s, he and his colleagues at North Carolina State University (where Simons is now a professor) broadcast simulated bird and frog chirps into a patch of oak-hickory forest and measured how accurately trained counters identified the calls. As expected, there were errors. But what struck Simons most was that the counters tended to make certain mistakes in a systematic way, producing biases in the data. The causes were widespread. Surveys of veteran observers, for example, revealed that hearing loss was common. Background noise, such as from traffic or construction, also skewed counts—making quieter or shier species seem rarer than they actually were. And if counters expected to hear a particular species—because it was common to the area, for example—they often reported observing it, even if its call was never broadcast.

The problem, Simons says, is that the statistical models used to extrapolate total population estimates from observer counts don’t always account for these biases. So it’s easy to get the numbers wrong. “Often what it comes down to is we’re missing more animals than we realize,” Simons says.
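
To see why that matters, consider a toy calculation with entirely made-up numbers: if a model assumes observers detect 80 percent of the birds present, but hearing loss and background noise push the real figure closer to 55 percent, the extrapolated population lands well below the true one, even with a few false positives nudging the count upward.

```python
# Toy illustration of the point, with invented numbers: when the model
# corrects counts using a higher detection probability than observers
# actually achieve, the resulting population estimate is too low.

true_population = 200          # birds actually present (hypothetical)
assumed_detection = 0.80       # detection probability the model assumes
actual_detection = 0.55        # what observers actually achieve
false_positive_count = 6       # birds "heard" that were never there

observed = true_population * actual_detection + false_positive_count
estimate = observed / assumed_detection   # model corrects with the wrong probability

print(f"Observed count: {observed:.0f}")
print(f"Model's estimate: {estimate:.0f} (true population: {true_population})")
```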
