And yet, many prominent epidemiologists, public health researchers and physicians are pushing back against the concept. NIH Director Francis Collins condemned covid-19 responses based on herd immunity, calling them “fringe” and “dangerous,” while WHO Director-General Tedros Adhanom Ghebreyesus called the approach “scientifically and ethically problematic.” Currently, the theory of herd immunity is applied in public health settings when immunity can be acquired through a vaccine, as with measles or polio, but not when it requires people to contract a disease to develop immunity.
While herd immunity is the theory behind vaccine programs, the concept originated in veterinary medicine and livestock management in the late 19th and early 20th centuries. This matters because in that setting, economics rather than ethics served as the guiding force. In some cases, it was cheaper to slaughter animals that were diseased, or suspected of being diseased, than to expose an entire herd to a disease that could kill livestock or reduce their value. While this approach may have helped halt damaging animal diseases, it would be unacceptable for human public health programs. Revisiting the history of managing the spread of animal disease explains why the theory of herd immunity, absent a vaccine, is a deeply troubling approach to managing the spread of covid-19.
At the end of the 19th century, the United States had more than 1.5 million livestock farms with billions of dollars’ worth of cattle, swine, sheep, poultry and goats. In 1884, concerned that deadly infections such as contagious bovine pleuropneumonia and foot-and-mouth disease threatened the livelihood of farmers and American food security, Congress and President Chester A. Arthur established the Bureau of Animal Industry (BAI) at the USDA through legislation.
This new bureau was tasked with researching animal diseases and granted regulatory authority to prevent, contain or eradicate livestock diseases. Keeping livestock animals free of disease and death ensured a steady supply of meat, milk and eggs for Americans, and protected producers’ incomes.
Not all livestock diseases killed infected animals or rendered them unusable for food production. New York dairy farmers first documented an infectious disease now known as brucellosis in the 1850s. They noted that the disease would roll through communities every few years, causing pregnant heifers and cows to lose their calves. This led to a decrease in milk production — but most infected cows recovered and returned to normal production for the rest of their lives. Owners worried about their bottom line in the moment but did not want to slaughter potentially productive animals. Instead, they hoped to prevent the disease through sanitary measures and treatments.
Cases of cattle brucellosis were reported across the country in dairy cattle and in an increasing number of range herds. By establishing herds for observation and testing, researchers identified the bacterial cause of the disease by the early 20th century and developed a test for exposure — but neither a vaccine nor a treatment had been discovered.
And so, researchers and farmers offered advice about how to minimize the impact of brucellosis on cattle. At