Dead Darlings Blog: Outtakes from the CDC feature

The C.D.C. actually invented disease surveillance. Before World War II, the concept referred to nothing more than keeping an eye on individuals who had been exposed to serious diseases like typhoid or smallpox and then isolating anyone who developed symptoms. During the war, a smallish team of scientists and engineers working for the federal government began applying the concept to pathogens, not people. The team was initially tasked with protecting all “war areas” from malaria, mostly by spraying DDT in every place where the Anopheles mosquito was known to proliferate. Amid this work, the group noticed something: nobody seemed to be in charge of preventing or even controlling other disease outbreaks. When amoebic dysentery broke out at an asylum in Arkansas, local leaders asked if the mosquito team could help. They did, and word spread. By the time the war was over, the group was pressing their federal overseers to both continue and expand their disease-control efforts. Returning vets would likely bring lots of “exotic infections” home with them. And typhus, dysentery, plague and more were already here, “progressively infiltrating and entrenching in new sections of the U.S.,” as senior scientist Justin Andrews explained when he announced the new agency in 1946.

Scientists had learned a great deal about how to identify and treat all sorts of microbial infections by then; they had developed a vaccine for yellow fever and were shepherding newly discovered antibiotics through the pipeline at a steady clip. But preventing and controlling disease outbreaks was still, as it always had been, a local concern, in part because it typically involved the exercise of powers, such as forced quarantines and business closures, that state and local leaders were loath to turn over to federal ones. State health departments had more than doubled between 1935 and 1945, but the would-be C.D.C.’s proponents argued that those entities were not capable of developing the technology or the training and research programs needed to really control the spread of disease. Only a federally run center, one created to serve the states, not usurp them, could lead such an effort. “Nothing like it had ever existed before,” historian Elizabeth Etheridge writes in Sentinel for Health, her biography of the agency.

They started out in Atlanta because D.C. was overcrowded with the nation’s military apparatus. The distance from Capitol Hill proved both a blessing and a curse. C.D.C. scientists and administrators had more freedom than their colleagues at other federal institutions. But they were also invisible at funding time. For the first decade, their setup was truly ramshackle. Labs were housed in a series of wooden buildings made for temporary use during the war. They were poorly ventilated, too hot in summer and freezing in winter, when ice-cold air blasted up from a hole in the floor beneath the microscope station. Cockroaches were a regular presence – you could not spray to get rid of them without imperiling the mosquito colonies needed for research – and monkeys frequently escaped from their cages. Plans for a new building were floated regularly, but nearly a decade passed before any money was appropriated.

In the meantime, the agency’s existence hinged on its officers’ ability to sell their services to individual state leaders, who retained the power to invite them to, or evict them from, any given outbreak investigation. They quickly built a reputation for taking on the jobs that no other agency wanted. They were the first to arrive at any given emergency, the last to leave, and the ones with the most cutting-edge techniques (their diagnostic labs, especially, were unrivaled). They almost always exceeded their budget and for the first decade faced the constant threat of closure, but their early successes were impressive. They mapped out the path that eastern equine encephalitis (a deadly brain virus) takes from birds to mosquitoes to humans by tracking birds through the Louisiana bayou. They discovered that histoplasmosis, a serious fungal infection long thought to be rare, was not rare at all but was sickening some 30 million people every year, most of whom were being misdiagnosed. And when a deadly manufacturing snafu nearly ruined the nation’s first polio vaccination program, they proved that the shot was safe, when properly made, weeks before clinical doctors reached the same conclusion (an achievement that scored them their first mention in the New York Times). In the early 1950s, C.D.C. officials were staving off a total shutdown by Congress. By the 1960s, they were launching a global effort to eradicate smallpox.

As those wins accrued, a loose pattern emerged. The agency received a flush of emergency funding in times of crisis, and praise and more responsibility when it saved the day. But it was often neglected and starved of resources, and it was riven by internal conflicts over how to apportion the money it did receive. Multiple fiefdoms emerged, each trying to establish its own thing, Etheridge writes. Each branch of the agency – the epidemiologists working in the field, the laboratory scientists developing diagnostics, the communications and public education teams – had strong leadership, and none of those strong leaders were great at working together. By the end of the century, the C.D.C. had morphed into a global beacon of public health, with a roster of storied victories over disease and death and an even longer roster of programs – everything from tuberculosis to S.T.D.s to obesity – under its purview. But its authority remained meager to nonexistent, its funding stayed relatively flat, and its internal dramas continued to fester.