Earlier this week, Newsweek and the Washington Post reported that the Wuhan Institute of Virology, a lab near the site of the first coronavirus cases in the world, had been studying bat coronaviruses.
The Newsweek report revealed an alarming tidbit: The Wuhan lab at the center of the controversy had for years been engaged in gain-of-function research. What exactly is it? It’s a line of research where scientists take viruses and study how they might be modified to become deadlier or more transmissible. Why would they do this? Scientists who engage in it say that it helps them figure out which viruses threaten people so they can design countermeasures.
To be clear, the coronavirus is definitely not a biologically engineered pathogen. It was not released on purpose, and it likely resulted from accidental transmission through human contact with wild animals, as almost all disease outbreaks in history have.
But the emerging reports about the lab in Wuhan are making many people aware for the first time that gain-of-function research happens at all. I wouldn’t blame you if your response to this news is: The government gives grants to researchers to make potentially pandemic viruses deadlier and more easily transmissible between people? Why are we doing that?
The increased attention to gain-of-function research is a good thing. This kind of highly controversial research — paused under the Obama administration after safety incidents demonstrated that lab containment is rarely airtight — began again under the Trump administration, and many scientists and public health researchers think that it’s a really bad idea. Our brush with the horrors of a global pandemic might finally force us to heed the warnings those experts have been sounding for years.
The US stopped funding gain-of-function research. Then it started again.
In 2019, Science magazine broke the news that the US government resumed funding two controversial experiments to make the bird flu more transmissible.
The two experiments had been on hold since 2012 amid a fierce debate in the virology community about gain-of-function research. In 2014, the US government under the Obama administration declared a moratorium on such research.
That year was a bad one on the biohazard front. In June 2014, as many as 75 scientists at the Centers for Disease Control and Prevention (CDC) were exposed to anthrax. A few weeks later, Food and Drug Administration (FDA) officials ran across six forgotten vials of smallpox in storage. Meanwhile, the “largest, most severe, and most complex” Ebola outbreak in history was raging across West Africa, and the first patient to be diagnosed in the US had just been announced.
It was in that context that scientists and biosecurity experts found themselves embroiled in a debate about gain-of-function research. The scientists who do this kind of research argue that we can better anticipate deadly diseases by making diseases deadlier in the lab. But many people at the time and since have become increasingly convinced that the potential research benefits — which look limited — just don’t outweigh the risks of kicking off the next deadly pandemic ourselves.
While internally divided, the US government came down on the side of caution at the time. It announced a moratorium on funding gain-of-function research — putting potentially dangerous experiments on hold so the world could discuss the risks that this research entailed.
But in 2017, the government under the Trump administration released new guidelines for gain-of-function research, signaling an end to the blanket moratorium. And the news from 2019 suggests that dangerous projects are proceeding.
Experts in biosecurity are concerned the field is heading toward a mistake that could kill innocent people. They argue that, to move ahead with research like this, there should be a transparent process with global stakeholders at the table. After all, if anything goes wrong, the mess we’ll face will certainly be a global one.
Should we really be doing this kind of research?
Advocates of this kind of gain-of-function research (not all gain-of-function research uses pandemic pathogens) point to a few things they hope it will enable us to do.
In general, they argue that it will enhance surveillance and monitoring for new potential pandemics. As part of our efforts to thwart pandemics before they start — or before they get severe — we take samples of the viruses currently circulating. If we know what the deadliest and most dangerous strains out there are, the argument goes, then we’ll be able to monitor for them and prepare a response if it looks like such mutations are arising in the wild.
“As coordination of international surveillance activities and global sharing of viruses improve,” some advocates wrote in mBio, we’ll get better at learning which strains are out there. Then, gain-of-function research will tell us which ones are close to becoming deadly.
“GOF data have been used to launch outbreak investigations and allocate resources (e.g., H5N1 in Cambodia), to develop criteria for the Influenza Risk Assessment Tool, and to make difficult and sometimes costly pandemic planning policy decisions,” they argue.
“The United States government weighed the risks and benefits … and developed new oversight mechanisms. We know that it does carry risks. We also believe it is important work to protect human health,” Yoshihiro Kawaoka, an investigator whose gain-of-function research was approved, told Science magazine.
According to this logic, if we’d known for years that SARS-CoV-2 — the coronavirus that is now keeping us all indoors — was a particularly dangerous one, maybe we could have had disease surveillance systems out to alert us if it made the jump to humans.
Others are skeptical. Thomas Inglesby, director of the Center for Health Security at Johns Hopkins, told me last year that he doesn’t think the benefits for vaccine development hold up in most cases. “I haven’t seen any of the vaccine companies say that they need to do this work in order to make vaccines,” he pointed out. “I have not seen evidence that the information people are pursuing could be put into widespread use in the field.”
Furthermore, a virus has unimaginably many possible variants, of which researchers can identify only a few. Even if we stumble across one way a virus could mutate to become deadly, we might miss thousands of others. “It’s an open question whether laboratory studies are going to come up with the same solution that nature would,” MIT biologist Kevin Esvelt told me last year. “How predictive are these studies really?”
And even in the best case, the utility of this work would be sharply limited. “It’s important to keep in mind that many countries do not have mechanisms in place at all — much less a real-time way to identify and reduce or eliminate risks as experiments and new technologies are conceived,” Beth Cameron, the Nuclear Threat Initiative’s vice president for global biological policy and programs, told me.
With the stakes so high, many researchers are frustrated that the US government was not more transparent about which considerations prompted it to fund the research. Is it really necessary to study how to make H5N1, with its eye-popping mortality rate, more transmissible? Will precautions be in place to make it harder for the virus to escape the lab? What are the expected benefits from the research, and which hazards did the experts who approved the work consider?
“The people proposing the work are highly respected virologists,” Inglesby said, “but laboratory systems are not infallible, and even in the greatest laboratories of the world, there are mistakes.” What measures are in place to prevent that? Will potentially dangerous results be published to the whole world, where unscrupulous actors could follow the instructions?
These are exactly the questions that the review process was supposed to answer but didn’t.
Sometimes pathogens escape from the lab. Here’s how it happens.
The subject of gain-of-function research can inspire such heated opposition because the stakes are so high. Pathogens have escaped labs before.
Take smallpox, once one of the deadliest diseases.
In 1977, the last case of smallpox was diagnosed in the wild.
The victim was Ali Maow Maalin of Somalia. The World Health Organization tracked down every person he’d been in face-to-face contact with, vaccinating everyone at risk and searching for anyone who might have already caught the virus. Thankfully, no one had. Maalin recovered, and smallpox appeared to be gone for good.
That moment came at the end of a decades-long campaign to eradicate smallpox — a deadly infectious disease that killed about 30 percent of those who contracted it — from the face of the earth. Around 500 million people died of smallpox in the century before it was annihilated.
But in 1978, the disease cropped back up — in Birmingham, in the United Kingdom. Janet Parker was a photographer at Birmingham Medical School. When she developed a horrifying rash, doctors initially brushed it off as chicken pox. After all, everyone knew that smallpox had been chased out of the world — right?
Parker got worse and was admitted to the hospital, where testing determined that she had smallpox after all. She died of it a few weeks later.
How did she get a disease that was supposed to have been eradicated?
It turned out that the building that Parker worked in also contained a research laboratory, one of a handful where smallpox was studied by scientists who were trying to contribute to the eradication effort. Some papers reported that the lab was badly mismanaged, with important precautions ignored because of haste. (The doctor who ran the lab died by suicide shortly after Parker was diagnosed.) Somehow, smallpox escaped the lab to infect an employee elsewhere in the building. Through sheer luck and a rapid response from health authorities, including a quarantine of more than 300 people, the deadly error didn’t turn into an outright pandemic.
In 2014, as the FDA did cleanup for a planned move to a new office, hundreds of unclaimed vials of virus samples were found in a cardboard box in the corner of a cold storage room. Six of them, it turned out, were vials of smallpox. No one had been keeping track of them; no one knew they were there. They may have been there since the 1960s.
Panicked scientists put the materials in a box, sealed it with clear packaging tape, and carried it to a supervisor’s office. (This is not approved handling of dangerous biological materials.) It was later found that the integrity of one vial was compromised — luckily, not one containing a deadly virus.
The 1978 and 2014 incidents grabbed attention because they involved smallpox, but incidents of unintended exposure to controlled biological agents are actually quite common. Hundreds occur every year, though not all involve potentially pandemic pathogens.
In 2014, a researcher accidentally contaminated a vial of a fairly harmless bird flu with a far deadlier strain. The deadlier bird flu was then shipped across the country to a lab that didn’t have authorization to handle such a dangerous virus, where it was used for research on chickens.
The mistake was discovered only when the CDC conducted an extensive investigation in the aftermath of a different mistake — the potential exposure of 75 federal employees to live anthrax, after a lab that was supposed to inactivate anthrax samples accidentally prepared ones that were still live.
The CDC’s Select Agents and Toxins program requires that “theft, loss, release causing an occupational exposure, or release outside of primary biocontainment barriers” of agents on its watchlist be immediately reported. Between 2005 and 2012, the agency received 1,059 release reports — roughly one every three days. Here are a few examples:
- In 2008, a sterilization device malfunctioned and unexpectedly opened, exposing a nearby unvaccinated worker to undisclosed pathogens.
- In 2009, a new high-security bio research facility, rated to handle Ebola, smallpox, and other dangerous pathogens, had its decontamination showers fail. The pressurized chamber kept losing pressure and the door back into the lab kept bursting open while the scientists leaned against it to try to keep it closed. Building engineers were eventually called to handle the chemical showers manually.
- In 2011, a worker at a lab that studied dangerous strains of bird flu found herself unable to shower after a construction contractor accidentally shut off the water. She removed her protective equipment and left without taking a decontaminating shower. (She was escorted to another building and showered there, but pathogens could have been released in the meantime.)
Now, the vast majority of these mistakes never infect anyone. And while 1,059 is an eye-popping number, it actually reflects a fairly low accident rate — working in a lab that handles controlled biological agents is safe compared with many occupations, like trucking or fishing.
But a trucking or fishing accident will, at worst, kill a few dozen people, while a pandemic pathogen accident could potentially kill a few million. Considering the stakes and worst-case scenarios involved, it’s hard to look at those numbers and conclude that our precautions against disaster are sufficient.
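To make that asymmetry concrete, here is a rough, purely illustrative expected-value sketch. The probability and fatality figures are assumptions chosen for the arithmetic, not estimates from any of the researchers quoted here:

$$
\underbrace{0.001}_{\substack{\text{assumed annual chance of a lab escape}\\ \text{sparking a pandemic}}} \times \underbrace{10{,}000{,}000 \text{ deaths}}_{\substack{\text{assumed toll of}\\ \text{that pandemic}}} = 10{,}000 \text{ expected deaths per year}
$$

Even at a one-in-a-thousand annual risk, the expected toll swamps that of any conventional occupational accident, which is why critics argue that standard lab-safety statistics understate the danger of work with potentially pandemic pathogens.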
Reviewing the incidents, it looks like there are many different points of failure: machinery that’s part of the containment process malfunctions; regulations aren’t sufficient, or aren’t followed; human error means live viruses are handled instead of dead ones.
Now imagine such an error involving viruses enhanced through gain-of-function research. “If an enhanced novel strain of flu escaped from a laboratory and then went on to cause a pandemic, then causing millions of deaths is a serious risk,” Marc Lipsitch, a professor of epidemiology at Harvard, told me last year.
The cost-benefit analysis for pathogens that might kill the people exposed or a handful of others is vastly different from the cost-benefit analysis for pathogens that could cause a global pandemic — but our current procedures don’t really account for that. As a result, allowing gain-of-function research means running unacceptable risks with millions of lives. It’s high time to rethink that.