Security is a tricky thing. If given a choice between a security system that is 99.99% effective in stopping malevolent attacks or careless mistakes, and one which is only 95% effective, going with the 99.99% one seems like a no-brainer, right? Not necessarily.
The real test of a system isn't what happens when it works right, it's what happens when it fails. And it will -- all systems fail, even the best-designed, 99.99% effective ones. If that level of protection results in reducing the ability of the system to respond to and repair the results of failure, then when the security does (inevitably) collapse, society is far worse off than it would have been with the less-effective, but more robust and flexible, alternative.
Let me give you a concrete example.
The latest issue of New Scientist has an article reporting on the results of recently-adopted American regulations controlling access to dangerous bioagents such as anthrax. The restrictions are intended to make it nearly impossible for terrorists to get their hands on pathogens such as ricin or botulinum. The controls on access include security clearances, lab inspections, and registration with the government. The implementation of the new laws has been aggressive and unrelenting, in order to stamp out any possibility of terrorist access to lethal bioagents.
The problem is that the very aggressiveness that makes it nearly impossible for terrorists to get their hands on pathogens in the U.S. is making numerous biologists shy away from continuing research into significant diseases, including (for example) mad cow. The law's requirements are so complex (and, occasionally, contradictory) and the punishments for even accidental violations so onerous that, despite large amounts of government funding available for bioweapon defense research, some scientists are refusing to take on the work. This, in turn, reduces the chances of coming up with effective responses to a bioterror attack.
This is not the only option.
A simple fix would be to scale back the aggressiveness of the law's requirements and enforcement. Although this may sound counter-intuitive to some, even a somewhat less aggressive protocol would still result in significant controls on access to pathogens, while no longer driving biologists away from critical research. As a result, the tighter security would be backed up by the ability to respond to failures.
Of equal importance would be keeping information about this research available to the public. While the current round of bioterror laws doesn't address publication restrictions, major journals such as Nature and Science announced self-censorship rules in 2002. Like the smothering controls on access to biological samples, such attempts to withhold information ultimately do more to reduce the ability of our society to respond to an attack than to prevent that attack from happening.
Security systems fail. Attempts to prevent such failures should not impede our ability to respond effectively when the failures nonetheless happen.
Hard to imagine how we could stop people from producing ricin, which is not a "pathogen" but is instead a toxin which can be easily extracted from castor beans: