Why AIxBiosecurity Matters
Advances in biotechnology - from AlphaFold's protein structure prediction to benchtop DNA sequencing - are rapidly democratising biological capabilities. Artificial intelligence accelerates these developments further, offering new abilities to discover novel pathogens, design biological components, and potentially automate complex laboratory processes.
While these technologies promise extraordinary benefits for medicine and the life sciences, they also introduce unprecedented dual-use risks. Addressing these complexities requires integrated knowledge spanning artificial intelligence, life sciences, governance and regulatory frameworks. ERA, alongside the Cambridge Biosecurity Hub (CBH), is helping to build the expertise and networks necessary to mitigate catastrophic risks from AI-enhanced biotechnology.
Through the AIxBiosecurity Fellowship, we identify and nurture emerging leaders working at the interface of artificial intelligence and biosecurity, offering focused guidance, project coordination and strategic partnerships to advance meaningful solutions to pressing biosecurity concerns. With a talented network spanning both AI and biosecurity domains, we facilitate high-impact collaborations where interdisciplinary expertise is most essential.
With this Fellowship and beyond, we are building a comprehensive research foundation and supporting infrastructure to enable AI-enhanced biological capabilities to benefit humans while maintaining safety standards and reducing risks of misuse.
AI Capabilities in the Life Sciences Are Rapidly Expanding
Over the past few years, AI systems have demonstrated remarkable progress in biology and chemistry. Models can now predict protein structures (AlphaFold), design functional genomes (Evo2), and generate novel therapeutic molecules, including antibodies (Germinal).
A 2023 report from the Centre for Long-Term Resilience warned:
“Artificial intelligence and biotechnology are converging in a way that could catalyse immense progress from areas like personalised medicine to sustainable agriculture—as well as substantial risks. There is a potential for new capabilities that threaten national security, including those that may lower barriers to the misuse of biological agents.”
AI-assisted bioengineering is reshaping the biotechnology and biosecurity landscape - accelerating discovery, but also increasing the risk of deliberate or accidental misuse.
Improving capabilities bring increasing biosecurity risks
As AI systems make biological design tools more accessible and powerful, we face a dual challenge: the same capabilities that enable breakthrough therapeutics could also lower the barriers to creating biological weapons, whether through deliberate misuse or accidental release. As those barriers fall, the number of people with both the knowledge and the means to create biological weapons grows.
This emerging landscape demands a new generation of researchers who can navigate both the technical possibilities and the security implications of AI-enabled biotechnology. The challenge isn't just technical: it requires navigating complex questions at the intersection of cutting-edge science, national security, and global governance. We need people who can advance beneficial applications while anticipating and mitigating catastrophic risks.
AIxBio fellows will work on carefully scoped projects that strengthen our collective ability to assess and mitigate AI-related biological risks. Rather than maximising public-facing outputs, we emphasise responsible research that advances the field's security without creating information hazards. Through this work, we're building a community of researchers equipped to bridge technical, policy, and ethical challenges, ensuring that these powerful technologies are developed safely and deployed responsibly.
We are committed to addressing these risks
As part of the AIxBio Fellowship, our fellows will spend 8 weeks working on research projects aimed at mitigating risks arising at the intersection of AI and biotechnology. Possible directions include:
Designing evaluation frameworks for biosecurity-sensitive AI models.
Exploring governance mechanisms to manage dual-use bio-AI capabilities.
Mapping the emerging bio-threat landscape driven by AI-enabled research tools.
Designing AI-aided interventions to reduce catastrophic risks from biotechnology (e.g. early warning systems, contact tracing).