A group of leading scientists has launched a voluntary initiative outlining a set of values, principles, and commitments for AI protein design.
The open letter addresses the potential misuse of AI tools capable of designing new proteins with unprecedented speed and efficiency.
While AI holds immense promise for addressing pressing global challenges, from pandemic response to sustainable energy solutions, it also raises questions about the possibility of malicious use, such as creating novel bioweapons.
While AI-designed proteins, such as those created by researchers at the University of Washington School of Medicine building on structure-prediction breakthroughs like DeepMind’s AlphaFold, have demonstrated high affinity and specificity for their intended targets, there is always the possibility that these proteins could interact with other molecules in the body in unexpected ways.
This could lead to adverse side effects or even give rise to new diseases.
Another risk is deliberate misuse of the technology. Just as AI-designed proteins can be used to create highly targeted therapies, they could also be engineered to cause harm.
For example, a malicious actor could design a protein that targets a specific ethnic group or exploits a particular genetic vulnerability. This underscores the need for robust security measures and ethical guidelines to prevent misuse of the technology.
As David Baker, a computational biophysicist at the University of Washington and a key figure behind the initiative, notes, “The question was: how, if in any way, should protein design be regulated and what, if any, are the dangers?”
The 100+ signatories believe that “the benefits of current AI technologies for protein design far outweigh the potential for harm” but recognize the need for a proactive approach to risk management as the technology advances.
AI for biology has the potential to help solve some of the most important problems facing our society. Scientific openness will be critical for our field to progress. As scientists engaged in this work, with more than 90 signatories across the world, we are advancing a framework… pic.twitter.com/4m81j87Vo9
— Alex Rives (@alexrives) March 8, 2024
The letter states that “given anticipated advances in this field, a new proactive risk management approach may be required to mitigate the potential of developing AI technologies that could be misused, intentionally or otherwise, to cause harm.”
Signed.
A good idea for safety in protein design (AI powered or not): regulate and control the protein synthesis machines and firmware, not the AI research itself.
This is an idea I promoted at the UK AI Safety Summit. https://t.co/skcUUcgG5a
— Yann LeCun (@ylecun) March 8, 2024
As part of the initiative, researchers articulated a set of values and principles to guide the responsible development of AI technologies in protein design.
These include “safety, security, equity, international collaboration, openness, responsibility, and pursuing research for the benefit of society.” The signatories have also voluntarily agreed to a set of specific, actionable commitments informed by these values and principles.
One key aspect of the initiative focuses on improving the screening of DNA synthesis, a crucial step in translating AI-designed proteins into physical molecules.
The scientists commit to “obtain DNA synthesis services only from providers that demonstrate adherence to industry-standard biosecurity screening practices, which seek to detect hazardous biomolecules before they can be manufactured.”
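To make the idea concrete, here is a deliberately simplified sketch of what sequence-based biosecurity screening involves. The hazard signature, function name, and exact-match logic below are illustrative assumptions only; real providers rely on homology search against curated hazard databases under industry frameworks such as the International Gene Synthesis Consortium’s harmonized screening protocol.

```python
# Hypothetical sketch of DNA synthesis biosecurity screening: flag an
# incoming order if it contains any known-hazard sequence signature.
# Real-world screening uses homology search (BLAST-style comparison),
# not exact substring matching; this example is purely conceptual.

HAZARD_SIGNATURES = {
    # Placeholder signature, not a real hazardous sequence.
    "toxin_example": "ATGGCTAAAGGTC",
}

def screen_order(sequence: str) -> list[str]:
    """Return the names of any hazard signatures found in the ordered sequence."""
    sequence = sequence.upper()
    return [name for name, sig in HAZARD_SIGNATURES.items() if sig in sequence]

order = "TTGACATGGCTAAAGGTCCA"
flags = screen_order(order)
if flags:
    print("Order flagged for manual biosecurity review:", flags)
else:
    print("No hazard signatures detected.")
```

In practice, flagged orders are escalated to human biosecurity experts rather than rejected automatically, which is why the sketch returns the matching signatures instead of a simple yes/no verdict.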
AI has already delivered striking advances in medicine and biotechnology, contributing to the discovery of new antibiotics and candidate anti-aging drugs.
To build on this progress, the signatories commit to “work collaboratively with stakeholders from around the world” and to “refrain from research likely to lead to overall harm or enable misuse of our technologies.”