Op-Ed - February 2026
![Why Fighting AI’s Risks with AI Won’t be Enough, Op-Ed, Alexander Thamm [at]](/fileadmin/_processed_/7/4/csm_why-fighting-ais-risks-with-ai-wont-be-enough-op-ed-februar-2026_a975e90677.jpg)
The invention of the printing press enabled the mass production of texts and dramatically expanded and accelerated the circulation of information. At the same time, it created new channels for the rapid spread of propaganda, misinformation, and manipulation. In other words: technologies introduced with clearly beneficial aims often carry inherent dual-use potential, because the very features that make them valuable, such as speed and scale, also amplify harmful applications.
A present-day example of this dual-use potential is AI in chemistry. Drug discovery is a laborious task that often takes years: chemists traditionally have to comb through existing substances and compounds, check all known chemical reactions, and then test whether any of the candidates can actually be synthesized.
AI now promises to accelerate this process. MOSAIC, an AI system developed by researchers at Yale and Boehringer Ingelheim, is designed to simplify and speed up chemical synthesis. According to a recent Nature report, MOSAIC has already generated 35 compounds with the potential to become products such as pharmaceuticals, agrochemicals, or cosmetics, without any further searching or tweaking. Sounds promising, right? So where’s the issue?
While such systems enable rapid scientific breakthroughs, they simultaneously facilitate the discovery, design, and optimization of toxic, sometimes deadly, chemical agents. Although the discovery of such substances is often accidental (as with the nerve agent sarin in 1938), numerous historical examples show how states and militaries have deliberately turned them into chemical weapons. Chlorine and, later, mustard gas, for instance, were known to chemists long before they were deployed on a large scale during the First World War.
The Chemical Weapons Convention eventually banned the production, stockpiling, and use of chemical weapons in 1993. Yet today they are reappearing on the battlefield, as recent reports indicate (European Council, 2025). As a result, global attention to the dual-use risk of AI in chemistry and biology is intensifying again.
The California-based drug discovery company Lunai Bioworks recognizes the risk that scientific AI will accelerate the discovery and synthesis of harmful chemical compounds. In response, Lunai developed Sentinel™, an AI aimed at containing the risks posed by other AIs like MOSAIC. Specifically, Sentinel™ is designed to stop large language and scientific models from generating or discovering dangerous chemical agents in the first place. It is a “transformer-based AI safeguard designed to be embedded directly within large language and scientific foundation models” (PR Newswire, 2026). Built on Lunai’s expansive molecular AI platform and strengthened by proprietary toxicology and in-vivo phenotypic datasets, Sentinel operates as a real-time biosecurity layer, screening molecular outputs before they are produced and stopping potentially hazardous designs at the source.
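In concept, such a safeguard sits between a model’s outputs and the outside world, releasing only what passes a hazard check. The sketch below is purely illustrative and does not reflect Sentinel’s actual implementation: the compound names, the `toxicity_score` field, and the threshold are all hypothetical stand-ins for what would, in a real system, be trained toxicology models.

```python
# A minimal sketch of an output-screening safety layer.
# All names and scores are hypothetical; this is not Sentinel's design.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str               # hypothetical compound identifier
    toxicity_score: float   # assumed output of an upstream toxicology model, 0..1

def screen_outputs(candidates, threshold=0.8):
    """Split candidates into (released, blocked): anything whose assumed
    toxicity score meets the threshold is blocked before it leaves the model."""
    released, blocked = [], []
    for c in candidates:
        if c.toxicity_score >= threshold:
            blocked.append(c)
        else:
            released.append(c)
    return released, blocked

released, blocked = screen_outputs([
    Candidate("compound-A", 0.12),
    Candidate("compound-B", 0.95),
])
print("released:", [c.name for c in released])
print("blocked:", [c.name for c in blocked])
```

The point of the sketch is structural: the filter runs before any output is acted on, which is what makes it a guardrail rather than an after-the-fact audit, and it is exactly this embedded, automated position that raises the oversight questions discussed below.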
This initiative surely adds an important safety layer. But is fighting the risks of one AI with another AI the right approach? Relying on a built-in safeguarding AI to prevent harmful discoveries seems misaligned with the goals that emerging AI governance frameworks pursue: recognizing the increasing capabilities of AI, such frameworks promote (among other things) responsible use, transparency, and human oversight. But when one AI governs the activities of another, where does human oversight come in? Technical guardrails can reduce risk, but they cannot substitute for external review, clear accountability structures, and enforceable access controls. Without complementary governance, such as independent auditing, controlled deployment environments, and norms around responsible use, the burden of prevention is effectively shifted onto the very tools whose capabilities are being accelerated and whose inner workings are too often incomprehensible to humans.
In that sense, the challenge AI poses to chemistry echoes a broader concern long raised by the British computer scientist Stuart Russell: how can we ensure that increasingly powerful technologies remain aligned with human values?
Perhaps, as Russell suggests, we should start by rethinking how we think about AI. Common narratives assume that the task is to align “our” goals with the “AI’s” goals. This, Russell argues, is exactly the wrong starting point. AI is, after all, a man-made tool that inherently cannot have any goals other than those humans build into it. Instead, we should ask what our goals for AI, and, frankly, for the future of humanity, actually are, and how those goals are (or aren’t) reflected in AI technologies.
Russell’s argument reinforces my concerns laid out in the previous paragraph: If we discover that tools like MOSAIC do not align with our goals, we should first adapt or reconfigure that tool rather than layering a new one on top, especially if the new tool’s built-in objectives might also fail to reflect what we’re trying to achieve. The harder – and more necessary – task is to decide what our goals actually are, and to design technologies that reflect them from the start. Otherwise, we risk constructing an ever-growing stack of machines tasked with correcting one another, while human responsibility quietly slips out of the loop.
References
Haote, L., Sarkar, S., Lu, W., et al. (2026). Collective intelligence for AI-assisted chemical synthesis. Nature. https://doi.org/10.1038/s41586-026-10131-4
PR Newswire (January 2026). Lunai Bioworks (NASDAQ: LNAI) Launches Sentinel, an AI Safeguard to Block Large Language Models from Generating Novel Chemical Weapons. Available at https://www.prnewswire.com/news-releases/lunai-bioworks-nasdaq-lnai-launches-sentinel-an-ai-safeguard-to-block-large-language-models-from-generating-novel-chemical-weapons-302671394.html
European Council (2025). Chemical Weapons: EU sanctions three entities in the Russian Armed Forces over use of chemical weapons in Ukraine. Available at https://www.consilium.europa.eu/en/press/press-releases/2025/05/20/chemical-weapons-eu-sanctions-three-entities-in-the-russian-armed-forces-over-use-of-chemical-weapons-in-ukraine
Russell, S. (2020). Human Compatible: AI and the Problem of Control. Penguin.