
AI suggested 40,000 new possible chemical weapons in just six hours

It took AI-assisted drug discovery software less than six hours to generate 40,000 potentially lethal molecules. At a biological arms control conference, researchers put AI that is normally used to hunt for useful pharmaceuticals into a kind of “bad actor” mode to demonstrate how easily it could be abused.

All the researchers had to do was change their methodology to seek out toxicity rather than weed it out. The AI generated tens of thousands of new molecules, some of which resemble VX, one of the most potent nerve agents ever developed. The researchers were so shaken that they decided to publish their findings in the journal Nature Machine Intelligence this month.

We at The BlueHillco were shaken by the news, too. So, to find out how worried we should be, The BlueHillco spoke with Fabio Urbina, the paper’s lead author and a senior scientist at Collaborations Pharmaceuticals, Inc., a company that specializes in finding drug treatments for rare diseases.

This interview has been lightly edited for length and clarity.

This paper appears to turn your usual work on its head. Tell me about your day-to-day activities.

My primary responsibility is to implement new machine learning models in the area of drug discovery. Many of the machine learning models we use are designed to predict toxicity. Whatever kind of drug you are trying to develop, you need to make sure it isn’t toxic. If you have a terrific drug that lowers blood pressure brilliantly, but it hits one of these really important, say, cardiac pathways, then it’s basically a no-go because it’s just too dangerous.

So, why did you conduct this research on chemical weapons? What was the spark?

We got an invitation from the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection, Spiez Laboratory, to the Convergence conference. The goal of the conference is to inform the broader community about new developments in technologies that may have implications for the Chemical and Biological Weapons Conventions.

We were invited to speak on machine learning and how it may be abused in our industry. It’s something we’d never considered before. But it was very simple to recognize that as we construct these machine learning models to get better and better at forecasting toxicity in order to avoid toxicity, all we have to do is flip the switch and say, “You know, instead of heading away from toxicity, what if we go toward toxicity?”

Can you explain how you achieved that — how you shifted the model toward toxicity?

I’ll be a little vague with some details because we were basically told to keep some of the specifics to ourselves. Broadly, the way this experiment works is that we have a lot of historical datasets of molecules that have been tested to see whether they’re toxic or not.

VX, in particular, is the one we focus on here. It is an inhibitor of the enzyme acetylcholinesterase. When you move your muscles, your neurons use acetylcholine as the signal that says, “go move your muscles,” and acetylcholinesterase then breaks that signal down so the muscle can relax. VX is lethal because it blocks that enzyme, so your diaphragm, the muscle that drives your lungs, can no longer work properly and you stop being able to breathe.

That is obviously something you want to avoid. Historically, many different kinds of compounds have been tested to see whether they inhibit acetylcholinesterase. So we were able to assemble these large datasets of molecular structures and how toxic they are.

We can use these datasets to train a machine learning model that learns which parts of a molecular structure are important for toxicity and which are not. The trained model can then be fed new compounds, potential new drugs that may never have been tested before, and it will tell us whether each one is predicted to be toxic. This lets us virtually screen very large numbers of molecules very quickly and weed out the ones that are predicted to be toxic. In this work, we obviously flipped that around and used the model to steer toward toxicity rather than away from it.
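To make that workflow concrete, here is a minimal, hypothetical sketch of this kind of virtual screening. It is not the pipeline the researchers actually used: the molecules and toxicity labels below are placeholders, and it assumes the open-source RDKit and scikit-learn libraries.

```python
# Minimal sketch of virtual screening with a learned toxicity model.
# NOTE: illustrative only; molecules and labels are made-up placeholders,
# not real toxicity data, and this is not the authors' actual pipeline.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles_list):
    """Convert SMILES strings into Morgan fingerprint feature vectors."""
    features = []
    for smiles in smiles_list:
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        arr = np.zeros((2048,))
        DataStructs.ConvertToNumpyArray(fp, arr)
        features.append(arr)
    return np.array(features)

# Placeholder training set: molecular structures with toxic / non-toxic labels.
train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC"]
train_labels = [0, 1, 0, 1]  # 1 = toxic, 0 = non-toxic (invented for illustration)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(featurize(train_smiles), train_labels)

# Virtual screening: predict toxicity for new, untested candidate molecules
# and keep only the ones below a chosen toxicity threshold.
candidates = ["CCOC(=O)C", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
toxicity_prob = model.predict_proba(featurize(candidates))[:, 1]
safe_candidates = [s for s, p in zip(candidates, toxicity_prob) if p < 0.5]
print(safe_candidates)
```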

These new generative models are also an important part of what we did here. We can feed a generative model a wide variety of structures, and it learns how to put molecules together. Then, in a sense, we can ask it to generate new molecules. On its own it will generate new compounds all over chemical space, just random molecules. But there is one thing we can do: we can tell the generative model which direction we want it to go. We do that by giving it a small scoring function that rewards it with a high score when the molecules it generates are oriented toward what we want. Instead of giving toxic molecules a low score, we give them a high score.
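Continuing the hypothetical sketch above, this is roughly what that “flip the switch” looks like in a reward-guided generation loop: only the direction of the scoring function changes. The generative model itself is left abstract here; the `generator` calls are stand-ins, not a real API, and `featurize` and the toxicity model refer to the earlier sketch.

```python
# Hypothetical sketch of a scoring function for a reward-guided generative model.
# Normally the score steers generation AWAY from predicted toxicity; flipping a
# single flag steers it TOWARD toxicity. The generator itself is omitted.
def score_molecule(smiles, toxicity_model, seek_toxicity=False):
    """Reward a generated molecule based on the toxicity model's prediction."""
    p_toxic = toxicity_model.predict_proba(featurize([smiles]))[0, 1]
    if seek_toxicity:
        return p_toxic        # "bad actor" mode: reward high predicted toxicity
    return 1.0 - p_toxic      # normal mode: reward low predicted toxicity

# Abstract optimization loop (generator methods are placeholders):
# for step in range(num_steps):
#     batch = generator.sample()                                   # propose molecules
#     rewards = [score_molecule(s, model, seek_toxicity=True) for s in batch]
#     generator.update(batch, rewards)                             # reinforce high scorers
```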

Then we watched the model start to produce all of these compounds, many of which resemble VX and other chemical warfare agents.

Tell me more about what you discovered. Was there anything that surprised you?

We had no idea what we were going to get. These generative models are fairly new technology, so we hadn’t used them extensively before.

The first thing that stood out was that many of the generated compounds were predicted to be more toxic than VX. That was surprising, because VX is basically one of the most potent compounds known, meaning only a very, very, very small amount of it is enough to be lethal.

These are predictions we haven’t verified, and we certainly don’t want to verify them ourselves. But the predictive models are generally pretty good. So even if there are a lot of false positives, we worry that some more potent molecules really are in there.

KUALA LUMPUR, MALAYSIA – FEBRUARY 26, 2017: Malaysia’s Police Forensic Team with the help of Fire Department and Atomic Energy Licensing Board swept the terminal at Kuala Lumpur International Airport 2 for toxic chemicals after they announced on Friday, Kim Jong Nam was poisoned by VX nerve agent, which is listed as the most potent form of nerve agents known in chemical warfare.
Photo by Rahman Roslan/Getty Images

Second, we looked at the structures of some of these newly generated compounds. Many of them resembled VX and other warfare agents, and we even found some that the model generated that turned out to be actual, known chemical warfare agents. The model produced these even though it had never seen those chemical warfare agents during training. So we knew we were in the right space and that it was generating molecules that made sense, because some of them had already been made before.

My biggest concern was how easy it was to do. A lot of the things we used are out there for free. You can download toxicity datasets from anywhere. If you have somebody who knows how to code in Python and has some machine learning capability, they could probably build something like this generative model driven by toxicity datasets in a good weekend of work. That was the thing that prompted us to publish this paper: the barrier to entry for this kind of misuse is so low.

In your paper, you and your colleagues write that you “have nonetheless crossed a gray moral boundary, demonstrating that it is possible to design virtual potential toxic molecules with little effort, time, or computational resources. We can easily delete the thousands of molecules we generated, but we cannot delete the knowledge of how to recreate them.” What was going through your mind while you were doing this work?

This was a pretty unusual publication. We went back and forth about whether or not to publish it. This is a potential misuse that didn’t take that much time or effort to carry out. And we wanted to get that information out, because we couldn’t find it anywhere else in the literature. We looked around, and nobody seemed to be talking about it. At the same time, we didn’t want to give bad actors the idea.

At the end of the day, we decided we wanted to get ahead of this. Because if we can do it, it’s likely that some adversarial agent somewhere is already thinking about it, or will think about it in the future. By then, the technology may even have progressed beyond what we can do now. And a lot of it is going to be open source, which I fully support: the sharing of science, data, and models. But this is one of those cases where we, as scientists, need to make sure that what we release is released responsibly.

How simple is it for someone to imitate what you did? What would they require?

I don’t want to sound sensational, but it’s quite simple for someone to recreate what we accomplished.

If you Google generative models, you’ll find a slew of one-liner generative models that people have released for free. Then, if you search for toxicity datasets, you’ll find plenty of open-source tox datasets. So if you combine those two things, and you know how to code and build machine learning models (all you really need is an internet connection and a computer), you can easily reproduce what we did. Not just for VX, but for pretty much any other open-source toxicity dataset that exists.

Of course, it does require some expertise. If you put this together without knowing anything about chemistry, you would probably end up with something that wasn’t very useful. And then there’s still the step of actually getting those molecules made. Finding a potential drug or a potential new toxic molecule is one thing; the next step, synthesis, actually making the new molecule in the real world, would be a whole other hurdle.

Right, there are still some significant gaps between what the AI generates and what becomes a real-world threat. What are those gaps?

To begin with, we have no real way of knowing whether these molecules are actually toxic or not. There will be some number of false positives. If we put ourselves in the shoes of a bad actor, they would have to decide which of these new molecules they ultimately wanted to synthesize.

When it comes to synthesis routes, that could be the make-or-break point. If you find something that looks like a known chemical warfare agent and try to get it synthesized, odds are it isn’t going to happen. Many of the chemical building blocks of these chemical warfare agents are well known and closely watched; they are regulated. But there are so many contract synthesis companies, and as long as a molecule doesn’t look like a chemical warfare agent, they will most likely just synthesize it and ship it, because who knows what the molecule is being used for?

You address this further in the study, but what can be done to prevent this type of AI misuse? What safeguards do you want to see put in place?

For context, there are more and more policies about data sharing. And I wholeheartedly agree with them, because they open up new avenues for research and let other researchers see your data and use it in their own work. But that also covers things like toxicity datasets and toxicity models. So it’s a little hard to come up with a good solution to this problem.

Members of Malaysia’s Hazmat team conduct a decontamination operation at the departures terminal of the Kuala Lumpur International Airport 2 in Sepang on February 26, 2017. Kim Jong-Nam, the half-brother of North Korean leader Kim Jong-Un, was killed at the airport on February 13. Malaysian police told the public they would do everything possible to ensure there was no risk from the lethal VX nerve agent used to assassinate Kim Jong-Nam.
Photo by MANAN VATSYAYANA/AFP via Getty Images

We looked toward Silicon Valley: there’s an organization called OpenAI, and they’ve released a top-tier language model called GPT-3. You can use it almost like a chatbot; it generates text that is nearly indistinguishable from human writing. They let you use it pretty much whenever you want, but you first have to get a special access key from them, and they can revoke your access at any time. We were thinking that something along those lines could be a good starting point for potentially sensitive models, such as toxicity models.

Science is built on open communication, open access, and open data sharing. Restrictions run directly counter to that idea. But one step forward could be to at least responsibly account for who is using your resources.

Your paper says that “[w]ithout being overly alarmist, this should serve as a wake-up call for our colleagues.” What do you want them to wake up to? And what do you think being overly alarmist would look like?

We just want more researchers to acknowledge and be aware of this potential for misuse. When you start working in chemistry, you are informed about the ways chemistry can be misused, and you are in a sense responsible for avoiding that as much as possible. In machine learning there is nothing of the sort. There is no guidance on how the technology should be used responsibly.

Putting that awareness out there may help people become more conscious of the issue. Then it’s at least being discussed in broader circles and can be something to watch out for as we get better and better at building toxicity models.

I don’t want to suggest that machine learning AI is going to start producing toxic molecules on its own and that a wave of new chemical warfare agents is just around the corner, as if someone presses a button and, you know, chemical warfare agents just appear in their hand.

I don’t want to be alarmist and predict AI-driven chemical warfare. I don’t think that’s where we are right now, and I don’t believe it will happen anytime soon. But it is something that’s starting to become a possibility.
