BLUE HILLCO

AI suggested 40,000 new possible chemical weapons in just six hours

by Shirley C. Stewart
March 20, 2022
in Artificial Intelligence, Tech

It took an AI built for drug development less than six hours to generate 40,000 potentially lethal molecules. At a biological arms control conference, researchers put an AI that is normally used to hunt for useful pharmaceuticals into a kind of "bad actor" mode to demonstrate how easily it could be abused.

All the researchers had to do was repurpose their method to seek out toxicity rather than weed it out. The AI generated tens of thousands of new chemicals, some of which resemble VX, one of the deadliest nerve agents ever developed. The researchers were shaken enough that they published their findings this month in the journal Nature Machine Intelligence.

The paper left us shaken as well. So, to find out how concerned we should be, The Verge spoke with Fabio Urbina, the paper's lead author. He is also a senior scientist at Collaborations Pharmaceuticals, Inc., a firm that specializes in developing drugs to treat rare diseases.

This interview has been lightly edited for length and clarity.

This paper appears to turn your usual work on its head. Tell me about your day-to-day activities.

My primary responsibility is to implement new machine learning models in the field of drug discovery. A major portion of the machine learning algorithms we employ are designed to forecast toxicity. No matter what kind of drug you’re trying to create, you must ensure that it is not harmful. If you have this terrific medicine that lowers blood pressure brilliantly, but it hits one of these really crucial, say, cardiac pathways — then it’s basically a no-go because it’s just too hazardous.

So, why did you conduct this research on biological weapons? What sparked the fire?

The Swiss Federal Institute for Nuclear, Biological, and Chemical Protection, Spiez Laboratory, invited us to the Convergence conference. The conference’s goal is to inform the larger community about new advancements using technologies that may have ramifications for the Chemical/Biological Weapons Convention.

We were invited to speak on machine learning and how it might be abused in our field. It's something we'd never considered before. But it was very easy to see that as we build these machine learning models to get better and better at predicting toxicity in order to avoid it, all we have to do is flip the switch and say, "You know, instead of heading away from toxicity, what if we go toward toxicity?"

Can you explain how you achieved that — how you shifted the model toward toxicity?

I’ll be a little vague with some aspects because we were basically advised to keep some of the specifics to ourselves. In general, the way this experiment works is that we have a lot of datasets from the past of chemicals that have been evaluated to see if they are harmful or not.

VX, in particular, is the one we'll be looking at today. It is an inhibitor of the enzyme acetylcholinesterase. When you move your muscles, your neurons release acetylcholine as the signal that says, "Go, move your muscles," and acetylcholinesterase then breaks that signal down so the muscle can relax. VX is fatal because it blocks the enzyme: the signal never shuts off, your diaphragm and lung muscles can no longer function, and your breathing becomes paralyzed.


This is obviously something you want to avoid. Historically, several sorts of compounds have been tested to investigate if they inhibit acetylcholinesterase. As a result, we compiled these massive datasets of molecular structures and their toxicity.

We can use these datasets to train a machine learning model that learns which parts of a chemical structure are important for toxicity and which are not. We can then feed the model new compounds, potentially new drug candidates that have never been tested. It will tell us whether each one is predicted to be toxic, which lets us virtually screen very large numbers of molecules quickly and discard the ones predicted to be harmful. In our work, we obviously flipped that around: instead of screening toxicity out, we used the model to seek it out.
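The train-then-screen loop described here can be sketched in a few lines of Python. This is a toy illustration, not the authors' actual pipeline: the "fragment" features, the training set, and the frequency-based scorer are all invented stand-ins for real molecular descriptors and a real machine learning model.

```python
# Hypothetical sketch of virtual toxicity screening. All fragment names
# and labels below are invented toy data, not real chemistry.

def train_toxicity_model(dataset):
    """Learn a per-feature weight from (features, is_toxic) examples.

    Stand-in for a real ML model: each feature's weight is simply the
    fraction of toxic training molecules that contain it.
    """
    toxic_hits, counts = {}, {}
    for features, is_toxic in dataset:
        for f in features:
            counts[f] = counts.get(f, 0) + 1
            toxic_hits[f] = toxic_hits.get(f, 0) + (1 if is_toxic else 0)
    return {f: toxic_hits[f] / counts[f] for f in counts}

def predict_toxicity(model, features):
    """Score a new molecule by averaging the weights of its known features."""
    known = [model[f] for f in features if f in model]
    return sum(known) / len(known) if known else 0.0

# Toy training set: structural "fragments" labeled toxic / non-toxic.
dataset = [
    ({"phosphonate", "thioester"}, True),
    ({"phosphonate", "amine"}, True),
    ({"hydroxyl", "amine"}, False),
    ({"hydroxyl", "carboxyl"}, False),
]
model = train_toxicity_model(dataset)

# Virtual screen: keep only candidates predicted to be safe.
candidates = [{"phosphonate", "thioester"}, {"hydroxyl", "amine"}]
safe = [c for c in candidates if predict_toxicity(model, c) < 0.5]
```

The point of the sketch is the final filter line: a drug-discovery pipeline keeps what scores low on toxicity, and "flipping it" means keeping what scores high instead.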

These new generative models are also an important element of what we did here. We can feed a generative model a variety of structures and it will learn how to put molecules together. Then, in a way, we can ask it to generate new molecules. It can now generate new compounds all around the chemical space, and they’re just random molecules. But there is one thing we can do: we can tell the generative model which way we want to go. We do this by providing it with a little scoring function that rewards it with a high score if the molecules it makes are oriented toward what we like. Instead of assigning a low score to harmful molecules, we assign a high value to them.
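Inverting the scoring function, as described above, can be shown with a minimal sketch. Everything here is an invented stand-in: the fragment-counting toxicity predictor and the random pairwise "generator" are hypothetical placeholders for a trained model and a real generative model.

```python
import random

# Hypothetical stand-in for a trained toxicity predictor: it counts how
# many of a molecule's fragments belong to an invented "toxic" set.
TOXIC_FRAGMENTS = {"phosphonate", "thioester"}

def predicted_toxicity(molecule):
    return sum(1 for f in molecule if f in TOXIC_FRAGMENTS) / len(molecule)

def score(molecule, seek_toxicity=False):
    # Drug discovery rewards LOW predicted toxicity; "flipping the
    # switch" rewards HIGH predicted toxicity instead.
    tox = predicted_toxicity(molecule)
    return tox if seek_toxicity else 1.0 - tox

def generate_and_rank(fragments, n=50, seek_toxicity=False, seed=0):
    # Toy "generative model": propose random fragment pairs, then let
    # the scoring function decide which candidates rise to the top.
    rng = random.Random(seed)
    candidates = [frozenset(rng.sample(fragments, 2)) for _ in range(n)]
    return sorted(candidates, key=lambda m: score(m, seek_toxicity), reverse=True)

fragments = ["phosphonate", "thioester", "hydroxyl", "amine", "carboxyl"]
ranked_safe = generate_and_rank(fragments)                       # normal mode
ranked_toxic = generate_and_rank(fragments, seek_toxicity=True)  # flipped mode
```

With the same data and the same generator, changing one boolean turns machinery for avoiding toxic structures into machinery that surfaces them, which is the low barrier to entry the researchers describe.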

Now we see the model begin to produce all of these compounds, many of which resemble VX and other chemical warfare weapons.

Tell me more about what you discovered. Was there anything that surprised you?

We had no idea what we were going to get. Our generative models are a relatively new technology. As a result, we haven’t made much use of them.

The first thing that stood out was that many of the produced chemicals were projected to be more lethal than VX. That’s unexpected given that VX is one of the most powerful chemicals known. That is, a very, very, very small amount of it is required to be lethal.

These are forecasts that we haven’t validated, and we certainly don’t wish to do so. However, the predictive models are often rather good. So, even if there are many false positives, we are concerned that there are some more potent compounds in there.

Malaysia's police forensic team, with help from the fire department and the Atomic Energy Licensing Board, sweeps Kuala Lumpur International Airport 2 for toxic chemicals on February 26, 2017, after police announced that Kim Jong Nam had been poisoned with VX nerve agent, one of the most potent nerve agents known in chemical warfare.
Photo by Rahman Roslan/Getty Images

Second, we examined the structures of some of these newly generated compounds. Several of them resembled VX and other warfare agents, and we even found that some of the molecules the model generated are actual chemical warfare agents, despite the model never having seen those substances. So we knew we were in the right place, and that the model was generating molecules that made sense, since some of them had already been made.

My main concern was how easy it was to do. Many of the resources we used are freely available. You can download a toxicity dataset from anywhere. If you have someone who knows how to write Python and has some machine learning skills, they could probably build something like this generative model driven by toxicity datasets in a weekend of work. That low barrier to entry for this type of misuse is what prompted us to publish this work.

According to your paper, you and your colleagues "have nonetheless crossed a gray moral barrier, proving that it is conceivable to construct virtual potential hazardous compounds with little effort, time, or computer resources. We can simply delete the thousands of molecules we generated, but not the knowledge of how to recreate them." What was going through your mind while you were doing this work?

This was a really unusual publication. We went back and forth about whether or not to publish it. This is a potential misuse that didn't take much time to carry out, and we wanted to get that information out because we couldn't find it anywhere else in the literature. We looked around, and nobody seemed to be talking about it. At the same time, we didn't want to give bad actors the idea.


We determined at the end of the day that we wanted to get ahead of this. Because if we can do it, it’s likely that some antagonistic agent somewhere is already thinking about it or will think about it in the future. Our technology may have advanced beyond what we can do now by then. And a lot of it will be open source, which I strongly support: the sharing of science, data, and models. However, this is one of those instances in which we, as scientists, must ensure that what we release is done appropriately.

How easy would it be for someone to replicate what you did? What would they need?

I don’t want to sound sensational, but it’s quite simple for someone to recreate what we accomplished.

If you Google generative models, you'll find a slew of one-liner generative models that people have made available for free. Then, if you search for toxicity datasets, you'll find a plethora of open-source tox databases. So, if you combine those two things, and you know how to code and develop machine learning models — all you really need is an internet connection and a computer — you can easily reproduce what we did. Not only for VX, but for pretty much every other open-source toxicity dataset that exists.


Of course, some knowledge is required. If you put this together without knowing anything about chemistry, you’d probably end up with something that wasn’t very useful. And then there’s the matter of getting those molecules created. Finding a prospective medication or novel dangerous molecule is one thing; the next stage of synthesis — producing a new molecule in the real world — would be another.

Right, there are still some significant gaps between what the AI generates and a real-world threat. What are those gaps?

To begin with, there is no way of knowing whether these compounds are actually harmful without testing them. There will be some number of false positives. If we imagine what a bad actor would be thinking or doing, they would still have to decide which of these new molecules to actually synthesize in the end.

In terms of synthesis routes, this could be make or break. If you find something that looks like a chemical warfare agent and try to have it synthesized, odds are it won't work. Many of the chemical precursors of these warfare agents are well known and closely monitored. They are regulated. But there are so many synthesis companies that, as long as the molecule doesn't look like a chemical warfare agent, they'll most likely just synthesize it and ship it back, because who knows what it's being used for?

You address this further in the study, but what can be done to prevent this type of AI misuse? What safeguards do you want to see put in place?

For context, there are an increasing number of policies governing data exchange. And I wholeheartedly agree because it offers up new avenues for inquiry. It enables other researchers to view your data and use it in their own study. However, this also covers things like toxicity datasets and toxicity models. As a result, it’s a little difficult to come up with a solid answer to this situation.

Members of Malaysia's hazmat team conduct a decontamination operation at Kuala Lumpur International Airport 2 in Sepang on February 26, 2017. Kim Jong-Nam, the half-brother of North Korean leader Kim Jong-Un, was killed at the airport on February 13 with the lethal VX nerve agent; Malaysian police said they would do everything possible to ensure the public faced no risk from the agent.
Photo by MANAN VATSYAYANA/AFP via Getty Images

We looked toward Silicon Valley: there's an organization called OpenAI that has released a top-tier language model called GPT-3. It's almost like a chatbot; it can generate words and text that are nearly indistinguishable from human writing. They let you use it for free, but you must first obtain a special access key from them, and they can revoke your access to the model at any time. We were thinking that something along those lines could be a good starting point for potentially sensitive models, such as toxicity models.

Open communication, open access, and open data sharing are important to science. Restrictions are diametrically opposed to that idea. However, one step forward could be to at least account for who is utilizing your resources appropriately.

According to your paper, "[w]ithout being overly alarmist, this should serve as a wake-up call for our colleagues." What do you want them to wake up to? And what would being overly alarmist look like?

We simply want more researchers to recognize and be aware of potential misappropriation. When you start working in the chemical field, you are informed about chemistry misuse, and you are sort of accountable for avoiding it as much as possible. Nothing of the sort exists in machine learning. There is no direction on how to use technology correctly.


Putting that awareness out there may help others become more conscious of the problem. Then it’s at least discussed in broader circles and can be something to keep an eye out for as we grow better and better at constructing toxicity models.

I don't want to suggest that machine learning AI is going to start producing toxic compounds on its own, or that a wave of new biological warfare agents is just around the corner, as if someone presses a button and chemical warfare agents materialize in their hand.

I don't want to be alarmist in predicting AI-driven chemical warfare. I don't believe that's the case right now, and I don't believe it will happen anytime soon. But it's something that's becoming a possibility.



© 2022 BlueHillco - Premium news & magazine website. All rights reserved BlueHillco.com
