it's about ethics in drug discovery
March 16, 2022 2:24 PM

Dual use of artificial-intelligence-powered drug discovery (Urbina, F., Lentzos, F., Invernizzi, C. et al., Nat Mach Intell (2022)):
Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused. The thought had never previously struck us. ... When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research.
We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. ... We had designed a de novo molecule generator, which is guided by machine learning model predictions of bioactivity for the purpose of finding new therapeutic inhibitors of targets for human diseases. This generative model normally penalizes predicted toxicity and rewards predicted target activity. We simply proposed to invert this logic. ...

In less than 6 hours, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. ...

Our toxicity models were originally created for use in avoiding toxicity. The inverse, however, has always been true: the better we can predict toxicity, the better we can steer our generative model to design new molecules in a region of chemical space populated by predominantly lethal molecules. ...

By going as close as we dared, we have still crossed a grey moral boundary, demonstrating that it is possible to design virtual potential toxic molecules without much in the way of effort, time or computational resources. We can easily erase the thousands of molecules we created, but we cannot delete the knowledge of how to recreate them.
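To make the inversion concrete: in code terms it really is close to a one-line change. A minimal, deliberately toy sketch in Python, with made-up molecules and invented predicted scores standing in for the paper's trained bioactivity and toxicity models:

```python
# Hypothetical predicted scores for three invented candidate molecules.
ACTIVITY = {"mol_a": 0.9, "mol_b": 0.6, "mol_c": 0.3}  # predicted target activity
TOXICITY = {"mol_a": 0.8, "mol_b": 0.1, "mol_c": 0.7}  # predicted toxicity

def score(molecule: str, invert_toxicity: bool = False) -> float:
    """Objective used to steer a generative search over candidate molecules.

    Normal drug-discovery mode rewards predicted activity and penalizes
    predicted toxicity; the paper's exercise simply flips the penalty's sign.
    """
    sign = 1.0 if invert_toxicity else -1.0
    return ACTIVITY[molecule] + sign * TOXICITY[molecule]

candidates = ["mol_a", "mol_b", "mol_c"]
print(max(candidates, key=score))  # -> mol_b: active and non-toxic (drug mode)
print(max(candidates, key=lambda m: score(m, invert_toxicity=True)))  # -> mol_a: active and toxic
```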
posted by autopilot (48 comments total) 25 users marked this as a favorite
 
We can easily erase the thousands of molecules we created, but we cannot delete the knowledge of how to recreate them.

Well, you could delete the AI.

But really, the whole genie has been out of the bottle for so long that a lot of things we take as entirely dystopian are already part of life, in ways most of us rarely engage with. This is truly horrible, but it's not like someone else couldn't build an AI and do the same thing. Maybe faster, worse, cheaper, because the goal would be horror instead of healing to begin with.

There is no way to undo knowledge of things that have been discovered, not for a long time. The library of Alexandria burned and since then the goal has been to have knowledge that won't be destroyed. And we've largely gotten there. (Well, let's forget all the photos and videos that are on abandoned formats that basically erase an entire generation from the record.)

It's a bit like the atom bomb, a bit like a computer worm, entirely a social virus. Knowledge will be free, and if you create it, it's there, and someone will find it and then there are two of you. And so on.
posted by hippybear at 2:36 PM on March 16, 2022 [4 favorites]


Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused. The thought had never previously struck us.

Jesus. Prescribe some remedial science fiction reading for these jackasses, stat.
posted by The Tensor at 2:50 PM on March 16, 2022 [24 favorites]


You don't even have to go that obscure -- Wrath of Khan's Genesis device would be sufficient.
posted by credulous at 3:07 PM on March 16, 2022 [9 favorites]


Yeah, it's a bit like realizing belatedly that you can stab someone with a scalpel.

I wish there were more we could do, but as with other substances and methods we disapprove of as a society, all we can really accomplish is to say "please don't."

The only silver lining here is the same thing that's a pain in the neck for the beneficial side of this same process. Discovering the molecules is the easy part; finding a way to manufacture and test them at scale is hard, and not something you can easily do in total secrecy.

As synthetic and programmable biology continues to become more accessible, you might expect a small lab to be able to produce small quantities, but the operations to make these novel agents at the hundreds-of-liters scale would not be subtle.

Of course a motivated nation-state with reasonable means could accomplish this but... sufficient unto the day is the evil thereof.
posted by BlackLeotardFront at 3:13 PM on March 16, 2022 [3 favorites]


Well, that was terrifying.
posted by happyinmotion at 3:45 PM on March 16, 2022 [2 favorites]


Yeah, it's a bit like realizing belatedly that you can stab someone with a scalpel.

"You could have knocked me over with a feather I was so surprised..."
posted by The Tensor at 3:47 PM on March 16, 2022


I'll assume for the sake of argument that this approach is far more likely to give you compounds that act as predicted when you're after toxins than when you're after therapeutics. (It probably is easier, though how much easier is unclear. But papers like this in the therapeutic space--and there have been many--have posed no risk to the jobs of non-AI-using chemists trying to make drugs.)

The obvious next question to me is whether bad actors who want to poison lots of people are limited by the toxicity of available poisons. If you, the entrepreneurial Bond archvillain, had the funds and disposition to deploy some fatal chemical, and I said "first we'll invent a new poison, I can do it more easily than ever before!", would that be more likely to help you toward your goal or slow you down?
posted by mark k at 3:52 PM on March 16, 2022 [2 favorites]


Deliberately Optimizing for Harm by Derek Lowe at In the Pipeline covers this nicely.

12-year-old-boy me says "OOOOHHH POISONS". 48-year-old me is revolted.

I think a lot of phosphorus-centred chemicals will have just got a lot harder to work with and obtain, since many of the derivatives will be based on P-centred esters or fluorides. I expect there will be emergency scheduling of those that aren't on hidden watchlists ANYWAY.

i'm intrigued & horrified by the "chemical space away from known toxicophores" too...
posted by lalochezia at 3:53 PM on March 16, 2022 [3 favorites]


Unlurking because this is my area:

> Well, you could delete the AI.

The one sentence version, "we inverted the loss function to find the most toxic molecules and it worked" is enough information for a competent lab to recreate this work.

They'd also need to never share the idea.

:(
posted by constraint at 3:56 PM on March 16, 2022 [22 favorites]


It's good we've got Dr Ian Malcolm coming back to cinema screens so I don't have to gripe about a lack of gif memes here. (I'm also grateful that mathowie did stop to think if he should instead of being carried away that he could metafilter the best of the web.)
posted by k3ninho at 4:23 PM on March 16, 2022 [2 favorites]


It is becoming clear that our technical, industrial, and IT powers are increasing to the point that Earth is going to be an n=1 proof of "the great filter", by accident or on purpose. How many rounds of Russian roulette do we need to play as we keep inventing more bullets?
posted by anecdotal_grand_theory at 4:36 PM on March 16, 2022 [5 favorites]


some MeFite will be along directly to mock us for thinking this is bad
posted by glonous keming at 4:48 PM on March 16, 2022 [1 favorite]


The obvious next question to me is whether bad actors who want to poison lots of people are limited by the toxicity of available poisons.

Yes. The precursor chemicals of VX are known. Stockpiles of those chemicals can be regulated and tracked. That makes it hard to manufacture VX in secret.

Previously unknown toxins have who knows what precursors? If it's possible for a team of four people to invent 40,000 new kinds of nerve gas in an afternoon (literally!), there's good reason to worry that some novel toxins might have precursors which are unregulated and easily available.
posted by justsomebodythatyouusedtoknow at 4:55 PM on March 16, 2022 [8 favorites]


I call bullshit on nobody noticing this when they were training the AI. If you're training an AI to avoid toxicity, you're training an AI to find toxicity. Maybe the higher-ups didn't know what the AI could do, but anyone who understood basic machine learning knew what they were doing.
posted by Tell Me No Lies at 5:10 PM on March 16, 2022


some MeFite will be along directly to mock us for thinking this is bad

Well, I don't want to mock you, but this kind of exercise is very important, because there are bad people in the world who are using these same techniques and will unleash the result. If you are in charge of defense, you have a moral obligation to stay ahead of weapons technology and have defenses ready to go.

The arms race never ends, and you really don't want to be the loser.
posted by Tell Me No Lies at 5:14 PM on March 16, 2022 [5 favorites]


Maybe the higher-ups didn't know what the AI could do, but anyone who understood basic machine learning knew what they were doing.

5 to 1 it was a graduate student who accidentally flipped a sign.
posted by logicpunk at 5:18 PM on March 16, 2022 [2 favorites]


AI-narchist’s Cookbook.
posted by snofoam at 5:22 PM on March 16, 2022 [8 favorites]


The obvious next question to me is whether bad actors who want to poison lots of people are limited by the toxicity of available poisons.

The bigger concern is that they are limited by the availability of existing poisons, and would like to have new ones with uncontrolled precursors. On the other hand, having a structure for a (possible) poison does not come with the expertise needed to synthesize it. On the other, other hand, one might be able to contract that out to somebody without too much fuss, since the target is not established as a no-go.

My instinct about this overall is that it’s a risk worth understanding, but differs from existing risks only as a matter of degree. Having some familiarity with the “research chemical”/designer drug space, in which a lot of new products essentially come out of mining the pharma slush pile, along with some application of basic pharmaceutical chemistry, I really doubt that the same couldn’t already be done with nerve gases. These new tools probably ease a few steps in the process. But the reason it hasn’t often happened in reality is that there aren’t a lot of sub-state-level actors who would want to synthesize a new nerve gas - not when there are easier ways to kill people - and even fewer who would realistically be able to manage the project of putting together all the steps required, without killing themselves in the process.
posted by atoxyl at 5:36 PM on March 16, 2022 [2 favorites]


The answer, of course, is to focus more on STEM.
posted by fifteen schnitzengruben is my limit at 5:45 PM on March 16, 2022 [6 favorites]


The obvious next question to me is whether bad actors who want to poison lots of people are limited by the toxicity of available poisons.

Not really, no. I just finished telling my 8-year-old yet again why mixing bleach and ammonia to make a super-duper cleaner is a bad idea. Introducing E. coli to a large public food source wouldn't take a whole lot of work. Your enterprising would-be terrorist could easily mix up enough ricin to kill most of NYC with some castor beans from the corner grocer. The fact that no one has tried suggests that dumping things into the public water supply has been rendered difficult enough to make THAT the limiting factor in how easily you can poison a bunch of people.

I dunno, this whole subject kind of puts me in the same state of mind as thinking about how insane it is that we drive cars on 2-lane roads. Those yellow lines are just a suggestion. All it takes is one bad actor to cut his steering wheel to the left and cause a head-on crash that'll kill both of you. Yet it never happens. (Or happens rarely enough that I've never heard of it.) At the end of the day, civilization is built on a kind of provisional faith that there aren't a lot of people who think like that, and that we'll be able to identify and stop them before they transform into The Joker.
posted by Mayor West at 5:48 PM on March 16, 2022 [11 favorites]


Your enterprising would-be terrorist could easily mix up enough ricin to kill most of NYC with some castor beans from the corner grocer.

Not really unless they intend to stab people with minute amounts of it one at a time. It doesn't vaporize and it doesn't absorb through the skin. You need to ingest it, inhale a very fine powder of it, or be injected with it. It's one of the shittiest ways to kill mass amounts of people.

It's unbelievably difficult to kill mass amounts of people indiscriminately with almost all nerve agents as a terrorist. The Aum Shinrikyo cult nutjobs released sarin in a crowded, enclosed subway car, about as perfect conditions as you can get for a volatile nerve agent, and killed 14 of the thousands who were close enough to be exposed. It's a function of physics: length, width, and height mean you lose potency on an inverse-cube basis, and that curve quickly approaches zero. Easier and more deadly to just have some dude blow himself up in a market.
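For anyone who wants the arithmetic behind that inverse-cube point, here is a minimal sketch assuming a fixed mass mixed evenly through a sphere of growing radius (real atmospheric dispersion is far messier, so treat the numbers as illustration only):

```python
from math import pi

MASS_MG = 1000.0  # hypothetical released mass of agent

def concentration(radius_m: float) -> float:
    """Concentration if MASS_MG is spread evenly through a sphere of radius_m."""
    volume_m3 = (4.0 / 3.0) * pi * radius_m ** 3
    return MASS_MG / volume_m3  # mg per cubic metre

for r in (1, 2, 5, 10):
    print(f"r = {r:>2} m: {concentration(r):8.3f} mg/m^3")
# Doubling the radius cuts concentration by 8x; at 10 m it is 1000x weaker
# than at 1 m, which is why open-air releases tend to fizzle.
```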

It's obviously not great that we're really good at finding ways to kill each other and that we keep applying actual effort to it, though. Sure, the genie is out of the bottle, but you don't always have to rub the fucking lamp.
posted by Your Childhood Pet Rock at 6:17 PM on March 16, 2022 [1 favorite]


Good paper. It's an important issue to call attention to, even if as yet there's no evidence that the tool is being abused in this way.
posted by biogeo at 6:25 PM on March 16, 2022


The answer, of course, is to focus more on STEM.

I think maybe you're being sarcastic? Or maybe I just hope so, because I think the overwhelming focus on STEM and disregard for the humanities is exactly how we wind up with researchers who can design an AI that can find 40,000 new toxins in an afternoon, but who somehow never even considered the dangers of doing so until somebody else pointed it out to them. They own their naivety in the paper, but I still find it kind of flabbergasting, especially the impression they give that such naivety is basically 'industry standard'. If that's true, it seems to me a little less focus on STEM and a little more philosophy, history, and social sciences might go a long way.

I think back to a line from jorm's excellent Designing for Evil post pretty often. He was talking about websites and apps rather than medicines and toxins, but the principle is universal:

When you design a product without understanding how it will be used for evil, you are designing poorly.
posted by mstokes650 at 6:53 PM on March 16, 2022 [16 favorites]


Just finished a CS ethics class at a decent university, and one of my big takeaways was that at least 50% of my peers did not care at all. They were in CS because it makes money, and ethics were an unwanted intrusion on that. The professor specifically had to point out, "Saying that you have to do something because otherwise the business would fail is not a moral argument".

It was an easy and fun class, 2 credits, half the points were "journal about how this makes you feel", and I still had someone ask if they could cheat off me.

I think there may have been a culture shock issue for the foreign students, and I'm sympathetic about financial issues, but at the end of the day those are excuses and the end result is just as shitty. In terms of moral agency this is maybe one of the top fields in the world, and what I saw was incredibly disappointing.
posted by tychotesla at 7:15 PM on March 16, 2022 [21 favorites]


If you're training an AI to avoid toxicity, you're training an AI to find toxicity.

This isn't necessarily true. You could build a model which has really good resolution on medium- and low-toxicity compounds, and very bad resolution on the extremely toxic ones. That's where you want the decision boundary, after all... Then if you try to invert the model, it gives you something that might kill people as fast as five minutes or as slow as fifty years, with no certainty about actual effectiveness.
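For concreteness, one crude way to get that kind of blunted model is to censor the training labels at the decision boundary, so the model never learns to rank anything past it. A toy sketch with an invented cutoff and invented scores:

```python
TOXICITY_CAP = 3.0  # hypothetical cutoff: anything past this is simply "reject"

def censor_label(toxicity: float) -> float:
    """Keep full resolution below the cap; flatten everything above it."""
    return min(toxicity, TOXICITY_CAP)

raw_labels = [0.2, 1.1, 2.9, 7.5, 40.0]  # illustrative toxicity scores
train_labels = [censor_label(t) for t in raw_labels]
print(train_labels)  # -> [0.2, 1.1, 2.9, 3.0, 3.0]
# A model fit to the censored labels can still rank ordinary compounds, but
# inverting it can't distinguish "bad" from "catastrophic".
```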
posted by kaibutsu at 7:25 PM on March 16, 2022 [3 favorites]


Lots of other aspects of small-molecule drug design can be used to make better toxins, including limiting precursors to something innocuous, etc. Actually synthesizing at scale and at viable efficiencies is a whole other sharps container full of dodgy needles.

This has been considered in the synthetic DNA space. There are lots of companies where you enter your DNA sequence online and you'll receive in the mail actual DNA of that sequence (with varying levels of quality). Relatively long sequences of DNA at high specificity in useful quantities can be had for dollars.

The reputable DNA synthesis companies vet customers and have automated screening of requested sequences against known sequences associated with toxins/ human pathogens/ etc. But there are probably ways to get around that, including smurfing and subsequent site-directed mutagenesis. Or do your own gene synthesis using smurfed fragments: you're interested in the expressed amino acid sequence most of the time, so you can even order the fragments using alternate codons to get around the screen. You could potentially even smurf unrelated sequences plausibly to hide the signal in noise.
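To see why alternate codons defeat naive sequence matching, and what translating before screening buys you, here's a minimal sketch (abbreviated standard codon table; the flagged peptide is a made-up example, not a real watchlist entry):

```python
# Abbreviated standard genetic code: several codons map to one amino acid.
CODON_TABLE = {"ATG": "M", "AAA": "K", "AAG": "K",
               "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G"}

def translate(dna: str) -> str:
    """Translate a DNA string to its amino acid sequence, codon by codon."""
    return "".join(CODON_TABLE.get(dna[i:i + 3], "?") for i in range(0, len(dna), 3))

WATCHLIST_PEPTIDES = {"MKG"}  # hypothetical flagged protein fragment

order_a = "ATGAAAGGT"  # one DNA encoding of M-K-G
order_b = "ATGAAGGGC"  # synonymous codons: different DNA, identical protein

for dna in (order_a, order_b):
    print(dna, "->", translate(dna), "| flagged:", translate(dna) in WATCHLIST_PEPTIDES)
# Exact DNA matching would catch only the sequence it already knows;
# protein-level screening catches both encodings.
```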

But old second-hand DNA synthesizers are available, as are the reagents. It just takes a little more money and time to do it yourself. If one's clever, there are other ways to get potentially weaponizable DNA, and the methods of getting it into a delivery/ expression system are pretty basic.

Or just isolate the pathogen/ pathogen-precursor from nature and do other horrible stuff to it. You won't believe the biosafety-level 3 or 4 microbes that can be had just from the environment if you have the right capture and cultivation methods. If you have your own old second-hand Sanger sequencer, you can even ID them in-house without anyone the wiser.
posted by porpoise at 8:01 PM on March 16, 2022 [5 favorites]


Skynet: And I was going to go to the trouble of covering robots with flesh? Sheesh. Answer right in front of me, for Bender's sake.
posted by Halloween Jack at 8:13 PM on March 16, 2022 [2 favorites]


I've been waiting for decades for someone to synthesize enough LSD to put into municipal water supplies across the country simultaneously to usher in The Age Of Aquarius, but that hasn't happened either.
posted by hippybear at 8:19 PM on March 16, 2022 [7 favorites]


Mustard gas had a credible run in WWI with 90,000 deaths and 1,000,000+ casualties, but it turned out that gas masks practically negated its usefulness. Since then chemical weapons have been a very inefficient way to kill large numbers of people. Good for assassinations though.

No, these last few years we have learned a sh*tload about the deployment and public reaction to viral agents. The mystery of what would happen if you unleashed a plague is no longer a mystery. Despite the fact that it appears to be a natural occurrence, Covid will be making an appearance in military planning for a very long time.
posted by Tell Me No Lies at 8:40 PM on March 16, 2022


The one sentence version, "we inverted the loss function to find the most toxic molecules and it worked" is enough information for a competent lab to recreate this work.

They'd also need to never share the idea.

:(


I will never forget that they censored the "how we failed to manually engineer a pathogen" section of an early-2010s paper, but not the "and after that failure we passaged the wild strain through 12 generations of ferrets, artificially selecting winners" part.

The real security is rapid response by a competent centralized power.
posted by Slackermagee at 9:48 PM on March 16, 2022 [1 favorite]


This has been considered in the synthetic DNA space.

This scares me a lot more - toxic chemicals are at worst persistent, but they aren’t self-propagating.
posted by atoxyl at 10:15 PM on March 16, 2022 [1 favorite]


They'd also need to never share the idea.

And then also hope that somehow nobody else thought up this fairly obvious approach. The fruit, it is low-hanging.
posted by axiom at 10:17 PM on March 16, 2022 [4 favorites]


Just finished a CS ethics class at a decent university, and one of my big takeaways was that at least 50% of my peers did not care at all. They were in CS because it makes money, and ethics were an unwanted intrusion on that.

Certainly true but I have long felt that neither the academic CS field’s well-intentioned response to this (“this is why we need to teach ethics“) nor the “this is why we need the humanities” version above really gets to the heart of the issue, which is that if you have a technical skill people will pay you a lot of money to do unethical things.
posted by atoxyl at 10:19 PM on March 16, 2022 [7 favorites]


really gets to the heart of the issue, which is that if you have a technical skill people will pay you a lot of money to do unethical things.

There is also the fact that borderline ethical things are often interesting to work on. Let’s face it, a laser targeting system for an automated sniper rifle is a lot more interesting than yet another website backend.
posted by Tell Me No Lies at 10:40 PM on March 16, 2022


So how many other AI fields could have some version of this? (Thanks for scaring me I guess - don't mean it as snark, just this is interesting and important...and rather terrifying.)
posted by blue shadows at 12:30 AM on March 17, 2022


Knowing how difficult it is for laboratories to identify obscure tropical diseases, I wonder if someone could use these AI-designed compounds to commit the perfect murder.
posted by Narrative_Historian at 12:49 AM on March 17, 2022


That's like being a blacksmith and stabbing someone with a sword you made yourself-- it's not going to take a Holmes-level intellect to put the pieces together.
posted by Pyry at 1:17 AM on March 17, 2022


> The library of Alexandria burned and since then the goal has been to have knowledge that won't be destroyed.

Just a small quibble, but this misconception comes up a lot. The burning of the Library of Alexandria was not an apocalyptic loss of knowledge on the scale it is often portrayed. By the time of the fire, the Library had already been in disrepair for centuries and much of the collection housed there had already been lost over the years through poor maintenance and exfiltration to other establishments. Other institutions in the same city, such as the Serapeum, had overshadowed the Museum by the time of the fire, and survived long afterwards.

In short, the fire was not a sudden catastrophic loss of knowledge, but rather the death knell signposting a long process of stagnation and decay that had been in progress already. The emphasis on the fire shifts the focus towards devastating low probability events, when the real lesson of the Library of Alexandria should be that time steadily and inexorably erases all things.
posted by I-Write-Essays at 5:00 AM on March 17, 2022 [18 favorites]


Ironically, one of the ways in which knowledge is lost to us is in the invention of new ways to store knowledge. People begin using a new system, and stop using the old, and the knowledge that isn't deemed fit for conversion to the new system is lost.

This often happens when a writing system changes, such as when the use of cuneiform ended in the first centuries AD and no one could read those tablets anymore. When Turkey switched from Arabic characters to Latin characters, people stopped learning to read the knowledge contained in the old books. Only the new books remained accessible to people.

And now, with books becoming digitized, there is a real concern that those works that are not chosen to be digitized, or that are written by hand and are illegible to those only trained to read printed characters, will become inaccessible to future generations as well.
posted by I-Write-Essays at 5:25 AM on March 17, 2022 [3 favorites]


The fact that pre-Nazi Germany had well-developed humanities has destroyed my faith in "the humanities" as a way of preventing atrocities.

My current theory is that good will-- probably cultivated on a person-to-person basis-- is the only thing that prevents atrocities. Yes, good institutions matter, but only if people actually don't want mass murder.

I do think there's a difference between authoritarian Communism and Nazism. For authoritarian Communism, the ideal person was a worker. For Nazism, it was a soldier. Also, authoritarian Communism was at least nominally universalist, which made recruiting easier.

While it doesn't make a difference to the people who were being killed or otherwise abused, I don't think it's a coincidence that Nazism burnt out fast and authoritarian Communism didn't.
posted by Nancy Lebovitz at 5:56 AM on March 17, 2022 [3 favorites]




Derek Lowe nails it:
The paper discusses ethics training, international agreements and guidelines, pledges of responsibility and so on. That's all fine, but history demonstrates that anyone truly interested in using such things will care nothing for these constraints. I feel about these the way that I felt about the "No First Use" pledge on nuclear weapons - that it guaranteed that the world could only be blown up by a liar. Thin comfort.
posted by flabdablet at 6:51 AM on March 17, 2022 [4 favorites]


Previously unknown toxins have who knows what precursors? If it's possible for a team of four people to invent 40,000 new kinds of nerve gas in an afternoon (literally!), there's good reason to worry that some novel toxins might have precursors which are unregulated and easily available.

They didn't "literally" invent 40,000 new kinds of nerve gas any more than the Netflix big data team invents 40,000 new hit shows every Monday when they update their algorithms.

They have 40,000 structures which they predict will be highly toxic. Some will be, some won't be. Some will be impossible to make, some impossible to handle, some very inefficient to use to kill people. (They aren't necessarily gases, for one thing.) In no case will you really know how to use it well until you make it and experiment with it. VX is optimized in ways other than potency.

Kind of what I was getting at: You're essentially pitching an R&D program. I don't doubt that you can get some really nasty things with resources available to a small evil lab. I do doubt it's the best way to get really nasty stuff.

Not really unless they intend to stab people with minute amounts of it one at a time. It doesn't vaporize and it doesn't absorb through the skin. You need to ingest it, inhale a very fine powder of it, or be injected with it. It's one of the shittiest ways to kill mass amounts of people.

Worth mentioning that this will likely be true for the chemicals above: They weren't optimized for delivery, which is hard to do anyway.
posted by mark k at 8:39 AM on March 17, 2022 [5 favorites]


This is something that has been surprising to me. When recombinant DNA technology was first developed, it was presented in a seminar at CSH or some such place. As soon as the community heard of it, they convened a panel to discuss the ethics and the implications.

But with AI, CRISPR, data accumulation, etc. over the last 20 years or so, there have been no systematic efforts to come up with some kind of ethical standards for their use. This is frightening, and Yuval Noah Harari and other historians and ethicists are right to be frightened of it.
posted by indianbadger1 at 10:24 AM on March 17, 2022 [1 favorite]


So how many other AI fields could have some version of this?

In this respect AI (or more to the point machine learning) is very much like teaching humans. You can’t teach a human to defuse a bomb without teaching them most of the information they need to make a bomb. You can’t say “here is how you avoid all of the problems” without giving them a list of the problems.

The basic information is pretty much the same, the only thing you change is what you’re asking for.
posted by Tell Me No Lies at 11:33 AM on March 17, 2022 [2 favorites]


Now that this paper has been published, unfortunately, shouldn't we study whether these predicted toxic compounds are actually any easier to make, or might have more readily available precursors, so that chemical suppliers can flag suspicious orders? This could of course be done without actually synthesizing any of the hypothetical nerve agents.

Also, one important difference between this and the ferret-flu-passage work is that the ferret work actually created in the real world a new and contagious pathogen, with potentially disastrous consequences in the case of a lab accident.

When recombinant DNA technology was first developed, it was presented in a seminar at CSH or some such place. As soon as the community heard of it, they convened a panel to discuss the ethics and the implications.

Asilomar, I think. I suspect it’s kind of a “good public health is invisible” story, where the hazards in retrospect may have seemed overblown because they mainly didn’t materialize in the way people most feared, leading people to become less cautious as a result.
posted by en forme de poire at 11:44 AM on March 17, 2022


In this respect AI (or more to the point machine learning) is very much like teaching humans. You can’t teach a human to defuse a bomb without teaching them most of the information they need to make a bomb. You can’t say “here is how you avoid all of the problems” without giving them a list of the problems.

Lemme try to restate the comment I made earlier: You /can/ teach an AI to be bad at some things and good at others.

Think of finding the best move in a chess (or go) game. You make a guess at a few possible good moves, and then invest time thinking about the possible outcomes of each in order to make a good decision. What you /don't/ do is spend a lot of time thinking about exactly how all the bad-looking moves could be as bad as possible.

If you train an AI to work in this way (as AlphaGo does), you end up with great understanding of the difference between pretty-good and great moves, but without spending much effort understanding the worst cases; all you need to know is that they're worse than the pretty-good moves.

So, you can choose to train models which are good at ethical things and not so good at unethical things; in this regime, you could get great drugs and flags when a given molecule is probably poisonous, without too much resolution into how bad the LD50 is.

What you CAN'T do is keep some other research group from replicating your results and flipping the sign on the loss function. The real safety mechanism then is building some moat around the training infrastructure: e.g., by keeping databases of dangerous molecules private, or keeping important simulation steps secret.
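A sketch of that "flag, don't rank" interface, with an invented threshold and stand-in predictions; the point is that a binary output is enough to steer drug design away from danger, but too coarse to invert into a ranking of poisons:

```python
# Stand-in for a learned LD50 regressor kept behind the interface
# (all names and numbers here are invented for illustration).
PREDICTED_LD50_MG_KG = {"candidate_1": 200.0, "candidate_2": 5.0, "candidate_3": 0.05}

SAFETY_THRESHOLD_MG_KG = 50.0  # made-up cutoff; lower LD50 = more toxic

def toxicity_flag(molecule: str) -> bool:
    """Expose only pass/fail, never the underlying predicted LD50."""
    return PREDICTED_LD50_MG_KG[molecule] < SAFETY_THRESHOLD_MG_KG

for m in ("candidate_1", "candidate_2", "candidate_3"):
    print(m, "-> flagged:", toxicity_flag(m))
# candidate_2 and candidate_3 both come back as plain True: a caller learns
# which side of the line a molecule falls on, not which toxin is worse.
```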
posted by kaibutsu at 3:02 PM on March 17, 2022 [1 favorite]


This Michael Crichton book is writing itself.
posted by shenkerism at 2:46 PM on March 18, 2022 [3 favorites]




This thread has been archived and is closed to new comments