A Digital Twin Might Just Save Your Life
March 25, 2024 4:57 AM

“Equations are just a way of describing nature [...] Air is a fluid and blood is a fluid, so the same equations that model the air around an aircraft are the ones used to model the blood inside your body.” Joe Zadeh writes 6500 words for Noema magazine [via Arts & Letters Daily]

Among the links within is this unsettling long piece by Kalolaine Fainu for The Guardian about the Pacific island nation Tuvalu's efforts to preserve itself.
posted by cgc373 (31 comments total) 14 users marked this as a favorite
 
(A long-form article about digital twins: a digital model of an intended or actual real-world physical product, system, or process (a physical twin) that serves as its effectively indistinguishable digital counterpart for practical purposes such as simulation, integration, testing, monitoring, and maintenance.)
posted by zamboni at 5:49 AM on March 25 [1 favorite]




It's a fascinating idea, but of course the more detailed the model -- the more personalized -- the more costly it becomes, and really, who is paying for this? When I rode the metro train into work yesterday, the car was full of sleeping homeless people. No one is spending a dime on these actual human beings; society has declared their lives without worth. How much is it going to spend on a virtual twin of the average person? How much is less than nothing?

So I can see, as the article suggests, that this could be reserved for the super rich. But are their problems really so unique that this level of modeling is necessary to predict them? I don't need to make a virtual twin of Donald Trump to tell him to change his diet to prevent heart disease, or to stop rawdogging porn stars lest he catch an STD. I'm not even a doctor and I feel confident in my pronouncements. But (a) would he listen, and (b) how much code do you really need to write to arrive at this advice?

All that said, this is a wonderfully written article, and I am delighted to know about the deconsecrated Spanish church that houses a supercomputer. That's terrific.
posted by kittens for breakfast at 6:10 AM on March 25 [9 favorites]


Yes, this is still useful even when not ultra-personalized. If 10% of people have an adverse reaction to a drug, you can get by with two models. (Ok, vastly oversimplified to make the point that we don't all need to be billionaires with a unique, personal model.)

Tangentially, it's weird to see the direction Gelernter went post-Unabomber: anti-woman, climate change denier, etc. The climate one is especially ironic given that climate models have been shown to perform quite well.
posted by CheeseDigestsAll at 6:26 AM on March 25 [3 favorites]


On that tangent: According to Wikipedia, back in the '80s he named a programming system he co-designed Linda, "for Linda Lovelace, the lead actress in the porn movie Deep Throat, mocking the naming of the programming language Ada in tribute to the scientist and first attributed computer programmer, Ada Lovelace"... so it looks like his approach to life might not be strictly post-Unabomber.
posted by trig at 7:12 AM on March 25 [3 favorites]


Air is a fluid and blood is a fluid, so the same equations that model the air around an aircraft are the ones used to model the blood inside your body.

Air is a Newtonian fluid and blood is non-Newtonian?
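
(For the record, the distinction is in the constitutive relation. A minimal sketch, taking a power-law fluid as just one illustrative stand-in for blood's shear-thinning behaviour; real hemodynamics work tends to use fancier fits like Carreau-Yasuda:)

    % Newtonian (air, water): shear stress proportional to shear rate, constant viscosity
    \tau = \mu \, \dot{\gamma}

    % Shear-thinning (blood, roughly): effective viscosity K \dot{\gamma}^{n-1} falls as shear rate rises
    \tau = K \, \dot{\gamma}^{\,n}, \qquad n < 1

The Navier-Stokes machinery is the same either way; what changes is the viscosity term, which is why "the same equations" is true-ish but glosses over real work.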
posted by biffa at 7:31 AM on March 25 [1 favorite]


The inefficiencies of the physical world, so the sales pitch goes, can be ironed out in a virtual one and then reflected back onto reality. Test virtual planes in virtual wind tunnels, virtual tires on virtual roads. “Risk is removed” reads a recent Microsoft advertorial in Wired, and “problems can be solved before they happen.”

vs.

Boeing is using digital twins to design airplanes.
posted by chavenet at 7:35 AM on March 25 [1 favorite]


One has to wonder: How accurate can these things truly be? Whom do they serve? What damage to the environment will the necessarily massive amounts of computation do? Are we really interested in having yet more of our lives and shared public goods uploaded into digital realms and controlled by a tiny group of techno-capitalists?

and the answers to those questions are 1) we can save children and 2) women and 3) it's inevitable so just sit back and relax lol

this article inspires me to ask - how many technolibertarian utopianists does it take to screw in a lightbulb

I leave the punchline up to people much funnier than I am
posted by paimapi at 7:37 AM on March 25


Military contractors like doing simulations, because testing the real thing involves spending millions of dollars a shot. I remember reading (I think in The Pentagon Wars) about how the procurement officer managed to push through a live-fire test of a personnel carrier, and it failed in all sorts of exciting ways that the simulation didn't predict.
Simulations can be useful, but they aren't reality.
posted by Spike Glee at 7:43 AM on March 25


It's honestly hilarious that the author doesn't realize that models will be used not to help people but to harm them.

Why model somebody's health except to kick expensive people off insurance rolls as far in advance as possible?
posted by humbug at 7:50 AM on March 25 [4 favorites]


Climate scientists have been sounding the alarm for decades based on digital models. Some of them study things like the effect of temperature on infectious diseases. This kind of knee-jerk anti-science posturing is tiresome.
posted by mubba at 8:07 AM on March 25 [14 favorites]


Wikipedia says the phrase "digital twin" was invented in 2010, but I really only recall hearing it in 2019 at the earliest. Until just now I thought this phrase was used to sell computer simulation technology without the buyer realizing it. Now I realize it's also how people sell IoT tech to buyers who don't realize that's what they're buying.

Climate scientists have been sounding the alarm for decades based on digital models. Some of them study things like the effect of temperature on infectious diseases. This kind of knee-jerk anti-science posturing is tiresome.

I'm somewhat naive in this area: is there a "competition" where models are published and evaluated on data that arrives afterwards? Or are we relying entirely on self-reported evaluations of model accuracy? In ML we worry about models overfitting the training data set and becoming worse at modeling out of sample data, so I assume there are similar evaluations performed.

Basically, I'm curious what the accuracy of these models is, and why you'd want to compare a stochastic model that averages across years (or decades!) of time and entire countries of space to one attempting to model and predict a specific object's status every minute. I don't think it's "anti-science" to expect digital twinning approaches to fall down because the modeling step is way, way harder than was promised.
posted by pwnguin at 8:24 AM on March 25 [2 favorites]


humbug: "Why model somebody's health except to kick expensive people off insurance rolls as far in advance as possible?"

I'm part of a project working on an early-stage digital twin of the impacts of climate change on health-related outcomes, especially since, like most things, it impacts the poor (and women and migrants, etc.) more than it does the rich (and men and citizens, etc.). It is specifically aimed at accomplishing policy changes during 2024 and benefiting the people facing the worst impacts: for example, by crossing population and tree cover data with risk factors like heart disease, diabetes, etc., we can work out where cooling stations or communal health centers are most needed.
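
(To make the "crossing data" bit concrete, here's a minimal sketch of the kind of overlay involved. It's pandas only, with invented district names, column names and weights; the actual project uses proper geospatial data and far more care:)

    import pandas as pd

    # Hypothetical per-district inputs; every name and number here is invented.
    districts = pd.DataFrame({
        "district":       ["A", "B", "C", "D"],
        "population":     [12000, 8000, 30000, 5000],
        "tree_cover_pct": [5, 35, 12, 50],           # less canopy -> more heat exposure
        "heart_disease":  [0.09, 0.04, 0.07, 0.03],  # prevalence, 0-1
        "diabetes":       [0.11, 0.05, 0.08, 0.04],
    })

    # Crude heat-vulnerability index: exposure (inverse of canopy) times
    # sensitivity (pooled prevalence), scaled by how many people are affected.
    districts["exposure"] = 1 - districts["tree_cover_pct"] / 100
    districts["sensitivity"] = districts[["heart_disease", "diabetes"]].mean(axis=1)
    districts["priority"] = districts["population"] * districts["exposure"] * districts["sensitivity"]

    # Rank districts by where a cooling station would reach the most vulnerable people.
    print(districts.sort_values("priority", ascending=False)[["district", "priority"]])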

But we're probably really insurance industry shills, so don't listen to me.
posted by signal at 8:28 AM on March 25 [20 favorites]


Boeing is using digital twins to design airplanes.

N.b., Boeing also used cutting-edge numerical simulation and modeling back when they and their planes worked. Seattle's yellow pages had an entry for "Mathematicians" made up entirely of Boeing spinoffs.

The hell, pwnguin, you’re JAQoffing climate models while working in modeling?
posted by clew at 8:51 AM on March 25


the math of 1) computing resources and 2) private industry funding along with the historical knowledge that the current AI boom was specifically set off by the US DoD's glowing praise of the tech and the amount of time it saved in planning military actions in the Iraq War certainly does produce some JAQing off when it comes to 'digital twinning', sure

is it really so out-of-bounds to think that it's not going to be the researchers studying public health and climate change who will make up the largest users of this branch of tech futurism? that this is exactly the kind of thing marketers, the military, law enforcement, and so on would invest billions into to turn this world into an even more dystopian hellscape than it already is?
posted by paimapi at 9:11 AM on March 25 [1 favorite]


Air is a Newtonian fluid and blood is non-Newtonian?

you've never cut yourself and seen oobleck come out?
posted by Dr. Twist at 9:19 AM on March 25


ah actually -

US DoD leverages digital twin modelling for all systems

The defence industry is already using digital twinning tech to design, troubleshoot and enhance concepts for systems. This risk-free method is a growing technique for defence companies trying to meet the new consumer demands that the war in Ukraine and the US-China rivalry have introduced.

Air Force Goes All in on Digital Twinning—for Bombs As Well As Planes

Digital twinning involves creating a detailed virtual model of an aircraft, weapons systems, or other artifact, so that it can undergo initial testing without the time and expense of building a prototype. The Air Force’s new trainer jet, the eT-7 Red Hawk, was designed and underwent initial testing using the technology, with former Secretary Barbara M. Barrett boasting during AFA’s Air, Space & Cyber Conference that it had flown “thousands of hours before it [took] off,” and it was “assembled hundreds of times before any metal [was] even cut.”

High-tech policing: Technological advancements expand and enhance police capabilities

Digital twinning isn’t like creating holograms or deepfakes for Hollywood films, but more like cloning objects, systems or processes in digital form. The technology could assist departments in simulating and assessing responses, emergency coverage and resource deployment. Again, Forbes cites use by departments in China but does not confirm instances by U.S. law enforcement agencies.

Digital Twins: A Marketer's Guide

A digital twin of the customer is a virtual representation or digital avatar of a customer. It can be used to learn potential customer behaviors and simulate or anticipate customer outcomes. Customers are most often individuals—but can also be personas, groups of people, or even machines.

it's not even JAQing, it's just the current reality. we're X thousand years on from the parable of Pandora's Box and all we have to show for it is (see above)
posted by paimapi at 9:23 AM on March 25 [1 favorite]


The hell, pwnguin, you’re JAQoffing climate models while working in modeling?

I work downstream of ML models, mostly dealing with Kubernetes. I don't study climate, like, at all. I'm sure climate change is real, my question is how well academic modeling translates to the industrial applications HBR is promising.
posted by pwnguin at 9:30 AM on March 25


We at the DoD want a digital twin with attitude. You've heard the expression, "let's get busy"? Well, this is a digital twin who gets "biz-zay!" Consistently and thoroughly. We're talking about a totally outrageous paradigm.
posted by credulous at 11:18 AM on March 25 [2 favorites]


Bruno Latour’s Science In Action nicely covered the idea of shrinking time and space when modeling as a form of “mastery” in science and engineering: when you can preview outcomes in the model or the twin, you are in some ways superior to or outside the natural phenomenon. Today’s digital simulations are advanced but we’ve always constructed small-scale twins to speed up simulation and make predictions. “Effectively indistinguishable counterpart” is doing a lot of work here that James C. Scott might have a thought or two about.

If you’re in the Bay Area, it’s worth checking out the US Army Corps of Engineers Bay Model in Sausalito as an example. Today it’s a cool tourist attraction, in the 1960s it was a twin of the real bay used to study the impact of a plan to dam the bay.
posted by migurski at 11:46 AM on March 25 [3 favorites]


paimapi: this article inspires me to ask - how many technolibertarian utopianists does it take to screw in a lightbulb
(While not funnier,) the modelling was tricky, a portion were avowed incels and didn't accept any screwing, the rest provided romantic chemistry despite their abusive personalities, which researchers banked as evidence that homosexuality is not a choice. Aside from sex jokes, the simulation couldn't get more than one technolibertarian utopianist to collaborate with anyone else, so "Error: only communists collaborate" is my punchline.
posted by k3ninho at 12:15 PM on March 25


I'm somewhat naive in this area: is there a "competition" where models are published and evaluated on data that arrives afterwards? Or are we relying entirely on self-reported evaluations of model accuracy? In ML we worry about models overfitting the training data set and becoming worse at modeling out of sample data, so I assume there are similar evaluations performed.

Sorry this turned out so long.

Climate (and many scientific) "models" are different from ML in important ways relevant to this question. At least, it sounds like you are kind of thinking in test-and-training-set terms and looking at overall error, which isn't really right. So the answer to "self-reported accuracy or competition" is neither.

This sort of scientific model starts with putting in as much detailed theory and empirical knowledge as possible. Then you simplify as much as you need to and see how it matches reality. Common scientific models adjacent to my field are things like "We'll treat the organs in the human body as a bunch of compartments connected by tubes carrying blood" and "We'll view this molecule as atoms connected by rigid sticks." In climate, IIUC, an early model was (is) that the Earth's atmosphere is a set of cells with energy flowing in and out.

The key thing is that theory really constrains you. We know what the blood flow into the liver is, and we can do a simple experiment to learn what happens to a drug while it's in the liver. The model can't go off wildly and decide that eyeball volume and gallbladder weight are the two most important factors.
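
(To make that concrete, a toy sketch of a one-compartment, "well-stirred liver" view of drug clearance, with invented numbers standing in for the measured physiology. It's not anyone's actual PBPK model, just an illustration of how the measured blood flow caps what the model is allowed to do:)

    # Toy well-stirred liver model. All numbers are illustrative stand-ins for
    # quantities you would actually measure, not real parameters.
    liver_blood_flow = 90.0        # L/h -- hepatic blood flow, known from physiology
    extraction_ratio = 0.30        # fraction removed per pass, from a simple experiment
    volume_of_distribution = 42.0  # L -- constrained by body size

    # Clearance is capped by blood flow; the fit can't invent a bigger number.
    clearance = liver_blood_flow * extraction_ratio   # L/h
    k_elim = clearance / volume_of_distribution       # first-order elimination rate, 1/h

    dose_mg = 500.0
    concentration = dose_mg / volume_of_distribution  # mg/L right after an IV bolus
    steps_per_hour = 10
    for step in range(12 * steps_per_hour + 1):       # crude Euler integration over 12 h
        if step % steps_per_hour == 0:
            print(f"t = {step / steps_per_hour:4.1f} h   C = {concentration:6.3f} mg/L")
        concentration -= k_elim * concentration / steps_per_hour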

Now, there is "parameterization"--we do have numbers that we don't know, so we optimize them for model fit. Model builders know this is a risk, at least community-wide: von Neumann claimed that with four parameters he could fit an elephant and with five he could make it wiggle its trunk. But this extreme claim is, practically speaking, not true in these types of models. Your parameters and uncertainties are still constrained, and you absolutely get data that doesn't fit and can't be made to fit and is frustrating (or not, depending on what you need out of the model).

One of the early-ish things that really convinced me as a layman that the climate modelers knew what they were doing--not just "were generally right" but understood a lot of detail--was that satellite temperature measurements didn't make sense and didn't fit into the models. It turned out that the measurements were in fact wrong--there was, I am not making this up, a sign error in the formula used to report the data that for years made the reported values too low. Modelers understanding this before the experimentalists did is the opposite of just running numbers to minimize errors.

For validating these models, really digging down to specifics tends to be more important than getting the endpoint right. Disputes tend to focus on those factors. Which is also the point: you want and need scientific understanding, to know whether any of the equations you started with are fundamentally wrong or inapplicable, and to understand the size of the uncertainties.
posted by mark k at 12:26 PM on March 25 [6 favorites]


If you’re in the Bay Area, it’s worth checking out the US Army Corps of Engineers Bay Model in Sausalito as an example. Today it’s a cool tourist attraction, in the 1960s it was a twin of the real bay used to study the impact of a plan to dam the bay.

Yeah, that's great. Apparently there was (is?) a massive one for the Mississippi River too. Somewhere in my travels I've also seen an earthquake simulator they can actually put a house inside of and shake around. The article alludes to old physical models near the end, but I find this stuff fascinating.

A scale model of a building can tell you a lot about what it will look like. But, of course, you aren't testing the weight of the building or the stress on load-bearing components in a scale model: you have to do the math and look up materials strength. Same with building a "toy" steam engine or doing a reaction in a flask before you do it in a chemical plant. You do the traditional physical model-building, but you are also building a mathematical model too, and people who didn't do that part blew themselves up. So in a sense you were already making a "digital twin" even in the early industrial revolution. (Good observation in TFA about how implicitly overpromising that phrase is.)
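
(The square-cube law is the cleanest example of why the math is separate: a toy calculation with invented numbers, nothing to do with any real building:)

    # Weight scales with volume (length^3) while load-bearing cross-sections scale
    # with area (length^2), so stress in a scale model drops with the scale factor.
    # All numbers are invented for illustration.
    scale = 1 / 20                          # a 1:20 architectural model

    full_weight = 2.0e6                     # N carried by one column in the real building
    full_area = 0.09                        # m^2, that column's cross-section

    model_weight = full_weight * scale**3   # weight shrinks as L^3 (same material)
    model_area = full_area * scale**2       # area shrinks as L^2

    print(f"full-scale column stress: {full_weight / full_area / 1e6:5.1f} MPa")
    print(f"model column stress:      {model_weight / model_area / 1e6:5.1f} MPa")
    # The model is ~20x less stressed, so it standing up proves nothing about the
    # real structure -- hence the separate mathematical model.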

I have zero problem with this technology in the abstract. If you're worried that computer simulations will only be available to help rich people you may want to re-examine the cost to run a series of clinical trials or build a wind tunnel. Not something in reach of the poor either! Is it better and more efficient than other models? Sometimes obviously not. Personally I think ELEM Biotech is going to lose a lot of investors a lot of money.
posted by mark k at 12:49 PM on March 25 [1 favorite]


I'm somewhat naive in this area: is there a "competition" where models are published and evaluated on data that arrives afterwards? Or are we relying entirely on self-reported evaluations of model accuracy? In ML we worry about models overfitting the training data set and becoming worse at modeling out of sample data, so I assume there are similar evaluations performed.

The short answer is yes. Other researchers sit down and compare the various climate models all the time, though not (to my knowledge) on a big leaderboard like Kaggle. RealClimate maintains this page with a running comparison of models versus reality, which is a nice starting point. The IPCC reports like AR6 also include sections evaluating model performance, but they tend not to be the most approachable for regular people who don't do climate modeling.
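
(Mechanically the comparison is simple: take what a model run said years ago about years that hadn't happened yet, then score it against what was eventually observed. A minimal sketch with invented anomaly values, nothing like the real multi-model ensembles:)

    # Score an old projection against observations that arrived afterwards.
    # Temperature anomalies (deg C) below are invented, purely for illustration.
    projected = {2015: 0.87, 2016: 0.92, 2017: 0.95, 2018: 0.99, 2019: 1.03}
    observed  = {2015: 0.90, 2016: 1.01, 2017: 0.92, 2018: 0.85, 2019: 0.98}

    errors = [projected[year] - observed[year] for year in projected]
    bias = sum(errors) / len(errors)
    rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
    print(f"bias = {bias:+.3f} C, RMSE = {rmse:.3f} C over {len(errors)} out-of-sample years")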
posted by selenized at 1:01 PM on March 25 [4 favorites]


to stop rawdogging porn stars lest he catch an std.

Is this a fair thing to say, or just a default assumption we should question? Anyone I know involved with sex work is getting checked far more regularly than anyone not.
posted by Audreynachrome at 2:09 PM on March 25


> Air is a Newtonian fluid and blood is non-Newtonian?

All Newtonian fluids are exactly alike; each non-Newtonian fluid is non-Newtonian in its own way.

– Leo Tolstoy
posted by Phssthpok at 2:18 PM on March 25 [2 favorites]


Is this a fair thing to say, or just a default assumption we should question? Anyone I know involved with sex work is getting checked far more regularly than anyone not.

I'm not sure if it's something I would do, speaking non-judgmentally but with an eye toward the probability of infection going up as the number of partners increases, but certainly I would imagine some production environments are very safe. At risk of engaging in recursive logic, though, I can only guess that a porn star who would consent to rawdogging Donald Trump is less likely to be a performer who takes great precautionary measures in general.
posted by kittens for breakfast at 2:29 PM on March 25


This thread, which started as a piece on specific technical advances, dipped way down low into conspiracy-adjacent and 101-level fight-the-man, then made an attempt at actually discussing the topic, but ended up being about US politics, Trump and unexamined prejudices about sex workers, is a sort of digital twin of Metafilter as a whole, no?
posted by signal at 4:24 AM on March 26 [2 favorites]


well now I’m imagining a band pass filter for comments and sure, that might work, yeah

or a hell band pass filter
posted by clew at 3:15 PM on March 26 [1 favorite]


All stable processes we shall predict.
All unstable processes we shall control.
– John von Neumann
posted by thatwhichfalls at 10:04 PM on March 28 [2 favorites]


Apparently there was (is?) a massive one for the Mississippi River too

At Vicksburg there was a large-scale physical model, but it ended at New Orleans.

Speaking of "theory really constrains you": because the Corps never bothered to model the alluvial deposition south of New Orleans, that land has begun to fade away.

The landscape becomes the map.
posted by eustatic at 12:16 AM on March 31 [1 favorite]




This thread has been archived and is closed to new comments