Instant everything. Incredible prices. Big heart.
May 26, 2021 8:01 AM   Subscribe

On May 24, Lemonade, a new insurance company, went to Twitter to unveil their innovative program for keeping costs low: AI. In a now-deleted tweet, the company stated: "...when a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can't, since they don't use a digital claims process." #lawtwitter was quick to chime in.
posted by Silvery Fish (64 comments total) 20 users marked this as a favorite
 
Should get sued for false advertising calling it AI, apart from anything else.
posted by GallonOfAlan at 8:06 AM on May 26, 2021 [8 favorites]


Huh, they deleted the tweet then? My timeline yesterday was basically just people dunking on them - AI nerds, scientists, campaigners - it really struck a chord across a bunch of different communities I follow. Makes me wonder what they thought the reaction would be.
posted by simonw at 8:08 AM on May 26, 2021 [18 favorites]


The technology definitely doesn't work and they probably know that. There may not even be anything in the box.

Lemonade may be thinking that just making claimants feel like they're being watched will reduce the chance of fraud, which is in line with a popular bit of behavioral mumbo-jumbo.
posted by grobstein at 8:09 AM on May 26, 2021 [7 favorites]


Lol they just posted a blog post that comes down to "we don't do any of the things that we claimed to do in that tweet."

I use Lemonade for renters insurance because it's very cheap (less than $90 per year). But I will definitely re-evaluate.
posted by muddgirl at 8:10 AM on May 26, 2021 [8 favorites]


Why does so much "innovation"/"disruption" turn out to be either reinventing the bus but in some way worse or technophrenology? It's uncanny how often stuff like this happens.
posted by an octopus IRL at 8:11 AM on May 26, 2021 [41 favorites]


I'm generally not very keen on "socially conscious" companies, but from checking their website and reading their apology post referenced by muddgirl, they seem to be trying to do good things in a transparent manner, made a poorly-worded tweet, and are now clarifying and apologizing. The "dunking" on the company is just stupid.
posted by davidmsc at 8:15 AM on May 26, 2021 [1 favorite]


Why does so much "innovation"/"disruption" turn out to be either reinventing the bus but in some way worse or technophrenology?

People get excited about new things, but not if they have to change their own behavior to take advantage of them. Filling an existing niche with something just different enough is a good business model.
posted by Tell Me No Lies at 8:17 AM on May 26, 2021


The "dunking" on the company is just stupid.

The company claimed it could use "non-verbal cues" to deny insurance claims. Opaque algorithms based on handwavy stuff like "non-verbal cues" almost always have a disproportionately negative effect on already marginalized groups, and in this case that would amount to actual money denied based on...what? A computer deciding you're lying because you blink too much? Or your eyes are too close together? What? What we've learned (or at least what I've learned) from seeing stuff like this publicized previously is that it is always based on preexisting biases. In this case I think the dunking is an active societal good in that hopefully it'll make the next company think twice before trying to peddle pseudoscientific nonsense with the potential to harm real people.
posted by an octopus IRL at 8:22 AM on May 26, 2021 [77 favorites]


Lol they just posted a blog post that comes down to "we don't do any of the things that we claimed to do in that tweet."

I’m not able to find a full copy of the tweet anywhere, so it’s kind of hard to judge their “clarification”.
posted by Tell Me No Lies at 8:24 AM on May 26, 2021 [1 favorite]


Lol I think I know the person who wrote that blog post
posted by potrzebie at 8:33 AM on May 26, 2021 [1 favorite]


In their clarifying statement, they link to a lengthy blog post from 2019 -- "AI Can Vanquish Bias". I like what this company is trying to do; I appreciate that they have set themselves up as a B-Corp. But I am seriously cautious about *any* company that hasn't rigorously examined their "AI can cure X" bias.
posted by Silvery Fish at 8:33 AM on May 26, 2021 [6 favorites]


Here's a link to the deleted tweets, which make their PR apologia seem a whole lot less benign.

The deleted thread notes that they collect "100x more data" than traditional insurers, including "non-verbal cues." Their apology suggests that people are misunderstanding the latter term. They say they aren't talking about phrenology or emotion recognition, but never define what, exactly, "non-verbal cues" are.

There's also a graphic that defines their business model as "grow fast > predictive data > machine learns > delight customers" (rinse, repeat). This would appear to put predictive data and machine learning at the core of their business in a way that the apology post does not.

Their repeated and insistent use of the term "delight" is particularly cringey.
posted by evidenceofabsence at 8:35 AM on May 26, 2021 [26 favorites]


Archive.org capture of the original tweet: https://web.archive.org/web/20210525033026/https://twitter.com/lemonade_inc/status/1396868192019099655

Edit: thanks, @evidenceofabsence.
posted by Silvery Fish at 8:36 AM on May 26, 2021 [5 favorites]


Should get sued for false advertising calling it AI, apart from anything else.

Is there even an agreed-upon legal definition of AI? 'cause, holy crap, there are a ton of companies today claiming to "harness the power of AI" and other such marketing bromides.
posted by Thorzdad at 8:36 AM on May 26, 2021 [5 favorites]


I got a quote from Lemonade on insurance. It was 40% higher than Geico, with lower quality ratings. If this is AI in action we are safe from our computer overlords for the foreseeable future.
posted by jcworth at 8:43 AM on May 26, 2021 [4 favorites]


Somewhat related thing I saw yesterday: someone poked at an "AI job interview" engine and found they could manipulate it into giving higher ratings simply by doing the interview in front of a bookshelf instead of a plain wall.
"AI" or algorithmic engines inherit so many biases from their training data that are invisible to the builders. They always seem to carry forward systemic biases the builders aren't aware of, and the result is marginalized groups getting the bad outcomes time and time again.
posted by msbutah at 8:44 AM on May 26, 2021 [38 favorites]
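The bookshelf result is a textbook spurious correlation: a model trained on biased ratings learns to reward whatever background feature co-occurred with good scores. A minimal sketch of the mechanism, with entirely made-up data and a hypothetical "bookshelf" feature (this is an illustration of the failure mode, not the actual interview engine):

```python
import math
import random

random.seed(0)

def make_training_data(n=1000):
    """Synthetic candidates: (competence, has_bookshelf) plus a label
    assigned by biased raters who favored candidates with bookshelves."""
    data = []
    for _ in range(n):
        competence = random.random()       # the true signal, 0..1
        bookshelf = random.random() < 0.5  # background, should be irrelevant
        # Biased raters: a bookshelf adds a big bonus to the "hire" score.
        score = competence + (0.8 if bookshelf else 0.0)
        label = 1 if score > 0.9 else 0
        data.append(((competence, 1.0 if bookshelf else 0.0), label))
    return data

def train_logreg(data, epochs=50, lr=0.1):
    """Plain logistic regression via SGD, no libraries."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

w, b = train_logreg(make_training_data())
# The model faithfully learns the raters' bias: the bookshelf weight is
# large and positive, so equally competent candidates get different
# scores based only on their background.
print("competence weight:", round(w[0], 2))
print("bookshelf weight:", round(w[1], 2))
```

Because the biased raters in the synthetic data rewarded bookshelves, the trained model does too, so two equally competent candidates get different scores based only on their background, which is exactly the failure msbutah's link describes.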


Finally! There's the excuse I need to keep books in the house even though I have an e-reader and a minimalist partner who would like to jettison all of our belongings into space. Thanks, AI!
posted by evidenceofabsence at 8:46 AM on May 26, 2021 [15 favorites]


It is not going to be pretty when the precariously enlarged bubble bursts.

We’re just about due for another AI winter.
posted by 1970s Antihero at 8:50 AM on May 26, 2021


When life gives you basilisks, make rokonade.
posted by chavenet at 9:01 AM on May 26, 2021 [17 favorites]


msbutah, that investigation into a German AI-based hiring service is amazing! Thanks so much for that link. I would like to see similar investigations by US journalists, who entirely too often act as transcription services when it comes to business and tech coverage. That was inspiring.
posted by Bella Donna at 9:02 AM on May 26, 2021 [4 favorites]


Their repeated and insistent use of the term "delight" is particularly cringey

I've worked for several companies over the past 10 years, all of which used some form of "delight the customer" in their IT departmental "mission statement" language. I can only think it was borrowed from Google or some other place. Spoiler alert: it sucks.
posted by I_Love_Bananas at 9:15 AM on May 26, 2021 [8 favorites]


MeFi’s own LawTwitter, surely.
posted by corb at 9:20 AM on May 26, 2021 [1 favorite]


I cannot imagine how an insurance provider could possibly delight me, a customer. Like normally I pay them money and nothing bad happens, but sometimes something bad happens and they pay money instead of me paying money and even when that works as it’s supposed to I am not delighted so much as relieved that I didn’t have to pay for the bad thing myself.
posted by aubilenon at 9:21 AM on May 26, 2021 [19 favorites]


The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities. These flagged claims then get reviewed by our human investigators.

This confusion led to a spread of falsehoods and incorrect assumptions, so we’re writing this to clarify and unequivocally confirm that our users aren’t treated differently based on their appearance, behavior, or any personal/physical characteristic.

AI is non-deterministic and has been shown to have biases across different communities. That’s why we never let AI perform deterministic actions such as rejecting claims or canceling policies.


Something I learned during the dunk fest yesterday was that they still don't seem to realize that if they're using AI to figure out which claims get investigated, they're still putting already disadvantaged people at yet another disadvantage: their claims get flagged and held up while people the AI decides are "normal" just get their claims paid.

I hope this episode also showcases the importance of the liberal arts in that writing skills are still really important too and always will be. STEM isn't everything.
posted by bleep at 9:24 AM on May 26, 2021 [12 favorites]
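bleep's point is easy to make concrete: even a flagger that never denies anything can load all of the investigation burden onto one group if it keys on a proxy feature. A toy sketch with invented numbers (the "short video" proxy is hypothetical):

```python
def investigation_rates(claims, flagger):
    """claims: list of (group, claim) pairs. Returns, per group, the share
    of claims the flagging model sends to human investigators."""
    flagged = {}
    totals = {}
    for group, claim in claims:
        totals[group] = totals.get(group, 0) + 1
        if flagger(claim):
            flagged[group] = flagged.get(group, 0) + 1
    return {g: flagged.get(g, 0) / totals[g] for g in totals}

# A flagger keyed on a seemingly neutral proxy (short claim videos)
# that in practice correlates with group membership.
claims = (
    [("A", {"video_seconds": 20}) for _ in range(10)]
    + [("B", {"video_seconds": 90}) for _ in range(10)]
)
flagger = lambda c: c["video_seconds"] < 30
print(investigation_rates(claims, flagger))  # {'A': 1.0, 'B': 0.0}
```

Every flagged claim here still gets "reviewed by a human," but group A waits for an investigator on 100% of its claims while group B waits on none of them.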


I remember sitting through a corporate presentation a few years back (maybe for Trov insurance before they pivoted?) - but it was a new fancy micro-insurance type setup. Like you could turn on insurance for your bicycle whenever you (or more accurately your phone) left the house, or just insure your skis for the weekend whenever your phone showed you in the geolocation of your favorite ski mountain, etc. Basically you could pick and choose when insurance applied and only paid insurance for that time - down to the minute. They had a similar setup if I recall for claims - submit photos and describe the situation (can't recall if text or audio) and you'd get insta-paid on claims. I just remember thinking at the time how much data they'd have on you given always-on monitoring of geolocation, and the detailed inventory of everything you owned (they basically needed to know your "Trov" of goods by having you list out all your assets in the mobile app), etc.

I guess this is the natural extension of that because it seemed prime for fraud if you could insure your camera for an hour while you went for a "hike" and then take a video of yourself at the top of a cliff pretending to lament about how your camera just "fell" off it so can I have $500 now please?
posted by inflatablekiwi at 9:36 AM on May 26, 2021 [1 favorite]


Why does so much "innovation"/"disruption" turn out to be either reinventing the bus but in some way worse or technophrenology? It's uncanny how often stuff like this happens.
I think a lot of what we see in "tech solutions to human behavior" is an attempt to fix an element of a system without actually addressing the structure of the system itself. I work in traffic safety research, and when automated vehicles really entered the public consciousness around 2014-2015 (note: they were a research topic well before that), there was a ton of excitement around this tech solving the "human behavior" problem. The assumption that human behavior is a simple thing that can be fixed is rooted in a deep misunderstanding of a report NHTSA published; you'll often hear people say 94% of crashes are caused by human error, but the actual, original report claims that 94% of crashes involve a human error as the last link in a causal chain. It's effectively the last hole in the consecutive slices of Swiss cheese that lined up to cause a crash, to borrow a metaphor from James Reason.

So, folks who don't want to grapple with the broader system that produces crashes and lines up all those holes of risk (i.e., we have a confluence of bad speed limit laws, motorization monoculture, and mismatches between land development and roadway design) parrot that misquoted study as a simple solution, and those folks get funding and media because we humans are much more attracted to simple solutions than efforts that require changing our whole system.

It seems the same tension between systems structure and quick solutions is at play in this specific system too.
posted by TheKaijuCommuter at 9:42 AM on May 26, 2021 [31 favorites]


It's a good thing that some computer science programs got around to thinking about teaching ethics...three years ago?
posted by evidenceofabsence at 9:45 AM on May 26, 2021 [1 favorite]


"We transformed our institutional and systemic biases into an opaque algorithm so no one can call us on our shit!"

I bet they're real popular with the psychopathic Silicon Valley investor crowd.
posted by seanmpuckett at 9:48 AM on May 26, 2021 [5 favorites]


i think i remember lemonade as getting a bit of traction on hacker news when they came out, and i was... delighted... at the idea that maybe i could actually buy insurance without having to talk to someone on the phone! ...and so went to the website and through the whole process until i got to "call us to confirm all this!" sigh. i did not.

seems an especially bad idea to have this kind of behavior-based judgment pushed at a community with a higher-than-average incidence of autism.
posted by Clowder of bats at 9:50 AM on May 26, 2021 [5 favorites]


There was an NPR program years ago (which I failed to find a link to) pointing out that in many jurisdictions insurance companies are paying the salaries of specialized prosecutors and police arson investigators, and that there are many cases of claimants being sent to jail by such 'public' officials who used debunked junk science forensic techniques to secure convictions.

This use of AI is just more junk science to deny claims. I haven’t heard of cases where AI has been used as evidence against claimants in fraud prosecutions, but I don’t know of anything which might forestall it — certainly not the ethics of insurers.
posted by jamjam at 10:06 AM on May 26, 2021 [4 favorites]


"We transformed our institutional and systemic biases into an opaque algorithm so no one can call us on our shit!"

"We built a machine that faithfully replicates the hiring decisions of our human interviewers"
"We've thoroughly tested your machine and can prove it's extremely racist"
"We're sorry. We'll go back to the humans. And no, before you ask, you may not scrutinize our humans!"
posted by Pyry at 10:10 AM on May 26, 2021 [6 favorites]


Why does so much "innovation"/"disruption" turn out to be either reinventing the bus but in some way worse or technophrenology? It's uncanny how often stuff like this happens.

For all that I am a critic of capitalism, the current model is really very good at parting people from their money. There's this idea that stodgy old businesses do things the boring way and lack the dazzling new insights of a 22-year-old unencumbered by The Way Things Are Done, so they can be Disrupted. But a lot of times The Way Things Are Done is that way for good reason, like laws and best practices and such.

So the only way to "disrupt" is to either do something illegal and just bull forward with it (Uber, etc.) and wait for the legal apparatus to catch up then dare them to do anything about it (Uber, again).

Or you have to actually solve a problem or provide a product or service that people want, but that's much harder.
posted by Ghostride The Whip at 10:22 AM on May 26, 2021 [3 favorites]


Technophrenology: robotic arm examines the contours of your skull, to a four on the floor dance beat.
posted by otherchaz at 10:26 AM on May 26, 2021 [2 favorites]


I have spent the last eight months working as a paralegal in a personal injury firm. Every day, I have to deal with insurance companies. I have to check the names and contact information for the adjusters working out clients’ claims. I have to field phone calls from adjusters who want case status updates. I have to watch the attorneys negotiate with insurance companies. The one thing that all insurance companies could do to delight their customers is to just pay all of their customers’ bills. Period.*

For-profit insurance is a blight upon the world, and it should be catapulted into the stellar void.

*Yes, I’d be out of a job, but I can live with that.
posted by RakDaddy at 10:27 AM on May 26, 2021 [11 favorites]


Ignoring the handwavy AI, Lemonade's "disruption" is similar to Uber in that they seem to just be burning venture capital and IPO proceeds in order to maintain low rates and gain market share. They've been growing, but their operating expenses have been more than double their revenue in each of the past three years. In 2020, they had a net loss of $122M on revenues of $94M. (financials)
posted by bassooner at 10:27 AM on May 26, 2021 [7 favorites]


Dear Amazon can you speed up that Gramsci anthology? Thanks!
posted by jquinby at 10:28 AM on May 26, 2021


So, folks who don't want to grapple with the broader system that produces crashes and lines up all those holes of risk (i.e., we have a confluence of bad speed limit laws, motorization monoculture, and mismatches between land development and roadway design) parrot that misquoted study as a simple solution, and those folks get funding and media because we humans are much more attracted to simple solutions than efforts that require changing our whole system.

Is this also why current attempts at self-driving car technology are so stubbornly set on being a completely drop-in replacement for the human?

It seems like earlier attempts at autonomous vehicles all included infrastructure upgrades (magnets in roadways to keep cars in lanes, machine-readable labels embedded in asphalt to control speed) to make things work. While developments in AI might make current technology less dependent on specialized infrastructure, surely it's easier to design self-driving cars when you also have some say in what kind of roads they're driving on. Instead of relying on fallible image recognition to detect stop signs in all light levels and all weather, you could just embed RFID tags in the asphalt.

I know there's a business argument to be made that infrastructure costs money and it's pointless to develop cars that can't drive on most roads, but I always get the sense that the current crop of self-driving car people are deliberately ignoring anything that can be done to make the problem easier because they don't want to admit that infrastructure is important and don't want to admit that getting rid of humans is only part of the problem.
posted by RonButNotStupid at 10:28 AM on May 26, 2021 [13 favorites]


AI development is unregulated. Period. Full Stop.

There is no oversight that can provide *binding* fixes to bad practices.

Tech believes tech is the answer.

The low hanging fruit are the regulated organizations (finance, taxis, insurance companies, etc.).

Big money hedges against regulated industries in which they invest.

Marketing is the death of us all.

/end rant
posted by zerobyproxy at 10:34 AM on May 26, 2021 [5 favorites]


Lemonade isn't that new. I looked at them a few years ago when looking to change insurers. All their marketing material was obnoxious and it was really hard to find information that other companies make easy to find. In conclusion, garbage company does garbage thing.
posted by nestor_makhno at 10:36 AM on May 26, 2021 [2 favorites]


RonButNotStupid, I'll try to make my response connect to the broader topic of insurance in this thread so as not to completely derail things (that wasn't my intent with my original comment). In short, I think you're right. A lot of those pushing AVs are coming from the technology sector rather than the transportation sector, and with minimal NHTSA oversight (the Trump admin pretty much did nothing related to regulation, and I even heard Elaine Chao quote that 94% stat in person at a conference), they really weren't forced to look at things systemically by considering infrastructure.

So, to tie this back a bit more to the topic, insurance certainly has a big, but often understudied, role in how infrastructure and safety improvements are implemented. A lot of my current research focuses on the Safe System(s) approach, but I've recently heard practitioners verbalize that they "can't" do things like just plunk down separated bike lanes or make all intersections roundabouts because they don't want to be liable. Factor that in with the way insurance claims are linked to fault in crashes, something pretty arbitrarily assigned when you consider that it's the system that allows crashes to happen (not that bad behaviors, like speeding, don't contribute), and you have a fair bit of resistance to changing the way things are done beyond flashy, tech-y solutions.
posted by TheKaijuCommuter at 10:39 AM on May 26, 2021 [7 favorites]


I was sent an email recently from my insurance company inviting me to download an exciting new iOS app. Having this app on my phone will give me a discount on my insurance. So what is this app? It tracks your driving in terms of locations, speed, and time. They explicitly said that it tracks “aggressive deceleration” implying you’re going too fast and not paying attention. It also requires that you have this app on for a specified length of time for them to start gathering data before any discounts kick in. So I have to always have my phone and turn on this app every time I get in my car. How does it know the difference between being a driver or a passenger? What does it consider aggressive deceleration to be? How long are they going to keep records of where and when I go places? No clue. One of the fine print footnotes pointed out that the discounts aren’t available in my state. I wonder why… I called my agent and said no thank you.
posted by njohnson23 at 11:45 AM on May 26, 2021 [7 favorites]
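For what it's worth, the "aggressive deceleration" part of these telematics apps is probably the simplest piece: a threshold on the change in speed between samples. A hypothetical sketch (the 3 m/s² threshold is a guess; insurers don't publish their actual numbers):

```python
HARSH_BRAKE_MPS2 = 3.0  # assumed threshold, m/s^2; the real value is unknown

def harsh_brake_events(samples):
    """samples: list of (timestamp_seconds, speed_m_per_s), sorted by time.
    Returns the timestamps where deceleration exceeded the threshold."""
    events = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip bad or duplicate samples
        decel = (v0 - v1) / dt  # positive when slowing down
        if decel > HARSH_BRAKE_MPS2:
            events.append(t1)
    return events

# A car cruising at 20 m/s that drops to 8 m/s over two seconds
# (6 m/s^2) gets flagged twice; the gentler stop afterward does not.
trip = [(0, 20.0), (1, 20.0), (2, 14.0), (3, 8.0), (4, 6.0), (5, 4.0)]
print(harsh_brake_events(trip))  # [2, 3]
```

Note what this data can't do: nothing in timestamped speed samples distinguishes the driver from a passenger, which is exactly njohnson23's question.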


went to the website and through the whole process until i got to "call us to confirm all this!" sigh. i did not.

Is there some law that requires insurance companies to demand a phone call? It's deeply weird. It can't be for fraud prevention, since fraudsters love telephony. (Maybe I just answered my own question, hmm...) And it really seems to be universal -- there's some company that markets disability insurance that doesn't require a phone call to sign up, but that is also a lie.
posted by Not A Thing at 12:21 PM on May 26, 2021 [1 favorite]


I've signed up for renters' insurance with both Liberty and State Farm without talking to anyone.
posted by praemunire at 12:45 PM on May 26, 2021 [1 favorite]


Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can't,

Amusing, since they've recently been pushing their pet insurance.

Who's a good boy? Oh, really?
posted by ChurchHatesTucker at 12:52 PM on May 26, 2021 [4 favorites]


....Well, I was going to sign up with Lemonade for renters' insurance this coming weekend, but now maybe not.

Praemunire, do you have a favorite amongst the two you've used?
posted by EmpressCallipygos at 1:11 PM on May 26, 2021


Two years ago my mom had a major issue at her house while she was on vacation. A hot water faucet malfunctioned and sprayed boiling hot water for several days/hours, basically destroying every single thing in the house. Floors were warped, walls were covered in mold, ceilings collapsed.

Her fully traditional insurance company pushed back on any payout for the first 6 months - they looked for any potential loophole to not fulfill their portion of the policy that she had been paying for for more than a decade. In the meantime she had to move in with my aunt and make exhaustive list of everything she could remember owning.

It's 2 years and thousands of phone calls later and she is just getting back to normal. She had to pay out of pocket for most of the items she lost and track receipts and submit for judgement on whether or not it could be reimbursed.

Basically my opinion is insurance is necessary, but actually using it when you need it is a horrible, garbage experience, and anything that makes it easier to deny claims is evil.
posted by elvissa at 1:21 PM on May 26, 2021 [9 favorites]


It tracks your driving in terms of locations, speed, and time. They explicitly said that it tracks “aggressive deceleration” implying you’re going too fast and not paying attention. It also requires that you have this app on for a specified length of time for them to start gathering data before any discounts kick in. So I have to always have my phone and turn on this app every time I get in my car. How does it know the difference between being a driver or a passenger? What does it consider aggressive deceleration to be?

It's very possible they don't care and are simply relying on the fact that you are being watched to encourage you to be a better driver.
posted by Tell Me No Lies at 2:12 PM on May 26, 2021 [2 favorites]


Just finished a really interesting book that discusses these types of issues in an accessible and thoughtful manner - The Alignment Problem: Machine Learning and Human Values by Brian Christian published a few weeks ago. Very highly recommended and I would not be surprised if this shemozzle becomes a case study in the next edition.
posted by dangerousdan at 2:41 PM on May 26, 2021 [3 favorites]


Blog post: We have never, and will never, let AI auto-reject claims.
VS
SEC Filing: AI Jim is our claims bot, and, as of March 31, 2020, 96% of the time, it is AI Jim that will take the first notice of loss from a customer making a claim. AI Jim handles the entire claim through resolution in approximately a third of cases, paying the claimant or declining the claim without human intervention

Denied claims are claims that were received and processed by the payer and deemed unpayable. A rejected claim contains one or more errors found before the claim was processed.

It seems like they are hoping that normal folks won't know the difference between an AI "rejecting" a claim and an AI "declining" it, and will assume that a promise not to do one means they aren't doing the other.
posted by metaphorever at 2:57 PM on May 26, 2021 [11 favorites]


pls link to the twitter thread calling them out on the SEC filing, thx!
posted by ryanrs at 4:24 PM on May 26, 2021


>In this case I think the dunking is an active societal good in that hopefully it'll make the next company think twice before trying to peddle pseudoscientific nonsense with the potential to harm real people.
Plenty more oil / in the snake.

>For-profit insurance is a blight upon the world, and it should be catapulted into the stellar void.
I'm sure you know this, but I figure that insurance pools, as a mechanism using economy of scale, really achieve optimal efficiency when everyone is in the insurance pool, and it can't work if insurers are competing to have less risky people fighting to pay less.
posted by k3ninho at 4:39 PM on May 26, 2021 [1 favorite]


It seems like they are hoping that normal folks won't know the difference between an AI "rejecting" a claim and an AI "declining" it, and will assume that a promise not to do one means they aren't doing the other.

Dumb question: what's the difference? Is declining refusing to evaluate the claim at all, while rejecting is evaluating a claim but deciding not to pay it?
posted by star gentle uterus at 5:20 PM on May 26, 2021 [1 favorite]


Ryanrs, I’m not Twitter savvy enough to make a thread. Feel free to ask them.

Star gentle uterus, I think it’s actually the other way around: rejecting is when the claim is not processed at all because of incomplete or inaccurate information, like entering the wrong name or account number, and declined or denied claims are ones that are accepted and processed but not decided in the customer’s favor. In any case, the part where they talk about the GDPR makes it clear that whatever you call it, their business model depends on “automated decision making”.

The GDPR prohibits automated decision making, i.e. a decision evaluating a data subject's personal aspects based solely on automated processing that produces legal effects or other significant effects for that data subject, except where such decision making is necessary for entering into or performing a contract or is based on the data subject's explicit consent. There is not yet any clear precedent as to whether use of artificial intelligence to make insurance offers to individuals will be considered necessary even though it is integral to our business model. If our automated decision making processes cannot meet this necessity threshold, we cannot use these processes with E.U. data subjects unless we obtain their explicit consent. Relying on consent to conduct this type of processing holds its own risks because consent must be considered freely given (commentators argue that seeking consent by tying it to a service may be problematic) and consent can be withdrawn by a data subject at any time. We are continually monitoring for updates to guidance in this area, however, if subsequent guidance and/or decisions limit our ability to use our artificial intelligence models, that may decrease our operational efficiency and result in an increase to the costs of operating our business. Automated decision making also attracts a higher regulatory burden under the GDPR, which requires the existence of such automated decision making be disclosed to the data subject including a meaningful explanation of the logic used in such decision making, and safeguards must be implemented to safeguard individual rights, including the right to obtain human intervention and to contest any decision
posted by metaphorever at 5:42 PM on May 26, 2021 [1 favorite]


Not super inclined to help them out here, but there are classes of claims that may be reasonable to deny in an automated way. Things like "policy not in effect at the time of incident" or denials based on certain policy terms (like rejecting a vandalism claim on a liability-only auto policy).
posted by ryanrs at 5:57 PM on May 26, 2021
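ryanrs's narrow class of automatable denials can be sketched as purely mechanical checks against the policy terms, with everything subjective falling through to a human. Field names and structure here are hypothetical:

```python
from datetime import date

def auto_denial_reason(claim, policy):
    """Return a denial reason if the claim fails a mechanical policy check,
    or None if it needs human review."""
    # Check 1: was the policy in effect when the incident happened?
    if not (policy["start"] <= claim["incident_date"] <= policy["end"]):
        return "policy not in effect at the time of incident"
    # Check 2: does the policy cover this kind of loss at all?
    if claim["peril"] not in policy["covered_perils"]:
        return f"policy does not cover {claim['peril']}"
    return None  # anything subjective goes to a person

policy = {
    "start": date(2021, 1, 1),
    "end": date(2021, 12, 31),
    "covered_perils": {"liability"},  # a liability-only auto policy
}
claim = {"incident_date": date(2021, 6, 1), "peril": "vandalism"}
print(auto_denial_reason(claim, policy))  # policy does not cover vandalism
```

These checks look only at the policy, never at the claimant.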


Okay, sure. But maybe not, "Our computer doesn't like your face."
posted by evidenceofabsence at 7:57 PM on May 26, 2021 [1 favorite]


Body shops already use IBM Watson to automate collision repair quotes. That's AI, right?

Actuarial science is just data science reduced to statistics. :D When you can crunch data and mine it for insights, the actuary is doomed and insurance has to change with the times.

But claiming that they can detect fraud with voice and tone and all that is really bull****.
posted by kschang at 1:06 AM on May 27, 2021


Algorithmic engines inherit so many biases off their training data

Maybe rename it Algorithmic Parroting.
posted by filtergik at 1:14 AM on May 27, 2021 [2 favorites]


I don’t work in insurance but, like many people, I’ve experienced ‘computer says no’ often enough to catch the red herring in that Lemonade press release.

It doesn’t matter if someone (a human) has to confirm these automated claim denials or not. Someone (a human) picking out which pre-denied claims to overrule is forced to prove to their boss why the company should profit less from a customer.

The goal is to pay out less and that’s it. Fraud detection is a joke.
posted by romanb at 3:23 AM on May 27, 2021 [5 favorites]


pls link to the twitter thread calling them out on the SEC filing, thx!

Don't know if you still want this, but I've seen many folks passing along CNN tech reporter Rachel Metz's tweet citing page 128 in the SEC filing, and someone in the comments linked directly to that page.
posted by mediareport at 4:33 AM on May 27, 2021 [1 favorite]


I'd kill for a Max Headroom-esque deepfake to read my insurance claim then...
posted by Nanukthedog at 4:33 AM on May 27, 2021


seems an especially bad idea to have this kind of behavior-based judgment pushed at a community with a higher-than-average incidence of autism.

I first heard of this episode via disability and neuro-divergent communities on Twitter, like this response to Lemonade's thread about deleting the original tweet:

You still haven’t addressed how requiring people to submit a claim video isn’t already ableist af. Not everyone is able to speak or formulate explanations of what happened verbally. You’re still discriminating against ADHD, autism, deaf/mute, non-native speakers.
posted by mediareport at 4:33 AM on May 27, 2021 [7 favorites]


Wait, you HAVE to make a video, you can't just email them a story with a bunch of pics attached? Lol.
posted by ryanrs at 11:27 AM on May 27, 2021 [1 favorite]


Wait, you HAVE to make a video, you can't just email them a story with a bunch of pics attached? Lol

Delightful!
posted by aubilenon at 12:23 PM on May 27, 2021 [1 favorite]


"Artificial intelligence" (which doesn't exist yet, at best it's machine vision developed for extremely specific purposes) doesn't invent itself out of thin air. It is programmed. By human beans. So this is a dumpster idea from square one.
posted by turbid dahlia at 9:04 PM on May 27, 2021 [2 favorites]




This thread has been archived and is closed to new comments