Accidental robot impacts: elaborations on the first law of robotics
October 7, 2014 8:09 AM Subscribe
Though we're not (yet) to the point of actually implementing any strict laws of robotics, the limits for how much workplace robots can accidentally harm their human co-workers are now being discussed by standards-setting agencies. It's easy enough to say robots and humans cannot work in the same space, but once robots start collaborating with people (aka: cobots), large companies look to standards for safety measures and risk assessment. That's where the Institute for Occupational Safety and Health of the German Social Accident Insurance (German: Institut für Arbeitsschutz der Deutschen Gesetzlichen Unfallversicherung, IFA) comes in, with its BG/BGIA risk assessment recommendations according to the machinery directive (36-page PDF), based on real-world tests with robots on human subjects.
The National Association for the Protection of Robot Entities wants to talk about the laws governing humans hitting robots.
Oh there aren't any, except when viewing the robot as property? Yes, that is a problem.
posted by Brandon Blatcher at 8:44 AM on October 7, 2014 [1 favorite]
OK, all jokes aside: while testing a pre-release CMM (coordinate measuring machine), I nearly beheaded myself (or at least risked serious neck injury). In released form, what I was doing wouldn't have been possible without at least one safety feature being bypassed, but: this is no longer a what-if scenario at all.
I guarantee you that, sooner or later, that safety feature will be at least temporarily bypassed in the field. Once it is temp-bypassed, someone will forget to reinstate it.
Robots that can kill humans while doing their jobs already exist.
posted by IAmBroom at 8:50 AM on October 7, 2014 [2 favorites]
Robots that can kill humans while doing their jobs have already killed!
We used to think that the first person killed by a robot was Kenji Urada, in 1981:
Urada was a maintenance engineer at a Kawasaki Heavy Industries plant.[1] While working on a broken robot, he failed to turn it off completely, resulting in the robot pushing him into a grinding machine with its hydraulic arm. He died as a result.
But it turns out an American was killed two years before that:
On 25 January 1979, Robert Williams (USA) was struck in the head and killed by the arm of a 1-ton production-line robot in a Ford Motor Company casting plant in Flat Rock, Michigan, USA, becoming the first fatal casualty of a robot. The robot was part of a parts-retrieval system that moved material from one part of the factory to another; when the robot began running slowly, Williams reportedly climbed into the storage rack to retrieve parts manually when he was struck in the head and killed instantly.
posted by jjwiseman at 8:59 AM on October 7, 2014 [4 favorites]
Well, stuff like that is why they started putting robot assembly lines behind plexiglass at auto plants. The new generation of robots is expected to work with humans much more closely. Something like Baxter: the whole idea of him is that you're going to have relatively unskilled operators moving him around the factory floor as needed, like a shop vac, manipulating his arms to demonstrate new tasks. With stuff like the Amazon warehouse bots, I think they've taken steps to limit human-robot interaction out of precisely these concerns, but it seems clear that their facilities would be more efficient if they could safely have human picker/packagers and robots working in the same space, so doubtless that's what they're pushing for. That's really part and parcel of the whole promise of the next generation of this tech. At least the Germans are thinking through the implications.
posted by Diablevert at 9:10 AM on October 7, 2014 [1 favorite]
I'm a few hours into playing Portal 2 for the first time, so stuff like
But it would be acceptable if a worker received a “substantially painful” blow in the case of an accident. He and others argue that more restrictive guidelines would unnecessarily increase the compliance burden on companies that want to employ collaborative robots and limit the usefulness of their ability to work with humans.
and
...based on real-world tests with robots on human subjects.
makes for hilariously chilling reading. Video games bleeding into real life, but not in a fun way.
In all seriousness, what does it mean to have tolerance for causing substantial pain built into the specs? That, with all safety mechanisms intact and the user following their training perfectly, the workers should just expect the machines to hurt them at a rate of x incidents per person-hour? Or is it something closer to unexpected events (the worker being in the wrong place, or a sensor failure) being handled less gracefully?
I understand that all manual labour has its hazards (my lab is pretty safe, but today I could plausibly have slipped up and poisoned, infected, stabbed, burned, or frozen myself), but that seems... not great. We can't spend infinite money to make everything perfectly safe, but there's something about a robotics manufacturer saying [heavily paraphrasing:] "we could make these cheaper if you lower your expectations of worker safety" that sounds a bit off to me.
posted by metaBugs at 9:11 AM on October 7, 2014 [5 favorites]
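The "x incidents per person-hour" framing in the comment above is just exposure-based arithmetic. A toy sketch (every rate and number here is invented purely for illustration, not taken from any standard):

```python
# Toy exposure-based risk model for the "x incidents per person-hour" idea.
# All rates, headcounts, and hours below are hypothetical.

def expected_incidents(rate_per_million_hours, workers, hours_each):
    """Expected incidents per year, given a rate per million person-hours."""
    return rate_per_million_hours * workers * hours_each / 1_000_000

# Hypothetical cell: 20 workers sharing space with cobots, 2,000 h/year each,
# at an assumed 10 incidents per million person-hours.
print(expected_incidents(10, 20, 2000))  # 0.4 expected incidents/year
```

The point of the framing: even with everything working as designed, a nonzero rate times enough exposure eventually produces an incident.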
So, I was having a conversation the other day (it being Yom Kippur) about how one thing I really like about Judaism is that while we have a shit-ton of laws, they are all superseded by the imperative to preserve human life. No access to kosher food? Eat what you have so you don't starve. Someone needs CPR on the Sabbath? A-OK. "Actually, it's kind of like the First Law of Robotics," I mused.
And then I went, "Wait a minute. Asimov was Jewish. That... is probably not a coincidence."
I am sure I am not the first person to make this connection, but it was the first time it had occurred to me, and it kind of felt like the whole history of science fiction tilted on its axis in my head. In a good way, but still. Weird.
posted by nonasuch at 9:26 AM on October 7, 2014 [21 favorites]
That, with all safety mechanisms intact and the user following their training perfectly, the workers should just expect the machines to hurt them at a rate of x incidents per person-hour?
All safety systems have tradeoffs built into them. There isn't any industrial system — hell, any system where people and machines are both involved — that probably couldn't be made safer but at the expense of making it more cumbersome to use, or run more slowly, or more expensive.
So at some point, somebody decides "good enough!" and we move on.
I mean, if you open up the hood of your car when it's running and stick your hands in the wrong place, you're liable to lose a few fingers in the serpentine belt (or worse). You could easily prevent this with a cover over the belt, or an interlock that shut off the engine when the hood was opened. But the first thing would cost money, and the second would be inconvenient in a lot of situations. So we do neither, and occasionally people lose their fingers. In other words, we have tacitly accepted those people's fingers as an acceptable cost, in exchange for not putting belt covers inside car engine compartments and for being able to pop the hood and work on a running engine.
So, yeah, somewhere in the calculations that drive the robots' safety features, there's going to be an acceptable rate of nonfatal injuries. But that's the exact same type of calculation that gets made any time a safety feature gets added, or isn't added, to a piece of industrial or consumer equipment. We shouldn't be surprised just because it involves "robots."
posted by Kadin2048 at 10:23 AM on October 7, 2014 [2 favorites]
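The tacit trade-off described above (belt cover vs. occasional lost fingers) can be sketched as a bare-bones cost-benefit comparison. All figures are made up for illustration; real safety engineering uses far richer models:

```python
# Sketch of the tacit safety trade-off described above: a feature is "worth it"
# in this toy model when the expected injury cost it prevents exceeds its cost.
# Every number below is hypothetical.

def net_benefit(injuries_prevented_per_year, cost_per_injury, feature_cost_per_year):
    """Positive result means the safety feature pays for itself in this toy model."""
    return injuries_prevented_per_year * cost_per_injury - feature_cost_per_year

# Hypothetical belt cover: prevents 0.25 injuries/year across a fleet,
# at $2,000 per injury and $50/year amortized cost for the cover.
print(net_benefit(0.25, 2000, 50))  # 450.0 -> worthwhile under these assumptions
```

The "somebody decides good enough" moment in the comment above is exactly where a calculation like this (usually implicit) flips from positive to negative.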
And then I went, "Wait a minute. Asimov was Jewish. That... is probably not a coincidence."
What is a robot, if not a better shabbos goy?
posted by leotrotsky at 10:27 AM on October 7, 2014 [5 favorites]
nonasuch: I am sure I am not the first person to make this connection, but it was the first time it had occurred to me, and it kind of felt like the whole history of science fiction tilted on its axis in my head. In a good way, but still. Weird.
This is also news to me. Looking around the 'net, I found some interesting articles on Asimov and Judaism. A quick example: Folly of Faith, Folly of Reason – Isaac Asimov & Judaism.
posted by filthy light thief at 10:40 AM on October 7, 2014 [2 favorites]
That, with all safety mechanisms intact and the user following their training perfectly, the workers should just expect the machines to hurt them at a rate of x incidents per person-hour?
This is actually a huge step up from the last time the human race did a major retooling. For example, while I suppose it's possible to suffer a disabling injury while using a hand saw, such injuries are extremely uncommon, a claim which table saws cannot make. And yet, virtually every semi-serious woodworker out there has a table saw in their shop.
posted by Kid Charlemagne at 10:51 AM on October 7, 2014 [1 favorite]
Kadin2048: So, yeah, somewhere in the calculations that drive the robots' safety features, there's going to be an acceptable rate of nonfatal injuries. But that's the exact same type of calculation that gets made any time a safety feature gets added, or isn't added, to a piece of industrial or consumer equipment. We shouldn't be surprised just because it involves "robots."
Personally, I am interested because it is robots. From the Technology Review article (the second link in the OP):
Existing guidance from regulators such as the U.S. Occupational Safety and Health Administration assumes that robots operate only when humans aren’t nearby. That has meant it’s mostly small manufacturers that have adopted collaborative robots, says Esben Ostergaard, chief technology officer at Universal Robots, a Danish company that sells robot arms designed to collaborate with humans.
For collaborative robots to really change manufacturing, and earn significant profits, they must be embraced by large companies, for whom safety certification is crucial. “We lived a happy life until we reached the big companies—then we got all these problems about standards,” says Ostergaard. “It’s not the law, but the big companies have to have a standard to hold onto.” Universal is working with car manufacturers including BMW, and with major packaged goods companies, says Ostergaard.
Little companies can get away with internal procedures, or people generally knowing what they're doing and understanding the limitations of their tools. Big companies add bureaucracy and stricter concerns for liability. With that comes the push to add more safety mechanisms, and the limitations on what is "safe enough." Does this mean limiting the speed of a mechanism, adding padding and shields, or intelligent sensors to alert the robot to the sudden presence of a person? Can a robot move faster if its parts have less mass, and are less likely to cause serious pain or injury? These are questions that can go above and beyond the safety scoping of other technologies, which I find quite intriguing.
Kid Charlemagne: ... virtually every semi-serious woodworker out there has a table saw in their shop.
And some have invested the money to get SawStops (previously). My father-in-law has one, in part to ease the concerns of my mother-in-law, who imagines him getting a serious injury on the blade and bleeding out before anyone notices that the workshop is rather quiet.
posted by filthy light thief at 11:03 AM on October 7, 2014
And then I went, "Wait a minute. Asimov was Jewish. That... is probably not a coincidence."
All of Asimov's robot stories are essentially midrash on the Three Laws.
posted by straight at 1:20 PM on October 7, 2014
To come up with safety standards for situations where robots and people are working together, you first need to conduct soft-tissue injury studies using robots (warning: video of robots stabbing meat). Proper robotic collision detection allows us to limit kitchen knife penetration to 1 mm, as demonstrated by the very brave human volunteer who lets a robot attempt to stab him at 1:29.
posted by jjwiseman at 3:28 PM on October 9, 2014 [1 favorite]
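The core idea behind those stabbing tests is force-threshold collision detection: stop the moment contact force spikes, before the tool can penetrate far. A minimal sketch, with simulated sensor readings and an invented force limit (real cobot controllers are vastly more sophisticated and also retract on contact):

```python
# Minimal sketch of force-threshold collision detection, the idea behind the
# soft-tissue tests above. Sensor readings and the limit are hypothetical.

FORCE_LIMIT_N = 25.0  # assumed contact-force limit before the arm must halt

def steps_before_stop(force_readings, force_limit=FORCE_LIMIT_N):
    """Advance one motion step per sensor reading; halt at the first
    over-limit force. Returns how many steps completed before the stop."""
    steps = 0
    for force in force_readings:
        if force > force_limit:
            break  # collision detected: a real controller would also retract
        steps += 1
    return steps

# Simulated approach: near-zero force in free motion, sharp spike on contact.
readings = [0.1, 0.2, 0.3, 40.0, 80.0]
print(steps_before_stop(readings))  # 3 steps completed before the stop fires
```

With a fast enough control loop, the distance covered in that one over-limit step is what bounds penetration depth, which is why the demo can promise millimeter-scale limits.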
This thread has been archived and is closed to new comments
posted by Popular Ethics at 8:18 AM on October 7, 2014