I Was Shared on Hacker News
by Ray Patrick
A post I wrote called I Conscientiously Object to Self-Driving Cars was apparently shared on Hacker News last July by Joshua Liu. It’s probably a good thing I didn’t know at the time; I would have been tempted to wade in there and start arguing.
The comments contained the usual assortment of drive-bys with little to no reading comprehension, but also some great discussions. Let’s look at a few of the highlights, starting with the bad ones.
(Aside: this comment thread is empirical validation of John Walker’s Strike Out paradigm.)
Other comments addressed the self-driving car part, but I somehow can’t get over the part where the author’s experience at NORAD is used to explain why we have humans in the loop. According to Wikipedia , US-operated drones have killed 910-2,200 civilians, of which 283-454 were children. This all happened with multiple humans in the loop. Those humans decided occasionally killing children is a perfectly acceptable cost of doing business.
Which is, when you think about it, not too far from how human-driven cars and the car infrastructure operates. We just decided some people are going to die and that’s an acceptable cost of business. At least, in case of cars, the population that benefits from having cars are (more or less) the same population that’s killed by them.
But don’t tell me it’s superior because there’s an (incompetent, sometimes sleepy, sometimes drunk) human in the loop. That’s just lame “At least there’s always someone we can blame!” BS.
Yongjik doesn’t get the difference between negligence (letting your Tesla blow through intersections where people are walking) and human error (time-sensitive targeting will always carry the risk of collateral deaths). It’s irritating to see people argue from ignorance that US military members are bloodthirsty baby killers who think “occasionally killing children is a perfectly acceptable cost of doing business.” I have seen the suicides that prove otherwise.
At least another commenter weighs in to tell him his argument is a non sequitur:
US-operated drones have killed 910-2,200 civilians, of which 283-454 were children. This all happened with multiple humans in the loop. Those humans decided occasionally killing children is a perfectly acceptable cost of doing business.
That’s correct, but it doesn’t refute the point of the article, which is that having humans in the loop to make life or death decisions is better than punting those decisions to a computer. Suppose the US had fielded autonomous drones that required no human intervention to release lethal weapons. Would you prefer that to what actually happened?
(And no, saying that you’d prefer that the US not operate the drones at all is not an answer for this discussion, because the analogy to that would be not allowing driving at all. And that’s obviously a non-starter.)
We just decided some people are going to die and that’s an acceptable cost of business.
Again, while this is true, it doesn’t refute the point of the article, which is that the cost is less with humans in the loop. Would you prefer that automated Teslas running over pedestrians in crosswalks were the new normal, and we just wrote that off as a “cost of doing business”?
What is Reading Comprehension?
Ok there are concerns about self-driving cars, and clearly the tech is not quite there yet, but I fear the premises at the heart of this article is poor.
Firstly, the tech of the 70s and 80s which he’s referring to is clearly irrelevant when talking about the state of the art today.
Secondly, the penalty for failure for incorrectly launching nuclear missiles is, well, somewhat severe. You are pretty much ending the world.
Granted driverless cars will kill people. I think one has to start there. I dont think the standard can be, or should be, zero deaths. The question is whether they’ll kill fewer people than human drivers (currently around a million a year).
So, the argument is: “I worked on old tech that didn’t work without human’s in the loop therefore no technology will ever work without human’s in the loop”? I don’t follow.
These guys fail to get the point I was making. They think that the US missile warning network (orders of magnitude more complicated than a self-driving car) is not a fair comparison because some of its components were first fielded in the 1970s and 1980s. (Never mind that digital autopilots, TCAS, and several other safety-critical systems also originate from that era and would have illustrated my point just as well.) Their caricature of my argument is “Late 20th-century radar networks can’t be trusted to work alone and therefore neither can cars.” Actually, the whole point of my argument is that it doesn’t matter how capable a computerized system is. Ceding all control or supervision is morally negligent if risk to human life is involved.
See also: he has no problem with driverless cars killing people as a matter of course. I don’t think I could sway this guy even if he had managed to understand my point.
Hmm. I watched the video a couple times. Didn’t understand the issue at first, second watch I saw it.
This isn’t proof, it’s the author selectively choosing a particular video that supports their point of view. Maybe the video actually does demonstrate a problem with the self driving software, but without more supporting data it’s simply cherry picking. While I support a person in the middle kind of system, I don’t think the author supports his case well. It’s an opinion piece, basically.
Feel free to read up on accidents, even fatal ones, caused by self-driving cars. I showed one example of unsafe behavior by a self-driving car in order to supplement my argument, the validity of which does not rest upon any specific self-driving car incident, but upon reasoning from analogy about how industry and defense have converged on best practices for life-critical computer systems.
Apparently he also thinks that something being an “opinion piece” necessarily means the author’s case is poorly supported. That’s just, like, your opinion, man.
“I sifted through this guy’s page and turns out he’s a Christian! Checkmate, fundies!”
This guy failed to understand the problem shown by the video. He also really got his fedora in a bunch when he (evidently) decided to flip through the rest of my site:
This isn’t very well reasoned. In the depicted situation the cars and pedestrians paths don’t intersect if the car simply keeps driving and the person keeps walking and if the car slams on the brakes it could hurt the driver by stopping abruptly or slow down enough to turn a non-intersecting path into a crash without coming to a stop soon enough depending on conditions. EG the car needs 3 seconds to pass a given position and the person can only cover 60% the distance in that time but slowing makes it take 5 and the now the pedestrian ends up under the wheel or on the ground.
As an aside he’s an obnoxious religious fundamentalist who believes the current state of the country is caused not by concrete dysfunction from one side but by lack of Jesus and this is the third individual of questionable character on these pages today, all from you, and none of them have much interesting to say in the blog posts you have shared. Bad people sometimes have ideas worth sharing but you aren’t exactly batting a thousand here. Please find more interesting things.
His argument fails because it rests on the observation that nothing would happen if both the car and the pedestrian kept moving at their same speeds. Nothing did happen – but only because that pedestrian was lucky. Pedestrians have the right-of-way, period. He’s right that playing chicken makes you unpredictable to pedestrians and therefore more unsafe, but he fails to connect the dots all the way to the conclusion: that’s why pedestrians have the right-of-way in the first place! What would have happened if the pedestrian fell? Or decided to speed up or slow down? Drivers (human or otherwise) making ad-hoc decisions about what to do in those situations is dangerous due to the element of chance. That’s why pedestrians have the right-of-way.
Regarding his second paragraph, I will say only two things:
- Attacking someone personally is unmanly behavior indulged in by those unable to answer the argument.
- Being called an “obnoxious religious fundamentalist” by someone like this is to be expected. The preaching of the Cross is foolishness to them that are perishing (1 Cor 1:18).
But it will kill statistically fewer people than you will.
A post from another article claims that these vehicles have undergone 3 million miles of testing without causing death or serious injury. The annual loss of life due to human-operated vehicles is estimated to be 50 thousand, across 3 trillion miles of driving. There has not been enough testing done to prove that self-driving cars are any safer in this regard. If you have data suggesting otherwise, please provide it.
The responsible thing is to know when you are not suited for a job and get out of the way, so lives can be saved.
The introduction to this article suggests that it is people who must make the final decision because machines are fallible. They may have super-human attention spans and be able to make precise calculations regarding a scenario, but they lack judgement. They cannot determine whether something is reasonable, like 300 missiles being launched from the Arctic ocean.
Humans may be imperfect, and they certainly make bad judgement calls, but the reality is that we don’t have sufficient evidence to suggest that self-driving cars will be any safer. Until we have enough evidence to prove their safety, there should be a human making that final call.
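The commenter’s point about insufficient evidence can be checked with a quick back-of-envelope calculation using only the round figures quoted above (which are rough estimates, not authoritative data): at the human fatality rate, zero deaths in three million test miles is what you would expect to see about 95% of the time anyway, so the test record by itself proves little either way.

```python
import math

# Round figures quoted in the comment above (rough estimates, not official data)
human_deaths_per_year = 50_000
human_miles_per_year = 3_000_000_000_000  # 3 trillion miles
av_test_miles = 3_000_000                 # 3 million miles, zero deaths reported

# Human fatality rate per mile driven
human_rate = human_deaths_per_year / human_miles_per_year
miles_per_death = 1 / human_rate  # roughly one death per 60 million miles

# Expected deaths in the test mileage IF self-driving cars were exactly
# as dangerous as human drivers
expected = human_rate * av_test_miles

# Poisson probability of observing zero deaths even at the human rate
p_zero = math.exp(-expected)

print(f"Human rate: about 1 death per {miles_per_death:,.0f} miles")
print(f"Expected deaths in {av_test_miles:,} test miles: {expected:.2f}")
print(f"Chance of zero deaths even at the human rate: {p_zero:.1%}")
```

In other words, three million clean miles is consistent with self-driving cars being exactly as deadly as humans, which is the commenter’s point about sample size.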
II2II is more optimistic on this front than I am. He also seems to have actually read what I wrote! Another commenter restates my point even more succinctly than I did:
The author isn’t demanding proof that it is statistically better before switching. They object to giving up control and view that as a moral imperative, “no matter how ‘safe’ the computerized features are.”
Bingo – I’m not convinced it will ever be better to switch. Following this was an actual, honest conversation about the correct topic!
True, which doesn’t invalidate the author’s point. I’m merely adding a second set of considerations to the discussion. I think it’s worth drilling down a little harder on the author’s point though. They correctly note that there are no life-or-death systems that are wholly computerized with no human supervision. To me that speaks rather strongly to either a lack of awareness or lack of ethics on the part of drive AI advocates, as they’re cheerleading something the military, aerospace, and medical industries have rejected on merit.
Yes. Fanboys can scream all they want. Human supervision of computerized systems that have life-and-death functions is a highly coupled sociotechnical problem that has been tackled by bright minds in government and industry for many decades before it became visible to consumers.
Thanks for the link, Joshua!