I used to be a missile warning crew commander. My crew’s job was to operate an early-warning radar that could detect ballistic missiles flying toward North America from thousands of kilometers away. In its normal configuration, the radar tracked satellites, and that is how we spent the majority of our time. However, if the radar ever detected objects whose trajectories intersected the Earth, it would automatically shift into missile warning mode. It would direct a great deal of energy toward measuring the position and velocity of those objects, then compute projections of where and when they would impact the Earth. It would also propagate each object’s motion backward in time to compute its launch point and the time at which it must have launched. The radar would do this for every Earth-impacting object in track within milliseconds.
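The forward (impact) and backward (launch) projections are two roots of the same trajectory equation. The sketch below is a toy illustration only, assuming a flat Earth, constant gravity, and no drag (real missile-warning software uses far more sophisticated ballistic and atmospheric models); the function name and coordinate conventions are my own.

```python
import math

G = 9.81  # m/s^2 -- toy model: constant gravity, flat Earth, no drag

def impact_and_launch(pos, vel):
    """Given current position (x, y, z) in meters and velocity
    (vx, vy, vz) in m/s, return ((impact_point, t_impact),
    (launch_point, t_launch)). Times are seconds relative to "now";
    z = 0 is the ground."""
    x, y, z = pos
    vx, vy, vz = vel
    # Height over time: z(t) = z + vz*t - 0.5*G*t^2.
    # Setting z(t) = 0 gives two roots of a quadratic in t:
    disc = math.sqrt(vz * vz + 2.0 * G * z)
    t_impact = (vz + disc) / G   # future root: when the object comes down
    t_launch = (vz - disc) / G   # past root: when it must have left the ground
    point_at = lambda t: (x + vx * t, y + vy * t, 0.0)
    return (point_at(t_impact), t_impact), (point_at(t_launch), t_launch)
```

For an object currently at apogee, the two roots are symmetric about "now": it launched as far back in time as it will take to come down.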
All the above-mentioned information would be automatically flashed to NORAD’s Cheyenne Mountain Complex and other locations to be acted upon by American and Canadian military leadership. Neither I nor my crew would have done anything to help; the radar and its software would have done everything necessary to find, fix, and track every one of those targets and to predict their impact locations automatically. At first glance, it would seem that we weren’t even necessary. We were, though: the reason the radar required a human crew was that we could do something no one in NORAD would trust a computer to do.
Besides the trajectory data, the most important piece of information transmitted to the Mountain from our radar site was my determination of whether or not the data was trustworthy. If radar measurements were to indicate 300 SLBMs originating from the Arctic Ocean with impact points scattered throughout North America, the outlook would be grim. But if I reported that data to be erroneous (say, as the result of a technical mishap) then things would be fine. The radar data would look frightening indeed, but when the NORAD commander heard that the radar crew’s verdict was “mishap,” he would know that no attack was actually in progress.
The indications from computer systems (particularly when those indications feed decisions such as “Shall we order the release of nuclear weapons?”) are simply never to be trusted by themselves. Respect for human life demands this.
This leads me to self-driving cars, the most (in)famous of which are currently Teslas equipped with “FSD.”
As you can see in this video (4.8M mp4), FSD-equipped Teslas have no qualms about driving through crosswalks with people in them. Apologists are quick to say that it’s just driving the way people do (i.e. flouting safety rules when it’s convenient) – but wasn’t avoiding exactly that the whole point of having self-driving cars?
If you’re driving a two-ton vehicle, you have a higher responsibility than other road users at all times because of your greater deadliness.[note 1] If you don’t like that level of responsibility, then ride a bike or walk. For heaven’s sake don’t just pass the buck to a computer! Don’t get me wrong – I love having a computer in my car. It controls my ignition timing, regulates the voltage draw of all my components, and even keeps my brakes from locking up if I have to step on them hard. You know what it doesn’t do for me, though? Keep my car from hitting people. Ultimately, that’s up to me, no matter how “safe” the computerized features are.
In summary: Never trust a computer to make life-or-death decisions for you[note 2] and be extra cautious around Teslas now that it’s impossible to know whether the driver is negligently beta testing car-driving software in public!