The Perils of Removing Human Error

Human advancement can be a wonderful thing. By driving to an airport, I could find myself in another state, or possibly even another country, in the span of just a few hours (delays notwithstanding). I sit at my keyboard with a music mix of nearly 7,500 songs – more songs than I could listen to in the span of 23 days – playing in the background. Later, I can carry that same collection in my pocket. At no point during my musical enjoyment do I reflect upon travel by horse-drawn carriage or the steamboat. I don’t ponder the compact disc, the vinyl record, or the prospect of attempting to carry an entire symphony orchestra in my pocket – not even in my baggy “relaxing pants.” We really only ever reflect on technology when the ‘next best thing’ comes around. We don’t “stop to smell the roses,” to appreciate the fruits of our labor, or to consider their consequences. We create more devices, applications, and autonomous constructs to remove the element of disadvantage, risk, or human error. What else are we losing?

Modern amenities like texting, and social networking sites like Facebook and Twitter, correct the ‘error’ of distance. How was that distance created in the first place? We choose to live where we live, to meet new people and challenges, at the expense of proximity to our past. More simply put, you can’t be close to everybody. Posts informing our contacts of daily activities and plights are marriages of convenience, sending a blanket statement to everyone within earshot. “Friend” lists are reduced to ego boosts. We fill a list of acquaintances with people we are less likely to contact meaningfully, and more likely to stare at on a digital bulletin board. People text someone miles away with disregard for the person right in front of them at the dinner table. What happens when you derive more esteem from the number of people you know than from the activities you share with those around you? What happens to true community?

In a recent article from the Military Times, McLeary quoted Gen. Robert Cone regarding the continued use of robots in combat. The popular ethical questions raised by other sources focus on whether humans will be okay sharing the field with robots, and whether the robots will watch where they’re aiming. The military has gradually been reducing its force, allotting more tasks to autonomous machines out of both safety and budgetary concerns. Human beings are just too expensive, and keeping them out of harm’s way is even more costly. Kill efficiency and firepower, Cone argues, are lost as a result. To these observations, I can only say this: It’s supposed to be expensive! It’s supposed to be difficult. There is nothing more valuable than a life. Taking one should not be easy. Making it easier increases its likelihood of occurring again. Accomplishing it by proxy removes the idea’s initiator from responsibility, and from the consequences. The first army with gunpowder at its disposal probably stood in awe at the ease of its victory. The first person to repel a home invader breathed more easily knowing they had defended their loved ones or property. Victory in single instances at the cost of the whole? I’m getting sick of marveling at how expeditiously students can shoot up their schools.

According to another recent article from Politico, the National Highway Traffic Safety Administration wants to mandate ‘talking cars’ as a safety precaution. They believe cars that ‘communicate’ with each other will reduce traffic accidents. Presumably, many of those prevented accidents would have been caused by drivers using their cellphones, music devices, curling irons, and other innovations. This comes on the heels of earlier discussions weighing the benefits of fully automated, driverless cars. People apparently can’t be trusted to operate their own motor vehicles anymore. Will this technology work? Let’s concede that it will. Without question, we’ll argue, this technology will be completely effective in preventing accidents – when it’s used. Reducing everything to the simple push of a button still demands that the user play the game. You have to take responsibility for pushing that button. Failing that, whom do we blame? Technology? Do we hold the user accountable, or do we simply advance technology – create another device that kicks in when the user fails to use the first one? When and where does it end?

We live in a technologically advanced world. Whether that proves a positive or a negative depends upon us. The search for solutions to human error does not begin or end with technology, but with ourselves. The stench of ignoring those around us does not dissolve by surrounding our online presence with more ‘neighbors.’ We cannot wish away the onus of our actions, desires, or murderous intentions with new inventions and innovative thinking. Failing to consider the consequences of our actions, by proxy or otherwise, is what causes accidents. We must live in the moment, not in the moment of the next big thing. If not, we are in a state of complete situational unawareness. That would be our human error.
