
ONLINE SEARCHER: Information Discovery, Technology, Strategies


The Trouble With Computers?
By Barbara Quint
September/October 2016 Issue

If that is a question, the answer is simple: people. Not just the people who design and program them inadequately or unbecomingly, but the people who use them every day in every way. Computers can perform difficult, multi-phased tasks with blinding, almost magical, speed—and they’re only getting better at it. But people and computers have some basic incompatibilities that make problems and—with the rise of the Internet of Things—perils almost inevitable.

Anyone who has ever dealt with computers can tell you a story. It may be an amusing anecdote. It may be a horrific tragedy. But computers are always capable of surprising humans with their failures. My earliest story occurred several decades ago, when I was searching the ERIC educational database for studies on how teachers would handle siblings in the same classroom, and up jumped an article on fish. A piscatorial response to my search seemed extremely off-topic, but upon reading the abstract, I noticed relationship terms and “schools.” Sheesh.

In this issue, Irene McDermott’s Internet Express column is all about the new computerized cars—automobiles capable of making their own driving decisions. When I mentioned this to my chief technical adviser, Cousin Dave, he pondered for a moment and then stated firmly, “I have worked with Microsoft products for many years now. And if you think I’m going to let them or anything like them drive my car for me, you’re crazy—but I’m not.” Speaking of stories, Dave had one of his own. He has a car with a rear-view camera that warns the driver of possible collisions while backing up. The other day, his car started beeping vociferously as he began backing up. He looked around and saw nothing; he got out of the car, walked around it, and still saw nothing. Returning to the car, he began backing up again, and again the car grew hysterical with warnings. On another trip around the car, he examined the rear-view camera itself. Apparently, during the night, a spider in the garage had crawled down and built its web right in front of the lens, fooling the camera into thinking a giant tarantula was endangering its world. What if the camera had been controlling the car?

Speaking of horror stories, a devoted fan of the Tesla was killed by his trust in the Tesla’s Autopilot function. So much did he trust the car to protect him that he bought a portable DVD player—Tesla does not allow its in-car screen to be used for entertainment—and was watching a Harry Potter movie when the Autopilot mistook a white truck for a bright sky. Tesla is now considering introducing a way to detect when a driver’s hands have been off the steering wheel for too long.

Driverless cars are a technological dream, and technological dreamers have a habit of making their dreams come true. On the other hand, the psychology of people using equipment must be taken into account. Lord knows how irritating it is to have anyone—including a computer—nagging at one, but the “nanny computer” may be safer. It lets the user know what the computer is perceiving and verify whether that matches the user’s reality.

Bottom line: Both computers and people make mistakes—a logical inevitability since people (“To err is human”) are the ones making the computer systems. But the mistakes computers make are often not the same types of mistakes that people would make. While science fiction books and films may offer countless examples of robots taking over the world with omniscient, omnipotent effectiveness, one still tends to worry more about those super-klutz mistakes, the hackable obedience to childish pranks, the vulnerability of too much power reaching devices with no common sense.

After all, the highest example of cognitive computing today is IBM’s Watson. Would you trust the welfare and survival of yourself, your family, your business, or your community to anyone who would respond to a Final JEOPARDY! clue in the category U.S. Cities with “Toronto”? Ten-year-old humans wouldn’t make such a gross mistake. If you think about it, the real fear behind “Robophobia” isn’t so much fear of being controlled by a behemoth genius machine as fear of being crushed by a behemoth lummox blundering across the landscape like a drunken Yeti.

In a happier world, we might all be living with loving robots trained to serve and preserve humans at all costs to themselves, in keeping with Isaac Asimov’s Three Laws of Robotics:

1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.  A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3.  A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

On the other hand, since computers and their derivatives—drones come to mind—have found their earliest funding from the Defense Department, well—SIGH—we’d better just watch out for our own interests as people. Better a cautionary computer, even if it be an occasional nag, than a cautionary tale.


The late Barbara Quint was senior editor of Online Searcher, contributing editor for ITI's NewsBreaks, and a columnist for Information Today.


Comments? Contact the editors at editors@onlinesearcher.net
