ONLINE SEARCHER: Information Discovery, Technology, Strategies

AI and Ethics
By Sue Feldman
January/February 2020 Issue

Some innovations spring upon the world with little apparent warning. Most, however, are incremental changes that have been many years in the making—an evolution that builds upon prior versions that had little immediate impact. It is not merely the most recent release, but the convergence of market demand, technology readiness, and new invention that is necessary for the world to take notice. The sewing machine, the combustion engine, or the automobile needed the insight of entrepreneurs as well as technologists to change the world.

That’s also the case with machine learning and AI. Inklings of their components have been around for decades. Suddenly, they are headlines. The interesting thing about AI, though, is that many of the headlines today are less about the technical advances and more about the impact the invention will have on people and on society.

Information professionals are uniquely positioned to consider the ethical implications of AI and related technologies. We are concerned about privacy, bullying, meddling with elections, hacking of corporate and public systems, and algorithmic bias. We demand technology solutions, but perhaps no clear solutions are possible. Still more abstract are the effects that technology has on children, adults, and society at large. In a world where ethics and societal norms differ from one culture to another, can we come up with generally accepted societal norms for good behavior?

In May 2018, I joined Tom Wilde, CEO, Indico; Steve Cohen, EVP and COO, Basis Technology; and David Bayer, then-executive director of the Cognitive Computing Consortium, at Information Today, Inc.’s Data Summit conference to tackle the topic “Beyond Do No Harm—The Complex Considerations of Ethical AI” (dbta.com/datasummit/2019). Despite the panelists’ technology background, the issues discussed were mainly legal and societal.

TRUST

The issue of trust—how much to trust the recommendations and predictions of “black box” AI and cognitive computing systems—is central to the issue of AI ethics because it raises the question of expectations. After years of watching both the software industry and buyers of software, the panel was convinced that vendors’ and buyers’ expectations of software perfection—or the need for perfection—simply don’t match. The archives of computation journals are rife with discussions about how to develop software without bugs. Complaints by users are also common. How do we reconcile technologists’ and users’ expectations? To avert widespread frustration and potential lawsuits, both sides need to work together to develop a commonly accepted approach.

The issue of trust touches both software use and development. Can we understand where and why recommendations are made by AI systems without being able to audit and validate algorithmic results or test repeatability? Should we trust the content on which recommendations are based? Are the recommendations driven solely by the popularity of clicks rather than by their validity? How do we know that suggestions are drawn from sources that are authoritative and relatively unbiased?
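
One concrete way to probe that trust is a repeatability audit: submit the same query many times and measure how stable the top recommendations are. A minimal sketch in Python, assuming a black-box recommend(query) callable (the names here are illustrative, not any vendor’s API):

    from collections import Counter

    def repeatability_audit(recommend, query, runs=20, top_k=5):
        """Call a black-box recommender repeatedly with one query and
        report how consistently each item appears in the top results."""
        counts = Counter()
        for _ in range(runs):
            for item in recommend(query)[:top_k]:
                counts[item] += 1
        # An item returned in every run scores 1.0; unstable items score lower.
        return {item: seen / runs for item, seen in counts.most_common()}

    # Usage with a stand-in recommender:
    fake = lambda q: ["a", "b", "c", "d", "e"]
    print(repeatability_audit(fake, "any query"))

Scores well below 1.0 do not prove a system is wrong, but they do tell us its answers cannot be reproduced, which is the first question an auditor would ask.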

Should we try to understand the motivation for recommendations? Is the vendor’s profit motive well-aligned with what we are looking for? Often, a search for products is influenced more by the retailer’s inventory than by the searcher’s needs. Recommendation engines can easily prioritize profit over utility, value, or truth. That’s true, of course, in the physical world as well, but users often trust what comes out of a website somewhat blindly.

BIAS

Bias is another pervasive concern. Training sets can easily contain misinformation, old information, or incomplete information that skews the results of a search. These sets can reflect bias in data and source selection. The content administrator may be unaware that other data exists outside the organization’s control. There may be hidden assumptions in taxonomies, ontologies, or schemas.
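
Some of these skews are at least measurable before a system is deployed. The sketch below is a hedged illustration rather than a standard method: it profiles a training set for two of the problems named above, stale records and lopsided source coverage (the record fields are hypothetical):

    from collections import Counter
    from datetime import date

    def profile_training_set(records, today=None, stale_years=5):
        """Report source concentration and the share of stale records.
        Each record is a dict with hypothetical 'source' and
        'published' (datetime.date) fields."""
        today = today or date.today()
        total = len(records)
        sources = Counter(r["source"] for r in records)
        stale = sum(1 for r in records
                    if (today - r["published"]).days > stale_years * 365)
        return {
            "source_share": {s: n / total for s, n in sources.most_common()},
            "stale_share": stale / total,
        }

    records = [
        {"source": "vendor_feed", "published": date(2012, 3, 1)},
        {"source": "vendor_feed", "published": date(2019, 6, 1)},
        {"source": "public_web", "published": date(2018, 1, 1)},
    ]
    print(profile_training_set(records))

A profile like this cannot reveal what is missing from the collection entirely, which is exactly the blind spot described above, but it does make the known skews visible.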

As a result, algorithmic discrimination based on a rigid schema may make it difficult to present the “best” information for an individual’s needs. The vastness of large collections can make it difficult to “stumble” on more pertinent results.

SLOPPY THINKING

Algorithms and biased training sets are not the only culprits in our quest to develop better information systems. We see a growing willingness to accept AI and cognitive computing recommendations without applying the human ability to test them against reality, given the context in which the user is seeking information.

Context is a new concept in developing cognitive applications, and we are still experimenting with how to filter results by the user’s context without invading his or her privacy. Another danger in our interaction with information systems is the system’s tendency to deprecate or eliminate exceptions, rather than highlight them.
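
Nothing forces a system to bury its exceptions; surfacing them is a design choice. A minimal sketch, assuming simple numeric scores and an illustrative z-score cutoff:

    import statistics

    def split_exceptions(scores, z_threshold=2.0):
        """Separate values that deviate strongly from the mean, so a
        human can review the exceptions instead of never seeing them."""
        mean = statistics.mean(scores)
        stdev = statistics.stdev(scores)
        typical, exceptions = [], []
        for s in scores:
            is_outlier = stdev and abs(s - mean) / stdev > z_threshold
            (exceptions if is_outlier else typical).append(s)
        return typical, exceptions

    typical, exceptions = split_exceptions([10, 11, 9, 10, 42, 10, 12])
    print("review these:", exceptions)  # the 42 is flagged, not discarded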

COMPLEXITY AND CURATION

While human interaction with information systems has always been a problem for users, new quandaries have arisen because of the volume and availability of uncurated data. These stumbling blocks are societal as well as legal. Social media, for instance, coupled with a lack of governance, invites manipulation by users, by organizations, and by governments. The drive to regulate malefactors is admirable, but who will have the power to decide whether a social message is dishonest, inaccurate, or evil? This pertains to challenges such as the poor design of Boeing’s 737 MAX software, as well as the acceptance of bail and sentencing recommendations, election interference, and the spread of hateful and violent ideologies. In yesterday’s more homogeneous societies, especially those with non-porous borders, it might have been possible to enforce social norms. This is no longer feasible.

The complexity of risks and choices has flummoxed technologists as well as regulators. What is the correct decision for a self-driving car to make when faced with a dilemma of injuring one person or a group of bystanders? Autonomous car designers tell us that there will be far fewer traffic deaths with more autonomous vehicles on the road, but this means ceding decisions of whom to injure to a vehicle. Do we want to do this?

PRIVACY (OR LACK THEREOF)

Discussions of privacy issues are perhaps the most prevalent in the media and press. Do we need to trade privacy for the results and benefits of AI and cognitive computing? This is anathema to information professionals. Will we resolve issues of data ownership, data (and device) access, and data control? Copyright ownership plays into this area as well, as do transparency and the right to know.

These are rarely technical problems, nor are they likely to spawn effective technical solutions that appeal to all interested parties. Rather, they invite legal or regulatory solutions. And yet, most technologists debate these issues fiercely as if they could arrive at technical solutions.

Perfection eludes us all, every day. Instead, we must develop coping strategies for dealing with imperfection: selecting better training sets, understanding how search technology works, and designing systems that anticipate the possibility of disastrous consequences. The trick will be to achieve “good enough” solutions while giving humans the ultimate control to correct egregious errors.
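
In practice, “ultimate control” often reduces to a deferral rule: act on a model’s output automatically only when its confidence clears a bar, and route everything else to a person. A minimal sketch (the threshold and the Decision shape are assumptions for illustration, not an established standard):

    from dataclasses import dataclass

    @dataclass
    class Decision:
        action: str        # "auto" or "human_review"
        label: str
        confidence: float

    def decide(label, confidence, threshold=0.95):
        """Accept a prediction automatically only above a confidence
        threshold; otherwise defer to a human reviewer."""
        if confidence >= threshold:
            return Decision("auto", label, confidence)
        return Decision("human_review", label, confidence)

    print(decide("approve", 0.97))  # acted on automatically
    print(decide("approve", 0.80))  # routed to a person

The threshold itself is a policy choice, not a technical one, which is precisely the point.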


The late Sue Feldman was CEO of Synthexis (www.synthexis.com), a business advisory service for search, text analytics and cognitive computing software vendors and buyers.

Comments? Contact the editors at editors@onlinesearcher.net
