




Information Today
Vol. 40 No. 1 — Jan/Feb 2023
AI ETHICIST
Insights on Content

The AI Bill of Rights: A Small Step in the Right Direction
by Kashyap Kompella

LINK TO THE SOURCE

Blueprint for an AI Bill of Rights
whitehouse.gov/ostp/ai-bill-of-rights

The White House’s Office of Science and Technology Policy (OSTP) released Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People in October 2022. This AI Bill of Rights lays down five principles to guide the design, deployment, and use of automated systems. Its stated aim is to safeguard the rights of the American public and to ensure that the use of such systems does not curtail people’s access to opportunities or resources. The following is a summary of the five principles.

#1: SAFE AND EFFECTIVE SYSTEMS

Automated systems should include proactive and ongoing safeguards to protect the public from harms and avoid the use of low-quality data. They should demonstrate their safety and effectiveness through independent evaluation and reporting.

#2: ALGORITHMIC DISCRIMINATION PROTECTIONS

Automated systems lead to discrimination when they enable inequitable outcomes based on factors such as race, ethnicity, gender, religion, or disability. They should be tested for such biases before they are sold or used. Recommendations for preventing algorithmic discrimination include assessing equity during the design phase, using representative data, and assessing and mitigating disparate outcomes.
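
The blueprint does not prescribe a specific test, but as a rough illustration of what “assessing and mitigating disparate outcomes” can look like in practice, the hypothetical Python sketch below compares selection rates across groups using the common four-fifths heuristic. The function names, the example data, and the 0.8 threshold are my own illustrative assumptions, not part of the blueprint.

```python
# Hypothetical illustration: flag groups whose selection rate falls below
# 80% of the best-off group's rate (the common "four-fifths" heuristic).
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected) pairs, where selected is a bool."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    rates = selection_rates(records)
    best = max(rates.values())
    # A group is flagged if its rate is less than `threshold` of the best rate.
    return {g: rate / best < threshold for g, rate in rates.items()}

# Example: group B is selected half as often as group A and gets flagged.
data = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 30 + [("B", False)] * 70
print(disparate_impact_flags(data))  # {'A': False, 'B': True}
```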

#3: DATA PRIVACY

The data privacy principle calls for individuals to have agency over how their data is used and protects them from abusive data practices. Automated systems should limit the scope of their data collection, give users the ability to provide use-specific consent and to easily withdraw it, and strive for privacy by design and by default.
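
As a toy sketch of what use-specific consent and easy withdrawal might look like in code, the following hypothetical registry denies any use that has not been explicitly granted (privacy by default). The class, user IDs, and purposes are illustrative assumptions, not drawn from the blueprint.

```python
# Hypothetical illustration of use-specific consent: data may only be used for a
# purpose the individual has explicitly consented to, and consent can be withdrawn.
class ConsentRegistry:
    def __init__(self):
        self._consents = {}  # (user_id, purpose) -> bool

    def grant(self, user_id, purpose):
        self._consents[(user_id, purpose)] = True

    def withdraw(self, user_id, purpose):
        self._consents[(user_id, purpose)] = False

    def is_permitted(self, user_id, purpose):
        # Privacy by default: no recorded consent means no permitted use.
        return self._consents.get((user_id, purpose), False)

registry = ConsentRegistry()
registry.grant("user-42", "loan_decision")
print(registry.is_permitted("user-42", "loan_decision"))  # True
print(registry.is_permitted("user-42", "marketing"))      # False (never granted)
registry.withdraw("user-42", "loan_decision")
print(registry.is_permitted("user-42", "loan_decision"))  # False (withdrawn)
```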

#4: NOTICE AND EXPLANATION

Users should be made aware that an automated system is being used, and the system should provide an easy-to-understand explanation of why a particular decision was made or an action was taken.
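
One way to picture this principle is a system that pairs each outcome with a short, plain-language notice naming the factors that mattered most. The sketch below is a hypothetical example; the factor names and weights are made up and do not come from the blueprint.

```python
# Hypothetical illustration: turn a model's per-factor contributions into a short,
# plain-language notice about an automated decision.
def explain_decision(decision, contributions, top_n=2):
    """contributions: dict of factor name -> signed contribution to the decision."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = ", ".join(name for name, _ in ranked[:top_n])
    return (f"An automated system was used in this decision. "
            f"Outcome: {decision}. Main factors: {reasons}.")

print(explain_decision(
    "application denied",
    {"debt_to_income_ratio": -0.42, "length_of_credit_history": -0.18, "income": 0.05},
))
# An automated system was used in this decision. Outcome: application denied.
# Main factors: debt_to_income_ratio, length_of_credit_history.
```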

#5: HUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK

Users should be able to opt out of automated systems and seek human alternatives when appropriate and reasonable. Clear instructions should be given on how to opt out, and a human alternative should be provided in a timely and convenient manner. In sensitive domains (such as criminal justice, health, and education), human oversight and consideration are recommended for high-risk situations.
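
A minimal sketch of the fallback idea, under my own assumptions: cases where the user has opted out, or where the automated system’s confidence is low, are routed to a human reviewer rather than decided automatically. The function, scores, and thresholds below are hypothetical.

```python
# Hypothetical illustration: low-confidence or opted-out cases go to human review
# instead of being decided automatically. Scores and thresholds are made up.
def route_case(case_id, automated_score, user_opted_out, confidence, min_confidence=0.9):
    if user_opted_out or confidence < min_confidence:
        return {"case": case_id, "route": "human_review"}
    decision = "approve" if automated_score >= 0.5 else "deny"
    return {"case": case_id, "route": "automated", "decision": decision}

print(route_case("c-1", automated_score=0.8, user_opted_out=False, confidence=0.95))
# {'case': 'c-1', 'route': 'automated', 'decision': 'approve'}
print(route_case("c-2", automated_score=0.8, user_opted_out=True, confidence=0.95))
# {'case': 'c-2', 'route': 'human_review'}
```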

WHAT’S NEXT

The AI Bill of Rights is neither a law nor a binding regulation. It’s a call to action, and it joins a rather long list of lofty statements and standards about responsible AI and AI ethics principles. Furthermore, the OSTP has signaled that this is just the beginning.

Federal government agencies will follow the recommended principles as they design, develop, procure, and operate AI systems. I expect procurement guidelines for public sector AI systems to be aligned with these principles as well, meaning that private sector AI vendors that want to sell to public agencies must follow the recommendations. I imagine vendors will eventually incorporate similar best practices into their offerings for the private sector, but without a legal requirement, such a spillover will take longer.

Independent assessment, monitoring, and reporting of automated systems are essential to ensure that the principles in the AI Bill of Rights are actually implemented, so credible and qualified third-party AI auditors will have a key role to play in realizing the vision of bias-free and safe AI. Sector-specific (or agency-specific) guidelines can provide granularity and better-targeted interventions.


KASHYAP KOMPELLA is an award-winning industry analyst, a bestselling author, an educator, and an AI advisor to leading companies and startups in the U.S., Europe, and the Asia-Pacific region. Find out more on LinkedIn (linkedin.com/in/kashyapkompella). Send your comments about this column to itletters@infotoday.com or tweet us (@ITINewsBreaks).