
Information Today
Vol. 40 No. 1 — Jan/Feb 2023
Insights on Content

The AI Bill of Rights: A Small Step in the Right Direction
by Kashyap Kompella


Blueprint for an AI Bill of Rights

The White House’s Office of Science and Technology Policy (OSTP) released Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People in October 2022. This AI Bill of Rights lays down five principles to guide the design, deployment, and use of automated systems. Its stated objective is to safeguard citizens’ rights and ensure that the use of such systems does not curtail their access to opportunities or resources. The following is a summary of the five principles.


Safe and Effective Systems

Automated systems should include proactive and ongoing safeguards that protect the public from harm and avoid the use of low-quality data. Their safety and effectiveness should be demonstrated through independent evaluation and reporting.


Algorithmic Discrimination Protections

Automated systems lead to discrimination when they enable inequitable outcomes based on factors such as race, ethnicity, gender, religion, or disability. They should be tested to ensure they are free from such biases before their sale or use. Recommendations to prevent algorithmic discrimination include assessing equity during the design phase, using representative data, and assessing and mitigating disparate outcomes.


Data Privacy

The data privacy principle gives individuals agency over how their data is used and protects them from abusive data practices. Automated systems should limit the scope of their data collection, let users provide use-specific consent and easily withdraw it, and strive for privacy by design and by default.


Notice and Explanation

Users should be made aware that an automated system is being used, and the system should provide an easy-to-understand explanation of why a particular decision was made or action was taken.


Human Alternatives, Consideration, and Fallback

Users should be able to opt out of automated systems and seek human alternatives when appropriate and reasonable. Clear instructions should explain how to opt out, and a human alternative should be provided in a timely and convenient manner. In sensitive domains (such as criminal justice, health, and education), it is recommended that human oversight and human consideration be included in high-risk situations.


The AI Bill of Rights is neither a law nor a binding regulation. It’s a call to action, and it joins a rather long list of lofty statements and standards about responsible AI and AI ethics principles. Furthermore, the OSTP has signaled that this is just the beginning.

The recommended principles will guide federal government agencies as they design, develop, procure, and operate AI systems. I expect the procurement guidelines for public sector AI systems to be aligned with these principles as well, meaning that private sector AI vendors that want to sell to public agencies will have to follow the recommendations. I imagine vendors will eventually incorporate similar best practices into their offerings for the private sector, although such a spillover will take longer without a legal requirement to do so.

Independent assessment, monitoring, and reporting of automated systems are essential to ensure that the principles in the AI Bill of Rights are actually implemented. Credible, qualified third-party AI auditors will therefore play a key role in realizing the vision of bias-free and safe AI. Sector-specific (or agency-specific) guidelines can provide granularity and better-targeted interventions.

KASHYAP KOMPELLA is an award-winning industry analyst, a bestselling author, an educator, and an AI advisor to leading companies and startups in the U.S., Europe, and the Asia-Pacific region. Find out more on LinkedIn. Send your comments about this column or tweet us (@ITINewsBreaks).