



ONLINE SEARCHER: Information Discovery, Technology, Strategies

Is Technology Getting the Better of Us? Welcome to the Algorithmic Society
By Nancy K. Herther
January/February 2020 Issue

FINDING HOPE FOR A BETTER PATH: PRIVACY BY DESIGN

In a 2018 blog posting (brewster.kahle.org/2018/02/25/big-data-versus-humanity), Internet Archive founder Brewster Kahle notes, “I never used the tools of Big Data and AI to build profiles of masses of people. And on purpose. I realized what could be done—the power in knowing too much, the temptation to manipulate. But now others have crossed that line. Total Information Awareness; Facebook, Equifax. Now we see the problems, now we have a problem, and we need ideas on where to go from here.”

A relatively new international standard, Privacy by Design, shows just how much a single individual can do to effect change. Ann Cavoukian, then information and privacy commissioner for Ontario, Canada, developed the standard out of the conviction that nothing short of building privacy directly into system design would make it the default: proactively embedding privacy in information technology, business practices, physical design, and networked infrastructure (ipc.on.ca/wp-content/uploads/Resources/7foundationalprinciples.pdf). Her framework rests on seven foundational principles:

  • Proactive not Reactive – Preventative not Remedial
  • Privacy as the Default Setting
  • Privacy Embedded into Design
  • Full Functionality – Positive-Sum, not Zero-Sum
  • End-to-End Security – Full Lifecycle Protection
  • Visibility and Transparency – Keep it Open
  • Respect for User Privacy – Keep it User-Centric
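The second principle, Privacy as the Default Setting, is concrete enough to sketch in code. The following is a hypothetical illustration (the class and setting names are invented for this example, not drawn from any real product): every optional data flow starts disabled, and only an explicit, user-initiated act of consent turns one on.

```python
from dataclasses import dataclass

@dataclass
class UserDataPreferences:
    """Privacy by default: optional data collection is off until opted into."""
    analytics_tracking: bool = False   # off unless the user opts in
    personalized_ads: bool = False
    location_sharing: bool = False
    essential_cookies: bool = True     # strictly necessary for the service itself

    def opt_in(self, setting: str) -> None:
        """Explicit, user-initiated consent flips a single flag on."""
        if not hasattr(self, setting):
            raise ValueError(f"Unknown setting: {setting}")
        setattr(self, setting, True)

prefs = UserDataPreferences()
assert prefs.analytics_tracking is False  # private by default
prefs.opt_in("analytics_tracking")        # consent is an explicit act
```

The point of the sketch is that privacy requires no action from the user; action is required only to give data away.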

This framework was released in 2009 and adopted by the International Assembly of Privacy Commissioners and Data Protection Authorities in 2010. In 2018, the European Data Protection Supervisor (EDPS), an independent institution of the EU, conducted a study that rejected critics' contention that the framework was too vague to serve as a standard, noting that “this analysis is in context with the growing and widespread need for grounding technological development on human values and ethics. An effective implementation of the principle of privacy by design and by default can represent an outstanding milestone towards a human values based technology design” (iapp.org/media/pdf/resource_center/Preliminary_Opinion_on_Privacy_by_Design.pdf).

STANFORD’S INSTITUTE FOR HUMAN-CENTERED ARTIFICIAL INTELLIGENCE

In the summer of 2016, Fei-Fei Li, Stanford computer science professor, posed a challenge to Stanford’s then-provost and humanities professor John Etchemendy: “You’ve led an effort to draw an arrow from technology to the humanities, to help humanists innovate their methodology. It’s time to build another arrow coming back the other direction. It should become a complete feedback loop. We need to bring the humanities and social thinking into tech” (techheadlines.us/how-a-driveway-chat-resulted-in-the-institute-for-human-centered-artificial-intelligence-the-dish-stanford-university-news). The two then set about creating the Institute for Human-Centered Artificial Intelligence (HAI; hai.stanford.edu), which aims to “fundamentally change the field of AI by integrating a wide range of disciplines and prioritizing true diversity of thought.”

According to Li and Etchemendy, “If guided properly, AI could have a profound, positive impact on people’s lives: It could help mitigate the effects of climate change; aid in the prevention and early detection of disease; make it possible to deliver quality medical care to more people; help us find ways to provide better access to clean water and healthy food; contribute to the development of personalized education; help billions of people out of poverty and help solve many other challenges we face as a society” (hai.stanford.edu/news/opening-gate).

Through symposia, special events, and a wide-ranging series of interdisciplinary courses, HAI brings together professionals and scientists across disciplines to steer AI, Big Data, and our evolving social landscape in directions that benefit everyone, and to address the existing and future problems that naturally arise during a technological shift of this magnitude.

BERKMAN SEEKS TO CREATE NEW SYNERGIES TO WORK OUT SOLUTIONS

The Berkman Klein Center for Internet & Society at Harvard has taken a policy/solutions approach to the challenges of AI and Big Data (cyber.harvard.edu/research/assembly). Its Assembly: Disinformation model provides not only forums for discussing and surfacing issues but also a venue for testing options for dealing with the problems that arise. The 2018 Assembly, for example, looked deeply at the quality of the data used to develop AI applications: “AI systems matter, and so does the data on which they are modeled. One way to improve the accuracy and fairness of these models—models that determine everything from navigation directions to mortgage approvals—is to make it easier for data scientists to quickly assess the viability and fitness of datasets used to train them. Current methods of data analysis and assessment are not standardized; they vary greatly across industry and domains, and are also costly in time and expertise” (dash.harvard.edu/bitstream/handle/1/37184705/2018-07_AGTechForum.pdf?sequence=1&isAllowed=y). The work resulted in the development of the Dataset Nutrition Label, which provides data scientists with a “distilled yet comprehensive overview of dataset ‘ingredients’ before AI model development” (arxiv.org/abs/1805.03677).
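The Dataset Nutrition Label paper describes modular summaries of a dataset’s “ingredients” produced before any model is trained. As a rough, hypothetical sketch of that idea (the function name and label fields below are invented for illustration, not the project’s actual API), such a label might report per-column missingness and distinct-value counts:

```python
def dataset_nutrition_label(rows, columns):
    """Hypothetical sketch: summarize a dataset's 'ingredients'
    (size, missingness, cardinality) before model development."""
    label = {"n_rows": len(rows), "n_columns": len(columns), "columns": {}}
    for i, col in enumerate(columns):
        values = [r[i] for r in rows]
        missing = sum(1 for v in values if v in (None, ""))
        label["columns"][col] = {
            "missing_pct": round(100 * missing / len(values), 1),
            "distinct": len(set(values)),
        }
    return label

rows = [("alice", 34, "approved"),
        ("bob", None, "denied"),
        ("carol", 29, "approved")]
label = dataset_nutrition_label(rows, ["name", "age", "decision"])
# e.g. the "age" column is reported as 33.3% missing
```

Even a summary this crude surfaces the kind of problem the Assembly flagged: a data scientist can see, before training, that a key feature is one-third missing.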

Technology is moving so quickly that all we can truly do is monitor these systems and improve, or expose, the issues and errors we find. Legal and political institutions cannot keep up, and there is no clear antidote. AI and other technologies pose nearly daily challenges to professionals across the board. Information professionals, as members of a profession committed to providing the best possible information in this era of change, need to better understand and engage in finding answers and solutions to the challenges of Big Data.

PROTECTING PRIVACY AS A FUNDAMENTAL HUMAN RIGHT

“Good privacy practices are symptomatic of a company that takes care of data and security as well,” Arlo Gilbert, co-founder and CEO at Osano, asserts. “We believe that data privacy is a fundamental human right, and the companies that embrace privacy will become the next wave of trusted brands” (darkreading.com/endpoint/osano-launches-data-privacy-and-compliance-platform/d/d-id/1335992). Osano has developed SaaS-based data privacy and compliance software “to comply with General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and other privacy regulations.” Osano’s dataset measures the data privacy practices of more than 7,000 companies. “Using a 163-item ontology, Osano’s algorithms evaluate details like transparency, whether a company collects unnecessary data, how long they store data, and how easy they make it to remove any data.” Today, more than 750,000 companies use Osano’s free tool, generating more than 2 billion consents per month.
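Osano’s actual 163-item ontology is not public, but the general approach it describes, weighted yes/no criteria rolled up into a score, can be sketched. Everything below, including the criteria names and weights, is a hypothetical simplification for illustration only:

```python
# Hypothetical, simplified stand-in for a checklist-based privacy ontology.
# Each criterion is a yes/no question with a weight reflecting its importance.
PRIVACY_CRITERIA = {
    "publishes_privacy_policy": 2,       # transparency
    "collects_only_necessary_data": 3,   # data minimization
    "states_retention_period": 2,        # how long data is stored
    "offers_data_deletion": 3,           # how easy removal is
}

def privacy_score(practices: dict) -> float:
    """Return a 0-100 score: weighted share of criteria a company satisfies."""
    total = sum(PRIVACY_CRITERIA.values())
    earned = sum(w for c, w in PRIVACY_CRITERIA.items() if practices.get(c))
    return round(100 * earned / total, 1)

score = privacy_score({"publishes_privacy_policy": True,
                       "offers_data_deletion": True})
# earned weight 2 + 3 = 5 out of 10 → 50.0
```

A real system would evaluate far more criteria, with evidence gathered automatically, but the roll-up logic is the same: a company’s score rises only by satisfying concrete, auditable practices.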

Whether developing free tools, integrating humanistic and legal rights into algorithmic design, or working to find better diagnostics and repair, we are in a game of catch-up. The very tools that have created this environment of deceit and control can hopefully bring about change, tough ethical/legal sanctions, and a future that is not clouded by fear or deception.



Nancy K. Herther is a research consultant and writer who recently retired from a 30-year career in academic libraries.

 

Comments? Contact the editors at editors@onlinesearcher.net
