
The AI Risk Repository, Filter Bubbles, and Isaac Asimov

“A man? How could a man be a teacher?” (Asimov, 1954)

In Isaac Asimov’s “The Fun They Had,” set in the year 2155, Margie and Tommy stumble upon an old book revealing a time when human teachers, not machines, conducted education. The discovery prompts them to reflect on their own experiences with mechanical teachers. As AI increasingly integrates into modern education, Asimov’s science-fiction narrative offers a timely reflection on the risks AI poses to our current systems. Margie’s skepticism toward human teachers, rooted in her conditioned trust in mechanical systems, mirrors concerns educators face today. Just as Margie unquestioningly accepts her mechanical teacher’s authority, we too might soon uncritically embrace AI.

A clear understanding of AI’s risks is essential for educators and institutions alike. The AI Risk Repository, a living database of over 700 AI risks categorized by cause and risk domain, is an excellent new toolkit for AI risk assessment. Slattery et al. (2024) compiled it through a systematic evidence review, classifying the risks across various domains. These risks are not just theoretical; they present challenges that can impact individuals, organizations, and society as a whole. The AI Risk Repository has three parts (a short query sketch follows the list):


  • A. 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • B. The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • C. The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”) (Slattery et al., 2024).
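Because the Repository is distributed as a downloadable spreadsheet, instructors and students can explore it programmatically. The following is a minimal Python sketch, assuming a hypothetical CSV export with Domain and Subdomain columns; the actual file and column names may differ:

```python
import csv
from collections import Counter

# Hypothetical file and column names; adjust to match the actual
# CSV export of the AI Risk Repository (Slattery et al., 2024).
CSV_PATH = "ai_risk_repository.csv"

def risks_in_domain(path: str, domain: str) -> list[dict]:
    """Return every risk entry whose Domain column matches `domain`."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f)
                if (row.get("Domain") or "").strip().lower() == domain.lower()]

misinfo = risks_in_domain(CSV_PATH, "Misinformation")
print(f"{len(misinfo)} risks in the Misinformation domain")

# Tally subdomains to see how the domain breaks down.
for sub, n in Counter((r.get("Subdomain") or "") for r in misinfo).most_common():
    print(f"  {sub or '(unlabeled)'}: {n}")
```

A filter like this could anchor a class exercise: each student group takes one domain, tallies its subdomains, and traces a handful of entries back to the frameworks they were extracted from.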

As AI continues to permeate educational systems, institutions must equip themselves with this knowledge. The AI Risk Repository can enhance curricula, inform research, and ensure that institutions, faculty, and students are prepared to navigate the complex landscape of AI risks by:

1. Preparing Future Leaders

Educational institutions have a responsibility to prepare students not just for the opportunities AI presents but also for its challenges. The AI Risk Repository offers a structured way to introduce students to the ethical, social, and technical risks associated with AI. By integrating these resources into the curriculum, educators can ensure that future leaders are equipped to handle the ethical dilemmas and societal impacts of AI technologies. The Repository’s extensive categorization, spanning more than 700 risks across domains and subdomains, underscores the importance of educating faculty and students on the diverse and complex nature of AI risks, enabling them to make informed decisions.

2. Enhancing Curriculum and Research

The AI Risk Repository can also enhance academic curricula. By incorporating real-world examples and case studies from the database, educators can enrich the learning experience and encourage students to engage in research that addresses these risks. Because the database is a living resource, it continuously evolves, providing up-to-date information that reflects the latest developments in AI risk research. For instance, students can use the database to explore specific risk categories, such as the “Diffusion of Responsibility” identified in certain AI systems. This can lead to research projects that investigate how these risks manifest in different contexts and what mitigation strategies might be effective.

3. Developing a Common Frame of Reference

One of the key benefits of the AI Risk Repository is that it offers a common frame of reference for understanding AI risks. This shared understanding is crucial for fostering collaboration among various stakeholders, including researchers, developers, policymakers, and educators. By adopting this common framework, educational institutions can contribute to a more coherent and unified approach to addressing AI risks on a global scale.

AI’s Hidden Risks: Privacy, Control, and the Importance of Ethical Oversight

“He smiled at her and gave her an apple, then took the teacher apart.” (Asimov, 1954)

Just as the Inspector examines and adjusts Margie’s educational experience without her involvement, AI systems often operate in ways that users do not fully understand or control. These systems require copious amounts of personal data to function effectively, which can lead to significant privacy risks. For instance, AI algorithms can infer sensitive details about users based on their digital footprints, sometimes without the users’ explicit consent. Moreover, the centralization and processing of such data can result in security vulnerabilities, where unauthorized access could lead to identity theft, exposure of private information, or loss of intellectual property. This scenario underscores the importance of implementing robust security measures and transparent practices in AI deployment to protect users’ privacy.

The taxonomy of AI risks (Slattery et al., 2024, Table 6, p. 33) provides a comprehensive framework for understanding the dangers of artificial intelligence across seven domains:

  • Discrimination and toxicity: AI systems may unfairly discriminate against users or expose them to harmful content.
  • Privacy and security: AI might compromise sensitive information or be exploited through vulnerabilities.
  • Misinformation: AI can generate and spread false or misleading content.
  • Malicious actors and misuse: bad actors can weaponize AI, raising ethical and societal concerns.
  • Human-computer interaction: overreliance on AI risks the loss of human autonomy.
  • Socioeconomic and environmental harms: the centralization of power and increased inequality can exacerbate existing societal challenges.
  • AI system safety: AI may pursue goals misaligned with human values.

Together, these multifaceted risks emphasize the need for robust governance and ethical oversight to mitigate them effectively.

On the Disposable Nature of Knowledge

“Gee,” said Tommy, “what a waste. When you’re through with the book, you just throw it away, I guess.” (Asimov, 1954)

Tommy’s comment reflects a perception of the disposable nature of physical books in a world increasingly dominated by digital media. It can also serve as a metaphor for broader concerns about how information is consumed and discarded in the digital age, particularly in the context of AI-generated content. Traditionally, books are valued for their permanence; they are physical objects that contain curated and verified knowledge, often subjected to rigorous editorial processes. As society moves toward digital and AI-generated information, that permanence is at risk. Digital content, much like Tommy’s discarded book, can be easily manipulated, altered, or even erased. This transient nature of digital information introduces significant challenges, particularly for maintaining the integrity of truth.

AI, while offering vast potential for generating and disseminating information, also introduces the risk of spreading misinformation. Algorithms designed to maximize engagement can prioritize sensational or misleading content, reinforcing the consumption of information that aligns with existing beliefs. This phenomenon, the creation of “filter bubbles,” can leave individuals exposed only to information that confirms their biases, further isolating them from a shared reality. Author and internet activist Eli Pariser (2011) argues that filter bubbles emerge from the personalization of online content, which intellectually isolates users and limits the diversity of the information they encounter. Tommy’s remark can therefore be read as an allegory for contemporary issues surrounding AI and information dissemination. Just as he laments the perceived wastefulness of discarding a book after use, we must also consider the potential waste, or even harm, that arises when AI-generated misinformation replaces objective truth. In both cases, there is a danger of undervaluing the permanence and reliability of knowledge, leading to a fragmented society where shared understanding is undermined.
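The narrowing dynamic Pariser describes can be illustrated with a toy simulation. This is a deliberately simplified sketch, not any real recommender system: the ranker simply scores candidate items by how often the user has already clicked on their topic, and the simulated user always clicks the top result.

```python
import random

# Toy "filter bubble" simulation: a ranker that favors topics the user
# has already clicked on quickly collapses the feed onto one topic.
# All names and rules here are illustrative assumptions.
random.seed(0)
TOPICS = ["politics", "science", "sports", "arts", "travel"]
history = ["politics"]  # the user's first click

for _ in range(20):
    feed = [random.choice(TOPICS) for _ in range(10)]
    # Engagement-maximizing heuristic: rank familiar topics highest.
    feed.sort(key=lambda topic: -history.count(topic))
    history.append(feed[0])  # the simulated user clicks the top item

print({t: history.count(t) for t in TOPICS})
# Expected: "politics" dominates; diversity of exposure collapses.
```

Even though the candidate items are drawn uniformly at random, the feedback loop alone is enough to homogenize what the user sees after a handful of rounds.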

Tommy’s simple observation about books opens a broader discussion about the implications of AI for the integrity of information. As AI continues to shape our information landscape, it is crucial to ensure that the content it generates and disseminates upholds the same standards of accuracy and reliability traditionally associated with books. Otherwise, much like a discarded book, objective truth may become something easily thrown away, with profound consequences for our collective understanding of fact versus “alternative facts.”

The Environmental Impact of AI

AI’s rapid development also raises significant concerns about its environmental impact, particularly the carbon footprint of training large models. Strubell et al. (2019) estimated that training a single AI model can emit as much carbon as five cars over their lifetimes; in the most extreme case they studied, a Transformer trained with neural architecture search, the process produced roughly 284 tonnes of CO2 emissions. This starkly illustrates the potential environmental cost of developing advanced AI systems.

The energy consumption of data centers, which store the vast amounts of data AI requires, is also significant. According to Jones (2018), data centers consume about 200 terawatt-hours (TWh) per year, roughly 1% of global electricity use. This consumption is expected to grow, with some estimates suggesting that AI could account for as much as 10% of the world’s electricity use by 2025. These figures underscore the importance of developing more energy-efficient AI models and prioritizing green AI strategies that reduce the environmental impact of AI technologies.
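The arithmetic behind such estimates is straightforward: emissions are roughly the energy consumed multiplied by the grid’s carbon intensity. Below is a minimal sketch with assumed inputs chosen only to mirror the scale of the Strubell et al. (2019) figures; none of the constants are measured values.

```python
# Back-of-envelope CO2 estimate: emissions = energy drawn x grid carbon intensity.
# All constants are illustrative assumptions, not measurements.

energy_kwh = 656_000        # assumed electricity for one large training run (kWh)
kg_co2_per_kwh = 0.433      # assumed average grid carbon intensity (kg CO2/kWh)
car_lifetime_tonnes = 57    # assumed lifetime emissions of one car, fuel included

tonnes = energy_kwh * kg_co2_per_kwh / 1000
print(f"~{tonnes:.0f} t CO2, about {tonnes / car_lifetime_tonnes:.1f} car lifetimes")
# -> ~284 t CO2, about 5.0 car lifetimes
```

The same two-factor estimate also shows where mitigation leverage lies: halving either the energy drawn (more efficient models) or the grid’s carbon intensity (greener power) halves the estimated emissions.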

Human-Computer Interaction

“Margie was disappointed. She had been hoping they would take the teacher away altogether.” (Asimov, 1954)

Margie’s wish for the removal of her mechanical teacher foreshadows the limitations and potential failures of AI systems. Just as Margie grows dissatisfied with her robot teacher, AI can be unreliable, misaligned with human values, and lacking the transparency necessary for trust. Perhaps one day soon we too will be with Margie: “Margie was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had.” (Asimov, 1954)

References

  • Asimov, I. (1954). The Fun They Had. Retrieved from https://www.bartlettschools.org/pdf/TheFunTheyHad.pdf
  • Jones, N. (2018). How to stop data centres from gobbling up the world’s electricity. Nature, 561(7722), 163-166. https://doi.org/10.1038/d41586-018-06610-y
  • Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.
  • Slattery, P., Saeri, A. K., Grundy, E. A. C., Graham, J., Noetel, M., Uuk, R., Dao, J., Pour, S., Casper, S., & Thompson, N. (2024). A systematic evidence review and common frame of reference for the risks from artificial intelligence. http://doi.org/10.13140/RG.2.2.28850.00968
  • Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243.

This article has been produced by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics – Insights at the Intersection of AI and Education. Get in touch via LinkedIn.
