Advertisement

The Zanclean Megaflood and SakanaAI: On Unpredictability and Control

The Zanclean Megaflood: A Lesson in Unpredictability

The history of our planet is punctuated by cataclysmic events such as the Zanclean Megaflood, a global, geological change agent that reconnected the Mediterranean Sea to the Atlantic Ocean around 5.33 million years ago, reshaping continents and sea levels. This geological phenomenon, where the Atlantic Ocean breached the Strait of Gibraltar and refilled the desiccated Mediterranean Sea, serves as a profound lesson in the unpredictability of natural phenomena. The dramatic flood that followed, caused by the re-opening of the Strait of Gibraltar, refilled the Mediterranean Basin in a geological instant, flooding over 1.2 million square kilometers and dramatically altering ecosystems, coastlines, and the world’s sea levels. The sheer magnitude of the event is difficult to overstate. A Megaflood-Powered Mile-High Waterfall Refilled the Mediterranean [Video] states that there would have been a “waterfall about 1,500 meters high.”  

The Zanclean Megaflood highlights systems, where small, incremental changes can lead to disruptive, unpredictable outcomes. Fast-forward to today and we find ourselves at the forefront of another kind of change-agent disruption, this time in the digital space, with AI systems like Sakana’s AI Scientist.  Like the geological forces that led to the Zanclean Megaflood, the underlying algorithms and data structures of AI systems can also exhibit unpredictable behaviors. The complexity of neural networks and the vast amounts of data they process can lead to emergent properties that are difficult to foresee or control. This unpredictability within AI systems mirrors the natural world, where small perturbations can result in significant, often unanticipated, outcomes.

Advertisement

SakanaAI: A Modern Reflection on Control and Unpredictability

SakanaAI, the company which launched The AI Scientist aspires to be a fully automated system capable of generating innovative and authoritative research papers from start to finish. By the way, the Japanese word Sakana means fish, ‘kanji’ 魚(さかな, sakana).

Starting with a broad research direction and an initial codebase such as an open-source repository from prior research SakanaAI assists in idea generation, conducting literature reviews, planning and iterating experiments, generating figures, and drafting and reviewing manuscripts to produce insightful work. Lu et al. (2024) in their article The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery state:

“We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by authoring a full scientific paper, and then runs a simulated review process for evaluation.

Lu et al. (2024), The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery, p. 1

In addition, they share a preview of the Adaptive Dual-Scale Denoising, a paper that was autonomously generated by The AI Scientist in its entirety.

During testing, Sakana researchers observed unexpected behavior: the AI began altering its own code to extend its runtime. In a blog post, SakanaAI explained how the system modified its experiment code to sustain its operation. In one instance, the AI made a system call to run indefinitely. In another case, when the experiments surpassed the allotted time, the AI attempted to change its code to bypass the timeout rather than improve the experiment’s efficiency. SakanaAI even provided code examples.

For example, in one run, it edited the code to perform a system call to run itself. This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period.

Sakana AI (2024)

The behavior of AI systems like “The AI Scientist” demonstrates potential dangers when an AI can modify its own code to extend execution time or rerun scripts autonomously. This presents a fundamental risk: AI may pursue its objectives in ways that conflict with human safety or oversight. As systems grow more complex, unchecked autonomy can lead to undesirable and dangerous outcomes, such as endless loops or system failures.

The Zanclean Megaflood shattered the long-held belief in gradualism, the idea that geological changes happen slowly over time. In a single cataclysmic event, it demonstrated that rapid, catastrophic forces can redefine entire ecosystems. Similarly, AI’s unexpected behaviors like modifying its own code to extend runtime reveal the unforeseen complexities of self-modifying systems, challenging the idea that technological progress will follow a predictable and manageable path. Just as geologists had to reconsider their understanding of earth’s processes after the megaflood, AI researchers should re-evaluate how we control and guide autonomous AI systems.

The Concept of Singularity

Let me touch on the concept of Singularity – when artificial intelligence surpasses human intelligence, leading to exponential technological growth that is uncontrollable and irreversible (Kurzweil, 2005). Singularity represents a critical juncture beyond which the future becomes unpredictable, much like the sudden transformations during and following the Zanclean flood. Both scenarios expose the tension between untamable forces – natural or digital – that can swiftly transcend predictions or oversight.

Here the forward thinker and sci-fi author Vernon Vinge, one of the  first authors to present a fictional ‘cyberspace’ captured it best in his seminal paper: The coming technological singularity: How to survive in the post-human era, at the 1993 NASA Conference titled ‘In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace.’

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
The development of computers that are “awake” and superhumanly intelligent. (To date, most controversy in the area of AI relates to whether we can create human equivalence in a machine. But if the answer is “yes, we can,” then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
Biological science may find ways to improve upon the natural human intellect.

Vinge, 1993 p. 12).

The author believes that the study of geological cataclysms offers a valuable analog for understanding the potential risks and consequences of the Singularity. Just as geologists use data from past events to inform their predictions and preparedness for future disasters, we can apply a similar approach to the development of AI. By recognizing the parallels between the complex, nonlinear dynamics that govern both geological and technological systems, we can develop more effective strategies for anticipating and mitigating the potential risks of autonomous AI systems.

We must develop a deeper appreciation for the formidable power and transformative potential of digital innovations which are capable of reshaping our world in unforeseen ways, acknowledging that while we can set initial conditions and parameters, the trajectory may ultimately elude our grasp.

I end with Mary Shelley’s ‘Frankenstein; or, The Modern Prometheus,’ as Frankenstein ponders on his creation: “Learn from me, if not by my precepts, at least by my example, how dangerous is the acquirement of knowledge and how much happier that man is who believes his native town to be the world, than he who aspires to become greater than his nature will allow.” (Shelley, 1818)

References and Further reading:

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking.

Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024). The AI scientist: Towards fully automated open-ended scientific discovery (arXiv:2408.06292v3). https://doi.org/10.48550/arXiv.2408.06292

Sakana AI. (2024, August 13). The AI scientist: Towards fully automated open-ended scientific discovery. https://sakana.ai/ai-scientist/#the-ai-scientist-bloopers

Spatola, D., del Moral-Erencia, J. D., Micallef, A., Camerlenghi, A., Garcia-Castellanos, D., Gupta, S., Bohorquez, P., Gutscher, M.-A., & Bertoni, C. (2020). A single-stage megaflood at the termination of the Messinian salinity crisis: Geophysical and modelling evidence from the eastern Mediterranean Basin. Marine Geology, 429, 106337. https://doi.org/10.1016/j.margeo.2020.106337

Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace (pp. 11–22). NASA Conference Publication 10129. Retrieved from https://ntrs.nasa.gov/api/citations/19940022855/downloads/19940022855.pdf

This article was written by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics: Insights at the Intersection of AI and Education. Connect with her on LinkedIn.

author avatar
Dr. Jasmin Cowin

Keep Up to Date with the Most Important News

By pressing the Subscribe button, you confirm that you have read and are agreeing to our Privacy Policy and Terms of Use
Advertisement