
On Narwhals, Unicorns, SB 1047 and Big Tech’s Messiah Savior Complex


This column explores the symbolic meaning of unicorns, narwhals, Big Tech’s Messiah Savior Complex (a phrase coined by the author), and the pressing need for heightened situational awareness coupled with the ever more elusive common sense. This article was written exclusively for Stankevicius.

The Californian Frontier Artificial Intelligence Models Act

On September 29th, 2024, Governor Gavin Newsom vetoed SB 1047, called ‘Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,’ a bill introduced by State Senator Scott Wiener to regulate the development of artificial intelligence (AI). SB 1047 aimed to impose rules on AI developers to mitigate potential risks.


In Chapter 22.6 of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, AI is defined as follows: “(b) ‘Artificial intelligence’ means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” Governor Newsom’s veto was a pivotal moment, not only for Silicon Valley but also for the future of AI safety, ethics, and common sense.

But what do this article and this bill have to do with narwhals, unicorns, and ancient mythologies? As it turns out, more than you might think. Unicorns, the mythical creatures known for their single, spiraling horn, have long been a symbol of purity, power, and wonder. In contrast, narwhals, often referred to as the “unicorns of the sea,” are very real, though their mysterious tusks have stirred the imaginations of many for centuries. As AI technologies surge forward, the future remains as uncertain and enigmatic as the myths surrounding narwhals and unicorns – one real, the other imagined. Narwhals and unicorns serve as apt metaphors for how a lack of common sense can blur the lines between reality, fantasy, and magical thinking.

Big Tech’s Messiah Savior Complex and Robber Barons

Just as the narwhal and unicorn represent different forms of mythical and real-world power, the debate over AI oscillates between Big Tech’s Messiah Savior Complex, dystopian fears, and the need to rein in Silicon Valley’s modern-day ‘Robber Barons.’ The author uses the term Big Tech’s Messiah Savior Complex to highlight the prevailing big tech narrative that technology, particularly advanced forms like AGI, will single-handedly “save” humanity from its greatest challenges – whether those be climate change, health, longevity (but only for the chosen few), poverty, or global inequality. This mindset treats technology as a kind of messianic guide, entrusted with delivering utopian outcomes. Yet there seems little consideration for the complexities of human society, ethics, or unintended consequences such as unrestrained water and power use. It reflects an overly optimistic, almost religious faith in technological solutions, often sidelining critical concerns about governance, accountability, and social context.

The term ‘Robber Baron’ conjures the image of powerful figures, much like the titans of Silicon Valley today. According to Britannica Money, the term can be understood in two ways. One interpretation is a “pejorative term for one of the powerful 19th-century American industrialists and financiers who made fortunes by monopolizing huge industries through the formation of trusts, engaging in unethical business practices, exploiting workers, and paying little heed to their customers or competition.” Alternatively, “those who credit the explosive growth of American capitalism during this period to the indefatigable pursuit of success and material wealth are likely to celebrate these entrepreneurial tycoons as ‘captains of industry.’” The author is sure that some names on either side of this coin come to her readers’ minds.

The Basics of SB 1047

SB 1047 outlined specific safety protocols for developers, including the ability to shut down AI models and mandatory safety documentation accessible to authorities, aiming to prevent AI from causing “critical harms” by making companies responsible for implementing safety protocols. Think of this as a Talos defense. Talos, in Greek mythology, was a giant bronze automaton that served as a guardian of the island of Crete. If Talos’ vein, which carried his life force (usually described as molten metal or ichor), was damaged, it would result in his death. In some versions of the myth, the vein was sealed by a single bronze nail or plug, and if this was removed or damaged, the ichor would leak out, causing Talos to lose his life force and eventually collapse.

SB 1047 would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to, among other things, require that “…a developer, before beginning to initially train a covered model, as defined, comply with various requirements, including implementing the capability to promptly enact a full shutdown, as defined, and implement a written and separate safety and security protocol, as specified.” (SB 1047, p. 3)

The idea behind this bill is both noble and necessary: to avoid catastrophic events that could stem from AI misuse, such as cybersecurity breaches or even the creation of autonomous weapons. The bill targeted the largest AI models, specifically those that cost at least $100 million and used a massive amount of computing power.

SB 1047: Chapter 22.6. Safe and Secure Innovation for Frontier Artificial Intelligence Models states:

(e)  (1)  “Covered model” means either of the following: 

(A)  Before January 1, 2027, “covered model” means either of  the following: 

(i)  An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer. 

(ii)  An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.
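To make the quoted thresholds concrete, the pre-2027 “covered model” definition can be sketched as a simple check. This is the author’s editorial illustration, not part of the bill; the function name `is_covered_model` and its parameters are assumptions, and the bill’s actual cost assessment (average market prices of cloud compute, “as reasonably assessed by the developer”) involves judgment that no formula captures.

```python
# Illustrative sketch of SB 1047's pre-2027 "covered model" thresholds,
# per subdivisions (e)(1)(A)(i) and (ii) quoted above. The function name
# and interface are hypothetical, introduced only for illustration.

def is_covered_model(training_flops: float, training_cost_usd: float,
                     fine_tuned_from_covered: bool = False) -> bool:
    """Return True if a model meets either pre-2027 threshold in (e)(1)(A)."""
    if fine_tuned_from_covered:
        # (ii): fine-tuning a covered model with >= 3 x 10^25 operations
        # and a compute cost exceeding $10 million
        return training_flops >= 3e25 and training_cost_usd > 10_000_000
    # (i): training with > 10^26 operations and a compute cost
    # exceeding $100 million
    return training_flops > 1e26 and training_cost_usd > 100_000_000

print(is_covered_model(2e26, 150_000_000))       # a frontier-scale training run
print(is_covered_model(5e25, 20_000_000, True))  # a large fine-tune of a covered model
print(is_covered_model(1e24, 5_000_000))         # a small model, not covered
```

Note how both prongs pair a compute threshold with a dollar threshold: a model must clear both to be “covered,” which is why critics argued the bill targeted only the largest frontier developers.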

Big Tech Concerns

Influential tech leaders voiced concerns over the bill. For example, Dr. Fei-Fei Li, a prominent AI researcher, warned that the bill’s strict requirements could stifle innovation. As featured in Fortune, Dr. Li, ‘the Godmother of AI,’ argued that California’s well-intended AI bill would harm the U.S. AI ecosystem: “First, SB-1047 will unduly punish developers and stifle innovation. In the event of misuse of an AI model, SB-1047 holds liable the party responsible and the original developer of that model. It is impossible for each AI developer – particularly budding coders and entrepreneurs – to predict every possible use of their model. SB-1047 will force developers to pull back and act defensively – precisely what we’re trying to avoid.”

One might pause for a second to reread her statement that “It is impossible for each AI developer – particularly budding coders and entrepreneurs – to predict every possible use of their model,” and ponder on those words.

Leopold Aschenbrenner, Situational Awareness and the AI Race: SB 1047’s Missed Opportunity

Looking at California’s SB 1047 debacle, the concept of situational awareness takes on new significance. Although the bill was designed to regulate the development of AI systems and ensure safety protocols for large-scale models, the debate around it failed to grasp the broader implications of situational awareness as articulated by thinkers like Leopold Aschenbrenner, formerly of the Superalignment team at OpenAI.

In Aschenbrenner’s view, expressed in his 165-page paper “Situational Awareness: The Decade Ahead,” situational awareness goes far beyond mere regulation – it’s about maintaining a vigilant understanding of the broader context in which AI is developing. This includes recognizing the importance of algorithmic secrets and the very real threat of industrial espionage, particularly from adversarial global actors. As Aschenbrenner points out, the lax security in American AI labs is akin to “Swiss cheese” – full of holes that make it trivially easy for adversaries to steal crucial algorithmic breakthroughs. With thousands of people having access to cutting-edge AI developments, the risk is not just economic but existential.

Aschenbrenner also stresses that the next few years are pivotal. American AI labs are sitting on algorithmic advancements that could be the key to building systems far beyond human intelligence. But without proper situational awareness – without understanding the significance of securing these breakthroughs – the U.S. risks losing them to espionage.

The failure of SB 1047 adoption also highlights a larger issue: while regulatory efforts are focused on preventing AI from causing direct harm, there is insufficient attention to the security of the very technologies that will shape the global AI landscape. Aschenbrenner’s concept of situational awareness emphasizes the need for “supersecurity,” or government-level security measures, to protect these algorithmic secrets. He suggests that only with the help of institutions like the NSA or government-backed infrastructure can the U.S. adequately safeguard the developments that will determine its future dominance in AI. Without situational awareness, we risk underestimating the importance of safeguarding not just AI applications, but the very algorithms and data that power them. In failing to prioritize these concerns, we leave the door wide open for adversaries to exploit our technological advancements, effectively neutralizing any regulatory safeguards we might put in place.

Big Tech and the Power of Influence

The influence of big tech in this debate cannot be overstated. Just as the mythical unicorn was seen as a rare and powerful creature, so too are AI giants like Google, Microsoft, and Meta. These companies are not just creators of technology; they are gatekeepers, wielding enormous influence over how AI evolves. The author wonders if they have read Edmund Burke.

Society is indeed a contract. … It is to be looked on with other reverence, because it is not a partnership in things subservient only to the gross animal existence of a temporary and perishable nature. It is a partnership in all science; a partnership in all art; a partnership in every virtue and in all perfection. As the ends of such a partnership cannot be obtained in many generations, it becomes a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born. Each contract of each particular state is but a clause in the great primeval contract of eternal society, linking the lower with the higher natures, connecting the visible and invisible world, according to a fixed compact sanctioned by the inviolable oath which holds all physical and all moral natures, each in their appointed place.

Edmund Burke, Reflections on the Revolution in France (p. 108)

On Vikings, Emperors and Big Tech’s Magical Thinking

The story of narwhal tusks being traded in Europe as “unicorn horns” serves as a reminder of how perception can be manipulated. Vikings harvested narwhal tusks off the coast of Greenland, trading them as unicorn horns, and the rarity and mystery surrounding them fueled myths and legends. These tusks were thought to cure illnesses and protect monarchs from poison, with figures like Ivan the Terrible calling for his narwhal horn staff on his deathbed, hoping for salvation. (Author’s note: How many of my readers think that Ivan’s magical thinking worked?) Similarly, Danish kings built thrones from these tusks, and the English sailor and privateer Martin Frobisher gifted Queen Elizabeth I narwhal tusks valuable enough to purchase a castle. Even Austria’s Kaiser Karl V used them to pay off national debt.

Just as the Vikings shaped a profit-oriented narrative around their “magical unicorns,” big tech companies are shaping their narrative around AI. These tusks, once valued more highly than gold for their perceived magical powers, parallel today’s naïve expectations and buy-in to Big Tech’s Messiah Savior Complex, placing unrealistic expectations on AI/AGI innovation to resolve deeply entrenched issues without considering the nuances of implementation or the societal and ecological shifts required for meaningful, sustainable change.

This article was written by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics: Insights at the Intersection of AI and Education. Connect with her on LinkedIn.

