In a previous article for Stankevicius, “DeepSeek, Italy’s Privacy Guarantor & OpenAI’s Allegations,” the author raised some critical questions: Was everything developed as it should be, following the expected trajectory of AI innovation? Or is DeepSeek’s model simply a distilled version of U.S. technological advancements? Moreover, with its rapid deployment and open-source approach, how does DeepSeek manage user data, and what are the broader implications for data privacy, security, and potential surveillance? If it seems too good to be true, is it really too good to be true? These questions form the backbone of our ongoing discussion about the future of technology and national leadership in artificial intelligence. Today, as these questions continue to fuel the debate, a new battle line has emerged in the high-stakes arena of global AI innovation. OpenAI has taken an assertive stance by labeling China’s DeepSeek as both “state-subsidized” and “state-controlled.”
OpenAI’s policy proposal – submitted under the Trump administration’s “AI Action Plan” – urges U.S. officials to ban AI models linked to DeepSeek and similar operations allegedly under the sway of the People’s Republic of China (PRC). At the heart of this controversy is the quest for secure, unbiased artificial intelligence that can safeguard national interests while propelling technological progress. With President Trump’s Executive Order 14179 stressing the removal of barriers to American AI leadership, the stage is set for an intense clash over innovation, security, and geopolitical influence. As the U.S. pushes to tighten its technological supply chain and curb potential risks of espionage and ideological manipulation, this proposal challenges the global community to rethink the delicate balance between open innovation and safeguarding national security.
In Their Own Words
Let’s read: President Trump’s Executive Order 14179 of January 23, 2025, which states:
Removing Barriers to American Leadership in Artificial Intelligence
By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:
Section 1 – Purpose.
“The United States has long been at the forefront of artificial intelligence (AI) innovation, driven by the strength of our free markets, world-class research institutions, and entrepreneurial spirit. To maintain this leadership, we must develop AI systems that are free from ideological bias or engineered social agendas. With the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans. This order revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence….”
On March 13, 2025, OpenAI sent the following proposal to Faisal D’Souza, NCO, Office of Science and Technology Policy, 2415 Eisenhower Avenue, Alexandria, VA 22314, from Christopher Lehane, OpenAI, 1455 3rd Street, San Francisco, CA 9415:
“1. Preemption: Ensuring Freedom to Innovate
We propose creating a tightly-scoped framework for voluntary partnership between the federal government and the private sector to protect and strengthen American national security. This framework would extend the tradition of government receiving learnings and access, where appropriate, in exchange for providing the private sector relief from the 781 and counting proposed AI-related bills already introduced this year in US states. This patchwork of regulations risks bogging down innovation and, in the case of AI, undermining America’s leadership position. Overseen by the US Department of Commerce and in coordination with the AI Czar, perhaps by reimagining the US AI Safety Institute, this effort would provide domestic AI companies with a single, efficient “front door” to the federal government that would coordinate expertise across the entire national security and economic competitiveness communities.” Lehane, C. (2025, March 13). Notice—Request for information on the development of an artificial intelligence (AI) action plan. OpenAI.
The Security Debate vs DeepSeek
Kyle Wiggers, in his TechCrunch article “DeepSeek claims its ‘reasoning’ model beats OpenAI’s o1 on certain benchmarks,” discusses DeepSeek’s claims about its model’s performance. OpenAI argues that DeepSeek’s models – including its R1 “reasoning” model – are inherently insecure because Chinese law requires companies like DeepSeek to comply with governmental data requests. According to this view, using technology developed under such constraints could expose users to privacy breaches and even intellectual property theft. In an era where virtually every nation grapples with the balance between innovation and security, these measures are being proposed as a means to mitigate significant risks.
Ambiguities and Real-World Implications
One point of contention is what OpenAI means by “models.” Whether the critique targets DeepSeek’s Application Programming Interface (API), its openly available models, or both remains unclear. Companies like Microsoft, Perplexity, and Amazon are hosting DeepSeek’s open models on their own secure infrastructures. According to Asha Sharma, Corporate Vice President, Microsoft AI Platform, DeepSeek R1 is now available on Azure AI Foundry and GitHub.
In light of DeepSeek’s origins as a Chinese model, should organizations critically assess its potential for data privacy violations, heightened by China’s regulatory environment, which may not align with US standards for data protection? For example, earlier this year, OpenAI told the Financial Times that it found evidence linking DeepSeek to the use of distillation – a common technique in which developers train a smaller model to mimic the outputs of a larger, more capable one. However, the new allegations extend beyond intellectual property disputes, suggesting instead that DeepSeek might be operating under the direct influence of the PRC. Given that DeepSeek originated as a spin-off from a hedge fund, High-Flyer, and considering its rising profile in Chinese tech circles, the proposal fuels ongoing debates about technology transfer and national security.
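For readers unfamiliar with the technique at the center of the distillation allegation, the core idea can be illustrated in a few lines. The sketch below is a generic, minimal illustration of the standard distillation loss (it does not reflect any specific lab's training pipeline): a “student” model is penalized for diverging from a “teacher” model’s temperature-softened output distribution, so the student learns to imitate the teacher’s behavior rather than learning solely from raw labeled data.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, softened by temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened distributions.

    A temperature above 1 softens both distributions, exposing the teacher's
    relative preferences among classes ("dark knowledge"), which is what the
    student is trained to imitate.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose outputs track the teacher's incurs a lower loss than one
# whose outputs diverge, so minimizing this loss pulls the student toward
# the teacher's behavior.
teacher = [4.0, 1.0, 0.2]
close_student = [3.5, 1.2, 0.1]
far_student = [0.1, 3.0, 2.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice this loss is minimized over many training examples, often combined with a standard supervised loss; the point for the policy debate is simply that distillation lets one model absorb another’s capabilities through its outputs alone, without access to its weights or training data.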
Looking Ahead: Balancing Innovation and Security
As AI and policy experts debate these issues, OpenAI’s proposal underscores a broader dilemma: how to harness AI’s paradigm-shifting power while ensuring robust security standards. For international stakeholders – from policymakers to tech innovators – the current fluidity in AI standards and policy frameworks serves as a reminder that safeguarding data and intellectual property in the digital age is a collective challenge.
Leadership, Security and Competitiveness
Ultimately, the conversation surrounding AI is not just about security; it also encompasses US leadership and competitiveness. In a world where computational power is as much a strategic asset as it is a driver of progress, questions about state influence, regulatory oversight, and the process of open innovation have never been more urgent.
Whether this latest proposal will reshape rules or spark a wider reassessment of international tech collaborations remains to be seen. What is clear is that the dialogue surrounding AI, sovereignty, and security is entering a new and highly charged phase, best captured in OpenAI’s words: “Applying the fair use doctrine to AI is not only a matter of American competitiveness – it’s a matter of national security.” Lehane, C. (2025, March 13). Notice—Request for information on the development of an artificial intelligence (AI) action plan. OpenAI.
Dr. Jasmin (Bey) Cowin, a columnist for Stankevicius, employs the ethical framework of Nicomachean Ethics to examine how AI and emerging technologies shape human potential. Her analysis explores the risks and opportunities that arise from tech trends, offering personal perspectives on the interplay between innovation and ethical values. Connect with her on LinkedIn.