Policymakers and stakeholders in the United States face pressing questions about how to structure AI governance to foster innovation while ensuring human interests remain paramount. The Bipartisan House Task Force Report on Artificial Intelligence (AI), published in December 2024, proposes a framework that addresses these complex challenges through a policy-driven approach. The report emphasizes the importance of developing flexible governance structures that can adapt to rapid technological change while maintaining core democratic values. By weaving together broad principles with practical policy recommendations, the framework seeks to create an environment where AI development can flourish while remaining aligned with the public interest. The Task Force's guiding principles reflect the 118th Congress' vision for integrating AI into American society, emphasizing the need for policies that balance technological advancement with ethical considerations. In the report's own words, the Task Force was charged with:
“developing a U.S. vision for AI adoption, innovation, and governance. The AI Task Force gathered information on salient AI issues from domain experts in industry, government, civil society, and academia to provide 66 key findings and 85 recommendations. In summary, this report encapsulates a targeted approach that balances the need to promote vibrant AI innovation while safeguarding Americans from potential harms as we enter an era of widespread adoption of AI.”
(AI Task Force Report, p. 4)
Building on the Task Force’s recommendations, policymakers now face the crucial task of translating these suggestions into concrete governance frameworks. As Congress and other stakeholders move forward in developing actual policies, these complementary insights from the Task Force and academic analysis provide essential guideposts for ensuring AI development serves genuine human needs rather than falling prey to technological solutionism (Cowin, 2024).
Principles vs Rules
A principle is a conceptual foundation, a general guideline that shapes decision-making in flexible, value-driven ways. Principles serve as touchstones, informing how policymakers interpret and respond to the evolution of AI-related challenges. They establish a normative stance, reflecting what is most important – such as safeguarding human welfare, encouraging responsible innovation, or ensuring equitable access to educational resources. In contrast, a rule is narrower and more prescriptive. It provides specific instructions that must be followed, creating clear boundaries and standards for compliance. While a principle might assert the importance of fostering innovation, a rule might specify that companies investing a certain percentage of revenue in AI research receive tax incentives. Thus, principles supply the “why” behind policies, and rules detail the “how.”
The Bipartisan House AI Task Force Report focuses on principles first: it identifies what values and considerations should shape AI governance, so that enforceable rules realizing those values in practice can follow. The following principles, preserved as originally listed, encapsulate the guiding aims put forth by the Task Force:
Principles Established by the Bipartisan House AI Task Force (p. 7):
- 1. Identify AI Issue Novelty: Policymakers should determine whether AI issues are genuinely new, significantly changed by AI, or remain unaffected, ensuring appropriate regulatory responses and modernization.
- 2. Promote AI Innovation: Policies should foster an environment that enables innovation, ensuring the U.S. economy benefits from AI advancements while maintaining global leadership.
- 3. Protect Against AI Risks and Harms: A risk-based governance approach should address both accidental and malicious uses of AI while promoting innovative solutions for the challenges AI creates.
- 4. Empower Government with AI: The federal government should adopt AI to improve services, build public trust, and lead by example in responsible AI use.
- 5. Affirm the Use of a Sectoral Regulatory Structure: Federal agencies should regulate AI within their expertise, leveraging centralized AI resources and interagency coordination to address sector-specific challenges.
- 6. Take an Incremental Approach: Congress should adopt adaptable policies that evolve with AI advancements, emphasizing vigilance and humility in addressing this rapidly changing technology.
- 7. Keep Humans at the Center of AI Policy: Policymakers should prioritize human input, talent development, and the impact of AI on labor markets, focusing on human freedoms and societal benefits.
By articulating these principles, the Task Force does not merely declare broad intentions; it sets the stage for rules that might one day bring those intentions into the tangible space of legislation and practice.
Education and AI
The Task Force’s vision extends deeply into structural questions for education. AI stretches into every corner of society, and education is a critical domain where it can either exacerbate inequality or serve as a transformative equalizer. As the author wrote in her Stankevicius article “AI’s Potemkin Vision: The Seduction of Educational Techno-Solutionism,” “The rapid ascent of artificial intelligence (AI) has captivated Higher Education Institutions (HEIs), heralding promises of unprecedented technological advancements in education.” The report states that “The United States will also need to attract, train, and retain the talent to remain competitive in this technology” (p. viii). Without a robust educational strategy, the workforce of tomorrow will be ill-equipped to navigate an economy increasingly reliant on AI-driven systems.
According to the Task Force, “Training and educating American learners on AI topics will be critical to continuing U.S. leadership in AI technology and to the country’s economic and national security.” (p. 87) Yet, for “… students seeking other AI-relevant courses, such as computer science, only 57% of U.S. public high schools offer them as opposed to countries where they are widely taught, like China and South Korea.” (p. 108) This gap in AI education is not limited to K–12. At the university level, “One key challenge facing the development of domestic AI talent is the lack of access to AI resources, particularly computational power and data, at institutions of higher education” (p. 89).
The Task Force’s Key Findings
Maintaining U.S. leadership in AI development and implementation requires immediate action to address educational and workforce development needs. Success depends on a multi-faceted approach that combines traditional education enhancement with innovative training pathways and public engagement initiatives. The report (p. 126) states that “It is likely that workers will increasingly work with or alongside AI systems, which will require pathways to upskill an AI-enabled workforce. Fostering domestic AI talent and continued U.S. leadership will require significant improvements in basic STEM education and training. Other nations are ahead of the U.S. in K–12 mathematics and science education. Addressing the future needs of the American AI industry will require that AI skills be bolstered by both workforce training and K–12 education.” Furthermore, “K–12 educators need resources to promote AI literacy. To achieve AI literacy and education for students, teachers need knowledge of AI technology, including AI training on using AI in the classroom.” Instead of presuming that a single set of static regulations will suffice indefinitely, policymakers are encouraged to remain vigilant, revisiting and updating policies as AI technology and its societal context change.
The US will need a pipeline of skilled professionals who can adapt and apply their knowledge to real-world problems. Universities must ensure that their graduates possess not only advanced technical skills but also the capacity for critical thinking, ethical reasoning, and interdisciplinary collaboration. These qualities will be particularly valuable as AI permeates traditionally non-technical fields such as law, healthcare, and public policy.
The Task Force offers recommendations to address these educational challenges:
Recommendation: “Invest in K–12 STEM and AI Education and broaden participation. U.S. K–12 STEM education is lagging, especially in comparison to other nations. Continued U.S. leadership in AI will require a targeted focus on improving K–12 STEM and AI education. The federal government has several initiatives to improve STEM education in rural and underserved communities. Congress should continue to support those efforts, especially in areas related to AI.” (p.127)
Recommendation: “Bolster U.S. AI skills by providing needed AI resources. One key challenge facing the advancement of the United States AI talent pipeline is a lack of access to AI resources, particularly computational power and data, at institutions of higher education. One potential solution is the NAIRR Pilot (discussed in the Research, Development, & Standards chapter), which connects U.S. researchers and educators with computational, data, software, model, and training resources.” (p. 127)
The Human in the Loop
The Task Force also highlights the importance of building public trust in AI. Crucial is the recommendation to “Have humans in the loop to actively identify and remedy potential flaws when AI is used in highly consequential decision-making. This is necessary to safeguard the constitutional rights and civil liberties of Americans affected by AI systems, whether those systems are used by government or private entities” (p. 105). The report reiterates that AI should not be used to supplant human values or erode human agency but to augment our capacities, address challenges, and open new avenues of opportunity.
Appendix III: Key Government Policies
- Executive Order 13859: Maintaining American Leadership in Artificial Intelligence
- Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government
- National Artificial Intelligence Initiative Act of 2020, incorporated into the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021 (Public Law 116–283)
- AI in Government Act, incorporated into the Consolidated Appropriations Act, 2021 (Public Law 116–260)
Appendix IV
Unfortunately, Appendix IV: Areas for Future Exploration (p. 268) omits education entirely from its list of “potential areas of future exploration related to AI that a future Congress may consider investigating”:
- 1. Global Development and International Cooperation
- 2. Export Control Policy
- 3. Manufacturing, Supply Chain, and Industrial AI
- 4. Antitrust and Competition Policy
- 5. Critical Infrastructure and Security
- 6. Environmental Impact of AI
- 7. Law Enforcement
- 8. The Intelligence Community
- 9. Transportation
- 10. Election Integrity
- 11. State and Local Governments
- 12. Biotechnology
- 13. Law and the Courts
- 14. AI Adoption Across Sectors, including Entertainment, and Business to Business
- 15. Other Industries
Nicomachean Ethics & Practical Wisdom
The Task Force’s recommendations establish a novel framework for AI governance that echoes Aristotelian wisdom while addressing modern complexities. This “new alignment” demonstrates how ancient philosophical principles – particularly Aristotle’s emphasis on practical wisdom (phronesis) and the golden mean – can illuminate our approach to AI governance. Just as Nicomachean Ethics emphasizes the development of virtue through practice, the Task Force’s framework recognizes that effective AI governance requires both guiding principles and practical experience.
The framework’s innovation lies in its dynamic nature, reminiscent of Aristotle’s understanding that ethical judgment must be contextual rather than purely rule-based. Rather than presenting rigid prescriptions, it creates an adaptive architecture that balances competing interests—much as Aristotle’s doctrine of the mean seeks balance between extremes. By emphasizing education, equity, and ethics, the Task Force demonstrates that AI development, like Aristotelian virtue, must be oriented toward human flourishing (eudaimonia) rather than merely technical achievement. As Aristotle reminds us, “For the things we have to learn before we can do them, we learn by doing them” (Book II, 1103a32-33).
References:
Aristotle. (2009). The Nicomachean Ethics (D. Ross, Trans.). Oxford University Press. (Original work published ca. 350 B.C.E.)
Bipartisan House Task Force on Artificial Intelligence. (2024). AI Task Force Report. https://www.speaker.gov/wp-content/uploads/2024/12/AI-Task-Force-Report-FINAL.pdf
This article was written by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics: Insights at the Intersection of AI and Education. Connect with her on LinkedIn.