Our Priorities
Artificial intelligence is advancing faster than public policy. This creates real and immediate challenges for safety, accountability, economic stability, and national security. How these challenges are addressed will shape public trust, economic opportunity, and U.S. leadership for decades.
The Alliance for Secure AI is working to help ensure AI is built safely, responsibly, and in the public interest. Our focus is on organizing support for bipartisan policy solutions in the following areas:
Safety and Evaluation
Right now, there are no consistent requirements for safety testing and evaluation in advanced AI. As capabilities advance, we need baseline standards for evaluating advanced models before and after deployment: testing systems to make sure they behave as intended, identifying potential catastrophic risks, and ensuring meaningful human control over consequential decisions.
Strong evaluation frameworks benefit both the public and responsible developers. They create clear expectations, enable competitive advantage for companies that invest in safety, and build the confidence necessary for AI adoption at scale. The goal is to pair innovation with the conditions for building AI that Americans can trust, and to ensure the United States leads in developing both cutting-edge AI and the methods that prove those systems work as promised.
Accountability and Liability
When AI systems cause harm, responsibility is often unclear. Advanced AI can harm consumers through erroneous advice, discriminatory decisions, or defective products. Existing liability frameworks were not designed for systems that learn, adapt, and operate with limited human oversight. This accountability gap leaves users vulnerable and creates misaligned incentives for developers.
Effective accountability requires updating legal frameworks to reflect how AI products actually work: clarifying product liability standards so companies face appropriate responsibility for foreseeable harms, establishing guardrails for high-risk applications like AI chatbots that interact with vulnerable users, and preventing AI systems from being treated as legal persons rather than tools subject to human responsibility. Clear rules protect both users and responsible innovators. When companies know what is expected, they can design accordingly, and when users know they have recourse, trust grows.
Societal and Environmental Impact
Companies are increasingly using AI to automate skilled work and eliminate entry-level positions, raising serious questions about workforce displacement, economic security, and pathways into the workforce for the next generation. Effective workforce policy requires understanding the scope and pace of these changes: transparency about how AI is affecting employment, investment in workforce transitions, and ensuring that economic gains from automation are broadly shared. The goal is to ensure workers and communities have the tools and support to navigate this transition.
Additionally, AI’s infrastructure demands are growing rapidly. Training and operating advanced models require enormous amounts of energy and natural resources, straining power grids and raising questions about resource allocation and environmental impact. Data centers are being built now, and the decisions being made about their energy sources and environmental footprint will have lasting consequences. Policymakers need frameworks that account for these demands while supporting continued innovation.
National Security and U.S. Leadership
AI presents both strategic opportunities and significant national security risks. The same advanced chips and systems that drive American innovation can be diverted to adversaries, enabling the development of AI capabilities that threaten U.S. interests. Without effective controls, American technology and expertise could fuel autonomous AI-driven military applications, cyberattacks, or surveillance systems misaligned with democratic values.
Maintaining U.S. leadership requires preventing adversaries from accessing the most advanced U.S. technology while supporting responsible domestic innovation. This means limiting adversaries’ access to our most advanced chips, closing loopholes that allow diversion of restricted technology, and securing supply chains against theft and espionage. The challenge is finding the right balance – controls strong enough to protect national security but targeted enough to preserve American competitiveness and innovation.