Day One 2025 Idea Open Call: Emerging Technologies and Artificial Intelligence
As the United States navigates the ever-accelerating realm of technological progress and artificial intelligence, it faces a critical imperative: steering innovation toward socially positive, equitable outcomes while mitigating potential harms to consumers and the public. This entails a multifaceted approach encompassing the promotion of competition in evolving markets, robust policy frameworks, stringent enforcement mechanisms, and continued investment in research and development. Moreover, cultivating multidisciplinary expertise is essential to comprehensively address the complex interplay between technology and society. By embracing these strategies, the U.S. can proactively shape the trajectory of technological advancement, ensuring it serves the collective good and fosters inclusive prosperity. To meet these challenges, the Day One Project is interested in addressing the following areas:
Artificial Intelligence R&D, Opportunities, and Risk Mitigation
The next administration must prioritize responsible AI innovation to maintain U.S. leadership in technology while adhering to ethical guidelines and standards for AI development and deployment, and ensuring accountability for AI systems’ outcomes in order to address current and potential harms and uphold public trust. Our key questions here include:
- What steps can specific federal agencies take to ensure responsible, efficacious AI innovation and deployment across a variety of uses and sectors? What sector- or use-case-specific actions are necessary to make responsible advancements?
- How can government support advancements across the full AI tech stack, from computer microprocessors and cloud infrastructure to data and models to consumer applications, as well as the talent that drives it all?
- What key investments should the federal government make to advance AI for the public good in major domains of opportunity? How should priorities be set? How can we ensure that a diverse set of actors are involved in this process, including academia, smaller industry players, civil society, and others?
- What new policy tools or approaches are needed to guide AI development, safety, transparency, and accountability? How can we operationalize and scale existing policy tools and frameworks, such as the NIST AI Risk Management Framework or the DHS AI Roadmap, and ensure their effectiveness?
- What kinds of regulation or rulemaking around AI risk mitigation would be counterproductive (e.g., safety-washing or inhibiting low-risk beneficial uses)?
AI and National Security
AI technology can enhance national security by improving threat detection, intelligence analysis, and strategic decision-making, but it can also pose unique threats. Beyond the military context, AI raises further security considerations across supply chain vulnerabilities, cyber vulnerabilities, and more. The U.S. needs to invest in AI research and development for security purposes while also addressing potential risks such as adversarial attacks and the proliferation of autonomous weapons. Collaboration with international partners is crucial to ensure AI strengthens national security without compromising human rights or escalating conflicts. Our key questions here include:
- How can we craft specific standards for AI procurement and investment within military operations? And what guardrails and limitations are needed?
- How can we evaluate institutional maturity (e.g., effective internal governance, cybersecurity capabilities) for developers who want to engage in the national security sector but lack accurate and responsive guidance?
- How should we deal with dual-use technologies that leverage AI (and U.S. citizens’ data), e.g., potential chemical, biological, radiological, and nuclear (CBRN) risks, cybersecurity risks, etc.?
- What kinds of international engagements are needed to help establish global standards for the development and use of AI and emerging technologies?
Tech & Competition
Protecting competition and innovation in rapidly changing tech markets requires proactive antitrust enforcement, promoting interoperability and data portability where suitable, and fostering a level playing field for startups and smaller firms. A lack of robust competition in the tech industry stifles innovation, reduces accountability, harms consumers, and produces greater negative social externalities. We must prioritize robust competition for the next wave of emerging technologies so that the market is not dominated by a small number of entrenched incumbents. Our key questions here include:
- How should antitrust policy and enforcement be enhanced or changed to address rapidly evolving technologies and markets, such as AI?
- What regulatory approaches have proven most promising for preventing anti-competitive practices and promoting innovation and consumer choice, and what would it take to implement those approaches further?
- How can startups and smaller firms be supported in competing against tech giants, and how can market consolidation be prevented? What barriers to entry that inhibit new ideas and innovations should be better understood by regulators, and how could they be addressed through novel policy or enforcement approaches?
- Can we build on existing open data and data sharing initiatives to make training and testing data available for all kinds of developers?
- What should be the federal government’s role in incubating and accelerating new developers, particularly those that demonstrate public interest value?
Privacy, Safety, and Online Ecosystems
As our online technology ecosystems evolve and enter a new AI-driven era, new risks are emerging both to consumers and to the integrity of broader democratic processes. Advancing privacy and online safety in an era of widespread commercial surveillance practices necessitates creative and ambitious ideas about the future of the online world. This might include comprehensive privacy and data protection regulation, robust enforcement mechanisms, and user empowerment through transparency and control over personal data, as well as new ways of incentivizing better business practices and design features. There is a need for policies that will protect individuals’ rights, establish clear guidelines for data collection, usage, and sharing practices, and steer the online ecosystem to benefit the public and our democracy. Our key questions include:
- How are threats to privacy and online safety in the current digital landscape evolving, and how can they be mitigated through regulation, enforcement, or other means?
- What specific design features and products should be the focus of regulation and enforcement action in order to mitigate harm to consumers? What is needed to enable meaningful action?
- What sectors and use cases are currently under-addressed or pose significant risks to consumers’ privacy, safety, and civic life (such as AI-generated disinformation), and how can sector- or use-case-specific harms be addressed?
- How can we best ensure the Internet maintains its spirit as a public good in the generative AI era? For instance, how can we protect IP and small-scale creators (artists, authors, journalists, voice actors, etc.) who currently do not see benefits from AI models trained on their work, and who may now be incentivized to limit access to information to protect their IP and livelihoods?
- How can new kinds of online technologies, such as decentralized platforms, privacy-enhancing technologies (PETs), or shared content moderation tools, be better supported to create stronger protections for consumers and positive social impact?