Briefly
Program description
Artificial intelligence is accelerating the pace of discovery across science and technology. But today’s AI ecosystem risks centralizing compute, talent, and decision-making power – concentrating capabilities in ways that could undermine both innovation and safety.
To counter this development, the AI for Science & Safety Nodes program establishes a decentralized network of nodes dedicated to AI-powered science and safety. Each node combines grant funding with office and community spaces, programming, and in-house compute to accelerate project development. The goal is to empower researchers with a mission-aligned ecosystem where AI-driven progress remains open, secure, and aligned with human flourishing.
Main Information
Eligibility
The AI for Science & Safety Nodes program accepts applications from individuals, teams, and organizations. Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to justify why they need grant funding.
The organizer strongly prefers applicants who intend to actively use the nodes in San Francisco or Berlin; “funding-only” projects are accepted only in exceptional cases.
Financing
The AI for Science & Safety Nodes program awards around $3M in total funding annually. Grants typically range from $10,000 to $100,000, with larger amounts awarded to AI safety-oriented focus areas and smaller amounts to longevity biotechnology and molecular nanotechnology projects.
The funding terms are:
- The AI for Science & Safety Nodes program funds both short-term and longer projects. Grants are typically paid in one lump sum. However, for larger projects spanning multiple years, payments may be made in tranches, with each subsequent tranche contingent upon the successful completion and reporting of previous milestones.
- The AI for Science & Safety Nodes program can fund overhead costs up to 10% of direct research costs, provided these costs directly support the funded work.
- Successful applicants must pass the program's due diligence process, which includes confirming their connections to Foresight Institute, disclosing any ongoing criminal proceedings, bankruptcy, etc., and submitting an itemized budget, project plan, and organizational documents.
- By accepting funding, grantees agree that the AI for Science & Safety Nodes program may list their project on its website and share the project title and project lead, for example on social media. If you prefer your project to remain private, please inform the AI for Science & Safety Nodes program.
- Grants are subject to basic reporting requirements. Grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones.
- Tax obligations vary by country and organization type. Applicants are responsible for understanding and complying with any applicable tax requirements.
Supported Activities
AI-first projects
To keep up with and leverage increasing AI capabilities, the AI for Science & Safety Nodes program gives priority to projects that use AI as the primary engine for progress across its focus areas. The goal is to enable science and safety to accelerate in tandem with AI – for the safe and beneficial evolution of intelligence.
Focus Areas
The organizer is excited to fund and support work in the following areas:
1. AI for Security. Traditional security paradigms – often reactive, piecemeal, and human-driven – cannot keep pace with the speed, scale, and complexity of AI-supported attacks. The AI for Science & Safety Nodes program seeks to support self-improving defense systems in which AI autonomously identifies vulnerabilities, generates formal proofs, red-teams, and strengthens the world's digital infrastructure.
2. Private AI. To ensure that AI progress occurs openly without sacrificing privacy, the AI for Science & Safety Nodes program wants to support work that applies AI to enhance confidential compute environments, scale privacy mechanisms for handling data, and design infrastructure that distributes trust.
3. Decentralized & Cooperative AI. The organizer funds work that builds decentralized intelligence ecosystems – where AI systems can cooperate, negotiate, and align – so societies remain resilient in a multipolar world. The program is especially interested in projects that enable peaceful human–AI co-existence and create new AI-enabled mechanisms for cooperation.
4. AI for Science & Epistemics. In addition to applying AI to specific problems, the AI for Science & Safety Nodes program seeks better platforms, tools, and data infrastructure to accelerate AI-guided scientific progress generally. Similarly, to prepare collective sense-making for rapid change, the program is interested in funding work that applies AI to improve forecasting and general epistemic preparedness.
5. AI for Neuro, Brain-Computer Interfaces & Whole Brain Emulation. The organizer is interested in work that uses frontier models to map, simulate, and understand biological intelligence – building the foundations for hybrids between human and artificial cognition, from brain-computer interfaces to whole brain emulation. The program cares about this domain specifically for its potential to improve humanity’s defensive position as AI advances.
6. AI for Longevity Biotechnology. The AI for Science & Safety Nodes program wants to fund work that applies AI to make progress on scientific frontiers in longevity biotechnology – from biostasis and replacement, to gene therapy and exosomes.
7. AI for Molecular Nanotechnology. The program supports work that uses AI to make progress on scientific frontiers in molecular nanotechnology – from design and simulation to the construction and assembly of nanomachines.
Roadmap
- Deadlines: last day of every month; applications are reviewed monthly until the nodes reach capacity.
- Approximate review time: about two months after each application deadline; review may be faster for smaller funding amounts. Applicants may request fast processing, but it cannot be guaranteed.
- Selection process: initial in-house review for fit and quality; strong proposals are sent to technical advisors; further written questions or a short call may follow; individual feedback is generally not provided to unsuccessful applicants due to the volume of applications.
- The AI Nodes in San Francisco and Berlin are expected to open in early 2026.
How to Apply
Complete the online application form (Airtable) via the link.
Evaluation Criteria
- Impact on reducing existential risks from AI: the extent to which the project can reduce existential risks associated with AI, focusing on achieving significant advancements in AI safety within short timelines.
- Feasibility within short AGI timelines: the project’s ability to achieve meaningful progress within the anticipated short timeframes for AGI development. The program prioritizes projects that can demonstrate concrete milestones and deliverables in the next 1-3 years.
- Alignment with our focus areas: the degree to which the project addresses one or more of the focus areas outlined on this page.
- Capability to execute: the qualifications, experience, and resources of the applicant(s) to successfully carry out the proposed work. Strong teams with proven expertise in AI safety or related fields will be prioritized.
- High-risk, high-reward potential: the level of risk involved in the project, balanced with the potential for substantial, transformative impact on the future of AI safety. The program encourages speculative, high-risk projects with the potential to drive significant change if successful.
- Preference for open source: The program prefers open source projects, unless there are specific reasons preventing it.
Please note that the AI safety criteria do not apply to the AI for longevity biotechnology and molecular nanotechnology focus areas.
Required Documents
At the due diligence stage, successful applicants must provide:
- confirmation of connections to Foresight Institute;
- information on any ongoing criminal proceedings, bankruptcy, etc.;
- an itemized budget;
- a project plan;
- organizational documents.
FAQ
- Preference is given to applicants who plan to make active use of the nodes in San Francisco or Berlin; funding-only projects that do not engage with the spaces are supported only in exceptional cases.
- The approximate review time is about two months after each deadline; smaller grants may be processed faster, but expedited review cannot be guaranteed. Proposals are first screened in-house for fit and quality; strong submissions are then sent to technical advisors. Organizers may follow up with written questions or a short call, and due to application volume they generally cannot provide individual feedback to unsuccessful applicants.
Reporting and Compliance
- Successful applicants must pass a due diligence process, including confirmation of their connections to Foresight Institute, disclosure of any ongoing criminal proceedings or bankruptcy, and submission of an itemized budget, project plan, and organizational documents.
- Grants come with basic reporting requirements: grantees are expected to submit brief progress updates at regular intervals, describing use of funds and progress against agreed milestones.
- Grantees are responsible for complying with tax obligations, which vary by country and organization type.
Legal Terms
- Successful applicants must undergo due diligence, including submission of the specified documents.
- By accepting funding, grantees agree that the organizer may list their project on the website and share the project title and project lead (for example on social media); applicants can request that their project remain private.
- Grants are subject to basic reporting requirements.
- Tax obligations vary by country and organization type; applicants are responsible for understanding and complying with them.