
Grants
$1,000 - $15,000 for original reporting on artificial intelligence and its impacts.
Overview
As artificial intelligence grows more advanced, the technology and the people building it grow increasingly consequential. We believe journalism plays a crucial role in helping the public understand AI — and in holding companies and policymakers to account.
Tarbell offers grants of $1,000 - $15,000 to support original reporting published in established outlets, whether from freelancers or staff. We primarily focus on written journalism, but we also fund journalism in other formats.
We seek to fund forward-looking stories, examining how today’s technical advancements and policy decisions lay the groundwork for how artificial intelligence will shape our future. In particular, we seek to fund reporting on five focus areas:
- Investigations into frontier AI companies
- National and international AI policymaking
- Integration of AI in governments and militaries
- AI capabilities, safeguards, and evaluations
- Future of work and society in an age of advanced AI
Applications for the previous round closed September 14, 2025. Subscribe to be notified of future grant rounds.
Editorial independence is guaranteed. Grantees are free to publish whatever they and their editors decide, without interference from Tarbell or Tarbell's donors. Tarbell does not review, edit, or approve stories before publication. Our donors do not have any involvement in selecting which stories we fund, and do not review or influence the content, conclusions, or framing of any stories we fund.
Supporting journalism on five critical AI beats
Investigations into frontier AI companies
National and international AI policymaking
Integration of AI in governments and militaries
AI capabilities, safeguards, and evaluations
Future of work and society in an age of advanced AI
Who are grants for?
We welcome submissions from all experienced journalists and media creators, whether staff writers, editors, producers, or freelancers. Journalists with an investigative background are particularly encouraged to apply. A background in reporting on AI and/or technology is desirable, but not mandatory.
You do not need to have an outlet secured before applying, although we encourage it. If you can’t secure a letter of interest from an editor before your application, we’ll ask you to obtain one before we distribute any funding (see template). Your publication or distribution platform of choice must be able to accept grant-funding for stories.
When evaluating applications, we look for:
Potential for Impact. We care about stories that make a difference. We want the reporting we enable to help society navigate the challenges ushered in by increasingly capable and widespread AI systems, whether that be by raising awareness of an underdiscussed harm or catalyzing policy change.
Reach. We hope grantees' stories will be read, heard, or viewed widely, and by people with decision-making power. We prefer to fund stories that are likely to be published in established and influential outlets.
Journalistic experience. We look to fund journalists with a track record of impressive reporting. We tend to only fund journalists who have written at least one piece like the original journalism they're proposing, whether that's an ambitious investigation or a captivating explainer.
Feasibility. We look for pitches and budgets that demonstrate a realistic plan for how to pull off the proposed story.
If you have any questions about the application process, please contact grants@tarbellcenter.org.
Judging Panel

Madhumita Murgia
Madhumita Murgia leads coverage about artificial intelligence at the Financial Times. She was previously a reporter and editor at WIRED and The Daily Telegraph. She is also the author of Code Dependent: Living in the Shadow of AI.

Scott Rosenberg
Scott is the managing editor of technology at Axios. He was previously an editor at Wired (Backchannel) and the co-founder of Salon. He is also the author of Dreaming in Code and Say Everything.

Yi-Ling Liu
Yi-Ling was the first China Editor at Rest of World, and is currently working on a book about the Chinese internet as journalist-in-residence at Tarbell. Her work has been published in The New York Times Magazine, WIRED & The New Yorker.

Casey Newton
Casey is the founder and editor of Platformer, a newsletter about the intersection of tech and democracy, as well as the co-host of the Hard Fork podcast. He previously spent a decade covering Silicon Valley for The Verge and CNET.

Timothy B. Lee
Timothy has written about technology, economics, and public policy for more than a decade. He is the editor of Understanding AI, a leading AI newsletter. He previously wrote for the Washington Post, Vox, and Ars Technica.

Shakeel Hashim
Shakeel writes and edits Transformer, an AI policy newsletter. He also supports Tarbell programs, like grants and the fellowship. Previously, he was a News Editor at The Economist.
What are this round's focus areas?
Investigations into frontier AI companies
Companies at the forefront of AI development—such as OpenAI, Anthropic, Google DeepMind, xAI, and Meta—are racing to build ever more capable systems behind closed doors. With growing influence but little regulatory oversight, there’s an urgent need for accountability journalism. We're seeking journalism that pierces the corporate veil to examine how these influential companies actually operate, who shapes their decisions, and what safeguards and ethical codes exist—or don't.
Pieces we admire on this topic can be found here, here, and here.
Possible story directions:
- Track compliance gaps between companies' safety commitments and their actual development and deployment practices.
- Investigate potential misconduct by AI companies or their executives that contradicts public messaging or commitments.
- Investigate how companies respond to documented instances of harm from their systems, from deepfakes to algorithmic bias.
- Document whistleblower experiences and the systems (or lack thereof) in place to protect internal critics.
- Uncover employee perspectives on AI promises, and perils.
- Examine how frontier labs are using AI internally to accelerate further AI development.
- Investigate the decision-making process for when companies will delay development and deployment because of dangerous capabilities, such as hacking and bioterrorism.
National and international AI policymaking
AI is being governed through a complex web of emerging state laws, national legislation, international agreements, and strategic competition. As governments race to balance innovation with control, questions about who shapes AI's future—and in what ways—remain largely hidden from public view. We're seeking journalism that reveals the inner workings of AI policymaking, examining how domestic lobbying, global forums, supply chain dynamics, and back-channel diplomacy determine the rules for increasingly powerful technologies.
Pieces we admire on this topic can be found here, here, and here.
Possible story directions:
- Document the revolving door between government agencies and AI companies, examining how connections between industry and government officials shape policy decisions in the US, UK, Europe, and China.
- Track AI lobbying expenditures and strategies at national, state, and local levels, and signs of their influence.
- Analyze proposed AI regulations and responses to these proposals, such as US state-level regulation, US congressional AI action, and the EU AI Act’s Code of Practice.
- Investigate the dynamics of international AI forums and agreements, examining which nations and organizations are steering global governance initiatives like the upcoming India AI summit, whose interests are empowered, and what tangible outcomes are being realized.
- Report on the Trump administration’s AI policy moves and which coalitions are vying for influence.
- Translate and amplify Chinese academic and industry debates around AI ethics and governance, and how they change over time.
- Investigate resource gaps at US, UK, European, and Chinese government agencies tasked with AI oversight, in terms of technical expertise, funding, and enforcement mechanisms (e.g., enforcing chip export controls).
Integration of AI in governments and militaries
As AI systems move from research labs to real-world applications, governments and military organizations worldwide are adopting these technologies for everything from administrative efficiency to battlefield advantage. We're seeking journalism that investigates this rapidly evolving landscape, examining the entanglements between public institutions, defense strategies, and AI companies.
Pieces we admire on this topic can be found here, here, and here.
Possible story directions:
- Document the AI capabilities being actively developed or acquired by military and intelligence agencies around the world, as well as their corresponding safeguards and testing protocols.
- Follow the funding streams from defense departments to AI companies, revealing which technologies are being prioritized for which purposes.
- Investigate the ethical frameworks (or lack thereof) governing military AI applications and how they're being implemented in practice.
- Track the arms race dynamics emerging between nations as they compete for AI advantages in defense and intelligence.
- Track the evolution of government AI usage, exploring how agencies' strategies and policies are adapting to increasingly capable systems and who this affects.
AI capabilities, safeguards, and evaluations
The technical capabilities of AI systems are evolving in ways that sometimes surprise even their creators. We're looking for journalism that documents these emerging capabilities, interrogates the methods used to test and evaluate increasingly powerful models, and examines efforts to make these systems safer and more secure.
Pieces we admire on this topic can be found here, here, and here.
Possible story directions:
- Survey public perceptions of AI capabilities against technical reality, highlighting discrepancies that may influence policy decisions and adoption patterns.
- Profile the ecosystem of AI evaluation organizations, tracking how third-party assessment techniques are evolving to keep pace with increasingly complex systems.
- Showcase demonstrations of novel AI abilities that signal important shifts in capability frontiers (e.g., ‘sandbagging’; deceptive chains-of-thought) and examine their implications.
- Contrast benchmark performances with real-world deployments, investigating cases where impressive test scores may fail to translate into practical usefulness (or vice versa).
- Unpack the uneven capabilities of AI systems across tasks that humans find easy and hard, and the implications this has for if and when tasks may be automated.
Future of work and society in an age of advanced AI
Where is all this going? Advanced AI will transform society, but how it does so remains uncertain, and malleable. We're looking for forward-looking, high-quality feature pieces that identify what today's AI developments foretell about future possibilities. From information ecosystems to democratic processes, from climate impact to human connection, we want reporting that takes seriously the possibility of significant disruption while critically examining the barriers to and likelihood of such changes.
Pieces we admire on this topic can be found here, here, and here.
Possible story directions:
- Compare competing economic forecasts of AI's impact, ranging from record-breaking growth predictions to modest outcomes due to implementation barriers.
- Project the climate impact of AI infrastructure expansion from massive data center buildouts like OpenAI's Stargate.
- Explore how AI may transform information ecosystems, examining what today’s trends suggest about how our relationship with facts, expertise, and media consumption might evolve with more powerful systems.
- Chart the future of industries like law, medicine, and creative fields as AI begins to automate or accelerate cognitive tasks—or not.
Explore stories we've funded
Total Financial Support
$174,200
Projects Supported
25
Program Launch
Nov 2024
FAQ
When will I hear back about my application?
We aim to evaluate all applications within four weeks of the application deadline. If your story is time-sensitive, you can request that we expedite the evaluation process, although we may not always be able to do that.
Do you consider formats other than written articles?
While our primary focus is on written journalism at established publications, we do consider exceptional proposals in other formats like podcasts, newsletters, and short documentaries. Content in such formats should still be "in the spirit of journalism": it should adhere to journalistic standards like truth-seeking, independence, and fairness. We also need to see how such content will reach a large or important audience.
What can I use the money for?
Your grant money can be used for any costs incurred in producing the story, including the costs of your time and labor. This is true for both freelancers and staff reporters. You can also use the money for other reporting expenses, such as travel, API usage, or purchasing data sources.
Can teams apply?
Yes, teams of journalists are very welcome to apply. Please submit a single application, and in the “about you” section provide information for each team member.
Do I need to be worried about reviewers scooping my idea?
No. Everyone who reads your application is bound by our Pitch Integrity Commitment, which bars them from using or sharing non‑public information in your application until your piece is published or 12 months have passed, unless you explicitly waive the embargo. This safeguard lets you include enough detail for us to fairly judge originality and feasibility without worrying about being scooped.
Can I submit multiple applications?
Yes, you can submit multiple pitches (up to 3).
Do I need to have a publication venue finalized?
We encourage applicants to apply with a letter showing interest from the editor of their publication-of-choice, but this is not strictly required (see template letter). If we want to support your piece but you don’t have a letter of interest, we’ll ask you to secure a letter of interest from a fitting outlet before we disburse funds. We also sometimes award "seed grants" to enable exceptional freelancers to do more digging and sourcing before approaching outlets.
Who funded this grants round?
This round of Tarbell Grants is generously supported by our donors, with the exception of Open Philanthropy, who do not fund our grants. Our donors have no involvement in the grant selection process. If you're interested in supporting the next round, you can donate here or get in touch.
I live in [country]. Can I apply?
We welcome applications from anywhere in the world. While certain countries occasionally present problems when disbursing charitable grants, we make every effort to disburse all grants in full, while following the letter and spirit of all relevant laws and regulations. We do not disburse grants to any individual or organization listed on the OFAC List of Specially Designated Nationals and Blocked Persons.
How do we need to disclose funding?
We ask that all stories published as part of a Tarbell-funded reporting project include the following line: “This story was supported by a grant from the Tarbell Center for AI Journalism.” If you share your Tarbell-funded story with other outlets for co-publication, please request that they also include this disclosure. Let us know if this is an issue.
Can I apply for a series of stories?
Yes. We welcome applications for a reporting series, rather than just a one-off piece.