
Grants

Up to $20,000 for original reporting on artificial intelligence and its impacts.


Overview

As artificial intelligence grows more influential, the companies building it and the policymakers regulating it warrant the kind of scrutiny that journalism exists to provide. We believe rigorous, independent reporting serves the public interest—demystifying technical developments, following the money, and documenting what happens when these systems enter the real world.

Tarbell offers grants of $1,000–$20,000 to support original reporting on AI published in established outlets, whether from freelancers or staff. We primarily fund written journalism but are open to supporting other formats.

 

This round, we're seeking applications across six focus areas:

  1. Accountability reporting on frontier AI companies

  2. AI policy and politics

  3. AI explainers and analysis

  4. AI in government and militaries

  5. AI labor impacts

  6. AI developments in China

Applications for this round close March 8, 2026. Subscribe to be notified of future rounds.

Editorial independence is a core value for Tarbell and our grantees maintain complete autonomy over their reporting. Applications are judged by an independent panel of experienced journalists. When the recommendations of judges differ, final funding decisions are made by Tarbell Grants staff. Our donors have no involvement, and staff never see, shape, or approve a grantee’s story content before publication.

Supporting journalism on six critical AI beats

Investigations into frontier AI companies

Integration of AI in governments and militaries

AI policy and politics

AI explainers and analysis


AI labor impacts


AI developments in China

Who are grants for? 

We welcome submissions from all experienced journalists and media creators: both staff writers/editors/producers and freelancers are welcome to apply. Journalists with an investigative background are particularly encouraged to apply. A background in reporting on AI and/or technology is desirable, but not mandatory.

 

You do not need to have an outlet secured before applying, although we encourage it. If you can’t secure a letter of interest from an editor before your application, we’ll ask you to obtain one before we distribute any funding (see template). Your publication or distribution platform of choice must be able to accept grant funding for stories.

When evaluating applications, we look for: 

Potential for Impact. We care about stories that make a difference. We want the reporting we enable to help society navigate the challenges ushered in by increasingly capable and widespread AI systems, whether that be by raising awareness of an underdiscussed harm or catalyzing policy change.

Reach. We hope grantees' stories will be read, heard, or viewed widely, and by people with decision-making power. We prefer to fund stories that are likely to be published in established and influential outlets.

Journalistic experience. We look to fund journalists with a track record of impressive reporting. We tend to only fund journalists who have written at least one piece like the original journalism they're proposing, whether that's an ambitious investigation or a captivating explainer.

Feasibility. We look for pitches and budgets that demonstrate a realistic plan for how to pull off the proposed story.

If you have any questions about the application process, please contact grants@tarbellcenter.org.


2025 Judging Panel

We are still confirming 2026 judges.


Madhumita Murgia

Madhumita Murgia leads coverage about artificial intelligence at the Financial Times. She was previously a reporter and editor at WIRED and The Daily Telegraph. She is also the author of Code Dependent: Living in the Shadow of AI.


Scott Rosenberg

Scott was the managing editor of technology at Axios. He was previously an editor at Wired (Backchannel) and the co-founder of Salon. He is also the author of Dreaming in Code and Say Everything.

 


Yi-Ling Liu

Yi-Ling was the first China Editor at Rest of World, and is currently working on a book about the Chinese internet as journalist-in-residence at Tarbell. Her work has been published in The New York Times Magazine, WIRED, and The New Yorker.


Casey Newton

Casey is the founder and editor of Platformer, a newsletter about the intersection of tech and democracy, as well as the co-host of the Hard Fork podcast. He previously spent a decade covering Silicon Valley for The Verge and CNET.


Timothy B. Lee

Timothy has written about technology, economics, and public policy for more than a decade. He is the editor of Understanding AI, a leading AI newsletter. He previously wrote for the Washington Post, Vox, and Ars Technica.


Shakeel Hashim

Shakeel writes and edits Transformer, an AI policy newsletter. He also supports Tarbell programs, like grants and the fellowship. Previously, he was a News Editor at The Economist.

Focus Areas

What are this round's focus areas?

Accountability reporting on frontier AI companies

Companies at the forefront of AI development—OpenAI, Anthropic, Google DeepMind, xAI, Meta, and others—are building increasingly capable systems with limited regulatory oversight. We're seeking journalism that pierces the corporate veil of these influential companies: do they follow through on their stated commitments? Who shapes their decisions? How do their products affect users? From voluntary safety pledges to corporate restructurings to content moderation policies, there's a need for sustained reporting on the companies shaping the trajectory of AI development.

Pieces we admire on this topic can be found here, here, and here.

. Possible story directions: → Track compliance with voluntary commitments to test models for safety (for example, company policies or international commitments like those made at the 2024 Seoul Summit). → Investigate how companies respond when users are harmed by their products, from deepfakes to algorithmic bias. → Document the experiences of employees who have raised concerns internally, and what protections exist for them. → Uncover how decisions get made about what safeguards to include or remove from consumer-facing products. → Document different AI companies’ transparency around issues like training data acquisition, post-deployment monitoring, and environmental footprint. → Examine how frontier labs are using AI internally to accelerate further AI development.

AI policy and politics

Everyone wants to write the rules for AI. Lobbyists, think tanks, advocacy groups, and technical experts are all competing to define how this technology gets regulated, often with starkly different visions for what's at stake. Policymakers are caught in the middle, struggling to keep pace. We're seeking journalism that follows the money and influence shaping AI regulation, from federal legislation to state-level campaign financing to international negotiations. US coverage is a priority, given the midterms and the country's concentration of leading AI companies, but we're also interested in how China, the EU, and other jurisdictions are navigating similar pressures.

Pieces we admire on this topic can be found here, here, and here.

. Possible story directions: → Track the spending and influence of AI-focused super PACs like Leading the Future and Public First. → Document the revolving door between government agencies and AI companies, examining how connections between industry and government officials shape policy decisions, especially in the US, UK, Europe, and China. → Track AI lobbying expenditures and strategies at national, state, and local levels, and signs of their influence. → Investigate resource gaps at US, UK, European, and Chinese government agencies tasked with AI oversight, in terms of technical expertise, funding, and enforcement mechanisms. → Report on the Trump administration’s AI policy moves and how different coalitions are vying for influence.

AI explainers and analysis

AI is developing fast—fast enough that even close observers struggle to keep up. Important information is scattered across technical papers, company announcements, and specialist forums. We're seeking well-researched explainers and sharp analyses that synthesize what's known: where AI capabilities are improving (and where they're not), how these systems are being deployed and with what effects, and what the trajectory of development might mean for the years ahead. The best pieces will pull together disparate threads into a coherent picture, making complex or contested topics legible without oversimplifying.

Pieces we admire on this topic can be found here, here, and here.

. Possible story directions: → Explain how AI companies test their models for dangerous capabilities, and what the limitations of current evaluation methods are. → Clarify what "AI agents" actually means in practice—what they can and can't do, and what's blocking further progress. → Survey frontier labs’ claims about how they’re using AI to accelerate their own research, and what actual impact this is having on development speed. → Synthesize existing information on a specific impact of AI systems, from their carbon footprint to their influence on elections. → Explain key business aspects of AI companies, from their fundraising tactics to their spending rates.

AI in government and militaries

Governments and military organizations worldwide are adopting AI for everything from administrative efficiency to battlefield decision-making. We're seeking journalism that investigates this rapidly evolving landscape, examining the entanglements between AI companies and the military, immigration enforcement, law enforcement, and other public institutions.

Pieces we admire on this topic can be found here, here, and here.

. Possible story directions: → Document the AI capabilities being actively developed or acquired by military, law enforcement, and intelligence agencies around the world, as well as their safeguards and testing protocols. → Follow the funding streams from defense departments to AI companies. → Track the arms race dynamics emerging between nations as they compete for AI advantages in defense and intelligence. → Document the effects of AI implementation across federal, state, and local government agencies.

AI labor impacts

Predictions about AI and jobs are abundant. Original reporting is not. We're looking for journalism that generates new evidence about how AI is—or isn’t—affecting work. Strong stories will be data-driven, human, and original, whether through worker interviews, company documents, usage data, or on-the-ground reporting.

Pieces we admire on this topic can be found here, here, and here.

. Possible story directions: → Investigate a specific profession's encounter with AI tools: who's adopting them, who isn't, and what measurable impact they're having. → Report on the contractors training AI to automate jobs like their own. → Document whether company announcements about AI-driven productivity gains are backed up by other evidence. → Examine industries where AI adoption has been slower than expected, and what's getting in the way. → Survey hiring managers about whether AI is changing their workforce planning and, if so, how.

AI developments in China

Covering China's AI ecosystem is difficult: access is limited, the landscape moves fast, and nuance often gets lost in translation. But China is home to some of the world’s most influential AI companies and largest real-world deployments, from leading open-source models to nationwide AI education programs and surveillance tech. We’re seeking reporting that treats Chinese AI development and real-world usage as a story worth understanding on its own terms, not just as a foil to American efforts.

Pieces we admire on this topic can be found here, here, and here.

. Possible story directions: → Document how a Chinese city or province is deploying AI, and what citizens actually experience. → Track how Chinese AI labs are adapting to fluctuations in US export controls. → Investigate Chinese researchers' worries and hopes about the future of AI, and how their outlook differs from Western framings. → Uncover what's actually being discussed in US-China AI diplomacy, especially Track 2 or Track 1.5 diplomacy. → Investigate the impact of China’s push to embed AI into education programs. → Follow how Chinese AI companies are expanding into other markets—Southeast Asia, Africa, the Middle East—and how those deployments are going.

Explore stories we've funded

Total Financial Support

$346,000

Projects Supported

46

Program Launch

Nov 2024

FAQ

When will I hear back about my application?

We aim to evaluate all applications within four weeks of the application deadline. If your story is time-sensitive, you can request that we expedite the evaluation process, although we may not always be able to do so.

Do you consider formats other than written articles?

While our primary focus is on written journalism at established publications, we do consider exceptional proposals in other formats like podcasts, newsletters, and short documentaries. Content in such formats should still be "in the spirit of journalism": it should adhere to journalistic standards like truth-seeking, independence, and fairness. We also need to see how such content will reach a large or important audience.

What can I use the money for?

Your grant money can be used for any costs incurred in producing the story, including the costs of your time and labor. This is true for both freelancers and staff reporters. You can also use the money for other reporting expenses, such as travel, API usage, or purchasing data sources.

Can teams apply?

Yes, teams of journalists are very welcome to apply. Please submit a single application, and in the “about you” section provide information for each team member.

Do I need to be worried about reviewers scooping my idea?

No. Everyone who reads your application is bound by our Pitch Integrity Commitment, which bars them from using or sharing non‑public information in your application until your piece is published or 12 months have passed, unless you explicitly waive the embargo. This safeguard lets you include enough detail for us to fairly judge originality and feasibility without worrying about being scooped.
 

Can I submit multiple applications?

Yes, you can submit multiple pitches (up to 3).

 

Do I need to have a publication venue finalized?

We encourage applicants to apply with a letter showing interest from the editor of their publication-of-choice, but this is not strictly required (see template letter). If we want to support your piece but you don’t have a letter of interest, we’ll ask you to secure one from a fitting outlet before we disburse funds. We also sometimes award "seed grants" to enable exceptional freelancers to do more digging and sourcing before approaching outlets.


Who funded this grants round?

This round of Tarbell Grants is generously supported by our donors. Our donors have no involvement in the grant selection process. If you're interested in supporting the next round, you can donate here or get in touch.

 

Do you fund reporting from outside the US, Europe, and China?

Yes, we welcome applications from journalists anywhere. Some focus areas (like AI in governments and militaries, or accountability reporting on frontier companies) are global by nature. Others, like AI policy and politics, skew toward jurisdictions where frontier AI companies are headquartered. But we've funded reporting from India, Taiwan, Brazil, and Kenya, and remain interested in strong reporting from anywhere.
 

How do we need to disclose funding?

We ask that all stories published as part of a Tarbell-funded reporting project include the following line: “This story was supported by a grant from the Tarbell Center for AI Journalism.” If you share your Tarbell-funded story with other outlets for co-publication, please request that they also include this disclosure. Let us know if this is an issue. 

 

Can I apply for a series of stories?
Yes. We welcome applications for a reporting series, rather than just a one-off piece.

 
