There's been some discussion here of the claim that AI capabilities improvements have been a consequence of unsustainable increases in inference compute. Redwood Research Astra fellow Anders Cairns Woodruff has written a great post analyzing the data and disputing this.
Help me find my replacement doing farmed animal advocacy grantmaking!
I wanted to share a job opening for, in my opinion, one of the coolest jobs to help animals: my job! I'm moving on from Mobius soon, so we're looking for the next person to lead our grantmaking and entrepreneurial projects.
The role: You'd manage the grantmaking portfolio for one of the top ten largest funders of farmed animal welfare work globally, plus lead entrepreneurial projects like incubating new organisations and identifying strategic gaps in the movement. You'd work with a small and nimble team and influence where millions of dollars go.
Some key details:
* Full-time, US-based, remote (Bay Area preferred). We’re open to candidates in other countries in exceptional cases, on a contractor basis.
* $70k – $120k depending on experience and location. We can go higher for exceptional candidates.
* Open to hiring at two levels: Philanthropy Manager (3+ yrs experience) or Director of Philanthropy (5+ yrs)
* Application deadline: Sunday, April 12th
Why I'd recommend it: This role is a great mix of grantmaking and incubating/running important projects. You get to collaborate with other donors in the movement, as well as support high-impact nonprofits. Maybe most importantly, you’ll work with a very supportive team, with plenty of learning opportunities, space for personal development, and regular pickleball.
Full job description and application form here.
Alexander Berger's 2026 CoeffG annual letter describes their shift from marginalism to "inframarginal" funding, emphasis mine:
(I do wish Berger gave a bit more detail than just "we should be intentional about trying to strike the right balance between GM and marginalist approaches", but I suppose the annual letter isn't the right place for this.)
Nan Ransohoff's piece on how there should be more GMs owning delivery of specific outcomes is a great read too (emphasis mine):
As I've gotten more work experience (year 10 now, jeez) I've become increasingly a fan of the DRI approach, and by extension the GM ("super-senior-DRI") approach. You could think of incubators like AIM and SMA as "GM factories for orphaned problems".
A while back I came across this slide from the Money for Good project, which I thought was a sobering quantification of how rarely donors make decisions based on nonprofit outperformance (cost-effectiveness etc.). Hope Consulting got this data by surveying 4,000 US individuals with household incomes >$80k (the top 30% of incomes back in 2009, accounting for 75% of overall individual donations), of whom 2,000 were in the >$300k bracket.
The opportunity size for US retail donors in 2009 was ~$45B, so the ~3% of giving driven by outperformance research works out to ballpark $1-1.5B, which is still sizeable; e.g. it's more than total annual EA grantmaking has ever been:
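As a sanity check on that arithmetic (the ~3% share and ~$45B base come from the slides; treating 3% of donors as roughly 3% of dollars is my simplification, not Hope Consulting's):

```python
# Back-of-envelope: apply the ~3% performance-driven share to the
# ~$45B 2009 US retail-giving opportunity size.
# Assumes 3% of donors ~= 3% of dollars, which is only a rough proxy.
retail_giving_2009 = 45e9          # ~$45B opportunity size for US retail donors
performance_driven_share = 0.03    # Hope Consulting's ~3% figure

dollars = retail_giving_2009 * performance_driven_share
print(f"${dollars / 1e9:.2f}B")    # ~$1.35B, within the $1-1.5B ballpark
```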
How did Hope Consulting get the 3% figure? Top of funnel:
Middle of funnel steep drop-off:
and:
Bottom of funnel has even steeper drop-off, because confirmation bias is the default:
How to raise the 3% figure for donors who give based on nonprofit outperformance? Hope Consulting suggests this framing:
(I disagree with Hope Consulting on that last point, but the rest seems useful.)
What are midsized retail donors like? I used to work in marketing analytics, so this piqued my interest. MaxDiff analysis to elicit donor value trade-offs, followed by a few rounds of cluster analysis, yielded these "donor personas":
The lack of demographic variation somewhat surprised me:
As a closing note, the Money for Good project was a major undertaking: 6 months, 4 major funders (including Rockefeller), 4 research orgs (!) partnering with Hope Consulting, etc. This makes me wonder what the 80/20 version of this could look like, with judicious use of Claude Code and such.
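For anyone curious what that MaxDiff-then-clustering pipeline looks like mechanically, here's a minimal, entirely hypothetical sketch: invented attributes, simulated respondents, count-based MaxDiff scoring, and plain k-means standing in for whatever clustering Hope Consulting actually ran.

```python
# Hypothetical sketch of a MaxDiff -> cluster-analysis pipeline.
# Attributes, respondent behavior, and cluster count are all invented;
# the real Money for Good study's items and methods differ.
import random

random.seed(0)
ATTRS = ["impact_evidence", "personal_connection", "low_overhead", "brand_trust"]

def maxdiff_scores(n_tasks=8):
    """Simulate one respondent: in each task they pick a 'best' and a
    'worst' attribute. Score = (best picks - worst picks) / n_tasks."""
    counts = {a: 0 for a in ATTRS}
    weights = {a: random.random() for a in ATTRS}  # latent preferences
    for _ in range(n_tasks):
        best = max(ATTRS, key=lambda a: weights[a] + random.gauss(0, 0.3))
        worst = min(ATTRS, key=lambda a: weights[a] + random.gauss(0, 0.3))
        counts[best] += 1
        counts[worst] -= 1
    return [counts[a] / n_tasks for a in ATTRS]

def kmeans(points, k=2, iters=20):
    """Plain k-means on the score vectors; returns a cluster label per point."""
    centers = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centers[c])))
            for pt in points
        ]
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:  # keep old center if a cluster empties out
                centers[c] = [sum(vals) / len(members) for vals in zip(*members)]
    return labels

respondents = [maxdiff_scores() for _ in range(200)]
labels = kmeans(respondents)
print({lab: labels.count(lab) for lab in set(labels)})  # "persona" sizes
```

The real study would use proper MaxDiff utility estimation (e.g. hierarchical Bayes) and a more careful choice of cluster count, but the count-based scores above capture the basic shape of the exercise.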
Question for AIM folks: what's the thinking behind running a very involved process twice per year, as opposed to recruiting from near-misses from previous rounds?
Are there savings to be made here? Asking as someone deeply concerned with cost-effectiveness as a vital principle of EA... and as a former finalist!
If AGI goes well for humans, will it go well for other animals?