Inside effective altruism, where the far future counts a lot more than the present

With a stint at Oxford’s Future of Humanity Institute, a track record of voting in only two of the past 30 elections, and $11 million in support from a political action committee established by crypto billionaire Sam Bankman-Fried, Oregon 6th Congressional District candidate Carrick Flynn didn’t fit into the local political scene, even though he’d grown up in the state. One constituent called him “Mr. Creepy Funds” in an interview. Another suspected he was a Russian bot.

The specter of crypto influence and a slew of TV ads raised suspicions that Flynn was a tool of outside financial interests. And while the rival candidate who led the primary race promised to fight for issues like better worker protections and stronger gun legislation, Flynn’s platform prioritized economic growth and preparedness for pandemics and other disasters. Both are pillars of “longtermism,” a growing strain of the ideology known as effective altruism (or EA), which is popular among an elite slice of people in tech and politics.

Even during an actual pandemic, Flynn’s focus struck many Oregonians as far-fetched and foreign. Perhaps unsurprisingly, he ended up losing the 2022 primary to the more politically experienced Democrat Andrea Salinas, falling short of making history as effective altruism’s first elected official.

Since its birth in the late 2000s, effective altruism has aimed to answer the question “How can those with means have the most impact on the world in a quantifiable way?” and has supplied clear methodologies for calculating the answer. The technique EA is best known for is directing money to organizations that use evidence-based approaches. But the movement, which began as an academic philosophy, has shifted its ideas about the “best” way of changing the world over time.

“Longtermism,” the belief that unlikely but existential threats like a humanity-destroying AI revolt or international biological warfare are humanity’s most pressing problems, is integral to EA today. Of late, it has moved from the fringes of the movement to its fore, propelled by Flynn’s campaign, a flurry of mainstream media coverage, and a new treatise published by one of EA’s founding fathers, William MacAskill. It’s an ideology that’s poised to take the main stage as more believers in the tech and billionaire classes–which are, notably, mostly male and white–start to pour millions into new PACs and projects like Bankman-Fried’s FTX Future Fund and Longview Philanthropy’s Longtermism Fund, which focus on theoretical menaces ripped from the pages of science fiction.

EA’s ideas have long faced criticism from within the fields of philosophy and philanthropy that they reflect white Western saviorism and an avoidance of structural problems in favor of abstract math–not coincidentally, many of the same objections lobbed at the tech industry at large. Those criticisms are growing louder as EA’s pockets get deeper and its sights stretch toward a galaxy far, far away. Ultimately, the accuracy of those criticisms may determine the limits of the philosophy’s influence.

What is EA?

If effective altruism were a lab-grown species, its origin story would begin with DNA spliced from three parents: applied ethics, speculative technology, and philanthropy.

EA’s philosophical genes came from Peter Singer’s brand of utilitarianism and Oxford philosopher Nick Bostrom’s investigations into potential threats to humanity. From tech, EA drew on early research into the long-term impact of artificial intelligence carried out at what’s now known as the Machine Intelligence Research Institute (MIRI) in Berkeley, California. And EA is part of a growing trend in philanthropy that emphasizes evidence-based giving, driven by Silicon Valley’s nouveau riche, who want to apply the strategies that made them wealthy to the act of giving that wealth away.

For effective altruists, a good cause is not good enough; only the very best should get funding in the areas most in need.

While these origins may seem diverse, the people involved are linked by social, economic, and professional class, and by a technocratic worldview. Early players–including the Oxford philosophers MacAskill and Toby Ord; Holden Karnofsky, cofounder of the charity evaluator GiveWell; and Dustin Moskovitz, a cofounder of Facebook who founded the nonprofit Open Philanthropy with his wife, Cari Tuna–are all still leaders in the movement’s interconnected constellation of nonprofits, foundations, and research organizations.

For effective altruists, a good cause is not good enough; only the very best should receive funding in the areas most in need. According to EA calculations, those areas are usually in developing countries. Personal connections–like the ones that encourage people to donate to a local food bank or to the hospital that treated a parent–are considered distractions.

Within effective altruism’s framework, selecting one’s career is just as important as choosing where to make donations. EA defines a professional “fit” by whether a candidate has comparative advantages like exceptional intelligence or an entrepreneurial drive, and if an effective altruist qualifies for a high-paying path, the ethos encourages “earning to give,” or dedicating one’s life to building wealth in order to give it away to EA causes. Bankman-Fried has said that he’s earning to give, even founding the crypto platform FTX with the express purpose of building wealth in order to redirect 99% of it. Now one of the richest crypto executives in the world, Bankman-Fried plans to give away up to $1 billion by the end of 2022.

“The allure of effective altruism has been that it’s an off-the-shelf methodology for being a highly sophisticated, impact-focused, data-driven funder,” says David Callahan, founder and editor of Inside Philanthropy and the author of a 2017 book on philanthropic trends, The Givers. Not only does EA suggest a clear and decisive framework, but the community also offers a set of resources for potential EA funders–including GiveWell, a nonprofit that uses an EA-driven evaluation rubric to recommend charitable organizations; EA Funds, which allows individuals to donate to curated pools of charities; 80,000 Hours, a career-coaching organization; and a vibrant discussion forum at Effectivealtruism.org, where leaders like MacAskill and Ord regularly chime in.

Effective altruism’s original laser focus on measurement has contributed rigor to a field that has historically lacked accountability for big donors with last names like Rockefeller and Sackler. The movement, Callahan says, has been a much-needed counterweight to the inefficiency of elite philanthropy.

But where does the money these altruists earn to give actually make a difference, and who benefits? As with all giving–in EA or otherwise–there are no set rules for what constitutes “philanthropy,” and charitable organizations benefit from a tax code that incentivizes the super-rich to establish and control their own charitable endeavors at the expense of public tax revenues, local governance, or public accountability. EA organizations are able to leverage the practices of traditional philanthropy while enjoying the shine of a disruptive new approach to giving.

The movement has formalized its community’s commitment to donate with the Giving What We Can Pledge–mirroring another old-school philanthropic practice–but there are no giving requirements to be publicly listed as a pledger. Tracking the full influence of EA’s philosophy is tricky, but 80,000 Hours has estimated that $46 billion was committed to EA causes between 2015 and 2021, with donations growing about 20% each year. GiveWell calculates that in 2021 alone, it directed over $187 million to malaria nets and medication; by the organization’s math, that’s over 36,000 lives saved.
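A rough back-of-the-envelope check of the cost per life implied by those figures–an illustrative calculation using only the numbers above, not GiveWell’s published cost-effectiveness models, which are considerably more detailed:

\[
\frac{\$187{,}000{,}000}{36{,}000\ \text{lives saved}} \approx \$5{,}200\ \text{per life}
\]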

That kind of accounting is much harder with longtermist causes such as biosecurity or “AI alignment,” a set of efforts aimed at ensuring that the power of AI is harnessed to benefit humanity. “The things that matter most are the things that have long-term impact on what the world will look like,” Bankman-Fried said in an interview earlier this year. “There are trillions upon trillions of people who are yet to be born.”

Bankman-Fried’s views are influenced largely by longtermism’s utilitarian calculations, which flatten lives into single units of value. By this math, the trillions of human beings yet to be born carry more moral weight than the billions alive today. Any threats that could prevent future generations from reaching their full potential–either through extinction or through technological stagnation, which MacAskill deems equally dire in his new book, What We Owe the Future–are priority number one.

In his book, MacAskill discusses his own journey from longtermism skeptic to true believer and urges others to follow the same path. He outlines the existential risks he sees looming: “The future could fall to authoritarians who use surveillance to lock in their ideology for ever, or to AI systems that seek power rather than to promote a thriving society. Or there could be no future at all: we could kill ourselves off with biological weapons or wage an all-out nuclear war that causes civilisation to collapse and never recover.”

It was to help guard against these exact possibilities that Bankman-Fried created the FTX Future Fund this year as a project within his philanthropic foundation. Its focus areas include “space governance,” “artificial intelligence,” and “empowering exceptional people.” The fund’s website acknowledges that many of its bets “will fail.” (Its primary goal for 2022 is to test new funding models, but the fund’s site does not establish what “success” may look like.) As of June 2022, the FTX Future Fund had made 262 grants and investments, with recipients including a Brown University academic researching long-term economic growth, a Cornell University academic researching AI alignment, and an organization working on legal research around AI and biosecurity (which was born out of Harvard Law’s EA group).

Sam Bankman-Fried, one of the world’s richest crypto executives, is also one of the country’s largest political donors. He plans to give away up to $1 billion by the end of 2022.

COINTELEGRAPH VIA WIKIMEDIA COMMONS

Bankman-Fried is hardly the only tech billionaire pushing forward longtermist causes. Open Philanthropy, the EA charitable organization funded primarily by Moskovitz and Tuna, has directed $260 million to addressing “potential risks from advanced AI” since its founding. Together, the FTX Future Fund and Open Philanthropy supported Longview Philanthropy with more than $15 million this year before the organization announced its new Longtermism Fund. Vitalik Buterin, one of the founders of the blockchain platform Ethereum, is the second-largest recent donor to MIRI, whose mission is “to ensure [that] smarter-than-human artificial intelligence has a positive impact.”

MIRI’s donor list also includes the Thiel Foundation; Ben Delo, cofounder of crypto exchange BitMEX; and Jaan Tallinn, one of the founding engineers of Skype, who is also a cofounder of Cambridge’s Centre for the Study of Existential Risk (CSER). Elon Musk is yet another tech mogul dedicated to fighting longtermist existential risks; he’s even claimed that his for-profit operations–including SpaceX’s mission to Mars–are philanthropic efforts supporting humanity’s progress and survival. (MacAskill has recently expressed concern that his philosophy is getting conflated with Musk’s “worldview.” However, EA aims for an expanded audience, and it seems unreasonable to expect rigid adherence to the exact belief system of its creators.)

Criticism and change

Even before the foregrounding of longtermism, effective altruism had been criticized for elevating the mindset of the “benevolent capitalist” (as philosopher Amia Srinivasan wrote in her 2015 review of MacAskill’s first book) and emphasizing individual agency within capitalism over more foundational critiques of the systems that have made one part of the world wealthy enough to spend time theorizing about how best to aid the rest.

EA’s earn-to-give philosophy raises the question of why the wealthy should get to decide where funds go in a highly inequitable world–especially if they may be extracting that wealth from employees’ labor or the public, as may be the case with some crypto executives. Farhad Ebrahimi, founder and president of the Chorus Foundation, believes that people rarely amass great wealth without others paying a price for it.

Many of the foundation’s grantees are groups led by people of color, and it is what’s known as a spend-down foundation; in other words, Ebrahimi says, Chorus’s work will be successful when its funds are fully redistributed.

EA’s earn-to-give philosophy raises the question of why the wealthy should get to decide where funds go.

Ebrahimi objects to EA’s approach of supporting targeted interventions rather than endowing local organizations to define their own priorities: “Why wouldn’t you want to support having the communities that you want the money to go to be the ones to build economic power? That’s an individual saying, ‘I want to build my economic power because I think I’m going to make good decisions about what to do with it’ … It seems very ‘benevolent dictator’ to me.”

Effective altruists would respond that their moral obligation is to fund the most demonstrably transformative projects as defined by their framework, no matter what else is left behind. In an interview in 2018, MacAskill suggested that in order to recommend prioritizing any structural power shifts, he’d need to see “an argument that opposing inequality in some particular way is actually going to be the best thing to do.”

A man in a suit gives money to a robot while homeless men beg for help in the background.

VICTOR KERLOW

But when a small group of individuals with similar backgrounds has determined the formula for the most critical causes and “best” solutions, the unbiased rigor EA is known for comes into question. While the top nine charities featured on GiveWell’s website today work in developing nations with communities of color, the EA community stands at 71% male and 76% white, with the largest percentage living in the US and the UK, according to a 2020 survey by the Centre for Effective Altruism (CEA).

This may not be surprising, given that the philanthropic community at large has long been criticized for homogeneity. But some research shows that charitable giving in the US is growing more diverse, which puts EA’s breakdown in a new light. A 2012 report by the W. K. Kellogg Foundation found that both Asian-American and Black households gave away a larger percentage of their income than white households. Research from the Indiana University Lilly Family School of Philanthropy found in 2021 that 65% of Black households and 67% of Hispanic households surveyed donated charitably on a regular basis, along with 74% of white households. EA’s sales pitch doesn’t seem to be reaching these diverse donors.

And while EA’s advocates claim that its approach is data driven, the movement’s calculations defy the tech industry’s own best practices for dealing with data. “This assumption that we’re going to calculate the single best thing to do in the world–have all this data and make these decisions–is so similar to the issues that we talk about in machine learning, and why you shouldn’t do that,” says Timnit Gebru, a leader in AI ethics and the founder and executive director of the Distributed AI Research Institute (DAIR), which centers diversity in its AI research.

Ethereum cofounder Vitalik Buterin is the second-largest recent donor to Berkeley’s Machine Intelligence Research Institute, whose mission is “to ensure [that] smarter-than-human artificial intelligence has a positive impact.”

JOHN PHILLIPS/GETTY IMAGES VIA WIKIMEDIA COMMONS

Gebru and others have written extensively about the dangers of leveraging data without undertaking deeper analysis and making sure it comes from diverse sources. In machine learning, that failure leads to dangerously biased models.

The research that EA’s assessments rely on may also be flawed or subject to change; a 2004 study that elevated deworming–distributing drugs for parasitic infections–to one of GiveWell’s top causes has come under serious fire, with some researchers claiming to have debunked it while others have been unable to replicate the results that suggested it would save huge numbers of lives. Despite the uncertainty surrounding this intervention, GiveWell directed more than $12 million to deworming charities through its Maximum Impact Fund this year.

The voices of dissenters are growing louder as EA’s influence spreads and more money is directed toward longtermist causes. Luke Kemp, a researcher at CSER who studies existential risk, believes that the community’s growing focus on longtermism reflects a narrow, minority perspective, and he has been disappointed by the lack of diversity of thought and leadership he has found in the field. Last year, he and his colleague Carla Zoe Cremer wrote and circulated a preprint titled “Democratizing Risk” about the community’s focus on the “techno-utopian approach”–which assumes that pursuing technology to its maximum development is an undeniable net positive–to the exclusion of other frameworks that reflect more common moral worldviews. “There are a few key funders who hold a particular ideology and choose to support the ideas that resonate most with them,” Kemp says. “To get funding and move up in the hierarchy, you have to speak the language.”

Longtermism sees history as a forward march toward inevitable progress.

Even the basic concept of longtermism, according to Kemp, has been hijacked from legal and economic scholars in the 1960s, ’70s, and ’80s, who were focused on intergenerational equity and environmentalism–priorities that have notably dropped away from the EA version of the philosophy. Indeed, the central premise that “future people count,” as MacAskill says in his 2022 book, is hardly new. The Native American concept of the “seventh generation principle” and similar ideas in indigenous cultures across the globe ask each generation to consider the ones that have come before and will come after. Integral to these concepts, though, is the idea that the past holds valuable lessons for action today, especially in cases where our ancestors made choices that have led to environmental and economic crises.

Longtermism sees history differently: as a forward march toward inevitable progress. MacAskill references the past often in What We Owe the Future, but only in the form of case studies on the life-improving impact of technological and moral development. He points to the abolition of slavery and the Industrial Revolution as examples of how important it is for humanity to continue its progress under the right values. What are the “right values”? MacAskill has a coy approach to articulating them: he argues that “we should focus on promoting more abstract or general moral principles” to ensure that “moral changes stay relevant and robustly positive into the future.”

Notably, worldwide and ongoing climate change, which already affects the under-resourced more than the elite, is not a core longtermist cause, as philosopher Émile P. Torres points out in their critiques. While it poses a threat to millions of lives, longtermists argue, it probably won’t wipe out all of humanity; those with the wealth and means to survive can carry on fulfilling our species’ potential. Tech billionaires like Thiel and Larry Page already have plans and real estate in place to ride out a climate apocalypse. (MacAskill’s new book identifies climate change as a serious concern for people today, but he considers it an existential threat only in its most extreme forms, in which agriculture would not survive.)

“To come to the conclusion that in order to do the most good in the world you have to work on artificial general intelligence is very strange.”

Timnit Gebru

The final mysterious feature of EA’s version of the long view is how its logic ends up in a specific list of technology-based, far-off threats to civilization that just happen to align with many of the original EA cohort’s areas of research. “I am a researcher in the field of AI,” Gebru says, “but to come to the conclusion that in order to do the most good in the world you have to work on artificial general intelligence is very strange. It’s like trying to justify the fact that you want to think only about science fiction, but you don’t want to think about real people, real problems, and current structural issues. You want to justify how you want to pull billions of dollars into that while people are starving.”

Some EA leaders seem aware that criticism and change are key to expanding the community and strengthening its impact. MacAskill and others have made it explicit that their calculations are estimates (“These are our best guesses,” MacAskill offered on a 2020 podcast episode) and said they’re eager to improve through critical discourse. Both CEA and GiveWell have pages titled “Our Mistakes” on their websites. In June, CEA hosted a contest inviting critiques on the EA forum, and the Future Fund has awarded prizes of up to $1.5 million for critical perspectives on AI. “We recognize that the problems EA is trying to solve are really, really large, and we don’t have any hope of solving them using only a small portion of people,” Julia Wise, CEA community liaison and GiveWell board member, says about EA’s diversity statistics. “We need the talents that lots of different kinds of people can bring to address these worldwide problems.” Wise also spoke on the topic at the 2020 EA Global Conference, and she actively discusses inclusion and community power dynamics on the CEA forum.

The Centre for Effective Altruism supports a mentorship program for women and nonbinary people (founded, incidentally, by Carrick Flynn’s wife) that Wise says is expanding to other underrepresented groups in the EA community, and CEA has made an effort to facilitate conferences in more locations worldwide to welcome a more geographically diverse group. But these efforts appear to be limited in scope and impact; CEA’s public-facing page on diversity and inclusion hasn’t even been updated since 2020. It may be too late for the movement to change its DNA, as tech-utopian longtermist tenets take a front-row seat in EA’s rocket ship and a few billionaire donors chart its course into the future.

Politics and the future

Despite the sci-fi sheen, effective altruism today is a conservative project, consolidating decision-making behind a technocratic belief system and a small set of individuals, potentially at the expense of local and intersectional visions for the future. EA’s successes and community were built around clear methodologies, which may not translate into the more nuanced political arena that some EA leaders and a few large donors are now pushing into. According to Wise, the community remains divided on how to pursue EA’s goals there; some dissenters believe politics is too polarized a venue for bringing about effective change.

But EA isn’t the only charitable movement looking to political action to change the world; the philanthropic community at large has been seeking more impact through politics. Philanthropy must deal with an existential political crisis, argues Inside Philanthropy’s Callahan, or many of its other goals will be difficult to achieve. (He’s using a different definition of “existential” from MacAskill’s.) EA may provide a guideline for giving charitably, but the political arena is more complex: there is no simple way, Callahan says, to gain political power or change politics. “And Sam Bankman-Fried has so far demonstrated himself not the most effective political giver.”

Bankman-Fried has articulated his own political giving as “more policy than politics,” and has donated primarily to Democrats through his short-lived Protect Our Future PAC (which backed Carrick Flynn in Oregon) and the Guarding Against Pandemics PAC (which is run by his brother Gabe and publishes a cross-party list of its “champions” to support). Ryan Salame, co-CEO of FTX Digital Markets, funded his own PAC, American Dream Federal Action, which focuses mainly on Republican candidates. (Bankman-Fried has said Salame shares his passion for preventing pandemics.) Guarding Against Pandemics and the Open Philanthropy Action Fund (Open Philanthropy’s political arm) spent more than $18 million to get an initiative on the California state ballot this fall to fund pandemic research and action through a new tax.

So while longtermist funds are certainly making waves behind the scenes, Flynn’s primary loss in Oregon may signal that EA’s more visible electoral efforts need to draw on new and diverse strategies to win over real-world voters. Vanessa Daniel, founder and former executive director of Groundswell, one of the largest funders of the US reproductive justice movement, believes that big donations and 11th-hour interventions will never rival grassroots organizing in making real political change. “Slow and patient organizing led by Black women, communities of color, and some poor white communities created the tipping point in the 2020 election that saved the country from fascism and allowed some window of opportunity to get things like the climate deal passed,” she says. And Daniel rejects the notion that data and metrics belong only to wealthy, white, male-led approaches. “I’ve spoken to many donors who believe that grassroots organizing is like planting magic beans and expecting them to grow,” she says. That’s not true, she argues: “The data is right before us. And it doesn’t require the collateral damage of millions of people.”

Open Philanthropy, the EA charitable organization funded primarily by Dustin Moskovitz and Cari Tuna, has directed $260 million to addressing “potential risks from advanced AI” since its founding.

COURTESY OF ASANA

The question now is whether the culture of EA will allow the community and its major donors to learn from such lessons. In May, Bankman-Fried admitted in an interview that there are a few takeaways from the Oregon loss, “in terms of thinking about who to support and how much,” and that he sees “decreasing marginal gains from funding.” In August, after distributing a total of $24 million over six months to candidates supporting pandemic prevention, Bankman-Fried appeared to have shut down funding through his Protect Our Future PAC, perhaps signaling an end to one political experiment. (Or maybe it was just a pragmatic belt-tightening after the serious and sustained downturn in the crypto market, the source of Bankman-Fried’s immense wealth.)

Other members of the EA community took different lessons from Flynn’s campaign. On the forum at Effectivealtruism.org, Daniel Eth, a researcher at the Future of Humanity Institute, posted a lengthy postmortem of the race, expressing surprise that the candidate couldn’t win over the general audience when he seemed “unusually selfless and intelligent, even for an EA.”

But apart from suggesting that future candidates vote more regularly and spend more time in their districts, Eth didn’t encourage radically new strategies for the next run. Instead, he proposed that EA double down on its current approach: “Politics might somewhat reduce our typical epistemics and rigor. We should guard against this.” Members of the EA community contributing to the 93 comments on Eth’s post offered their own opinions, with some supporting Eth’s analysis, others urging lobbying over electioneering, and still others expressing frustration that effective altruists are funding political efforts at all. At this rate, political causes seem unlikely to make it to the top of GiveWell’s list.

Money is a powerful tool for moving mountains, and as EA expands its platforms and draws more funding from tech industry insiders, the wealth of a few billionaires may well continue to lift up pet EA causes. But EA leaders might find that their political strategies don’t resonate with people living with current, local problems like food insecurity and insufficient housing. EA’s tech industry and academic roots–as a philosophical plan for distributing inherited and institutional wealth–may have helped the movement get this far, but those same roots are unlikely to support its ambitions for expanding its influence.

Rebecca Ackermann is a writer and artist in San Francisco.
