Arg! This post by Matthew Yglesias exhibits perfectly the problem I have with EA. I was first triggered by this:
The classic argument here is that if you were walking down the road and saw a child drowning in a pond, you’d jump in and try to save him. And you’d do that even if you happened to be wearing a nice shirt that would be ruined by the water because saving a child is more important than a nice shirt. So how can you in good conscience be walking around buying expensive shirts when you could be giving the money to save lives in poor countries?
So, this rhetoric completely ignores the “moral scope” — scare quotes because I’m using my own private language here. A child drowning right in front of you, and the shirt that’s literally touching your skin, are very local, whereas people living & dying in other countries are very non-local. It’s a false equivalence. But then Matthew rescues his rhetoric later:
We are not 100 percent bought in on the full-tilt EA idea that you should ignore questions of community ties and interpersonal obligation, so we also give locally in D.C.
EA’s tendency to think long-term and broadly is fantastic. However, a tendency I’ve noticed in my professional work as a simulationist [1,2] is that the effort to generalize/universalize tends to also flatten the discourse. At work, the primary offenders are mathematical modelers. I don’t need to lay it all out here, because you can simply think of Maslow’s hammer: to a person with a hammer, everything looks like a nail. Systems of differential equations are like a well-worn hammer that fits your hand so well you forget it’s even there. You are the hammer. The hammer is you. You forget that the hammer is not you. I.e., you flatten the scope.
EA does this with AI risk and people in the far-flung future as well as with currency-based donations to people in far-flung places. That’s what currencies allow us to do. That’s what “currency” means. But currency is not a crisp replacement for inter-scope interactions. We see this all the time with huge donations and, e.g., pallets full of cash shipped to places like Iraq after we destroyed their communities.
Pfffft. Forgive me if I’m skeptical of all the highfalutin’ yammering about “consequentialism” when the actual, richly detailed hairball network ecology of consequences is ham-handedly squashed down to money by the “earn to give” mantra. Anyway, Yglesias rescues his rhetoric there. But then, when he fawns all over a billionaire who extractively sucks money from unwitting MLM victims in the cryptocurrency game and then applies that money to pandemic prevention, Yglesias returns to wallow in the flattened discourse again. Yeah, that’s the ticket. Suck money out of all those idiots trading shitcoins so you can (arrogantly) redistribute it to your pet causes. That’s ethical, right? That’s consequentialist, right?
Now, OK. I’m being harsh. I actually do believe there are valid use cases for distributed ledgers ([sigh] “blockchains”). I still have hope for distributed storage and distributed computing. But currency ain’t it. I suspect (proof-of-stake) cryptocurrency will be part of our future in some lasting sense. But it won’t be what the crypto crowd says it will be … especially not what the exploitative crypto-billionaires say it will be.
But that’s the problem with EA. In the spirit of greenwashing, EA is altruism-washing. They’ve bought into their own bullshit. And while bullshitting other people is bad, bullshitting yourself is Evil.
1. Ören, T. “The Many Facets of Simulation through a Collection of about 100 Definitions,” 2011. [SCS]
2. “Simulationist Code of Ethics,” NTSA.