Of course, the economy isn’t really much like a game of prisoner’s dilemma, but some of the outlines are the same: you make a transaction, you both win; you don’t make a transaction, you both lose (at least this time around); if you pay someone and they don’t hand over the goods or services, they win and you lose; if you convince them to hand over the goods/services and then don’t pay, you win and they lose. Economic theories have been built from way more distant analogies than that.
A single instance of the prisoner’s dilemma isn’t such a big deal. Where things get interesting is when you have lots of prisoners and lots of interactions. Way back in 1981, Robert Axelrod’s The Evolution of Cooperation established that a simple tit-for-tat model of cooperation and defection was more effective than almost any other algorithm for behavior in repeated prisoner’s dilemma interactions. If you ran a generational model where the most successful algorithms reproduced and the least successful ones were culled, pretty soon almost the whole population turned into tit-for-tat cooperators. And because tit-for-tat always cooperated on the first turn, the payoffs across the whole population were uniformly positive. (Or in simple-minded economic terms, when everybody does business with everybody else, the economy prospers.) Instead of nature (and society) being red in tooth and claw, it was plausibly hardwired for cooperation to emerge from chaos.
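The generational model is simple enough to sketch in a few dozen lines. This isn’t Axelrod’s actual tournament code — the payoff values (3/0/5/1) are the standard ones for the game, but the population size, number of rounds, and top-half-reproduces rule are assumptions made purely for illustration:

```python
# Payoff matrix for one round of the prisoner's dilemma:
# (my move, their move) -> my payoff; C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """Defect unconditionally."""
    return 'D'

def play(strat_a, strat_b, rounds=50):
    """Return total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

def generation(population):
    """Round-robin tournament; the top half reproduces, the rest are culled."""
    scores = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = play(population[i], population[j])
            scores[i] += si
            scores[j] += sj
    ranked = [s for _, s in sorted(zip(scores, population),
                                   key=lambda pair: pair[0], reverse=True)]
    top = ranked[:len(ranked) // 2]
    return top + top  # winners double, losers disappear

pop = [always_defect] * 10 + [tit_for_tat] * 10
for _ in range(5):
    pop = generation(pop)
print(sum(s is tit_for_tat for s in pop), "tit-for-tat players out of", len(pop))
```

Run it and the defectors are gone within a generation or two: against other cooperators, tit-for-tat racks up the mutual-cooperation payoff every round, while the defectors grind each other down at the mutual-defection payoff.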
Over the next 10 years or so, people with a lot more computing power than Axelrod had available kept looking at prisoner’s dilemma populations, and they found something interesting and disturbing. Cooperation evolves, and then it unevolves. They ran simulations like Axelrod’s for tens of thousands of generations, and found that once cooperators had become an overwhelming majority of the population, algorithms based on defection could take root and eventually wipe out almost the entire cooperating population, so that defectors became the overwhelming majority in turn. Payoffs over the entire population became uniformly negative (defection matched with defection) and stayed that way for arbitrarily long periods, until some random combination of lucky events let enough tit-for-tat cooperators survive to form a critical mass and retake the majority again.
The first half of that sentence is what struck fear into my heart 15-odd years ago, and still does today: Payoffs over the entire population became uniformly negative and stayed that way for arbitrarily long periods. Translated into economic terms, that means you don’t automatically get a recovery after a bad enough crash. Absent government action, economies are supposed to recover from depressions because eventually productive assets become cheap enough that they’re attractive to buy for people who want to put them to work. But that assumes anyone has money to buy things, and when all your asset prices have gone through the floor, then you’re limited to cash on hand…
So we come around to the liquidity trap from a different direction. And obviously we can complicate the algorithms of our cooperators and defectors so that they react to general conditions, or act like bubble speculators or masters of the universe, but the ultimate lesson stays the same: a tanked economy with no one working and no one buying anything is just as stable a state as a prosperous economy with lots of people working and buying.