More Dakka
Epistemic Status: Hopefully enough Dakka
Eliezer Yudkowsky's book Inadequate Equilibria is excellent. I recommend reading it, if you haven't done so. Three recent reviews are Scott Aaronson's, Robin Hanson's (which inspired You Have the Right to Think and a great discussion in its comments) and Scott Alexander's. Alexander's review was an excellent summary of key points, but like many readers he found the last part of the book, ascribing much modesty to status and prescribing how to learn when to trust yourself, less convincing.
My posts, including Zeroing Out and Leaders of Men, have been attempts to extend the last part, offering additional tools. Daniel Speyer offers good concrete suggestions as well. My hope here is to offer both another concrete path to finding such opportunities, and additional justification of the central role of social control (as opposed to object-level concerns) in many modest actions and modesty arguments.
Eliezer uses several examples of civilizational inadequacy. Two central examples are the failure of the Bank of Japan and later the European Central Bank to print sufficient amounts of money, and the failure of anyone to try treating seasonal affective disorder with sufficiently intense artificial light.
In a MetaMed case, a patient suffered from a disease with a well-known reliable biomarker and a safe treatment. In studies, the treatment improved the biomarker linearly with dosage. Studies observed that sick patients whose biomarkers reached healthy levels experienced full remission. The treatment was fully safe. No one tried increasing the dose enough to reduce the biomarker to healthy levels. If they did, they never reported their results.
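To make the arithmetic concrete, here is a minimal sketch of the extrapolation no one performed. The case is anonymized, so every number below is invented purely for illustration:

```python
# Hypothetical numbers only -- the original case names no drug, doses,
# or biomarker values. Assumes the linear dose-response seen in studies.
baseline = 100.0      # biomarker level in untreated patients
healthy = 20.0        # biomarker level observed in healthy people
drop_per_unit = 10.0  # observed improvement in biomarker per unit of dose
studied_max = 4.0     # highest dose anyone studied

# Dose the linear trend implies is needed to reach healthy levels:
needed_dose = (baseline - healthy) / drop_per_unit  # 8.0 units

print(f"Studies stopped at {studied_max} units; the trend line says "
      f"{needed_dose} units would bring the biomarker to healthy levels.")
```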
In his excellent post Sunset at Noon, Raymond points out Gratitude Journals:
"Rationalists obviously don't *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. And no one I know has even seriously tried it. Do literally *none* of these people care about their own happiness?"
"Huh. Do *you* keep a gratitude journal?"
"Lol. No, obviously."
- Some Guy at the Effective Altruism Summit of 2012
Gratitude journals are awkward interventions, as Raymond found, and we need to tweak the details to make the practice our own, or it won't work. But the active ingredient, gratitude, obviously works and is freely available. Remember the last time someone expressed gratitude to you and it made your day worse? Remember the last time you expressed gratitude to someone else, or felt gratitude about someone or something, and it made your day worse?
In my experience it happens approximately zero times. Gratitude just works, unmistakably. I once sent a single gratitude letter. It increased my baseline well-being. Then I didn't write more. I do try to remember to feel gratitude, and express it. That helps. But I can't think of a good reason not to do that more, or for anyone I know not to do it more.
In all four cases, our civilization has (it seems) correctly found the solution. We've tested it. It works. The more you do, the better it works. There's probably a level where side effects would happen, but there's no sign of them yet.
We know the solution. Our bullets work. We just need more. We need More (and better) (metaphorical) Dakka - rather than firing the standard number of metaphorical bullets, we need to fire more, absurdly more, whatever it takes until the enemy keels over dead.
And then we decide we're out of bullets. We stop.
If it helps but doesn't solve your problem, perhaps you're not using enough.
I
We don't use enough to find out how much enough would be, or what bad things it might cause. More Dakka might backfire. It also might solve your problem.
Japan's economy didn't have enough money. The Bank of Japan printed some. It helped a little. They could have kept printing more money until printing more money either solved their problem or started to cause other problems. They didn't.
Yes, some countries printed too much money and very bad things happened, but no country printed too much money because it wanted more inflation. That's not a thing.
Doctors saw patients suffer for lack of light. They gave them light. It helped a little. They could have tried more light until it solved their problem or started causing other problems. They didn't.
Yes, people suffer from too much sunlight, or spending too long in tanning beds, but those are skin conditions (as far as I know) and we don't have examples of too much of this kind of artificial light, other than it being unpleasant.
Doctors saw patients suffer from a disease in direct proportion to a biomarker. They gave them a drug. It helped a little, with few if any side effects. They could have increased the dose until it either solved the problem or started causing other problems. They didn't.
Yes, drug overdoses cause bad side effects, but we could find no record of this drug causing any bad side effects at any reasonable dosage, or any theory why it would.
People express gratitude. We are told it improves subjective well-being in studies. Our subjective well-being improves a little. We could express more gratitude, with no real downsides. Almost none of us do.
On that note, thanks for reading!
A decision was universally made that enough, despite obviously not being enough, was enough. 'More' was never tried.
This is important on two levels.
II
The first level is practical. If you think a problem could be solved or a situation improved by More Dakka, there's a good chance you're right.
Sometimes a little more is a little better. Sometimes a lot more is a lot better. Sometimes each attempt is unlikely to work, but improves your chances.
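That last case is just the arithmetic of independent attempts. A toy illustration, with an invented per-attempt success chance:

```python
# If each independent attempt succeeds with probability p, then n attempts
# succeed at least once with probability 1 - (1 - p)**n. The 5% here is
# invented for illustration.
p = 0.05
for n in (1, 5, 20, 50):
    print(f"{n:2d} attempts -> {1 - (1 - p) ** n:.3f} chance of success")
# 1 -> 0.050, 5 -> 0.226, 20 -> 0.642, 50 -> 0.923
```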
If something is a good idea, you need a reason to not try doing more of it.
No, seriously. You need a reason.
The second level is that 'do more of what is already working and see if it works more' is as basic as it gets. If we can't reliably try that, we can't reliably try anything. How could you ever say 'If that worked, someone would have tried it'?
You can't. If no one says they tried it, probably no one tried it. There might be good reasons not to try it. There also might not. There'd still be a good chance no one tried it.
There's also a chance someone did try it and isn't reporting the results anywhere you can find. That doesn't mean it didn't work, let alone that it can never work.
III
Why would this be an overlooked strategy?
It sounds crazy that it could be overlooked. It's overlooked.
Eliezer gives three tools for recognizing where systems fail, built on highly useful economic arguments I recommend applying frequently:
1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;
2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and
3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.
In these cases, I do not think such explanations are enough.
If the Bank of Japan didn't print more money, that implies the Bank of Japan wasn't sufficiently incentivized to hit their inflation target. They must have been maximizing primarily for prestige instead. I can buy that, but why didn't they think the best way to do that was to hit the inflation target? Alexander's suggested payoff matrix, where printing more money makes failure much worse, isn't good enough. It can't be central on its own. The answer was too clear, the payoff worth the odds, and they had the information, as I detail later.
Eliezer gives the model of researchers looking for citations plus grant givers looking for prestige as the explanation for why his SAD treatment wasn't tested. I don't buy it. The story doesn't make sense.
If more light worked, you'd get a lot of citations, for not much cost or effort. If you're writing a grant, this costs little money and could help many people. It's less prestigious to up the dosage than to be original, but it's still a big prestige win.
If you say the grant givers only want to associate with high-status researchers, then they won't care about the grant contents, so it reduces to a one-factor market, where again researchers should try this.
Alexander noticed the same confusion on that one.
In the drug dosage case, Eliezer's tools do better. No doctor wants to take the risk of being sued if something goes wrong, no company makes money by funding the study, it's too expensive for a grant, and trying it on your own feels too risky. Maybe. It still does not feel like enough. The paths forward are too easy, too cheap, the payoff too large and obvious. Even one wealthy patient could break through, and it would be worth it. Yet even our patient, as far as we know, never tried it and certainly didn't report back.
The gratitude case doesn't fit the three modes at all.
IV
Here is my model. I hope it illuminates when to try such things yourself.
Two key insights here are The Thing and the Symbolic Representation of The Thing, and Scott Alexander's Concept-Shaped Holes Can Be Impossible To Notice. Both are worth reading, in that order.
I'll summarize the relevant points.
The standard amount of something, by definition, counts as the symbolic representation of the thing. The Bank of Japan 'printed money.' The standard SAD treatment 'exposes people to light.' Our patients' doctors prescribed 'standard drug.' Today, various people 'left with plenty of time,' 'came up with a plan,' 'were part of a community,' 'ate pizza,' 'listened to the other person,' 'focused on their breath,' 'bought nipple tops for the baby's bottles,' 'did their job' and 'added salt and pepper.'
They got results. A little. Better than nothing. But much less than was desired.
The Bank of Australia printed enough money. Eliezer Yudkowsky exposed his wife to enough light. Our patient was told to take enough of the drug to actually work. Meanwhile, other people actually left with plenty of time, actually came up with a workable plan, actually were part of a community, ate real pizza, actually listened to another person, actually focused on their breath, bought enough nipple tops for the baby's bottles, actually did their job, and added copious amounts of sea salt and freshly ground pepper.
Some of these are about quality rather than quantity. You could also think of that as a bigger quantity of effort, or willingness to pay more money or devote more time. Still, it's worth noting that an important variant of 'use more,' 'do more' or 'do more often' is 'do it better.'
Being part of that second group is harder than it looks:
You need to realize the thing might exist at all.
You need to realize the symbolic representation of the thing isn't the thing.
You need to ignore the idea that you've done your job.
You need to actually care about solving the problem.
You need to think about the problem a little.
You need to ignore the idea that no one could blame you for not trying.
You need to not care that what you're about to do is unusual or weird or socially awkward.
You need to not care that what you're about to do might be high status.
You need to not care that what you're about to do might be low status.
You need to not care that what you're about to do might not work.
You need to not be concerned that what you're about to do might work.
You need to not care that what you're about to do might backfire.
You need to not care that what you're about to do is immodest.
You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.
You need to not care about the implicit accusation you're making against everyone who didn't try it.
You need to not care that what you're about to do might be wasteful. Or inappropriate. Or weird. Or unfair. Or morally wrong. Or something.
Why is this list getting so long? What is that answer of 'don't do it' doing at the bottom of the page?
V
Long list is long. A lot of items are related. Some will be obvious, some won't be. Let's go through the list.
You need to realize the thing might exist at all.
One cannot do better unless one realizes it might be possible to do better. Scott gives several examples of situations in which he doubted the existence of the thing.
You need to realize the symbolic representation of the thing isn't the thing.
Scott gives several examples where he thought he knew what the thing was, only to find out he had no idea; what he thought was the thing was actually a symbolic representation, a pale shadow. If you think having a few friends is what a community is, it won't occur to you to seek out a real one.
You need to ignore the idea that you've done your job.
There was a box marked 'thing'. You've checked that box off by getting the symbolic version of the thing. It's easy to then think you've done the job and are somehow done. Even if you're doing this for yourself or someone you care about, there's this urge to get to 'job done' and 'quest complete', and not think about the details. You need to realize you're not doing the job so you can say you've done the job, or so you can tell yourself you've done the job. Even if you didn't get what you wanted, you got the right to tell a story, to yourself and others, that you tried to get it. Right?
You need to actually care about solving the problem.
You're doing the job so the job gets done. That's why doing the symbolic version doesn't mean you're done. Often people don't care much about solving the problem. They care whether they're responsible. They care whether socially appropriate steps have been taken.
You need to ignore the idea that no one could blame you for not trying.
Alexander notes how important this one is, and it's really big.
People often care primarily about doing that which no one could blame them for. Being blamed or scapegoated is really bad. Even self-blame! We instinctively fear that someone will discover and expose us, and that we'll make ourselves feel bad. We cover up the evidence and create justifications. Doing the normal thing means no one can blame you. If you don't grasp that this is a thing, read as much of Atlas Shrugged as needed until you grasp it. It should only take a chapter or two, but this idea alone is worth a thousand-page book to get, if that's what it takes. I'm not kidding.
Blame does happen, so the real incentive here is big. The incentive people think they have to avoid blame, even when the chance of being blamed is minimal, is much, much bigger.
You need to think about the problem a little.
People don't like thinking.
You need to not care that what you're about to do is unusual or weird or socially awkward.
There's a primal fear of doing anything unusual or weird. More would be unusual and weird. It might be slightly socially awkward. You'd never know until it actually was awkward. That would be just awful. Can't have that. No one is watching or cares, but some day someone might find you out and expose you as no good. We go around being normal, only guessing which slightly weird things would get us in trouble, or would force us to get someone else in trouble, so we try to do none of them. That's what happens when we don't operate on object-level causal models, full of gears, about what will work.
You need to not care that what you're about to do might be high status.
Doing or trying to do something high status is to claim high status. Claiming status you're not entitled to is a good way to get into a lot of trouble. Claiming to usefully think, or to know something, is automatically high status. Are you sure you have that right?
You need to not care that what you're about to do might be low status.
Your status would go down. That's even worse. If it's high status you lose, if it's low status you also lose, and you don't even know which one it is since no one does it! Might even be both. Better to leave the whole thing alone.
You need to not care that what you're about to do might not work.
Failing is just awful. Even things that are supposed to mostly fail. Even getting ludicrous odds. Only narrow, explicitly permitted exceptions exist, and they shrink each year. Otherwise we must, must succeed, or nothing we do will ever work and everyone will know that. I founded a company once*. It didn't work. Now everyone knows rationalists can't found companies. Shouldn't have tried.
* - Well, three times.
You need to not be concerned that what you're about to do might work.
Even worse, it might work. Then what? No idea. Does not compute. You'd have to keep doing the weird thing, or advocate for the weird thing. How weird would that be? What about the people you'd prove wrong? What would you even say?
You need to not care that what you're about to do might backfire.
It might not only not work, it might have real consequences. That's a thing. Can't think of why that might happen. Every brainstormed risk seems highly improbable and not that big a deal. But why take that risk?
You need to not care that what you're about to do is immodest.
By modesty, anything worth thinking of has been thought of. Anything worth trying has been tried, anything worth doing done. Ignore that there's a first time for everything. Who are you to claim there's something worth trying? Who are you to claim you know better than everyone else? Did you not notice all the other people? Are you really high status enough to claim you know better than all of them? Let's see that hero license of yours, buster. Object-level claims are status claims!
You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.
The world won't let you get away with that. It will make this blow up in your face. And laugh. At you. People know this. They'll instinctively join the conspiracy making it happen, coordinating seamlessly. The alternative is thinking for themselves, or letting other people think for themselves, rather than playing imitation games. Unthinkable. Let's scapegoat someone and reinforce norms instead.
You need to not care about the implicit accusation you're making against everyone who didn't try it.
You're not only calling them wrong. You're saying the answer was in front of their face the whole time. They had an obvious solution and didn't take it. You're telling them they didn't have a good reason for that. They're gonna be pissed.
You need to not care that what you're about to do might be wasteful. Or inappropriate. Or unfair. Or low status. Or lack prestige. Or be morally wrong. Or something. There's gotta be something!
The answer is right there at the bottom of the page. This isn't done, so don't do it. Find a reason. If there isn't a good one, go with what you got. Flail around as needed.
That's what the Bank of Japan was actually afraid of. Nothing. A vague feeling they were supposed to be afraid of something, so they kept brainstorming until something sounded plausible.
Printing money might mean printing too much! The opposite is true. Not printing money now means having to print even more later, as the economy suffers.
Printing money would destroy their credibility! The opposite is true. Not printing money destroyed their credibility.
People don't like it when we print too much money! The opposite is true. Everyone was yelling at them to print more money.
The markets don't like it when we print too much money! The opposite is true. We have real-time data. The Nikkei goes up on talk of printing money, down on talk of not printing money, and goes wild on actual unexpected money printing. It's almost as if the market thinks printing money is awesome and has a rational expectations model. The bond market? Rising interest rates? Not a peep.
Printing money wouldn't be prestigious! It would hurt bank independence! The opposite is true. Not printing money forced Prime Minister Shinzo Abe to threaten them into printing more money. They were seen as failures. Everyone respects the Bank of Australia because they did print more money.
This same vague fear, combined with trivial inconveniences, is what stops the other solutions, too.
Not only are these trivial fears that shouldn't stop us, they're not even things that would happen. When you try the thing, almost nothing bad of this sort ever happens at all.
At all. These are low risks of shockingly mild social disapproval. Ignore them.
These worries aren't real. They're in your head.
They're in my head, too. The voice of Pat Modesto is in your head. It is insidious. It says whatever it has to. It lies. It cheats. It is the opposite of useful.
If someone else has these concerns, the concerns are in their head, whispering in their ear. Don't hold it against them. Help them.
Some such worries are real. They can point to real costs and benefits. Check! But they're mostly trying to halt thinking about the object level, to keep you from being the nail that sticks up and gets hammered down. When someone else raises them, mostly they're the hammer. The fears are mirages we've been trained and built to see.
You don't have that problem, you say? Great! Other people do have that problem. Sympathize and try to help. Otherwise, keep doing what you're doing, only more so. And congratulations.
VI
My practical suggestion is that if you do, buy, or use a thing, and it seems like that was a reasonable thing to do, you should ask yourself:
Can I do more of this? Can I do this better? Put in more effort, more time and/or more money? Might that do the job better? Could that be a good idea? Could that be worth it? How much more? How much better?
Make a quick object-level model of what would happen. See what it looks like. Discount your chances a little if no one does it, but only a little. Maybe half, tops. Less if those who succeeded wouldn't say anything. In some cases, the thing you're about to try is actually done all the time, but no one talks about it. If you suspect that, definitely try it.
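As a toy version of that kind of quick model (every number below is made up for illustration):

```python
# A minimal expected-value sketch with invented numbers -- the post names
# no specific costs or benefits. "Discount your chances a little if no
# one does it, but only a little. Maybe half, tops."
base_chance = 0.30   # your object-level estimate that More Dakka works
discount = 0.5       # discount because apparently no one has tried it
cost = 1.0           # cost of trying (time, money, awkwardness)
benefit = 20.0       # value if it works

ev = base_chance * discount * benefit - cost
print(f"Expected value of trying: {ev:+.1f}")  # +2.0 -> you're getting odds; try
```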
You'll hear the voice. This isn't done. There must be a reason. When you hear that, get excited. You might be on to something.
If you're getting odds to try, try. Use the try harder, Luke! You can do this. Pull out More Dakka.
It's also worth looking back on things you've done in the past and asking the same question.
I've linked several times to the Challenging the Difficult sequence, but none of this need be difficult. Often all that's needed, but never comes, is an ordinary effort.
The bigger picture point is also important. These are the most obvious things. Those bad reasons stop essentially everyone from trying things that cost little, on any level, carry little risk, on any level, and offer huge benefits. For other things, they stop almost everyone. When someone does try them and reports back that it worked, they're ignored.
Something possibly being slightly socially awkward, or causing a likely nominal failure, acts as a veto. Rationalizations for this are created as needed.
Add that to the economic model of inadequate equilibria, and the fact that almost no one got as far as even considering the idea, and is it any wonder that you can beat 'consensus' by thinking of and trying object-level things?
Why wouldn't that work?