On Negative Feedback and Simulacra
Response to (Elizabeth @ LessWrong): Negative Feedback and Simulacra
Requires/Assumes (Compass Rose): Simulacra and Subjectivity
Epistemic Status: Exploring, thinking out loud, taking a break from Covid-19 stuff, etc. Long post is long because I didn't have the time or insight to make it shorter. Later I hope to write shorter versions.
Simulacrum levels are very important. If you haven't read the Compass Rose post above, please do so, even if you don't read the rest of this post. At a minimum, read part 2 of Elizabeth's post above, which attempts to summarize the central point. Elizabeth's post provides good, relatively clean examples of a common problem related to simulacrum levels.
The common problem - in fact, the most common problem - is that there is a desirable action X that you wish to take, or an undesirable action Y that you wish to avoid. Unfortunately, taking action X, or avoiding action Y, has undesirable (at least to someone, in some sense) consequence Z. So far, that is every non-trivial decision ever made. The distinction here is that consequence Z takes the form of conveying true information, where that true information conveys, with respect to some aspect or attribute of some person or group, a lower than expected opinion. And that's terrible.
Usually, sharing true information is the opposite of terrible. By default sharing true information (assuming it is new and useful/interesting, or it builds useful common knowledge) is good. What this is not, is a law of nature that says that sharing true and relevant information is automatically and everywhere always good, or is always good for good people or justice.
It sometimes isn't. I've made clear in the past, most notably in Blackmail and Privacy, that I strongly believe that the sharing of the wrong information, at the wrong time, to the wrong person, can in fact sometimes be terrible. Even if everyone is (relatively) well-meaning, sometimes terrible results follow from the sharing of true and useful information. And if someone goes looking for damaging information, and goes looking for damaging formats and times and places to share that information, then you should expect such information to do harm.
It's worth emphasizing how weak this claim is. All it's saying is that sharing information has consequences, sometimes some of those consequences are bad, and sometimes the net consequences will also be bad, for any reasonable view of what is and is not bad.
Levels of Consequence
Those negative consequences can happen on several different simulacra levels. I'm still trying to sort out how I think about simulacra levels, so consider everything I say about them to be a hypothesis rather than a claim. It's confusing, but I think what I have here is less wrong than my previous understanding.

By default, we operate on multiple levels at once. When we are considering crossing the river and someone says "there's a lion across the river," we by default have evidence for all of the following:

Level 1: There is a lion across the river.

Level 2: The person wants us to believe there is a lion across the river, perhaps so we do not cross it.

Level 3: The person is telling us not to cross the river, and/or is attempting to be in a coalition of some sort, to some degree, with others who wish to claim there is a lion across the river (for whatever reason).

Level 4: This person thinks that making this statement will advance their interests in some other way, perhaps indirectly.

Our default is to want to do some amount of Bayesian updating on all four levels. Most of the time, what changes is the primary way to interpret the statement's actual content, and the primary way we should therefore update.

There are two main good reasons to only update on some of the levels. Both come from the same basic source, which is that we already know what is motivating the statement. We might already have sufficiently strong evidence about the object level, or about someone's higher level motivations and situation, such that this new information doesn't change anything. If we can see the lion across the river, someone saying there's a lion makes little difference, even if we know this person would make this statement if and only if they see a lion across the river. If someone is a well-known liar, and their desire to have you cross the river does not depend on whether or not there is a lion, then you can do the same. Or if this person clearly has a higher level reason that screens out the existence of a lion, that would also count. We can similarly find scenarios that screen out higher levels.

But more often, even when everyone is mostly level-1 oriented, we can't do this, because at a minimum we now have evidence against the presence of reasons not to claim that a lion is across the river. There is, for example, additional evidence against someone with a spear standing behind them, threatening to kill them if they claim there is a lion across the river. The same applies to less literal motivations, even for those who would (almost) never lie. Thus, you can and should make additional updates, even if you also now think there is a 99%+ chance there is a lion across the river.

When someone, or a group, has the default of updating primarily on a given simulacrum level when told statements, and is primarily seeking to cause updates on that simulacrum level when they make statements, we can say that person or group is at that simulacrum level. People on one level vary greatly in the extent to which they even realize that the other levels exist, let alone that those other levels could be considered primary.

No, really. There are people, many with great power, whose brains have lost the ability to process claims as being mostly focused on object level reality (they can't see level 1), or even be motivated by attempts to manipulate based on that reality (sometimes they can't see level 2 either).
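To make the 'update on every level at once' machinery concrete, here is a toy sketch. To be clear, every number and label below is invented purely for illustration; nothing like them appears in Elizabeth's post or the Compass Rose post. The point is only that a single statement can carry a nonzero likelihood ratio for hypotheses at all four levels, and that 'screening off' shows up as a likelihood ratio near one.

```python
# Toy sketch only: invented priors and likelihood ratios, purely to illustrate
# updating on all four simulacrum levels at once when someone says
# "there's a lion across the river."

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability via odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical priors before the statement, one hypothesis per level.
priors = {
    "L1: there is a lion":                 0.10,
    "L2: wants us to believe in a lion":   0.20,
    "L3: signaling an anti-crossing side": 0.30,
    "L4: statement serves another agenda": 0.25,
}

# Hypothetical likelihood ratios: how much more likely the statement is if the
# hypothesis is true than if it is false. All exceed 1, so one sentence moves
# every level; a ratio near 1 is what "screening off" looks like (for example,
# when we can already see the lion ourselves).
likelihood_ratios = {
    "L1: there is a lion":                 8.0,
    "L2: wants us to believe in a lion":   5.0,
    "L3: signaling an anti-crossing side": 2.0,
    "L4: statement serves another agenda": 1.5,
}

for hypothesis, prior in priors.items():
    posterior = bayes_update(prior, likelihood_ratios[hypothesis])
    print(f"{hypothesis}: {prior:.2f} -> {posterior:.2f}")
```

Even when the level 1 probability ends up nowhere near certainty, the same utterance shifts levels 2 through 4 as well, which is the sense in which you can and should make additional updates.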
Others, at the opposite extreme, are only at level 1: they react with righteous indignation to level 2 actions, can't process level 4 or sometimes even level 3, and don't realize that there are those capable of ignoring or not caring about level 1, let alone that there are people who can't remember that it exists.

When such people (in both the upper and lower camps) do fully realize that the other levels and types of people exist, they vary greatly in the extent to which they consider those levels legitimate, whether such levels can justify statements, and whether people can or should be blamed for consequences on those levels.

Now it's time to go into the examples from Elizabeth's post.
Example 1: Hot Sauce
In this r/AmITheAsshole post, a person tries some food their girlfriend cooked, likes it, but tries another bite with hot sauce. Girlfriend says this “…insults her cooking and insinuates that she doesn’t know how to cook.”
Elizabeth's take was this:
As objective people not in this fight, we can notice that her cooking is exactly as good as it is whether or not he adds hot sauce. Adding hot sauce reveals information (maybe about him, maybe about the food), but cannot change the facts on the ground. Yet she is treating him like he retroactively made her cooking worse in a way that somehow reflects on her, or made a deliberate attempt to hurt her.
Note that, in the spirit of everything updates us about everything, this action clearly carries information about both the boyfriend's food preferences and the food, in addition to any more abstract implications.

This perspective in Elizabeth's post was (if I am interpreting it correctly) written to be intentionally naive. It represents a Level-1 perspective that denies the existence and/or legitimacy of Level 2-4 actions and statements. Under this ethos, if you take an object-level action designed to improve the world (for example, to try spicing up one's food to enjoy it more) then the last thing you need to worry about is the logical inferences observers might draw from your action about the state of the world. This ethos presumably believes at least one of:

(A) No one should update based on that action if you didn't intend them to, and they are to blame if they do so.

(B) You are not blameworthy for the consequences of such unintended updates (or would only be if they were the primary effect of the action, and that's prima facie ridiculous).

(C) Conveying true information is always good, so there can't be bad consequences.

These are not straw man arguments - we see all three in the wild.

A) We definitely see frequent cases in which information is revealed, and someone who discriminates on the basis of that information is blameworthy. In many cases, legally blameworthy. In other cases, socially blameworthy sufficiently to be ousted from polite society. In neither case is "I properly applied Bayes' rule" remotely an available defense. It would be weird to apply this standard here, but far from impossible.

B) We also definitely see frequent cases where people claim they have a right to do something and therefore aren't responsible for the consequences, or a responsibility, or some other overriding motivation. And we certainly can't have a system that holds everyone everywhere responsible for each negative consequence of all actions they take, including consequences of the form "by observing my action someone figured out a true fact about the world, and this had consequences someone didn't like." Such a system essentially bans most actions of all kinds, including net useful ones.

C) I have had to defend myself against this claim in extensive discussions on previous posts, and seen it explicitly made by several people I otherwise highly respect. This claim has even been treated like it should be the default, with the burden on me to prove otherwise.

What's really weird is the thought that adding hot sauce could retroactively make the GF's cooking worse. Or that her actions reflect her thinking that he is doing this, or attempting to do this. That's clearly a confusion.

Does it insult her cooking, or insinuate that she doesn't know how to cook? YES! Of course. A little. Hot sauce has higher expected value when cooking is worse, therefore the action of adding hot sauce reveals a lower mean estimate of his evaluation of her cooking, thus treating her cooking with (some amount of) disrespect, which is an insult to her cooking. It should lower her assessment of his assessment of this dish, and with it, to some extent, her cooking in general. Again. A little.

But for her not to do this is to not be paying attention, or to not propagate the updates from her new information, or to have overriding other information that makes the resulting update of size zero or almost zero.
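To put rough numbers on 'a little', here is an equally toy calculation. Again, every figure is invented; the only point is that a modest difference in how likely hot sauce is under 'great dish' versus 'merely okay dish' produces a real but small downward update.

```python
# Toy numbers, invented for illustration: how much "adds hot sauce after
# tasting" should move her estimate that he thinks the dish is great.
p_great = 0.80                      # assumed prior: he thinks the dish is great
p_sauce_given_great = 0.30          # assumed chance he adds hot sauce anyway
p_sauce_given_not_great = 0.50      # assumed chance he adds it if it's just okay

# Bayes' rule: P(great | sauce) = P(sauce | great) * P(great) / P(sauce)
p_sauce = p_great * p_sauce_given_great + (1 - p_great) * p_sauce_given_not_great
p_great_given_sauce = p_great * p_sauce_given_great / p_sauce

print(f"P(dish is great): {p_great:.2f} -> {p_great_given_sauce:.2f}")  # 0.80 -> 0.71
```

A small update, but not a zero one, which is exactly why acting as if no update happened requires either inattention or overriding other information.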
And because he knows this, or at least in her mind because he should know this, he is choosing an action that has this consequence, which means he is choosing to insult her cooking. Or he's oblivious to the fact that he's insulting her cooking, which might even be worse. So either way, she's mad.

This is unfortunate, because he'd rather she not be mad, and he'd also like to be able to try putting hot sauce on things to see how they taste. What to do? I see essentially four choices.

Choice 1: Provide overriding general information that he likes to try putting hot sauce on lots of things even when they taste great, thus solving the general problem. Could work, might not.

Choice 2: Provide overriding specific information in this case that he really likes her dish, in the form of telling her this, and then say 'but I think it might be even better with hot sauce.' Again, could work, might not.

Choice 3: Add the hot sauce anyway because it's her problem. At least you get hot sauce. You will also have favorable selection in relationships.

Choice 4: Don't add the hot sauce because it's not worth the trouble. Always works, but no hot sauce. And once you start down this path, smaller and smaller actions will potentially provide evidence that must be hidden, because you've revealed how much you consider such consequences when taking actions. You may find yourself making more and more similar compromises. Many of which will be for nothing.

Choices 3 and 4 are available, but don't solve the problem, so we focus on the first two. They work as a team. You use specific information in the moment, and establish general information over time. Over time, the more you show that you are operating mostly or entirely on the object level, the less your actions imply about other levels. Thus, you can plausibly take more object-level-beneficial actions with fewer higher-level consequences. This can take the form of an honor code of sorts, a set of values, a pattern of behavior, and/or of ignorance being bliss.

There are (at least) two objections to the first two choices.

Objection one is that it places an undue burden on taking object level action. It's not reasonable to tax such actions, as the power to tax is the power to destroy.

Objection two is that these solutions don't work if the justifications aren't true. If her cooking actually was bad, you'd have to lie or stomach the food without hot sauce, when you most need that hot sauce. This not only means such choices sometimes are not available, it means that your silence can become insulting in other situations. Glomarization is hard.

In my experience, your goal is to stay on level one as often and as hard as possible, so you can take actions that have good physical benefits in the world. But many people care about other things, too, and you often care about them or need them not to be mad at you. Often both. So you need to be careful and strategic about how to defuse such problems, and bring the incentives correctly back to where one can act, to the maximum extent possible, at level one.

I'd also like to take a look at the top answers on the reddit post itself, to see how they shed light on how others see this situation. In that reddit, "NTA" means "You are not the asshole." Top answer:
Is your girlfriend not from a western country? This is a cultural thing some places.
Nta, you even tried it without. Some people just like spice.
2nd answer:
NTA you tried it as is first and decided you'd like a little heat. If she can't take it, she can get out of the kitchen.
3rd answer:
NAH. People have different tastes, and you're allowed to use salt, pepper, or hot sauce as desired. Though, to some people it may be seen as insulting. It would make you the AH if you'd said it tasted better with the addition.
I've cooked and had people get up and grab things from the spice/seasoning rack to add to their own (already seasoned) serving. That's insulting. Adding salt/pepper/hot sauce to taste isn't an insult unless you make it one.
Essentially everyone on the thread treats this as a fact question to be decided on the details. The reason most people think he is not the asshole is that his preferring hot sauce has insufficiently large bearing on the implied quality of her cooking. Several noted that if he had not tasted it first, that would make him the asshole, because that would have sufficient implications. The people who do think he is the asshole think there is sufficient information here. No one takes the position that the implication is off the table. We've established what you are, now we're talking price.

Which, again, points out that one can be primarily concerned with one thing but not another, and still need to consider both things when making decisions.

The other key piece of the puzzle, to me, is that this is fundamentally about her incentives. She went to the effort of cooking a meal. It is unfair and also unwise to punish that action, even if the food is bad! Thus, it is necessary that one's reaction express appreciation, or otherwise reward such action. Of course, it would be remiss to not reward better outcomes more than worse outcomes. And it would be remiss to not work to achieve better outcomes in the future, this way and also other ways. It is a legitimate objection that sharing such information at this time needs to be off the table, to the extent that action needs to be taken to hide that information, so long as that does not interfere (too much) with the need for honesty.

Thus, the common pattern of: you need or want to take action X, which will have undesirable effect Z, so you need to first or also take action W, which has the effect of (not Z) to cancel out Z. Also the common pattern of: there is true information Z that would hurt someone's feelings, and not provide much value to that person, so you pay some cost to prevent the person from learning Z slash to allow the person to plausibly consider Z uncertain.
Example 2: Returning a CD to the Library
Back when I would get books on CD I would sometimes forget the last one in my drive or car. Since I didn’t use CDs that often, I would find the last CD sometimes months later. To solve this, I would drop the CD in the library book return slot, which, uh, no longer looks like a good solution to me, in part because of the time I did this in front of a friend and she questioned it. Not rudely or anything, just “are you sure that’s safe? Couldn’t the CD snap if something lands wrong?” I got pretty angry about this, but couldn’t actually deny she had a point, so settled for thinking that she had violated a friend code by not pretending my action was harmless. I was not dumb enough to say this out loud, but I radiated the vibe and she dropped it.
My intuitive frame here is again that of incentives. Elizabeth is getting books on CD from the library (good) and then returning them to the library (good). Occasionally she would forget the last CD (bad) but then find it later and return it (good) via the book return slot.

There is some chance that returning the CD via the return slot will damage the CD. But the alternative is going into the library to handle the situation, which is a lot more annoying, potentially gets her a fine, and risks having to be embarrassed about the original forgetting in front of a human. This is potentially a bigger cost than the small chance of the CD being damaged. However, because borrowing a book on CD and then not returning it in full is clearly Elizabeth's fault, a solution that risks doing more damage in order to inconvenience her less seems wrong, and will make her feel bad, even if it's right.

Thus, this will either result in bad feelings but no change in behavior, or it will result in a change in behavior that is probably net worse and definitely worse for Elizabeth (she returns the CDs in person), or, even worse, it creates an ugh field such that she stops listening to books on CD from the library at all.

So no good comes of this, certainly not for Elizabeth, which Elizabeth instinctively senses, and she radiates the appropriate vibe that her friend is providing information that makes her life worse. Friends are here to not do that. Ideally, to do the opposite. An ideal friend, in this model, would notice what is happening, think about whether it would actually be right to get Elizabeth to change this behavior if the action wasn't fully safe for the CD, and only ask the question in that case.

That doesn't mean that this is the right model of friendship! But I do think it is the default one. It also doesn't tell us, even if this was the ideal action in this particular case (which is a very open question), whether one should be following a procedure with the level of prior restraint needed to realize that this is a question one might want to not ask. That's an open question, too.

This example focuses more on the (obvious, but also not-so-obvious) point that if one wishes to focus on successfully manipulating the physical world to get good outcomes and build better maps of the territory, worrying about people's feelings is a real problem, and it can get in the way of that quite a lot.
Example 3: Elizabeth fails to fit in at martial arts
A long time ago I went to a martial arts studio. The general classes (as opposed to specialized classes like grappling) were preceded by an optional 45 minute warm up class. Missing the warm up was fine, even if you took a class before and after. Showing up 10 minutes before the general class and doing your own warm ups on the adjacent mats was fine too. What was not fine was doing the specialized class, doing your own warm ups on adjacent mats for the full 45 minutes while the instructor led regular warm ups, and then rejoining for the general class. That was “very insulting to the instructor”. This was a problem for me because the regular warm ups hurt, in ways that clearly meant they were bad for me (and this is at a place I regularly let people hit me in the head). Theoretically I could have asked the instructor to give me something different, but that is not free and the replacements wouldn’t have been any better, which is not surprising because no one there had the slightest qualification to do personal training or physical therapy. So basically the school wanted me to pretend I was in a world where they were competent to create exercise routines, more competent than I despite having no feedback from my body, and considered not pretending disrespectful to the person leading warm ups. Like the hot sauce example, the warm ups were as good as they were regardless of my participation – and they knew that, because they didn’t demand I participate. But me doing my own warm ups broke the illusion of competence they were trying to maintain.
I've taken (some) martial arts, which was interrupted by the pandemic. I have so many questions.

The first set would be: Why go to a martial arts studio where they can't provide you a warm-up that doesn't hurt you? What makes you think they are competent to teach you martial arts? What makes you think the martial art practice itself isn't dangerous for you in particular, given you expect the replacements to be dangerous as well? Do you not think there's a reasonably high correlation between these skills among dojos?

The second set would be: Wait, the whole forty-five minutes of warm-ups is dangerous, not just one or two things? In my experience, if one particular thing hurts you, and you tell your sensei, they will work with you to find an alternative. And also, if they wouldn't do that, again, what the hell are you doing going to classes here?

The third set would be: Why do you need forty-five minutes of warm-ups? In particular, why do you need forty-five minutes of warm-ups right after taking another class? Why not go take a break, and/or do a few warm-ups elsewhere? It's explicitly all right to skip the warm-ups while taking both classes.

I once tried out a Ju Jitsu place near my apartment. During the free trial class, we were doing some warm-ups on the floor, dragging ourselves across. It hurt, the bad kind of hurt, right at my lower back, which has given me problems on and off for a long time and had been problematic recently. It was clearly not the good kind of pain. The instructor noticed, asked what was going on, and said "that's not supposed to hurt." And that if it did, the class definitely wasn't for me until I felt better.

I do not think martial arts schools want students to pretend they are competent to create exercise routines. I think martial arts schools are primarily, or at least largely, in the exercise routine business. If you think they have no business being in the exercise business, I am confused how you have business there at all. But that's object-level thinking that is interfering with the central point, so let's presume they were indeed distinct skills, somehow.

The second business such schools are in is the sacredness and tradition and obedience and discipline businesses. A place to not have to make decisions, and trust the process. One goes to such a place where there is a way of doing things that has been designed to faithfully replicate itself over many years through repetition. Within that space, you follow The Way and do as the instructor says and show good discipline. It's a lot of the product being sold.

I have always thought of the product as a package deal. If you pick and choose which parts to use and which to discard, it's no longer the same product. So you don't get to choose, and to keep it that way, someone else visibly getting to choose is damaging to keeping that intact. Of course I speak up - very politely - when something seems dangerous, but I use this to get myself to do a bunch of stuff that's pretty painful and that I wouldn't choose to do on my own. It's definitely at the risk that I or others choose to walk away instead, but I don't see a way out of that.

That trust, authority and respect for the sensei is also a lot of the 'pay' that the sensei gets. The job does not pay especially well. If you find a good place, you're paying not much money and getting a lot in return. Honoring the traditions, and paying your respects in various ways, is a lot of how you get people to teach these classes at all. Fair is fair.
Doing something of the same type but orthogonal to what the group is doing, for an extended period, breaks that tradition in all these ways, in a way that doing your own warm-up on the side right before another class does not.

"Insulting to the instructor" is one way of describing doing your own routine the whole time. One could also say it is not honoring the dojo and its traditions. And in terms of updating, yes, it's very insulting. You're claiming the instructor can't figure out how to provide a safe way to do warm-ups. Even if Elizabeth thinks this is a distinct skill from martial arts, I doubt the instructor agrees. And more broadly, such places are supposed to be communities, in a similar sense to a church or other sacred space, so 'not fitting in' is a reasonably big issue.

So I think this one has some very strong logic behind it, and I'm confused how 'doing a different set of warm-ups for 45 minutes' could be the right answer on multiple levels. I don't see this as higher simulacrum levels at work, but rather a system working as designed, one that has been tuned to achieve object-level goals.
Example 4: Imaginary Self-Help Guru
I listened to an interview where the guest was a former self-help guru who had recently shut down his school. Well, I say listened, but I’ve only done the first 25% so far. For that reason this should be viewed less as “this specific real person believes these specific things” and more like “a character Elizabeth made up in her head inspired by things a real person said…”. For that reason, I won’t be using his name or linking to the podcast. Anyways, the actual person talked about how being a leader put a target on his back and his followers were never happy. There are indeed a lot of burdens of leadership that are worthy of empathy, but there was an… entitled… vibe to the complaint. Like his work as a leader gave him a right to a life free of criticism. If I was going to steelman him, I’d say that there are lots of demands people place on leaders that they shouldn’t, such as “Stop reminding me of my abusive father” or “I’m sad that trade offs exist, fix it”. But I got a vibe that the imaginary guru was going farther than that; he felt like he was entitled to have his advice work, and people telling him it didn’t was taking that away from him, which made it an attack.
I've been a leader, most notably as the CEO of a start-up, and I've also led a number of teams of other sorts. In some areas, I lead my family. I haven't heard even the 25% Elizabeth heard, so my version is even more imaginary than hers, but let me try for a better steelman.

Unreasonable demands placed on leaders go well beyond crazy requests like the ones Elizabeth lists above. Leading for the good of the group is consistently an underappreciated, undercompensated role. It's true when you lead a company for two years, when you lead a group of friends preparing for a competition, or when you lead a trip to the supermarket. You take on all the responsibility, in the Seinfeld sense that whenever anything goes wrong they ask who is responsible. Every decision and every piece of stress is yours. You suddenly need to be an expert on actual everything - delegation is great but it only goes so far. You have to talk every person into doing what the team needs them to do, often repeatedly. The advantages of leading are:
If you want it done right, you gotta do it yourself. Or at least, delegate it to the right person yourself.
You get to have your way on things, to the extent that you decide to do that. Sometimes, you get to do this whether it's what the group wants or not.
You get power, which you can use however you decide. They also often pay you the big bucks, but also often they don't, and it's not really enough.
If you're not the boss then someone else is. Be afraid. Be very afraid.
Someone has to, and no one else will. More often than you'd think.
The thanks you get, although it's usually not much.
That fourth one is under-appreciated. The problem with leading is that most of the rewards mostly appeal to, and are available to, selfish leaders who are in it for the power and for themselves. That's true from president all the way down. Being the leader is a cost that good people pay to prevent bad or incompetent people, or no one at all, from leading.

Coming off as entitled is bad form. The guru in question should knock it off, whatever's giving that impression, whether it's fair or not. Because being a leader means life is especially not fair to you. Having to be careful how you come off when you talk is a core example of that, whether or not you consider it meta. Tough. Deal with it.

Some leaders do the authoritarian version of this. They use their power to demand obedience, and to have everyone act as if everything they say to do works great. We can all think of prominent examples of this. It's terrible on a lot of levels, including that it destroys information flow in both directions.

It's common for there to be way, way too much deference to a leader. But the right amount of deference can't be zero, or a leader is barely a leader at all. They can't effectively lead, and they can't enjoy the benefits of leading. The whole point of being the leader is an agreement to trust and honor the leader, to some extent, in exchange for their efforts.

It is important to preserve information flow up (as well as down) the chain of command. If there's something the leader must know, there has to be a way to tell them. But if the leader doesn't need to know, because the leader is dealing with a lot of things and this one isn't that important to their decision making, then the leader has a similar right to be left alone. It's necessary, because real leaders always have tons of stuff they need to handle. The leader has the right to not be burdened with problems and details that shouldn't be their problem. Solve it yourself.

When people go to a self-help guru, they probably have slash develop slash embrace a kind of learned helplessness where they want every detail of every action spelled out for them, and complain about the smallest gaps in the instructions. The leader has the right to not be burdened with non-useful complaints, for similar reasons.

When someone complains about a leader, usually the main purpose is to lower the leader's status or absolve one's self from blame. It is easy to pretend one is primarily conveying necessary information, when one is actually mostly trying to lower the leader's status. To bring them down. We can all think of examples (politics is an easy place to find them, but so is everywhere else) where people do have something worthwhile to say, but they're mostly framing and emphasizing who should be blamed for it rather than fixing it. Sometimes, that's necessary to maintain incentives. But mostly, leaders' incentives go the other way, and you need to find ways to reward them rather than punish them for being leaders. Again, it's a thankless job.

A lot of that is about asymmetric justice. A leader who makes a thousand decisions and gets one wrong will often mostly hear about the one mistake, over and over again. Even if that mistake was small. Even if it wasn't a mistake, just something that didn't work out. Or has been framed as having not worked out. Meanwhile, people really are never happy. They get adjusted to their circumstances, and whatever is being provided for them.
So if you do a great job leading them, people raise their standards on all levels, then get mad at you for the smallest things or for failing to do even better than before.

So yes. Being a leader usually puts a target on your back, where people complain all the time and are never happy, in ways that are exhausting and make your life worse and your job as leader harder. The job is a package of tasks, and you'll be blamed for the parts you're bad at or that sometimes didn't seem to quite work out. You are, by default, held to impossibly high standards compared to those not leading. It all sucks.

It's vital that the compensation not (only) be the ability to abuse the position. We need to control for that and provide rewards to balance it, or only the people who exploit such positions will be willing to hold them. Part of that is allowing some amount of 'abuse' of the position to get what one wants.

When people complain about "inequality" in such situations, that's the most infuriating of all. Leaders should work super hard for us and be hyper-competent. In exchange, we have the idea that the leader should not get much if any more money, and also shouldn't get many if any dibs on anything or be 'abusing' the position to get what they want in other ways, definitely shouldn't be using it to get laid, and definitely should be constantly "held accountable." People demand this until they deal with an abusive leader who turns these impulses towards other scapegoats, and otherwise corrupts the system to their own advantage. The people complaining almost never can do the leader's job, and would almost always be in hell (and/or create a hell) if they tried.

That turned into a bit of a rant, so let's bring it back to the concrete question of criticism. What is the guru actually asking for here? My guess is that the guru wants people to try his suggestions in good faith, make their best efforts, and act as if it is going to work until it has a chance to do so. This is a vital part of many self-help (and other) strategies - they won't work if you're constantly doubting yourself or what you're doing. Then, he wants them to take what was good and build upon it, rather than demanding instant life fixing on all fronts. And to continue to work to maintain the vibe that allows people to keep trying things.

That doesn't mean don't provide feedback, but frame and time it carefully so as to keep the real content on the object level, or otherwise act to cancel out non-object-level negative implications.
The General Idea
Throughout this, I've essentially described those who wish to communicate on lower levels as having a burden to consider higher-level implications, and to make reasonable and appropriate efforts to avoid doing harm on those levels. In particular, this burden is to avoid creating bad incentives around behaviors, and to avoid causing incorrect Bayesian updates.

When one's actions, whether that action is the sharing of true relevant information or anything else, would effectively punish actions that we want to encourage, or be misleading, that's a good time to be careful. You may wish to word things carefully, or include extra statements to avoid the incentives or updates you wish to avoid. This might be rather expensive. Sometimes, this ends up meaning you remain (or should remain) silent, despite having useful true information to share. (It's also definitely not the only good reason one might choose not to reveal useful true information. And other times, these reasons mean you share true information you wouldn't otherwise bother sharing.)

How concerned should we be here about tone policing, or arguments from consequences? Certainly not zero. We should not allow appeals to consequences as a logical argument. But it would be rather dense to think that tone lacks information or lacks consequences, or that sharing information lacks potential negative consequences - such things still need to be among the things that influence our actions. It's not an easy problem.

Similarly, we need to recognize that tone contains information, causes updates and constitutes action, reward and punishment. Anyone who has been in a relationship with other people (of any kind) knows this. It is, again, rather dense to act as if it is wrong to care about it, or to choose it wisely.

Interestingly, those who claim tone policing are in fact opposing the sharing of true relevant information, on the grounds that doing so has negative consequences. And often they are right! But you can't have it both ways.

Now we move to the last example, which involves requests for an undue burden of prior restraint.
Example 5: Do I owe MAPLE space for their response?
A friend of mine (who has some skin in the meditation game) said things I interpreted as feeling very strongly that:
My post on MAPLE was important and great and should be widely shared.
I owed MAPLE an opportunity to read my post ahead of time and give me a response to publish alongside it (although I could have declined to publish it if I felt it was sufficiently bad).
Their argument, as I understood it at the time, was that even if I linked to a response MAPLE made later, N days worth of people would have read the post and not the response, and that was unfair.

I think this is sometimes correct - I took an example out of this post even though it required substantial rewrites, because I checked in with the people in question, found they had a different view, and that I didn’t feel sure enough of mine to defend it (full disclosure: I also have more social and financial ties to the group in question than I do to MAPLE).

I had in fact already reached out to my original contact there to let him know the post was coming and would be negative, and he passed my comment on to the head of the monastery. I didn’t offer to let him see it or respond, but he had an opportunity to ask (what he did suggest is a post in and of itself). This wasn’t enough for my friend - what if my contact was misrepresenting me to the head, or vice versa? I had an obligation to reach out directly to the head (which I had no way of doing beyond the info@ e-mail on their website) and explicitly offer him a pre-read and to read his response.

[Note: I’m compressing timelines a little. Some of this argument and clarification came in arguments about the principle of the matter after I had already published the post. I did share this with my friend, and changed some things based on their requests. On others I decided to leave it as my impression at the time we argued, on the theory that “if I didn’t understand it after 10 hours of arguing, the chances this correction actually improves my accuracy are slim”. I showed them a near-final draft and they were happy with it]

I thought about this very seriously. I even tentatively agreed (to my friend) that I would do it. But I sat with it for a day, and it just didn’t feel right. What I eventually identified as the problem was this: MAPLE wasn’t going to be appending my criticism to any of their promotional material. I would be shocked if they linked to me at all. And even if they did it wouldn’t be the equivalent, because my friend was insisting that I proactively seek out their response, where they had never sought out mine, or to the best of my knowledge any of their critics. As far as I know they’ve never included anything negative in their public facing material, despite at least one person making criticism extremely available to them.

If my friend were being consistent (which is not a synonym for “good”) they would insist that MAPLE seek out people’s feedback and post a representative sample somewhere, at a minimum. The good news is: my friend says they’re going to do that next time they’re in touch. What they describe wanting MAPLE to create sounds acceptable to me. Hurray! Balance is restored to The Force!

Except… assuming it does happen, why was my post necessary to kickstart this conversation? My friend could have noticed the absence of critical content on MAPLE’s website at any time. The fact that negative reports trigger a reflex to look for a response and positive self-reports do not is itself a product of treating negative reports as overt antagonism and positive reports as neutral information.

[If MAPLE does link to my experience in a findable way on their website, I will append whatever they want to my post (clearly marked as coming from them). If they share a link on Twitter or something else transient, I will do the same]
Her post is short, good and worth reading, so I recommend doing that now if you haven't yet.

Even more than the dojo, I see the monastic experience as about taking decisions out of one's own hands and putting one's trust in authority. While you're there, you trust the process. You need to, both in order to clear your mind of such worries and focus inward slash elsewhere, and because the whole thing is going to be unpleasant (at least for a few days, and definitely physically) but you're choosing to do it anyway. The package deal is that you either use the fact that it's all-or-nothing to get the experience, or you bail entirely. Similarly, when you object to something, it maintains the correct atmosphere and incentives to have an undertone of your request making you a wimp and somewhat of a failure. So much so that this undertone should be present for the person making the request, without it actually having to be there in the minds or responses of those running things.

In my case, I bailed entirely very quickly, and stand by that decision, because I wasn't ready to be there. I didn't do what Elizabeth did and call ahead to ask about things, and when things went wrong they treated me well, so I have no complaints about Zendo, which ran the retreat, or the Garrison Institute, where things took place. But believe me, if I did think things were their fault and people like me should avoid such places, I damn sure would be saying so. As it is, it's more like 'such retreats are not places to start your meditation practice, so don't consider going unless you're plausibly ready.'

The one specific negative note I'd make about my retreat is that both my retreat and Elizabeth's were unexpectedly and unreasonably cold. I'm guessing that's a pattern that isn't talked about much, and worth keeping in mind when thinking about attending, if only to pack proper clothing.

I think Elizabeth wasn't making a large mistake putting her trust in MAPLE. That's how this works, and the upside of it working well was potentially high. The downside of it working poorly is mostly opportunity cost. I think it's good and right to decide to trust, even if there's a good chance that trust isn't deserved. Then, if it proves undeserved, one can bail, complain and/or write an angry blog post.

In this case, I think the fact that the blog post could be damaging to MAPLE isn't a reason to flinch or to hold back. It is a reason to publish. MAPLE is providing a flawed service that at least many people should avoid, so helping them avoid it is good. This is information people need to know.

Does posting this, or a harsher version of it, lead to misleading updates? I do not see any reason to expect that it would. Does posting this provide bad incentives by punishing good behaviors? No. This punishes bad behaviors, and directly. If anything, holding back because someone expressed worry about an angry blog post rewards bad behaviors, and should be all the more reason for an angry blog post.

The consequences Elizabeth is being made to worry about, here, are not incidental higher-level effects. They are direct consequences of people learning true and useful things about MAPLE. These people did something bad, and perhaps when people hear about it they should feel bad. Again, feelings are incentives, and when they are deserved and maintain good incentives, they're not a reason to hold back. And this could even be net good for MAPLE in several ways.
It could lead to change and improvement, or it could be publicity that helps find the right people while keeping out the wrong people. My chances of going to MAPLE didn't go down much as a result of this, and may have gone up, since I hadn't heard of them, and thus the chances of me going were already epsilon.

What does Elizabeth owe MAPLE in a situation like this? I think the correct answer is: Nothing, beyond reporting accurately what happened and including the good along with the bad. Anything short of writing an intentionally slanted hit piece is fine. Organizations do not get prior restraint and right of response on bad reviews. That standard is rather ridiculous, and doesn't work deontologically at all. Warning them is at most supererogatory. So is including their response alongside the post, if they give a reasonable one. Neither is likely to (on net) spare their feelings.
Conclusion: Shifting Emphasis
Throughout, we've taken the perspective of someone trying hard to be at simulacrum level one. We've assumed that all our statements are truthful. When we say there's a lion across the river, we have strong evidence there's a lion across the river. The trick is that we don't say there's a lion across the river if and only if we know there is a lion across the river. We only say it some of the time that there's a lion across the river. There are many true things we know, and at any given time we should be not saying most of them.

We've also assumed that our positive motivations are object level, and our mechanisms of positive results are via the object level. We desire to take actions to improve object-level results. We desire to share information to inform others and help them make better decisions.

Our concerns about other levels have been negative motivations only. We are worried about preventing negative unintended effects, on various levels. As we should. Negative effects are negative. It's good, on the margin, to make an effort to avoid them. We will, in some cases, take additional action motivated by these higher-level concerns, in order to ensure the absence of negative higher-level effects.

In my experience, if you seek to maintain an object-level focus, you need to do that. A failure to address higher-level problems will result in focus shifting to higher-level issues, in ways that are more distracting, and that inflict more damage. One cannot hide away from these concerns.

Was this less of a problem in the past? Yes, I think so, but it was still always a big problem. Coalition politics, and the motivation to manipulate situations for advantage, and the social animal that must constantly monitor status and power and the implications of all statements, and the unconscious motivations involved in all that, are not going to go away. The elephant is firmly in the brain. The difference is that in many places, there was more of a focus, and a default emphasis, on the object level, and this led to mostly better things happening.

Again, I'm super confused about all this stuff, but more concrete saying of words seems useful. My current model thinks there are two transition points between a level 1 emphasis and a level 2 emphasis.

The first is where you are looking to avoid crossing the river, so you gather evidence that the river should not be crossed, and share it, while you hold back or don't seek out evidence that the river should be crossed, but you don't lie about it. You're thinking about communication in terms of causing actions you want, rather than about conveying useful true information, but you still keep a strict watch on that word 'true.' It's 'honest' salesmanship, but it's still sales. This is sort of a level 1.5. From a statistical point of view, you are rather untrustworthy.

The full transition to level 2 is simple. It's when you're willing to lie. You might stop at 1.5 before getting there, but you don't have to.

The transition to level 3 is more like the 1->1.5 transition than the 1.5->2 transition, because it's about motivation. You choose what to say based on coalition politics and a desire to elicit actions based on perceptions of support for actions, rather than a focus on truth or an expectation that anyone will update their object-level beliefs. By default, this comes after level 2 and is a 2->3 transition. The object level content is no longer reliably truthful, but that takes a backseat to its (for now truthful) expressions of support.
There's also an interesting dynamic, which is the 1->3 direct transition, while preserving the object level truth values. I think this is real and important, and it seems to be missing from most maps of this space. I have definitely experienced dynamics, both in games and in real life, where people are primarily attempting to form coalitions and cause actions that are favorable to them, and choosing what to say mostly on that basis, but with the common knowledge expectation that lying is out of bounds and the object level still exists and positive sum actions are possible.

There's the version of this where everyone pays lip service to lying being out of bounds and lies their ass off anyway, occasionally punishing those who are blameworthy because they no longer have plausible deniability that they were lying. This isn't that. This is the good kind of politics, where we have conflicting interests and also shared interests, and we fight but also work together and stay friends after, because otherwise we get eaten by a lion.

The 3->4 transition is probably like the 1->2 transition, except that I understand it worse. At level 3, your statements about coalitions are accurate, whereas at level 4 they're not. But thinking about level 3 and 4 people as if they are strategic thinkers who are optimizing for outcomes mostly seems wrong to me. I think such people are mostly adaptation executors. What they do is really, profoundly weird. I kind of sort of get how this is, but I also really, really don't. The people who are acting fully strategically at level 4 are the exceptions.

When and how do people, groups or societies cross over into a higher-simulacrum-level orientation? What causes us to think that someone is 'operating on' level 2, 3 or 4, if you need to think about those levels throughout ordinary human interaction? How can we keep levels 2-4 'in check' if we need to constantly worry about them? Does the implied action-inaction distinction hold up at all on reflection, or is it more like nonsense?

Good questions. I'm not sure.