<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Proof Of Logic on Medium]]></title>
        <description><![CDATA[Stories by Proof Of Logic on Medium]]></description>
        <link>https://medium.com/@ProofOfLogic?source=rss-6a2d7d0b9f79------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*PTXVYHvbUrOw1zG3.png</url>
            <title>Stories by Proof Of Logic on Medium</title>
            <link>https://medium.com/@ProofOfLogic?source=rss-6a2d7d0b9f79------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 09 Apr 2026 03:43:36 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@ProofOfLogic/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Blame vs Constructive Criticism]]></title>
            <link>https://weird.solar/blame-vs-constructive-criticism-778b0aa4764c?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/778b0aa4764c</guid>
            <category><![CDATA[social-psychology]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Wed, 26 Apr 2017 21:22:00 GMT</pubDate>
            <atom:updated>2017-04-26T21:28:56.979Z</atom:updated>
            <content:encoded><![CDATA[<p><a href="http://lesswrong.com/lw/oxc/chaos_and_consequentialism/drkd">lmn comments</a> on Chaos and Consequentialism:</p><blockquote>Of course, if looked at the kind of responsibility that is compatible with blame, you’d notice it’s a lot more in line with the common sense notion of the term.</blockquote><p>Well, yes, and I think that’s mostly unfortunate. The model of interaction in which people seek to blame each other seems worse — that is, less effective for meeting the needs and achieving the goals of those involved — than the one where constructive criticism is employed.</p><p>The blame model seems to be something like this. There are strong social norms which reliably distinguish good actions from bad actions, in a way which almost everyone involved can agree on. These norms are assumed to be understood. When someone violates these norms, the appropriate response is some form of social punishment, ranging from mild reprimand to deciding that they’re a bad person and ostracizing them.</p><p>The constructive criticism model, on the other hand, assumes that there are some common group goals and norms, but different individuals may have different individual goals and preferences, and these might not be fully known, and the group norms might not be fully understood by everyone. When someone does something you don’t like, it could be because they don’t know about your preferences, they don’t know about a group norm, they don’t understand the situation as well as you and so fail to see a consequence of an action which you see, etc. Since we assume that people do have somewhat common goals, we don’t have to enforce norms with punishment — by default, we assume people already care about each other enough that they would have respected each other’s wishes in an ideal situation. Perhaps they made a mistake because they lacked a skill (which is where the constructive feedback comes in), or didn’t understand the situation, your preferences, or the existing norms. Or, perhaps, they have an overriding reason for doing what they did. Social punishment (even the mild social punishment associated with most cases of blame) often doesn’t fix anything and may make things worse by escalating the conflict or creating hard feelings.</p><p>If you discuss the problem and find that they <em>didn’t</em> misunderstand or lack a necessary skill or have an overriding reason that you can agree with, and aren’t interested in doing differently in the future, then perhaps you don’t have enough commonality in your goals to interact. This is still different from the blame model, where sufficiently bad violations mark someone as a “bad person” to be avoided. You may still wish them the best; you simply don’t expect fruitful interactions with them.</p><p>That being said, there <em>are</em> cases where you might really judge someone to be a “bad person” in the more common sense, or where you really do want to impose social costs on some actions. Sociopaths exist, and (if they’re not pro-social) may need to be truly avoided and outed as “bad people” (pro-social sociopaths do exist; being a sociopath doesn’t automatically make you a bad person). However, it seems to me as if most people have overactive bad-person detectors in this regard, which harm other interactions. I don’t think this is because easily-tripped bad-person detectors are on the optimal setting given the high cost of failing to detect sociopaths.
I think it’s because the concept of blame conflates the very different concepts involved in cheater-detection/sociopath-detection and more common situations.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=778b0aa4764c" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/blame-vs-constructive-criticism-778b0aa4764c">Blame vs Constructive Criticism</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Chaos and Consequentialism]]></title>
            <link>https://weird.solar/chaos-and-consequentialism-fd154eead4af?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/fd154eead4af</guid>
            <category><![CDATA[cognitive-bias]]></category>
            <category><![CDATA[statistics]]></category>
            <category><![CDATA[reasoning]]></category>
            <category><![CDATA[probability]]></category>
            <category><![CDATA[mathematics]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Wed, 19 Apr 2017 21:10:21 GMT</pubDate>
            <atom:updated>2017-04-24T23:26:03.065Z</atom:updated>
            <content:encoded><![CDATA[<p>There is an interaction between a culture’s common-sense understanding of a subject and science. For example, although <a href="https://en.wikipedia.org/wiki/Folk_psychology">folk psychology</a> is partly a biologically-given human ability to reason about the mental state of others, the way people reason about the mental states of others has been greatly influenced in recent times by Freudian ideas and behaviorism. Unfortunately, the popular version of scientific ideas is often quite skewed or out-of-date.</p><p>I think there are two modern ideas where the public’s intuitive understanding is especially out-of-date, and I think there is a lot of benefit to be gained by improving that understanding. These ideas are not new at all, but an understanding has still not propagated to the public very well. The two ideas are randomness and responsibility. Chaos and consequentialism. Probability and expected utility.</p><p><strong>Randomness.</strong> Most people have an intuitive understanding of randomness which looks something like the D&amp;D “chaotic” alignment. Something looks random to the degree it is unexpected and surprising. The <a href="https://en.wikipedia.org/wiki/Gambler%27s_fallacy">gambler’s fallacy</a> can follow from thinking as if events are actively trying to look well-mixed. States of maximal chaos are imagined to hold unexpected ordered objects (think of Douglas Adams’ infinite improbability drive), when in fact maximum-entropy states tend to be rather boring.</p><p><strong>Responsibility.</strong> People have some strange intuitions about how responsibility and blame should work. I find that people act as if blame is conserved: if you can attribute fault to one thing, then there’s a feeling of release which makes you much less likely to look for other sources of fault. If blame <em>does </em>get spread out among many things or people, it seems as if it “stretches thin”, so that less rests on the shoulders of each point of blame. This does not make very much sense. If a fault has many causes, each needs to be addressed all the same. This view implies, in particular, that you can’t get out of your share of responsibility just by pointing out someone else’s. In the aspiring rationalist community, this is called <a href="http://lesswrong.com/lw/l6d/a_discussion_of_heroic_responsibility/">heroic responsibility</a>. This is about sane reasoning about the consequences of your actions. There could be a moral duty aspect, if you want to speak of such things, but it’s also just a brute fact of reality: if you act in ways which tend to improve the chances of getting what you want, you’ll tend to get what you want more often; the same cannot be said in favor of putting blame elsewhere. I’ve also heard this idea referred to as “internal locus of control”.</p><p>You can’t really impose this kind of responsibility on someone else. It’s compatible with constructive criticism, but not with blame. The kind of responsibility I’m talking about is a favor to yourself, not to other people. (I mean, it may also be a favor to other people, if you care about those people and decide to help them. But then it’s because <em>you decided</em> you care.)</p><p>Now, there isn’t a perfect consensus on these issues. For probability, there’s the debate between Bayesians and frequentists. I happen to think the Bayesian perspective is superior, and that it points to a specific understanding of randomness as a subjective phenomenon (so randomness and uncertainty are really the same thing).
I will say things slanted from that perspective, but I think there’s something to be gained just from the uncontroversial laws of probability theory, applied to the kind of events everyone would agree we can apply them to.</p><p>Similarly, there are many versions of, and alternatives to, consequentialism. There’s the debate between causal decision theory and evidential decision theory, and there’s the question of deontology and virtue ethics. Again, although my remarks will be a little biased toward consequentialist thinking, I think what I’m pointing at is mostly common ground — though it isn’t codified by an uncontroversial set of mathematical laws the way probability theory is. The perspective I’m putting forward here can be understood through the lens of expected utility theory, but I suspect it makes about as much sense in alternative frameworks as well.</p><p>Now, I can’t just say “do probability correctly” or “decide what you want and go about trying to get it in a sane manner” and call it good. Both of these are complicated skills which take a significant amount of development. However, I think something useful I can do is try to make a list of the important things you can try to get right.</p><h4>Consequentialism</h4><ol><li>Notice when you’re trying to solve a problem by putting some duty/obligation on someone else. Is that solution going to work? It might, if their goals are sufficiently in line with your goals and they take the suggestion well. But often, I think some part of our brains fools us into thinking that blaming other people for problems is an actual solution to those problems.</li><li>Always consider what you could be doing differently to make for better outcomes. It is sometimes the case that a car crash is “really the other person’s fault”: there is nothing you realistically want to change about <em>your</em> driving habits to make this sort of accident less likely. However, it is <em>never</em> the case that you want to determine this by determining whether there was some big glaring mistake the other person made which they should avoid in the future. Don’t obsess over what you could have done differently if you find that there was nothing, but don’t reason as if the degree to which you could have done something differently is directly opposite the degree to which <em>they</em> could have.</li><li>Don’t assuage your regrets by setting them aside or using unrealistic thinking to reassure yourself. There is a good and healthy kind of obsessing over regrets, where you figure out what you realistically could do differently in similar situations in the future to make things go better. If you can do this while avoiding the unhealthy kind of obsessing over regrets, you turn them into a source of strength. <a href="http://mindingourway.com/staring-into-regrets/">Advice on how to do that</a>.</li><li>Think in terms of what could have happened, not just what did happen. There’s a fallacy in gaming called <a href="https://www.channelfireball.com/articles/owens-a-win-results-oriented-thinking/">results-oriented thinking</a>, in which you put too much weight on your experience (positive or negative) when you know things could have gone differently. You might end up abandoning a good strategy because of a chance bad event, or putting too much faith in a bad strategy which you could easily see you just got lucky with. 
Getting past this requires an attitude where you regret succeeding for the wrong reasons and pat yourself on the back for doing the right thing even when it ends up backfiring by chance. This is dangerous, because you can blind yourself to the feedback which you’re getting; it has to be combined with honest reassessment of your models.</li><li>Have a model of what you want, have a model of the situation, and try to take actions which lead to what you want. (This doesn’t imply selfishness, as you may want to help other people. It also doesn’t imply rejection of authority or advice, as you may take those as strong evidence. However, it does imply that those considerations ultimately are subservient to what you think is right.) Having a model (even a mediocre model!) of what you want de-biases in several respects. The sunk-cost fallacy becomes difficult to commit. The halo effect is reduced, as you are forced to evaluate the overall effect including all pros and cons (or rather, all pros and cons which fit in the scope of your model). There are several other benefits which I’ll have to try and describe in future posts. However, in order to have a very good model, you’ll also have to master the art of uncertainty.</li></ol><h4>Uncertainty</h4><ol><li>Randomness is not a property of an individual event. An event can be judged as low-probability, but a random (high-entropy) process is one in which lots of events have equal (and therefore low) probability. This is why the gambler’s fallacy isn’t true, and why we see lots of clustering in random sequences: a long run of one side of a coin is as probable as an alternating sequence of heads and tails of the same length, even though the second looks better-mixed (see the sketch after this list).</li><li>Estimating probabilities by counting arguments. Combinatorics (aka “the art of counting”) gives a critical tool for thinking about the odds of different events. Even if you never use the explicit calculations again, learning how possibilities combine will help you think about probabilities clearly.</li><li>Thinking in information theory. Again, even if you never use the math, understanding the concepts can give a better perspective on communication and reasoning.</li><li>Accounting for <a href="https://en.wikipedia.org/wiki/Base_rate_fallacy">base rates</a> when forming estimates (also sketched below).</li><li>Adjusting for <a href="https://en.wikipedia.org/wiki/Selection_bias">selection bias</a>/<a href="https://en.wikipedia.org/wiki/Availability_heuristic">availability bias</a>.</li><li>Requiring good hypotheses to stick their necks out with predictions. Bayesians may codify this in terms of Bayes’ Law while frequentists do it with null-hypothesis testing and other statistical measures, but both agree that this is important. A hypothesis which can never be wrong is about the same as one which can never be right. (A frequentist would think of this principle as “how to distinguish patterns from randomness” while a Bayesian would think of both “pattern” and “randomness” as simply different distributions of uncertainty; this leads the frequentist to privilege the “null” hypothesis as a special default, where the Bayesian treats the “null” as just another hypothesis, handled like any other.)</li></ol>
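<p>To make points 1 and 4 concrete, here is a minimal sketch (Python, standard library only; the flip count and the test numbers are invented for illustration):</p><pre>import random
from fractions import Fraction

# Point 1: any particular length-10 sequence of fair-coin flips is equally
# probable, however well-mixed or clumpy it looks.
print(Fraction(1, 2) ** 10)  # 1/1024, for HHHHHHHHHH and HTHTHTHTHT alike

# Clustering is normal: the longest same-side run in 1000 fair flips.
flips = [random.choice("HT") for _ in range(1000)]
run = best = 1
for a, b in zip(flips, flips[1:]):
    run = run + 1 if a == b else 1
    best = max(best, run)
print(best)  # typically around 9 or 10

# Point 4: base rates. A test with 99% sensitivity and a 1% false-positive
# rate, for a condition with a 1-in-1000 base rate:
sens, fpr, base = 0.99, 0.01, 0.001
p_positive = sens * base + fpr * (1 - base)
print(sens * base / p_positive)  # ~0.09: most positives are still false</pre>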
<p>Awareness of the general shape of each of these is (I think) quite helpful. Of course, turning explicit awareness into a deeper intuition which shapes your reflexes regarding randomness and responsibility is more difficult. It requires noticing what intuitions are currently shaping your thinking, and stepping in to re-shape those intuitions by thinking in new ways until the new ways become habit.</p><p>I don’t think any of this is too surprising to readers here, but I think it is worth something to arrange it in this way. The two categories correspond to epistemic rationality and instrumental rationality. By no means have I listed all the important points (or even the most important points) which go under those two headings, but I encourage <em>you</em> to try.</p><p><em>(Thanks to Philip Parker for some conversation about this post and ideas for points.)</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fd154eead4af" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/chaos-and-consequentialism-fd154eead4af">Chaos and Consequentialism</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Thoughts on Automoderation]]></title>
            <link>https://weird.solar/thoughts-on-automoderation-c6e5bbc4ca42?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/c6e5bbc4ca42</guid>
            <category><![CDATA[rationality]]></category>
            <category><![CDATA[rationalism]]></category>
            <category><![CDATA[cognitive-bias]]></category>
            <category><![CDATA[discussion]]></category>
            <category><![CDATA[group-dynamics]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Wed, 12 Apr 2017 21:28:25 GMT</pubDate>
            <atom:updated>2017-04-14T04:04:06.129Z</atom:updated>
            <content:encoded><![CDATA[<p>I was intrigued by the recent discussion about a group discussion method called automoderation, introduced at <a href="http://ferocioustruth.com/2017/automoderation/">Ferocious Truth</a> and commented on by <a href="http://agentyduck.blogspot.com/2017/03/automoderation.html">Agenty Duck</a> and <a href="https://thezvi.wordpress.com/2017/03/19/on-automoderation/">Don’t Worry About the Vase</a>. (This post will not make much sense if you haven’t at least read the Ferocious Truth post.) It reminded me of my concerns in the <a href="https://weird.solar/communication-protocol-8b8632211df0">Communication Protocol</a> post. This also made me think of the <a href="http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/dijg">Russian LessWrong Slack Emoji</a>:</p><blockquote>:+1: means “I want to see more messages like this”</blockquote><blockquote>:-1: means “I want to see less messages like this”</blockquote><blockquote>:plus: means “I agree with a position expressed here”</blockquote><blockquote>:minus: means “I disagree”</blockquote><blockquote>:same: means “it’s the same for me” and is used for impressions, subjective experiences and preferences, but without approval connotations</blockquote><blockquote>:delta: means “I have changed my mind/updated”</blockquote><p>As I indicated in <em>Communication Protocol</em>, I think it’s very important to distinguish different types of agreeing. Sometimes, agreement means “I have changed my mind in response to your words”. Sometimes, it means “I already believed that, for the same reason.” Sometimes, “I already believed that, but for a different reason.” Sometimes, you want to signal emotional concordance with the speaker. These different types of agreement have different implications, both epistemically and emotionally:</p><ul><li><strong>“I have changed my mind, and now agree”</strong><em> The speaker can stop trying to convey their point, at least to that person. The speaker also will tend to feel affirmation that they were useful. However, this type of agreement should not provide much further evidence for the statement being discussed.</em></li><li><strong>“I already believed that, for the same reason”</strong> <em>Again, the speaker can usually stop trying to make their point. The speaker feels a different kind of affirmation, that they are among people who think like they do. However, this should still not provide much more evidence for the statement — all we found out is that the other person has the same evidence we do.</em></li><li><strong>“I already believed that, for a different reason”</strong><em> This type of agreement provides significant further evidence for the statement. Someone signaling this kind of evidence might have something worthwhile to share with the group.</em></li><li><strong>“I feel you” </strong><em>Sometimes you just want to signal empathy without significant epistemic connotations, like the Russian :same: signal.</em></li></ul><p>(You can also have negative versions of two of these — “I have changed my mind, and now disagree” and “I have independent evidence against”. Maybe you could also have a signal for lack of empathy with a point, but I’m not sure you’d want to.)</p><p>Automoderation includes the use of “thumbs up” for agreement (along with thumbs-down for disagreement), the “OK” hand signal for “I am interested in this”, and a wiggly-finger gesture for “I feel you”. I think it could be interesting to distinguish “agreement” further. 
What if thumbs-up meant “agree for a different reason”, while a palm-up gesture meant “I have updated, and now agree”? (I think it may be less important to have a symbol for “I already believed that for the same reason” — though I could easily be wrong!) Palm-down could stand for “I have updated against what you’re arguing for”, which could be interesting as well.</p><p>(I haven’t even tried basic automoderation yet, so my suggestions should be taken with a heavy dose of salt.)</p><p>ETA: As <a href="http://lesswrong.com/lw/ovx/thoughts_on_automoderation/dr0k?context=3">observed by Zvi</a>, my remarks about what kind of evidence the different kinds of agreement indicate do not always hold. It’s quite possible that “I already believed that, for the same reason” provides significant confirmation; maybe you were not sure if your argument made any sense, but an expert in the field says “Yes, that is the consensus among experts”.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c6e5bbc4ca42" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/thoughts-on-automoderation-c6e5bbc4ca42">Thoughts on Automoderation</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stress Response, Growth Mindset, and Nonviolent Communication]]></title>
            <link>https://weird.solar/stress-response-growth-mindset-and-nonviolent-communication-84839deecf6f?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/84839deecf6f</guid>
            <category><![CDATA[personal-development]]></category>
            <category><![CDATA[self-improvement]]></category>
            <category><![CDATA[nonviolent-communication]]></category>
            <category><![CDATA[cognitive-bias]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Thu, 15 Dec 2016 00:35:47 GMT</pubDate>
            <atom:updated>2016-12-22T20:31:56.008Z</atom:updated>
            <content:encoded><![CDATA[<p>Some of what I’ve been reading lately:</p><ul><li><em>59 Seconds</em> by Richard Wiseman.</li><li><em>The Willpower Instinct</em> by Kelly McGonigal.</li><li><em>The Upside of Stress</em> by Kelly McGonigal.</li><li><em>Self Theories</em> by Carol Dweck.</li><li><em>Nonviolent Communication </em>by Marshall Rosenberg.</li><li><em>The Moral Economy</em> by Samuel Bowles.</li></ul><p>Many of these books are more “pulpy” than I’d prefer — I can’t strongly recommend them, because they’re intended for a popular audience and so have a lot of “filler” in the form of stories and such to help get across the “meat” of the material. In particular, the first three. These are in the genre of “scientific self help”; they are trying to convey psychological studies to a wider audience. Read them in the right way, though, and they can serve as a good annotated bibliography.</p><p>I covered some of what I learned from those books in <a href="https://weird.solar/scott-alexander-doesnt-like-growth-mindset-yet-b6e26ec929fb#.te2zzg3u4">Scott Alexander doesn’t like growth mindset… yet</a> and <a href="https://weird.solar/what-does-long-term-thinking-feel-like-from-the-inside-89dd7732aaac#.wlmvpexnt">What Does Long-Term Thinking Feel Like from the Inside</a>. However, I’d like to point towards a sort of “common core” of concepts I see here. I would like to do a more thorough and well-researched version of this, but I’m not there yet; so this is more like “here are some ideas I’m having which look like they are sorta backed up by research which I need to look into more”.</p><h4>System 1, System 2, and Stress Response</h4><p>I’m guessing most of the readers of this will already know about the system 1 / system 2 distinction popularized by <em>Thinking, Fast and Slow</em> and others. “System 1” refers to fast, heuristic, intuitive thinking; “system 2” refers to slow, deliberative reasoning. This isn’t a perfect model of the brain by any means, but it identifies some important things that are going on. The “<a href="https://wiki.lesswrong.com/wiki/Hollywood_rationality">hollywood rationality</a>” stereotype identifies rationality with system-2 thinking, but in fact system 1 and system 2 are both there for a reason, and are good at different sorts of things. For one thing, system 1 tends to be a lot better in areas where we have a lot of experience, but can perform very poorly compared to system 2 in domains outside of our experience. So, it’s helpful to understand these things and get system 1 and system 2 working together well. What, then, determines which system will be dominant at a given time?</p><p><em>The Willpower Instinct</em> and <em>The Upside of Stress</em> both point to complexities in the way the interaction between system 1 and system 2 is mediated. The amount and type of stress we experience can make the difference between fast, impulsive, instinctual decision-making and slow, deliberate, reflective decision-making. This is why stress-eating is a thing, for example. But it’s not just that stress causes fast thinking to dominate over slow. The stress response is complicated.</p><p>The picture I’m getting from the two McGonigal books is: there are many kinds of stress response, two of which are particularly important to mediating system 1 vs system 2.
The “threat response” is what we typically think of when we think of stress: it is associated with high heart rate, increased impulsivity, increased inflammation (which helps wounds to heal quickly, but is bad for our health in the long term), increased blood pressure, and fight-flight-or-freeze behavior. The “challenge response” is associated with increased heart-rate <em>variability</em>, increased willpower (meaning increased ability to override impulsive responses), and intense focus. In contrast to the threat response, it is actually <em>good</em> for your heart (indicating decreased risk of heart failure).</p><p>When the fast, impulsive decision-making of system 1 is a <em>problem</em>, then, it seems to be a misfire of the threat response where a challenge response would serve better. What tips the balance between these responses?</p><p>The difference between the threat response and the challenge response can be something as simple as telling yourself “I’m excited!” rather than “I’m terrified!” when you feel the jitters before public speaking. This was the setup of one of many studies which McGonigal cites on the theme of “thinking stress is good for you makes it good for you”. She suggests that threat vs challenge response is determined largely by whether we think we’re <em>up to the challenge; </em>and we can tip the scales in our favor by viewing the stress itself as a resource rather than a problem.</p><p>She also suggests that viewing the stress as a problem causes people to avoid stressful things. This likely means we’re <em>not dealing with the problems</em> that are leading to stress. This idea seems related to <a href="http://lesswrong.com/lw/21b/ugh_fields/">ugh fields</a>.</p><p>This whole thing seems <em>somewhat</em> similar to Nate’s <a href="http://mindingourway.com/guilt/">Replacing Guilt</a> series <em>[ETA info-hazard warning: for some people, reading Nate’s writing on motivation destroys their motivation]</em>, but with a somewhat less unified approach, more scientific-study based, some neuroscientific justification, and more of a focus on activating the parts of your mind that you want to run rather than getting everything to cooperate together well as Nate emphasizes.</p><p>For me, the idea of seeing the stress response as potentially helpful was a big shift. I value my state of mind, and don’t accept many disruptions to it. I’ve spent a lot of time telling myself “it’s not worth the stress” (especially when it comes to homework assignments). The idea that simply <em>seeing things differently</em> could make the stress response into a valuable resource makes a lot of my common motivational patterns look like alien pseudo-logic. I procrastinate, making <em>more</em> stress for myself, not less, <em>because I think stress is bad</em> and so avoid stressful things. So, you can see why viewing stress as a form of excitement, and an ally in getting things done, would turn the tables.</p><h4>Growth Mindset &amp; Extrinsic vs Intrinsic Motivation</h4><p>I’ve already written <a href="https://weird.solar/scott-alexander-doesnt-like-growth-mindset-yet-b6e26ec929fb#.oij9hfh2f">quite a bit</a> about my reaction to <em>Self Theories</em>, and I have quite a bit more to say about it if I get around to writing another post. For the idea I’m trying to get across here, though, I’ll just note a few critical ideas from the book:</p><ul><li>Self-esteem is not as important as it’s cracked up to be. Rather, self-esteem is important if you are focused on getting approval.
Children who cope well with failure are not the ones with high self-esteem; children with high self-esteem have further to fall in the face of failure. Children whose goals are more focused on learning don’t just have good coping mechanisms for handling the hit to their self-esteem; they barely seem to notice any self-worth implications of failure, as they focus on the problem and try to figure it out.</li><li>The way adults give feedback shapes the goals of the children. Attribute-oriented praise, such as praising intelligence, puts students in a more esteem-oriented mindset. Process feedback, such as praising effort and giving remarks on the specific good and bad things the child did and how to do better next time, puts students in a more learning-oriented mindset (aka “growth mindset”).</li></ul><p>Again, this puts a new perspective on things which makes the previous view seem like alien pseudo-logic to me. Kids with these two different mindsets take opposite actions when presented with the same stimuli. The learning-oriented kids will choose harder problems and harder classes, while the esteem-oriented (“fixed mindset”) kids will choose easier ones. Kids who are stuck thinking in terms of approval and self-esteem would see the learning-oriented students who don’t bat an eye in the face of failure, and infer that those kids must have <em>huge</em> self-esteem. Growth-mindset kids see fixed-mindset kids shooting themselves in the feet. The fixed-mindset kids basically want to look smart (get good grades, get approval from parents and teachers and peers…), but they end up being very short-sighted in the pursuit of that goal; they deprive themselves of learning opportunities because they fear failure.</p><p>This seems to me like a special case of <a href="https://en.wikipedia.org/wiki/Motivation#Incentive_theories:_intrinsic_and_extrinsic_motivation">intrinsic vs extrinsic motivation</a>. Intrinsic motives are self-generated, such as enjoying good conversation, running to feel good, playing video games for fun, and so on. Extrinsic motivations are things imposed by the environment, such as working to get money (to get all the things money can buy), doing assignments to get good grades (to get the things good grades can eventually buy), and so on.</p><p>It’s not a perfect distinction, and I’m hoping to find a better one that cuts a cleaner line around the phenomena, but the generalization seems to buy me a lot. Carol Dweck focuses on learning goals vs esteem-related goals such as good grades (which she called “performance goals”), but my generalization to intrinsic vs extrinsic lets me speculatively apply the ideas more broadly.</p><p><em>The Moral Economy</em> offers some evidence for this generalization. In one study it cites, people entering West Point are asked about the reasons they are joining. The reasons are assessed for intrinsic vs extrinsic motivation (or as he terms them, “intrinsic vs instrumental”). The cadets were followed for a decade after graduation to measure their success. In this study as well as many others cited in the book, there was a “crowding out” effect: extrinsic and intrinsic motives did not get along. Success was most closely associated with high intrinsic drive. High extrinsic drives were better than low drives of both kinds, but high drives of both kinds were barely better than high extrinsic drives alone.</p><p>This kind of “crowding out” fits well with the ideas of growth mindset, and also with the ideas about different stress responses mentioned earlier.
It seems quite plausible to me that extrinsic vs intrinsic motives are another important factor determining whether a stress response has characteristics of a threat response or a challenge response.</p><p>A complicating wrinkle to the story: <em>The Moral Economy</em> shows that crowding-out occurs, but not consistently. Sometimes other effects occur, including “crowding in”, where extrinsic and intrinsic drives reinforce each other. So, it’s complicated.</p><h4>Nonviolent Communication</h4><p>Nonviolent Communication (NVC) is a way to communicate during a conflict or express your feelings or desires in ways which avoid conflicts that might otherwise occur. It emphasises communicating in ways that are more likely to be successful, in cases where many people have a tendency to communicate in ways that make enemies. It’s not too far off to say that it’s a philosophy which applies principles of de-escalation to everyday life. Again, I won’t try to review the entire technique, but here are some important points which strike a chord with the themes here:</p><ul><li>Like <em>Self Theories</em>, NVC discourages praise of a person which focuses on properties of the person such as “You’re great!” or “You’re a genius!”, and instead encourages saying what you appreciated about specific actions. I find myself resonating strongly with this idea; I don’t usually appreciate being broadly praised, and prefer more concrete and informative feedback.</li><li>Like Nate’s Replacing Guilt series which I mentioned earlier, NVC emphasises only doing things because <em>you</em> want to, not because you “have to” or “should”.</li><li>Use of non-judgemental language. Judgemental language puts the other person on the defensive, and is easier to slip in than you may realize. “You make me feel ___” is judgemental, blaming the other person for your feelings; NVC encourages people to “own their emotions” and express them more factually, as in “When you ___, I feel ___” (and also explicitly communicate to the person that you don’t blame them for your emotions, you just want to let them know how you feel).</li><li>Using reward and punishment to get others to do what you want is seen as counter-productive. People have a basic drive to help each other, if they see the humanness of each other’s needs and feel mutual empathy and care. Threatening retribution if you don’t get what you want dissolves this motivation. Rewarding people is almost as bad, as it implies that your approval is conditional on their behavior. You want others to help you out of a desire to help you, not anything else.</li></ul><p>Again, we can get some support for this from the idea of intrinsic vs extrinsic motivation and the crowding-out effect. I also see a tie with the kind of nonjudgemental awareness taught by mindfulness meditation and by Eugene Gendlin’s <a href="https://en.wikipedia.org/wiki/Focusing">focusing</a> technique. Generally speaking, NVC is “goal-oriented communication”: expressing yourself in ways which are more likely to get what you want. But there’s a bit more to it than that — it also has to do with respecting other people’s autonomy in a particular way that seems important.</p><p>Now, I don’t think NVC is perfect.
My biggest complaint is probably the way focusing on language (formulae for what language to use and what to avoid) provides the foundation of the technique, which I think leads some NVC advocates into counter-productive “language policing” (which is really against the whole philosophy, but an easy mistake to make given how it’s taught). But I’m interested in the worldview <em>behind</em> the language, and I think it provides something valuable which clicks with the other stuff I’ve been reading. To quote <em>How to Win Friends and Influence People</em> (which also has some commonality with NVC):</p><blockquote>The difference between appreciation and flattery? That is simple. One is sincere and the other insincere. One comes from the heart out; the other from the teeth out. One is unselfish; the other selfish. One is universally admired; the other universally condemned.</blockquote><blockquote>[…]</blockquote><blockquote>No! No! No! I am not suggesting flattery! Far from it. I’m talking about a new way of life. Let me repeat. <em>I am talking about a new way of life.</em></blockquote><p>I’m not looking for the language formulas which come with NVC. I’m looking for a new way of life.</p><p>I don’t think NVC is perfect; nor do I think growth mindset is perfect; nor the theories of stress response laid out by Kelly McGonigal; and certainly not the theory of intrinsic vs extrinsic motivation that ties them all together for me. However, the way all of these things (together with Nate’s series on guilt, and other things) re-shape my thinking (so that old confusions look like alien pseudo-logic) makes me think something deeper is there, waiting to be articulated properly.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=84839deecf6f" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/stress-response-growth-mindset-and-nonviolent-communication-84839deecf6f">Stress Response, Growth Mindset, and Nonviolent Communication</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Betting on Beliefs]]></title>
            <link>https://weird.solar/betting-on-beliefs-d154e1bfb5c5?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/d154e1bfb5c5</guid>
            <category><![CDATA[betting]]></category>
            <category><![CDATA[cognitive-bias]]></category>
            <category><![CDATA[bias]]></category>
            <category><![CDATA[rationality]]></category>
            <category><![CDATA[game-theory]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Mon, 24 Oct 2016 16:47:20 GMT</pubDate>
            <atom:updated>2016-10-28T15:18:23.287Z</atom:updated>
            <content:encoded><![CDATA[<p><em>[Epistemic status — speculative.]</em></p><p>It’s long been a trope of Bayesian rationalism that if you disagree with a friend, you should bet. This is a good community norm: if you bet, you’re more likely to remember that you were wrong; you’re forced to quantify the degree of your certainty; you’ll be more humble or more firm in the future based on how past bets have gone; <a href="http://marginalrevolution.com/marginalrevolution/2012/11/a-bet-is-a-tax-on-bullshit.html">betting is a tax on bullshit</a>; and, betting odds create a <a href="http://mason.gmu.edu/~rhanson/futarchy.html">visible aggregation of group knowledge</a>. However, I would like to set all of those things aside and ask: <em>is it really rational to bet for the sake of the money?</em></p><p>The naive expected utility calculation says yes: if you assign probability <em>p</em> to X and your friend assigns probability <em>q</em>, then both of you will think it’s profitable in expected value to make a bet at odds <em>o </em>between <em>p</em> and <em>q.</em> (<a href="http://bywayofcontradiction.com/even-odds/">Negotiating the odds</a> is another matter.) Realistically, we don’t value money linearly, and furthermore there are practical reasons to avoid risk (since reducing variance in future funds makes planning easier). Still, taking these things into account yields something like <a href="https://en.wikipedia.org/wiki/Kelly_criterion">Kelly betting</a>, which always approves of putting down <em>some</em> money if the expected monetary value of the bet is positive. (It might be less than a cent, however.)</p>
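<p>To put numbers on the naive calculation (a minimal sketch in Python; the credences <em>p</em> and <em>q</em> and the implied probability <em>o</em> of the agreed odds are invented for illustration):</p><pre>p, q, o = 0.75, 0.50, 0.60  # your credence, your friend's, the bet's implied probability

# You back the event, risking o to win (1 - o); your friend takes the other side.
ev_you    = p * (1 - o) - (1 - p) * o    # = p - o = 0.15 per unit staked
ev_friend = (1 - q) * o - q * (1 - o)    # = o - q = 0.10 per unit staked

# Kelly sizing for your side, with net odds b = (1 - o) / o:
b = (1 - o) / o
kelly_fraction = (p * (b + 1) - 1) / b   # = (p - o) / (1 - o) = 0.375 of bankroll

# Both expected values are positive, so both sides want the bet, and Kelly
# approves some stake whenever p > o. Yet across the two of you the bet is
# zero-sum, which is exactly the puzzle below.</pre>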
<p>What makes me shy away from this way of reasoning is: <em>betting is a zero-sum game.</em> The number of won bets equals the number of lost bets at all times. The amount of money won equals the amount of money lost. In any bet between friends, both parties would honestly advise the other not to bet. Presumably, the argument <em>for</em> betting is that if you’re betting based on your beliefs, then you expect to win more than you lose on average. But this appears absurd: within a group of people betting, as much money is won as is lost. The average has to be zero. So, how can it be “rational” to expect to win more than you lose?</p><p>Maybe you can beat the odds by only betting if you have good reason to expect that you have better information than the person you’re betting against. Again, though, even <em>that</em> strategy can’t pay out on average — not against people who are similarly smart enough to think of it (and it’s not that hard to think of). You have to think you have <em>better than average</em> reason to expect you’re on the right side of the bet. It seems to me that a community of reasonably rational agents just won’t bet with each other, if they’re only after money. We all know that in order to profit from bets on average, we’ve got to have higher standards for when to bet than each other. So, the only possible outcome is for everyone’s standards to be so high that no one ever bets!</p><p>My intuition is based on Aumann’s agreement theorem, which states that Bayesian agents with the same prior (but differing evidence) cannot agree to disagree — if they try to agree on a bet, they will update on each other’s willingness to take bets until they converge to identical beliefs. You might <em>initially</em> give 3:1 odds on a project being late, but a co-worker enthusiastically trying to take that bet decreases your confidence to 2:1. The co-worker is initially interested in the 2:1 odds as well, but when you start to say “sure” for the adjusted odds, your confidence changes your co-worker’s mind; you’ve converged to a mutual 2:1 estimate. According to Aumann’s agreement theorem, Bayesians who try to bet will move their beliefs toward each other until they no longer have a disagreement to bet on.</p><p>How well this applies to humans was much-discussed in the <a href="http://www.overcomingbias.com/2008/11/disagreement-de.html">disagreement debate</a> on Overcoming Bias. Personally, I would more often update toward the other person’s beliefs than I would take a bet.</p><p>“Wait”, the betting advocate says — “that argument assumes everyone is rational. Really, though, we know there are many people in the mix who will take bets when they don’t have good reason to think they’ve got better information than you.” Sure, that’s true. But if you’re making bets, how confident are you that you’re not <em>one of them?</em> We know we’re all biased. Doesn’t it seem safer to have an anti-betting policy? And anyway, that scenario still doesn’t appear to allow bets between reasonable people; bets can only happen when someone participates unreasonably. So, it would seem odd to advise people that betting is rational.</p><p>“Ah, no, that’s not quite right. The <em>existence</em> of unreasonable bettors casts <em>reasonable doubt</em> on which type of bettor I am. This means reasonable people will occasionally bet with me, because they happen to believe I’m a fool, even though that’s not the case.” Really?</p><p>It seems to me that for this argument to go through, you’d need to have privileged information that’s <em>so</em> unlikely, people are more likely to think you’re crazy than suspect the truth. Suppose you have such information. People are willing to bet with you now, because betting with crazy people pays off. But should <em>you</em> make that bet? It’s not just about knowing you’re not clinically insane. Other people <em>see</em> that you’re making offers for extreme bets. We’re assuming they’ve weighed the possibility that you have privileged information against the possibility that you are somehow mistaken, and come to a <em>reasonable conclusion</em> that you’re mistaken. This suggests that mistakes of that order of magnitude are somewhere around as common as the kind of one-in-a-million information you think you have — or perhaps much more common. <em>How sure can you be that you’re not making one of those mistakes? </em>Wouldn’t you do better, on average, if you had a general policy of not betting in this kind of circumstance?</p><p>I still think a habit of making bets with friends is a good one for all the reasons I mentioned before. However, I find it <em>really hard</em> to envision a scenario where you’re justified in taking a bet purely for the money.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d154e1bfb5c5" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/betting-on-beliefs-d154e1bfb5c5">Betting on Beliefs</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What Does Long-Term Thinking Feel Like from the Inside?]]></title>
            <link>https://weird.solar/what-does-long-term-thinking-feel-like-from-the-inside-89dd7732aaac?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/89dd7732aaac</guid>
            <category><![CDATA[motivation]]></category>
            <category><![CDATA[productivity]]></category>
            <category><![CDATA[positive-psychology]]></category>
            <category><![CDATA[intrinsic-motivation]]></category>
            <category><![CDATA[self-improvement]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Fri, 21 Oct 2016 20:36:59 GMT</pubDate>
            <atom:updated>2016-10-24T13:31:56.108Z</atom:updated>
            <content:encoded><![CDATA[<p><em>Epistemic status: at high risk of </em><a href="https://scienceornot.net/2012/10/23/single-study-syndrome-clutching-at-convenient-confirmation/"><em>single-study syndrome</em></a><em>.</em></p><p>In my <a href="https://weird.solar/scott-alexander-doesnt-like-growth-mindset-yet-b6e26ec929fb#.fozvyq939">post on growth mindset</a>, I mentioned time preference. This got me thinking about what kind of thought causes high time preference (meaning impulsive, short-term decision-making) vs low. What kinds of thoughts, specifically, lead to impulsive vs considered behavior? It’s all too easy to sit down and create nice long-term plans, but then go right back to old habits like procrastination and comfortable ruts.</p><p>I got much of the information here from Kelly McGonigal’s <em>The Willpower Instinct</em>. It’s a popularization; I don’t recommend it strongly to technical folks (I found all the stories and narration annoying, and wanted more critical evaluation of the facts). However, the end notes (which constitute a good fraction of the book!) make for a good annotated bibliography on the subject. I’m also drawing some references from the book <em>59 Seconds</em>.</p><p>I have relatively low confidence in most of these ideas. It seems like a lot of the stuff is based on a few studies, and might not replicate. Nonetheless, perhaps it’s a bit better than personal anecdote and speculation.</p><h4>Procrastination Equation</h4><p>First, the most solid knowledge we have: the <a href="http://lesswrong.com/lw/3w3/how_to_beat_procrastination/">procrastination equation</a>. This is actually a general theory of motivation, identifying four key factors in motivation (stealing an image from the article):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/548/1*xtXu0ZMlRAXvtSI28Txw9g.png" /></figure><p>This is something of a fake equation; read <a href="http://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Motivation/Steel_%26_Konig_2006_Integrating_theories_of_motivation.pdf">the research article</a> for actual math. However, this version gives a good summary of key factors. Expectancy times value is essentially the classical utility-times-probability from <a href="https://en.wikipedia.org/wiki/Expected_utility_hypothesis">decision theory</a>; so, that part is perfectly rational. Dividing by the delay models <a href="https://en.wikipedia.org/wiki/Hyperbolic_discounting">hyperbolic discounting</a>. This is far from rational (it creates temporal inconsistency), but it’s also a somewhat fixed element of our psychology. <em>Impulsiveness,</em> however, is the factor I’ll mainly be addressing. It determines <em>how much</em> the delay matters.</p>
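<p>Written out, the summary version of the equation is Motivation = (Expectancy × Value) / (Impulsiveness × Delay). A toy sketch in Python (the factor values are invented; the real math is in the research article):</p><pre># Toy form of the procrastination equation from the image above.
def motivation(expectancy, value, impulsiveness, delay):
    return (expectancy * value) / (impulsiveness * delay)

# Same task, same 30-day delay: halving impulsiveness doubles motivation.
print(motivation(0.8, 10, 2.0, 30))  # ~0.13
print(motivation(0.8, 10, 1.0, 30))  # ~0.27</pre>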
<p><a href="http://lesswrong.com/lw/3w3/how_to_beat_procrastination/">The LessWrong article</a> discusses how to use this equation to debug procrastination; we can do things like tweaking tasks to have higher expectancy (more reliable rewards) or lower delay. If you’re fighting procrastination, go have a look at those strategies; they’re more firmly established by evidence than what I’ll discuss here, for the most part. Impulsiveness is the biggest factor in procrastination, though. If the other strategies aren’t working, what else can we try to do to be more conscientious?</p><h4>How about we make people more reflective?</h4><p>Sliding just a bit down on our certainty scale: there seems to be something like <a href="https://meteuphoric.wordpress.com/2016/07/21/two-kinds-of-responses/">reflexive vs reflective responses</a>. Reflexive (i.e., reflex-based) responses feel “automatic” and “out of our control”; reflective responses are able to modify these behaviors, inhibiting our automatic reactions or activating other patterns. <a href="http://pubs.aeaweb.org/doi/pdfplus/10.1257/089533005775196750">Reflexive responses are associated with the midbrain; reflective responses are associated with the prefrontal cortex</a>. (I’m purposefully hedging by saying “associated with”; many brain regions are involved in any behavior, but the midbrain and prefrontal cortex have been found to play a significant role in the reflexive-reflective distinction.) In addition, there are associations within the peripheral nervous system. Stress has a tendency to activate the impulsive reflexive decision-making system through the fight-or-flight response; this is associated with the sympathetic nervous system. The parasympathetic nervous system is associated with reflective thinking. Since the parasympathetic nervous system is associated with calmness, one might conclude that reflective thinking is as well; but, it’s a bit more complex. There appears to be a <a href="https://www.researchgate.net/profile/Tory_Eisenlohr-Moul/publication/228101641_Pause_and_plan_includes_the_liver_Self-regulatory_effort_slows_alcohol_metabolism_for_those_low_in_self-control/links/0f31753aa16f7a821f000000.pdf">pause-and-plan response, distinct from calmness</a>, which shares some features of a stress response but favors self-control rather than impulsiveness. <a href="http://psycnet.apa.org/books/13090/009">This pause-and-plan response can be measured effectively by heart-rate variability</a>, which appears to be linked in both directions: performing tasks which require you to modulate your impulses creates high heart-rate variability, and also, high heart-rate variability before such a task predicts good performance at the task.</p><p>It seems to me that we can modulate this to a fair degree once we are aware of it. Common advice like counting to ten or focusing on slow, deep breaths seems likely to be helpful. Mindfulness techniques like <a href="https://en.wikipedia.org/wiki/Distancing_(psychology)#Self-distancing_perspective">self-distancing</a> also <a href="http://www.tandfonline.com/doi/abs/10.1080/10478400701598363?journalCode=hpli20">seem likely to be helpful</a>.</p><h4>Value Affirmation &amp; Moral Licensing</h4><p>There’s some evidence that value-oriented thinking reduces impulsive behavior. <a href="https://srconstantin.wordpress.com/2015/05/04/values-affirmation-is-powerful/">Value affirmation is powerful</a>. Spending 15 minutes writing about what you value has a long-lasting positive effect. However, it may also help to re-frame in terms of values more regularly.</p><p>As I discussed in <a href="https://weird.solar/the-internal-lawyer-b00625428e1e#.b5zt8t61m">The Internal Lawyer</a>, there’s an effect called <a href="https://en.wikipedia.org/wiki/Self-licensing">self-licensing</a> which causes people to work against themselves. Suppose you are trying to lose weight.
Exercising regularly may cause you to eat too much; you “license” one bad behavior in your head, justifying it with the other good behavior. This type of behavior has been observed in a variety of domains.</p><p>Before tying this back to value affirmation, let’s think more about why this might happen. I’ve come across three different explanations so far:</p><ol><li>My “internal lawyer” story: we’re constantly thinking about how to justify our actions. When we do one positive thing for our goals, some part of us says “Aha, now I can do whatever I want!”. We’ve done enough work to push off criticism from the internal judge, so we lose motivation; this results in “licensing” ourselves to work against the progress we’ve made.</li><li>We seek to reward ourselves for doing well. This results in giving in to pleasurable activities when we’ve done more difficult things, in a way that <em>looks like</em> impulsive behavior, but is actually strategic self-reinforcement. If this sometimes causes us to work against ourselves, maybe that’s the exception rather than the rule; a reasonable loss for an overall beneficial self-management strategy. This is the explanation most people give for their contradictory actions, in my experience.</li><li>Humans manage a variety of desires, some of which contradict each other. We might desire to have a lot of money saved up in case of disaster, but simultaneously desire nice clothes, fancy electronics, and so on. We manage those desires by a sort of balancing act: desires which haven’t been acted on are felt more strongly, while those which have been acted on quiet down for a while. The result is that putting money away in savings can cause us to go on a shopping spree, undoing our progress. That’s the explanation favored by <a href="https://faculty.chicagobooth.edu/ayelet.fishbach/research/FD_JCR_05.pdf">this paper</a>.</li></ol><p>In any case, it turns out that we don’t <em>always</em> act like this, at least not always to the same extent. That’s where value affirmation comes in. In <a href="https://faculty.chicagobooth.edu/ayelet.fishbach/research/FD_JCR_05.pdf">a study in this paper</a>, participants who were asked to evaluate their past actions in terms of <em>progress</em> displayed the usual moral licensing effect, whereas participants who were asked to evaluate their past actions in terms of <em>commitment</em> to their goals had a lessened effect. This sounds a lot like value-affirmation to me. Note, however, that the effect was measured in self-predictions, rather than actual performance, so the conclusion isn’t that strong. <a href="http://www.bm.ust.hk/mark/staff/Anirban/Anirban%20JCR%20-%20Dec%202008.pdf">This paper</a> offered participants hypothetical cake in a similar procedure. Some participants were asked to recall times where they resisted temptation; others were asked to recall times when they gave in. In both cases, a subset of these were asked to recall the <em>reasons</em> for their decisions as well. (A control group was also offered cake with none of these prompts.) Here are the results:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/777/1*hz3FBCC7tPaLLt1coDVvIQ.png" /></figure><p>Recalling reasons appears to make impulsive people act like non-impulsive people, displaying consistency with past behavior rather than the usual reversal created by self-licensing. 
Perhaps “non-impulsives” habitually think about the reasons why they do what they do?</p><p>It’s also interesting that “non-impulsives” chose cake more often when reminded of past failures of willpower. We might hypothesize that non-impulsives think of reasons by default, <em>but also</em> think of past successes rather than failures by default. Impulsives would <em>also</em> appear to think of past successes more than past failures; but for them, this has the opposite effect. Since they don’t remember their reasons by default, thinking of past success leads to self-licensing. (This suggests that they’d be better off thinking of <em>failures</em> all the time; on the other hand, that could lead to depression or counterproductive avoidance behaviors. Shifting to recalling reasons by default seems wiser.)</p><p>On the other hand, it’s not clear how well this generalizes to decision-making in the wild. It seems like the moral licensing effect is well-supported in the literature, with real choices rather than multiple-choice questions as in the cake-or-salad study. Whether people really fall into “impulsive” and “non-impulsive” groups who act as described above is harder to say.</p><h4>Counter-Productive Future Thoughts</h4><p>Another scary result which comes up in a <a href="https://faculty.chicagobooth.edu/ayelet.fishbach/research/FD_JCR_05.pdf">few</a> <a href="http://faculty.som.yale.edu/ravidhar/documents/WhereThereisaWayIsThereaWill.pdf">other</a> studies is that self-licensing appears to operate on <em>hypothetical future behavior</em> as well as actual past behavior. This way of thinking is familiar to procrastinators: you can constantly believe you’ll “do it tomorrow”, justifying slacking off today. The result, of course, is that you keep putting it off further and further until there is no tomorrow to push things off into; you end up doing everything at the last minute.</p><p>Perhaps remembering reasons is a sufficient safeguard in this case, as it was for past good behavior. However, there does seem to be a bit more going on here. <a href="http://www.apa.org/pubs/journals/releases/xge-134123.pdf">This study</a> investigates the effect of estimated future free time on decision-making, finding that we act as if we will have much more free time in the future than we eventually do. Perhaps we can try to be unrelentingly realistic about how much time we’ll find we have in the future, to decrease the bias.</p><p>A third possible solution is in the book <em>The Science of Self-Control</em>, which suggests (based on experiments with smokers) that striving for <em>consistent</em> behavior helps to mitigate the effect. If you’re always thinking “I can do more work tomorrow to make up for it”, you’ll procrastinate. If you’re thinking “I’m trying to be consistent in how much I get done”, then taking today off directly implies taking tomorrow off (to keep it consistent!). For this mental trick to work, you’ve got to think of consistency as actually being <em>more</em> important — otherwise you’ll be tempted to cheat by doing better tomorrow to make up for slumps (and in reality, put this off indefinitely as usual). Instead, consistency has to be viewed as a <em>foundation</em> of performance. If you’re trying to be productive, <em>first you need your productivity to be consistent.</em> If you’re trying to quit smoking, <em>first you need your smoking to be consistent.</em> If you’re trying to eat well, <em>first you need your diet to be consistent</em>. 
You can’t move your behavior in the direction you want if it’s so unreliable to begin with.</p><p>Maybe that’s not for everyone. It’s possible to have <a href="http://www.paulgraham.com/procrastination.html">very inconsistent, but very productive, work habits</a>. It does seem like we’re constantly fooling ourselves, though, by imagining that our future behavior will be better (and more consistently good), and that we’ll have more free time, etc. I suspect this is the internal lawyer again, watching how we can present ourselves to others. People like to hear optimistic versions of the future. When we don’t keep what we tell others separate from what we tell ourselves, this messes us up. (But remember, my internal-lawyer model is <em>very</em> made-up.)</p><p>Another thing that gets us twisted up is positive thinking. There are all kinds of books and such that will advise you to vividly visualize your success. This is supposed to motivate you, setting you on the path to get what you want. Turns out, it has the opposite effect! <a href="http://www.europhd.it/html/_onda02/07/PDF/9th%20Lab%20Meeting%20Scientific%20Material/Oettingen/Oe.%20%26%20May.,%202002,%20JPSP.pdf">Visualizing success</a> <a href="https://www.psy.uni-hamburg.de/arbeitsbereiche/paedagogische-psychologie-und-motivation/personen/oettingen-gabriele/dokumente/oettingen-1991.pdf">predicts failure</a>. It’s as if the positive visualization satisfies us, and we lose motivation to actually go get what we want. This shouldn’t be too surprising given what we’ve discussed so far. However, those same studies found that <em>expecting</em> success is still a predictor of success, as the positive thinkers would have you believe. It’s <em>visualizing</em> success that de-motivates. Those who visualized success but predicted failure ended up worst off. Those who visualized failure but predicted success were best off.</p><p>In terms of the procrastination equation, we need to feel a sufficiently high expectancy; yet, we also need to visualize failure. Why would this be? One reason might be the old motto: <em>hope for the best, and prepare for the worst.</em> Where visualizing positive outcomes lulls some part of us into stagnation, visualizing negative outcomes likely motivates us to deal with those possibilities. <a href="http://www.psych.nyu.edu/oettingen/Oettingen,%20G.,%20&amp;%20Gollwitzer,%20P.%20M.%20(2002).%20Psychological%20Inquiry_neu.pdf">Several</a> <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.601.4733&amp;rep=rep1&amp;type=pdf">studies</a> <a href="http://www.psych.nyu.edu/oettingen/Oettingen,%20G.,%20Pak,%20H.,%20&amp;%20Schnetter,%20K.%20(2001).%20Self-regulation%20of%20goal%20setting.pdf">have</a> <a href="https://kops.uni-konstanz.de/bitstream/handle/123456789/10461/04OettBulgHendGoll_GoalPurs.pdf">suggested</a> that <em>mental contrasting</em>, in which one purposefully thinks about both potential success and failure, compares well to thinking about success or failure alone. It stands to reason: to be motivated into action, we must believe that good things are possible if we act, <em>and</em> that bad things are possible if we don’t act.</p>
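<p>Since the procrastination equation keeps coming up, a toy calculation may help make the trade-offs concrete. Here is a minimal sketch in Python, using the functional form popularized by Piers Steel; the numbers are invented, and the “+ 1” in the denominator is my own guard against dividing by zero:</p><pre># Toy version of the procrastination equation. All numbers are invented.
def motivation(expectancy, value, impulsiveness, delay_days):
    return (expectancy * value) / (impulsiveness * delay_days + 1)

# Same task, same person; only the deadline differs.
print(motivation(expectancy=0.7, value=10, impulsiveness=1.0, delay_days=60))  # ~0.11
print(motivation(expectancy=0.7, value=10, impulsiveness=1.0, delay_days=2))   # ~2.33</pre><p>In these terms, mental contrasting protects the expectancy and value terms (we keep believing that acting pays off) while keeping the cost of inaction vivid; and, as comes up below, shrinking the delay term is a lever of its own.</p>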
<p>This isn’t the only helpful way to use visualization, though. <a href="http://lesswrong.com/lw/3w3/how_to_beat_procrastination/">One study</a> found that visualizing “process” rather than “outcome” works. In the study, that meant visualizing studying rather than visualizing getting a good grade. Other research has found that visualizing the training process is <a href="http://journals.humankinetics.com/doi/abs/10.1123/tsp.11.3.277?journalCode=tsp">useful</a> <a href="http://search.proquest.com/openview/633f6c1ac151257e0bc5b12f694ae996/1?pq-origsite=gscholar&amp;cbl=30153">for</a> <a href="http://journals.lww.com/nsca-scj/Abstract/2012/10000/Maximizing_Strength_Training_Performance_Using.10.aspx">athletes</a>, as well. It seems this helps form connections in the brain almost like real practice would.</p><p>This brings up another point from the procrastination equation, which is that we do better if we set our sights on short-term goals. Perhaps one reason visualizing success de-motivates rather than motivates is that it sets the goal far in the future. As discussed extensively in the previous section, it’s motivationally very important to keep our long-term goals in mind; but I’d say it’s similarly important to translate these into actionable next steps. This implies a kind of dialog between long-term and short-term thinking. Long-term goals make us think about actionable next steps, while thinking of single steps also brings to mind the long-term goals they’re connected to. This back-and-forth can be facilitated by another shift in thinking, which has to do with intrinsic vs extrinsic motives.</p><h4>Intrinsic vs Extrinsic Motivation</h4><p><a href="http://search.proquest.com/openview/d5c287e3379861a866b1febaa8eac6b1/1?pq-origsite=gscholar">Not all self-affirmations are created equal</a>. People who are instructed to give <em>intrinsic</em> motives get more benefit from value-affirmation than those who are instructed to give extrinsic motives. (I’ve seen a study which found that people who spontaneously give intrinsic reasons end up doing better than those who give extrinsic ones, too; but I can’t find it now.) Intrinsic motives are self-generated desires which have to do with the task at hand, such as raw pleasure, curiosity, and playfulness. Extrinsic motives are things like money and survival, for which we do many things we aren’t fundamentally interested in. Intrinsic motives have been observed to be <a href="http://seer.ufrgs.br/Movimento/article/viewFile/2659/5763">better for inducing a flow state</a>. Extrinsic motives may induce more of a fight-or-flight-like response, increasing impulsiveness.</p><p>This is a paradoxical state of affairs! Those who are following immediate motives, doing what they are doing for the sake of doing it, do better in the long run. Such is human psychology. Luckily, human motives are also malleable. Even if you only went to school in order to make better money later, you can still do better by focusing more on intrinsic goals such as learning. Someone (I forget who) summarized this as: <em>consequentialism is what’s true; but virtue ethics is what works.</em> Our motivation system doesn’t work that well if it has to follow long chains of consequences to derive the value of an action. We work much better if we can think of particular actions as intrinsically good or bad. Take a cue from virtue ethics: don’t turn in work ahead of time because you fear the consequences of being late; do it because you <em>value diligence</em> (or value being ahead of the game; or whatever works for you).</p><p>I think this helps with the sort of back-and-forth value-action/action-value thinking I mentioned at the end of the previous section, because it brings actions and values close together. 
Actually, I feel like someone who writes about extrinsic motives in a value-affirmation exercise is sorta missing the point: you don’t really do things “for money”; money isn’t an end goal. Citing money is a sign you haven’t gone far enough down the chain of reasons to find what you really enjoy in life.</p><p>In any case! That’s all I have for now. Here is a quick summary of the strategies:</p><h4>Summary</h4><ol><li><strong><em>Structure tasks in a way that respects the motivation equation. </em></strong>The brain likes things which grant visceral rewards with high probability and low delay. Improve any of those dimensions and you’re likely to improve your motivation.</li><li><strong><em>Be more reflective.</em></strong> Breathe, count to ten, feel the soles of your feet, think of yourself from a third-person perspective. Observe and react to your impulses, rather than letting them control you.</li><li><strong><em>Remember what you’re after. Remember what you value.</em></strong> Thinking of reasons rather than just actions increases consistency and commitment.</li><li><strong><em>Be unrelentingly realistic about future time and behavior. </em></strong>Don’t make the mistake of thinking you’ll have more free time tomorrow than today. Don’t make the mistake of thinking you’ll procrastinate any less tomorrow, either.</li><li><strong><em>Consistent performance is the foundation of good performance.</em></strong> Don’t let your internal lawyer fool you into making exceptions “just this once” because you can make it up later. Think of your behavior now as establishing a rule that you’ll have to follow next time as well.</li><li><strong><em>Be optimistic, but visualize failure. </em></strong>You need high expectancy to be motivated, but it’s also important to be thinking of what could go wrong so that you don’t get complacent.</li><li><strong><em>Visualize process, not outcome. </em></strong>A mental walk-through of what you need to do serves some of the same functions as actual practice. Also, thinking of process helps generate next steps, and possibly helps you think of more things that could go wrong and ways to counter them.</li><li><strong><em>Consequentialism is what’s true, but virtue ethics is what works.</em></strong> The human motivation system works better if it values what it’s doing right now, rather than some future payoff.</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=89dd7732aaac" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/what-does-long-term-thinking-feel-like-from-the-inside-89dd7732aaac">What Does Long-Term Thinking Feel Like from the Inside?</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Scott Alexander doesn’t like growth mindset… yet.]]></title>
            <link>https://weird.solar/scott-alexander-doesnt-like-growth-mindset-yet-b6e26ec929fb?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/b6e26ec929fb</guid>
            <category><![CDATA[education]]></category>
            <category><![CDATA[cognitive-bias]]></category>
            <category><![CDATA[rationality]]></category>
            <category><![CDATA[growth-mindset]]></category>
            <category><![CDATA[psychology]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Mon, 10 Oct 2016 21:50:42 GMT</pubDate>
            <atom:updated>2016-10-16T01:37:07.535Z</atom:updated>
<content:encoded><![CDATA[<p>I recently read Scott Alexander’s posts about growth mindset (<a href="http://slatestarcodex.com/2015/04/08/no-clarity-around-growth-mindset-yet/">one</a>, <a href="http://slatestarcodex.com/2015/04/10/i-will-never-have-the-ability-to-clearly-explain-my-beliefs-about-growth-mindset/">two</a>, <a href="http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/">three</a>, <a href="http://slatestarcodex.com/2015/05/07/growth-mindset-4-growth-of-office/">four</a>). He starts out admitting that he’s biased against it, and I agree that he is — reading his case against it made me take growth mindset <em>more</em> seriously, because his criticism was rather weak and he obviously tried pretty hard to knock it down.</p><p>Before reading his take, all I knew of growth mindset was that there are a lot of people who go around responding “Yet! Growth mindset!” whenever they hear things like “I’m not good at math” which sound like they imply fixed skill levels. It’s mostly a silly in-joke. You wait for someone to say they “can’t”, and then you pounce: “YET!”</p><p>Scott divides growth mindset into two possible versions, the “Sorta Controversial Position” (which he thinks might not be true outside the lab) and the “Very Controversial Position” (which he finds almost absurd). Quoting his second article:</p><blockquote>SCP: The more children believe effort matters, and the less they believe innate ability matters, the more successful they will be. This is because every iota of belief they have in effort gives them more incentive to practice. A child who believes innate ability and effort both explain part of the story might think “Well, if I practice I’ll become a <em>little</em> better, but I’ll never be as good as Mozart. So I’ll practice a little but not get my hopes up.” A child who believes only effort matters, and innate ability doesn’t matter at all, might think “If I practice enough, I can become exactly as good as Mozart.” Then she will practice a truly ridiculous amount to try to achieve fame and fortune. This is why growth mindset works.</blockquote><blockquote>VCP: Belief in the importance of ability directly saps a child’s good qualities in some complicated psychological way. It is worse than merely believing that success is based on luck, or success is based on skin color, or that success is based on whatever other thing that isn’t effort. It shifts children into a mode where they must protect their claim to genius at all costs, whether that requires lying, cheating, self-sabotaging, or just avoiding intellectual effort entirely. When a fixed mindset child doesn’t practice as much, it’s not because they’ve made a rational calculation about the utility of practice towards achieving success, it’s because they’ve partly or entirely <em>abandoned success as a goal</em> in favor of the goal of trying to convince other people that they’re Smart.</blockquote><p>(Fixed mindset is the name for the opposite of growth mindset.)</p><p>A researcher studying growth mindset responds to Scott by email (quoted in the <a href="http://slatestarcodex.com/2015/05/07/growth-mindset-4-growth-of-office/">fourth</a> article) saying that Scott misunderstands the claim behind growth mindset, and that neither Carol Dweck (the originator of growth mindset research) nor other researchers define growth mindset like that. 
I’ll talk more about what Carol Dweck says in her book <em>Self Theories</em>, but although I think Scott does get it a little wrong, I think he’s mostly right that she advocates the VCP. I’ve also come to believe the VCP is mostly true!</p><h4>Scott’s Case Against</h4><p>Scott’s first argument against growth mindset is <em>the evidence is too damn good</em>. Growth mindset studies have big effect sizes, excellent statistical significance, and replicate well. Scott doesn’t make his reasoning explicit, but I suspect he’s referencing an idea which has emerged from the <a href="http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/">replication crisis</a>, which is <a href="http://www1.psych.purdue.edu/~gfrancis/Publications/GFrancis-R1.pdf">if you see more positive results than you’d expect, that’s evidence of bias</a>. Taking this into account properly can <a href="https://arxiv.org/abs/1601.00900">make our confidence decrease as evidence gets stronger</a>. Growth-mindset results replicate well, which means the effect isn’t likely an illusion created by <a href="https://en.wikipedia.org/wiki/Data_dredging">p-hacking</a>. That leaves publication bias as a possible culprit. <a href="http://faculty.wcas.northwestern.edu/eli-finkel/documents/InPress_BurnetteOBoyleVanEppsPollackFinkel_PsychBull.pdf">This meta-analysis</a> concluded that publication bias could only be creating small distortions. I tentatively conclude that we’re in the clear: growth mindset seems unlikely to be another illusion to be washed away by improved rigor.</p>
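<p>To see why “more positive results than you’d expect” is evidence of bias, a toy power calculation helps. This is just a sketch; the 80% power figure is a conventional benchmark I’m assuming, not a number from the growth-mindset literature:</p><pre># Even a real effect should produce some null results in a clean literature.
power = 0.8        # assumed chance each study detects the (real) effect
n_studies = 20     # hypothetical number of published studies

p_all_positive = power ** n_studies
print(round(p_all_positive, 4))  # 0.0115: a 20-for-20 positive record would itself be suspicious</pre><p>So a literature that looks “too damn good” really can be self-undermining; the meta-analysis above is what reassures me that growth mindset isn’t in that situation.</p>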
<p>Second, Scott complains that it’s just not very plausible: telling kids who fail that they just have to try harder is <em>mean</em>, not inspiring. His wording in the first post against growth mindset suggests that there’s research backing this up, which would indeed be a point against growth mindset; but he links to <a href="http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/">one of his own posts</a>, and I don’t see any research cited in it. In any case, he argues that blaming fixed biological intelligence for our failures is not only more true, but more compassionate than blaming low effort.</p><p>Which brings us to another point. Scott actually doesn’t make as big a deal out of this as I’d expect, but reading <em>Self Theories</em>, I got pretty annoyed at this aspect of the work. Carol Dweck has several ways of inducing growth or fixed mindset, but the biggest one (and the one which she thinks is most important, as I understand it) is to <em>lie to kids</em> about how intelligence works. In some studies she uses fake psychology articles, some of which talk about fabricated research showing that intelligence levels are malleable, while another set of articles (given to the other set of subjects) discusses similarly fake research showing the opposite. The fake research is delivered mixed in with vivid stories from the lives of geniuses which support one side or the other.</p><p>Now, Carol Dweck doesn’t outright deny any of the research linking intelligence to biology or measuring its stability across a lifetime. Scott <a href="http://slatestarcodex.com/2015/04/10/i-will-never-have-the-ability-to-clearly-explain-my-beliefs-about-growth-mindset/">states in strong terms</a> that he isn’t accusing her of that. However, I think it’s pretty clear that <em>she would have you believe</em> such research is wrong. In <a href="http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/">terms Scott might prefer</a>, it’s a motte-and-bailey ideology. She tells other researchers that growth mindset isn’t claiming biology doesn’t matter for intelligence; rather, it’s claiming that <em>if you believe biology doesn’t matter</em>, you do better. This allows her to advocate <em>telling</em> people that intelligence is malleable, in no uncertain terms, while not needing to claim that intelligence is indeed malleable.</p><p>The nature of the game becomes really clear in chapter 9, her chapter on IQ. She can’t bring herself to look at the research on IQ (which is quite unfortunate, since there are really interesting issues here — I’ll address that later). Instead, she side-steps the research by citing what the <em>inventor</em> of IQ thought, saying that he “knew” IQ wasn’t a measure of fixed potential. She gives a few anti-IQ citations, but without saying what any of them accomplish; looking at them, most don’t seem too promising.* She goes on to raise various doubts about how intelligence should be defined. She cites her own research showing that growth-mindset individuals (who are sorted out from the rest via a survey which ensures their views fit her own) define intelligence by phrases like “how much effort you put in and your willingness to learn and do all that you can to fully understand it”. Finally, she ends the chapter by admitting she’s not very interested in the truth of the matter:</p><blockquote>The goal of this book is not really to resolve what intelligence is, but rather to ask: What is the most useful way of thinking about intelligence and what are the consequences of adopting one view over another? I think our research findings speak very clearly to this issue.</blockquote><p>GAaah. :(</p><p>Despite my frustration, I could forgive her for that. If this is the reality we’re living in — if people who believe false things about intelligence simply do better than people who believe the truth — her position is arguably the wise one. She <em>should</em> be beating around the bush when it comes to IQ research, because which view is more useful <em>is</em> more important than which is true. This way of thinking annoys me terribly, but it’s not really a point against Carol Dweck’s research.</p><p>However, it gets worse for growth mindset.</p><p>I think this is Scott’s most effective argument against growth mindset: <em>every study</em> shows growth-mindset individuals starting out at an equal academic performance level to fixed-mindset individuals (or lower!). This leads him to say:</p><blockquote>The studies don’t show any real-life correlation between growth mindset and any measures of success.</blockquote><p>This isn’t quite true; in her book, Dweck discusses a couple of studies which show correlation with later success. But that only makes it even more puzzling! Dweck herself emphasizes, in study after study discussed in <em>Self Theories</em>: when students are tested for growth mindset vs fixed mindset, the groups are almost always equal in academic performance starting out. Nonetheless, growth-mindset students perform better in Dweck’s experiments.</p><p>One possibility is that growth mindset doesn’t show any differences in performance in the grade-school students that Dweck mainly studies, but creates a widening gap later. In chapter 5 of <em>Self Theories</em>, Dweck discusses a study which shows this happening in the transition from grade school to junior high. 
Fixed-mindset students declined in standing across this transition, especially the fixed-mindset students with the best grades starting out. Similarly, growth-mindset students with the <em>worst</em> grades starting out showed the most improvement. I don’t know whether there was a difference in average grades between growth-mindset and fixed-mindset students in higher grades. She doesn’t say in the book, and there wouldn’t have to be: mindsets could change over time in a way that destroyed the correlation with GPA at any one grade. I’d expect her to mention a difference emerging if it had happened, since a growing gap between the GPA of growth-mindset kids and fixed-mindset kids would speak in favor of growth mindset.</p><p>A <a href="http://disjointedthinking.jeffhughes.ca/wp-content/uploads/2012/10/Robins-Pals-2002.-Implicit-self-theories-in-the-academic-domain.pdf">different study</a> (also discussed in chapter 5) looked at college students. In that case, the fixed-mindset students entered college with slightly better SAT scores. They then did as well as growth-mindset students, meaning that they underperformed relative to their SAT scores. Keep in mind that since colleges are trying to select the best students, this doesn’t really suggest the growth-mindset students started out academically worse; perhaps their applications made up for it in other ways. (The college studied was Berkeley, so standards were high.) In that case, we’d <em>expect</em> low-SAT students to over-perform relative to their SAT alone, and high-SAT students to under-perform, so this wouldn’t seem too convincing as an example of growth mindset doing better. However, I see no indication of that in the study — growth-mindset and fixed-mindset groups were the same as each other in high-school GPA on entry. (High-school GPA was the only other on-entry academic ability statistic they looked at.) So, it’s hard to say. At least we can conclude that growth mindset didn’t do more than make up for the initial differences in academic performance between the groups.</p><p>Overall, it looks like the case for long-term benefits is weak in the book. There are some results I haven’t mentioned yet, which <em>do</em> show long-term benefits for minority students. The study Scott Alexander <a href="http://slatestarcodex.com/2015/04/22/growth-mindset-3-a-pox-on-growth-your-houses/">spent so much time complaining about</a> was like that, too. But I have to wonder: if there wasn’t an effect on average, but there was a positive effect for minorities, doesn’t that mean it’s a <em>negative</em> effect for non-minorities?</p><p>All this seems like a fairly damning case against the importance of growth mindset. As we go on, I’ll explain why I think otherwise.</p><p>But first, what <em>is</em> growth mindset, according to Carol Dweck?</p><h4>Dweck’s Case For</h4><p>After reading Scott’s series of posts, I had my suspicions that growth mindset was actually a pretty good hypothesis that summarized a lot of ideas I cared about and put them on a firmer experimental footing. However, I expected to be annoyed by Dweck lumping all these important things under one label, “growth mindset”. 
I was wrong on that count: in <em>Self Theories</em>, she examines a number of distinct phenomena and doesn’t lump them together under one umbrella even after she’s shown that they correlate with each other (and, moreover, has established cause-effect relations).</p><p><strong><em>Wanting to look good vs wanting to learn.</em></strong> “Wanting to learn” means focusing on the material itself, being driven by challenges, and choosing those things <em>over</em> external validation when they conflict. “Wanting to look good” means focusing on grades, pleasing teachers or parents, or impressing fellow students. Basically, external validation. Carol Dweck uses the terms “performance goals” vs “learning goals” for this distinction, but some psychologists use “performance goals” to mean the exact opposite thing, so I’ll just call these students “validation-oriented”.</p><p><strong><em>Believing intelligence is fixed vs believing it is malleable.</em></strong> She uses the terms “entity theory of intelligence” vs “incremental theory”, but once again I’ll opt out of the terminology. (“Entity theory”?)</p><p>Carol Dweck’s experiments show that students who focus on learning handle challenges much better than those who focus on external validation. When students are sorted based on these goals (by taking a survey), the two groups initially perform at the same level when presented with typical problems for their grade (as we bemoaned above). However, when she presents them with problems above their grade level, the two groups show dramatic differences. Learning-oriented students step up their game. They are happy for the challenge and say so. On the other hand, validation-oriented students become frustrated. They give up easily, call themselves stupid, and engage in <a href="https://en.wikipedia.org/wiki/Self-handicapping">self-handicapping</a> behaviors. When the students return to problems of an ordinary difficulty level, validation-oriented students drop in ability, deflated by their defeat. Learning-oriented students perform at their normal levels. After the exercise, validation-oriented students overestimate the number of problems they got wrong, while learning-oriented students remember correctly.</p><p>Carol Dweck calls these response profiles <strong><em>helpless response vs mastery response</em>. </strong>The term “helpless” invokes research on <a href="https://en.wikipedia.org/wiki/Learned_helplessness">learned helplessness</a>. Learned helplessness is the phenomenon of learning that you can’t improve a situation, and stopping any effort to do so even when the situation changes. Learned helplessness has been connected to depression; and indeed, research has also found links between fixed mindset and depressive behavior. But why would there be a link between learned helplessness and being validation-oriented?</p><p>That’s where a fixed theory of intelligence comes in. If you think that people are either smart or dumb, then it makes sense to conclude that a difficult problem is just too hard for you. It makes sense to then focus on getting external approval: even learning-oriented students tend to <em>want</em> that; it’s just not as much of a focus. 
Experiments show that these traits are linked, and that causing students to have a malleable theory of intelligence or a fixed theory (by showing them the fake research mentioned earlier, or by similar methods) <em>does</em> cause them to have learning goals or validation goals (and the mastery or helpless responses to go with ‘em).</p><p>Growth mindset, then, is the combination of a malleable theory of intelligence, learning-oriented goals, and a mastery response to challenges. These three things are correlated, and causally related as well.</p><p>All of this also helps to explain the mystery from the previous section. <em>Growth mindset only makes a difference in the face of challenges.</em> Carol Dweck gave students problems inappropriate for their grade levels. Teachers work really hard to ensure that this does not happen in the classroom. The way things work, teachers more or less have to cater to the worst students in order to be sure almost everyone can keep up. This is great for fixed-mindset students, who want to be able to get good grades predictably. It’s much worse for the growth-mindset students who seek a challenge. (Sal Khan thinks he can change this, and uses Carol Dweck’s mindset terminology.)</p><p>The exception to this rule is the treatment of minority students, who often face language barriers (as in Los Angeles, where children who speak Spanish at home are taught in English in the public schools) or other difficulties. So, it makes a lot of sense that growth mindset would be more important there.</p><p>I did find <a href="http://mtoliveboe.org/cmsAdmin/uploads/blackwell-theories-of-intelligence-child-dev-2007.pdf">a study</a> which shows the overall long-term effect in the transition to junior high, though! There’s a nice graph:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/485/1*wcc-4M3qC3vToAGxgM7Uhg.png" /></figure><p>I don’t know if that kind of thing replicates or just happened in this particular study, but it’s encouraging. Notice that they only look at math achievement. Of this, they say:</p><blockquote>Mathematics is a subject that many students find difficult; thus, it meets the requirement of being a sufficiently challenging subject to trigger the distinctive motivational patterns related to theory of intelligence, which may not manifest themselves in situations of low challenge.</blockquote><p>They even go and do a causal model:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/565/1*tb6Q_UNdTBlngKXBF_6p2A.png" /></figure><p>How pretty. Well, I’m sold.</p><p>VCP is true. Your theory of whether intelligence is fixed or malleable is a root cause for a host of other variables determining outcomes which we care about.</p><p>As for the college study which found such unimpressive long-term trends, it could be that grades don’t represent the differences in outcomes very well. If validation-oriented students choose easier classes which they know they can get good grades in, while learning-oriented students choose tougher courses which they learn more from, that could easily make up the difference.</p><p>I’m not really very certain of all of this. There are a lot of papers that I’ve only read summaries of, and summaries can be very misleading (especially plain-English summaries of statistics). To be really sure, I’d want to actually read each of the studies glossed in <em>Self Theories</em>, looking for alternative explanations. But the truth is, while Scott Alexander is biased against growth mindset, I’m biased for it. 
Aside from the annoying perspective on IQ, it just seems to get a lot of things right. It’s “in the same spirit” as other things I agree with, such as <a href="http://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html">this</a>.</p><p>Let’s see if I can make it as obvious to you as it is to me.</p><h3>Re-Framing Growth Mindset</h3><h4>Realistic Views of Intelligence</h4><p>My #1 annoyance with <em>Self Theories</em> is, as I’ve said, the way it advocates “useful” theories of intelligence over factual ones. This seems entirely unnecessary to me. We can encourage growth mindset without over-selling the malleability of intelligence.</p><p>First, some facts. How malleable <em>is</em> intelligence? It’s a complicated issue. Within a single generation, variation in adult IQ is <a href="https://en.wikipedia.org/wiki/Heritability_of_IQ">about 75% genetics</a>. Much of the rest is made up of other biological factors, such as nutrition and disease (but I haven’t found numbers on that). However, this kind of percentage can be misleading. The <a href="https://en.wikipedia.org/wiki/Flynn_effect">Flynn Effect</a> observes that IQ has risen about 2.93 points per decade (compounded over five decades, that’s roughly a full standard deviation, or 15 points). This is high enough to contradict a primarily biological explanation for the IQ variation; biological factors vary, but not that much. So, as I understand it, the prevailing theory is that variation <em>within</em> a generation is primarily biological, but variation <em>across</em> generations shows a large cultural effect from things like improved schooling.</p><p>The Flynn effect may be a point in favor of malleability, but not within-generation. Looking at that alone, we might conclude that non-biological factors of IQ have to be pervasive cultural elements to show up. There do seem to be more short-term factors, though. <a href="http://www.pnas.org/content/108/19/7716.full">This study</a> finds that effort accounts for 0.64 standard deviations on IQ tests! This wouldn’t show up as a factor ordinarily, since the experiment specifically incentivized some participants to do well, and not others; something you’d ordinarily never consider when trying to get a good objective measure of IQ.</p><p>So, it appears that there is some evidence for malleability. Even so, this is far from the fabricated stories which were used to induce growth mindset in experiments. Do we <em>need</em> to exaggerate the malleability of intelligence?</p><p>I’ve held to Carol Dweck’s usage of the term “intelligence” as it relates to growth mindset, in order to illustrate it to you so we can be annoyed at it together. Scott Alexander doesn’t do her the courtesy. Notice that his versions of the SCP and VCP don’t mention intelligence. In Carol Dweck’s work, it’s children’s beliefs about intelligence that matter the most. In Scott Alexander’s discussion, he’s talking about how much effort matters in comparison to innate ability. In one case (discussed in chapter 5 of <em>Self Theories</em>), she gives students this equation to see how they fill in the blanks:</p><blockquote>Intelligence = _____% effort + _____% ability.</blockquote><p>Scott, on the other hand, just talks about how much “effort vs ability” matters. <em>Scott’s version makes more sense.</em> Even if intelligence is a relatively fixed trait, other factors may be just as important to success. Before investigating, I thought that IQ swamped other psychological correlates of success, such as personality traits. Not so. 
<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4499872/">This meta-analysis</a> finds that personality, IQ, and socioeconomic status are about equally responsible for lifetime income. Shouldn’t we be talking about factors of success more broadly? Why is Carol Dweck talking about whether <em>intelligence</em> is fixed?</p><p>This is not a battle she needs to be fighting; <em>especially</em> if you take a close look at the causal graph I cited from one of her papers earlier. “Incremental Theory” (the belief that intelligence is malleable) looks like the source of everything in the network, but it is the furthest node from the things we actually care about. Can’t we try other things than modifying the beliefs about intelligence?</p><p>I should mention that Carol Dweck did investigate other ways. Simply <em>telling</em> students that a set of problems is for the purpose of evaluating their ability creates validation goals, and a more helpless response when challenges come along. On the other hand, presenting the same problem set as a learning experience, for the purpose of students testing themselves and seeing what they need to work on, creates learning goals and mastery responses toward challenges.</p><p>Consulting our handy causal diagram, it looks like “positive effort beliefs” are a really good thing to try and modify. And, to me, <em>this version makes so much more sense</em>. You don’t have to believe something like</p><blockquote>Results = 90 % effort + 10 % ability</blockquote><p>in order to believe it’s worth putting in lots of effort. Maybe results are 99% due to innate ability. We can still benefit from putting in effort, applying beneficial strategies, focusing on learning goals, and avoiding learned helplessness. You just have to believe <em>more effort pays off</em>.</p><p>I’m still saying VCP is essentially true, <em>but </em>we probably don’t have to lie to kids to reap the rewards of growth mindset. Sure, some people have a much higher IQ than you do, and maybe they’ll be better than you at whatever they try. So what? <em>Effort pays off. The amount you’ll eventually be able to do if you dig into challenges is far larger than if you avoid them. The best way to look smart in the long term is to be willing to look dumb in the short term by asking questions, seeking out challenges, and trying things that might fail (so that you can learn). </em>Those things seem so clear to me that I would be rather puzzled indeed if such behavior didn’t have long-term benefits!</p><h4>Errors vs Bugs</h4><p>I can imagine Carol Dweck responding: “We <em>do</em> tell subjects that effort pays off, but when we’re trying to install growth mindset, we do everything we can. Leaving out the malleability of intelligence would weaken the effect, since those beliefs drive others.”</p><p>Well, I’ve got another proposal for you. Fixed-vs-malleable is just one dimension of how we think about skill. Believing in malleability of skill seems to help kids believe that effort matters, but maybe it’s not the best way. It’s untested, but I think <a href="http://celandine13.livejournal.com/33599.html">errors vs bugs and the end of stupidity</a> might be a whole lot better. Quoting:</p><blockquote>I wasn’t an exceptional pianist, and when I’d play my nocturne for him, there would be a few clinkers. I apologized — I was embarrassed to be wasting his time. But he never seem to judge me for my mistakes. 
Instead, he’d try to fix them with me: repeating a three-note phrase, differently each time, trying to get me to unlearn a hand position or habitual movement pattern that was systematically sending my fingers to wrong notes.</blockquote><blockquote>I had never thought about wrong notes that way. I had thought that wrong notes came from being “bad at piano” or “not practicing hard enough,” and if you practiced harder the clinkers would go away. But that’s a myth.</blockquote><blockquote>In fact, wrong notes always have a <em>cause</em>. An immediate physical cause. (<a href="http://celandine13.livejournal.com/33599.html">more</a>)</blockquote><p>Even if we believe that our skill is malleable, we may not think that we have very much power to change it. Perhaps we think it’s tied to some aspect of the environment, like how good the teacher is or whether we get enough intellectual stimulation. But the idea that <em>each mistake has a specific cause</em> is very empowering. Not only is skill changeable; it’s got moving parts that you can examine! This concept also encourages you to work smarter. Blindly practicing, practicing, practicing might be better than not, but directing effort intelligently to figure out what’s missing from your skill and how to correct it will be much more effective.</p><h4>Time Preference</h4><p>You might be thinking that my version is just going to lie to the kids at a different point: rather than installing unfounded beliefs in the malleability of intelligence, I’m now in a position where I’d have to install unrealistically high beliefs in the results of effort.</p><p>I don’t think this is true. I think the helpless-response group <em>really are</em> sabotaging themselves. They’re failing to put in effort that <em>really would</em> benefit them. In challenging situations, learning-oriented students are going to constantly be one step ahead of validation-oriented students, who are still dealing with the status implications of the challenge while the learning-oriented students dig in. Validation-oriented students are going to be more past-oriented, focusing on how well they’ve done so far and how to damage-control for failures. Learning-oriented students have all their thoughts on the problem at hand. The result is that validation-oriented students learn less <em>and</em> don’t look as good.</p><p>Why would this happen? How is it that the validation-oriented strategies deprive them of the very thing they want? Well, first, validation-oriented students will avoid this type of situation in the first place. By seeking areas where they can already excel, they achieve their validation goals reliably. But second, I think validation-orientation naturally creates short-term thinking. If you’re focusing on grades rather than learning, you’ll tend to study the night before the test. If you’re focused on getting someone’s approval rather than getting things done, you’re likely to make big promises that you won’t be able to keep. This is called <a href="https://en.wikipedia.org/wiki/Time_preference">high time preference</a>: validation-oriented students prefer immediate reward at the expense of long-term reward.</p><p>I’m guessing. I don’t know if there have been any studies associating growth mindset with time preference; I haven’t found them. However, there has been some work in modifying time preference. <a href="http://www.acrwebsite.org/volumes/12841/volumes/v34/NA-34">This study</a> suggests that concrete thinking creates a high time preference, while abstract thinking creates low time preference. 
If so, it <em>may</em> be that focusing on the reactions of people around you creates high time preference, while focusing on intellectual questions associated with learning creates low time preference. <a href="http://www.overcomingbias.com/2014/04/rank-linear-utility.html">This model</a> suggests that time preference is a function of what options you have in mind. Thinking of more near-term events naturally creates a high time preference, while thinking of far-off events creates a low one. If you’re thinking about what grades you will get in upcoming assignments and tests, maybe this creates a higher time preference than thinking about those things less often would.</p><h4>Fostering Curiosity</h4><p>You can <a href="https://weird.solar/decisions-969bf5478573#.jg15zecd0">change your response to a situation by thinking</a> about a different aspect of the situation. What you focus on is what you react to. In <em>Surely You’re Joking, Mr. Feynman!</em>, Feynman mentions again and again that a big advantage in his life has been that when he’s thinking of physics, he doesn’t have the attention left over to think about who he is talking to. This means he doesn’t think twice about saying “you’re wrong” to important people, where others would say “yes, very interesting”. This is very close to the distinction between learning-oriented goals and validation-oriented goals.</p><p>In my experience, a lot of people get tripped up because they don’t ask “stupid questions” or make “stupid comments” — they assume everyone else knows the answers, or that what’s obvious to them is equally obvious to everyone. These comments or questions are usually much more important than they seem to the one voicing them. Focusing on what <em>you</em> know and what <em>you</em> don’t know is therefore very helpful. Even if you <em>are</em> misguided, this approach is the fastest way to get corrected.</p><h4>Actually Try</h4><p>I think the <em>most</em> valuable advice provided by growth mindset is to actually try to solve your problems. That’s the point of the “Yet! Growth mindset!” game which I mentioned at the beginning: reminding people to try. Why would reminding people to try be useful? Can people just <em>not think of trying to solve their problems?</em> I think the answer is yes. This is a little boggling, though. Why would this be?</p><p>Part of my answer is that mentioning the possibility of changing things re-frames the time preference, as mentioned in the previous section. This isn’t the whole story, though. A second piece of the puzzle is that we are finite creatures, and sometimes it just doesn’t occur to us that a particular fact about our situation may be changeable. Reminding ourselves/others to actually try at least raises the possibility to our attention so that we can evaluate it.</p><p>A third reason why this might be useful is that we spend a lot of our time only signaling trying. It’s usually enough to put on a good show of effort. Maybe this kind of behavior is useful so often that we don’t frequently consider doing the other kind of thing, where we break out of our social roles and do our damnedest to get what we’re after.</p><p>This is related to my <a href="https://weird.solar/the-internal-lawyer-b00625428e1e#.4rsln2pav">internal lawyer</a> idea: a lot of human actions are more optimized for justifiability than outcome.</p><h3>Denouement</h3><p>If you’re frustrated with the way “growth mindset” lumps a lot of ideas together, I can sympathize. 
All the traits have been found to correlate positively, so we can use an error model as a convenient approximation. The growth mindset literature is really about a bug model, though; it provides specific cause-and-effect relationships between the parts, which have been borne out by the data (and adapted with the data over time). I hope this post has given you a good idea of many of the pieces and why they seem plausible individually.</p><p>The growth-mindset research really seems quite good. In other areas of psychology I’ve tried to dive into in this way, things seem to get murkier and murkier as I chase down data and citations. Here, the picture seemed to get clearer and clearer instead. That’s quite an accomplishment.</p><p>*: <em>A book, </em>On Intelligence…More Or Less<em>, looks relevant; but I haven’t looked at it too much yet.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b6e26ec929fb" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/scott-alexander-doesnt-like-growth-mindset-yet-b6e26ec929fb">Scott Alexander doesn’t like growth mindset… yet.</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Super-Scientific Realism]]></title>
            <link>https://weird.solar/super-scientific-realism-9303e8d74dbc?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/9303e8d74dbc</guid>
            <category><![CDATA[science]]></category>
            <category><![CDATA[science-fiction]]></category>
            <category><![CDATA[physics]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Wed, 05 Oct 2016 20:15:51 GMT</pubDate>
            <atom:updated>2016-10-05T20:15:51.374Z</atom:updated>
<content:encoded><![CDATA[<h4>or “Over-Hard Sci-Fi”</h4><p>Ordinary fiction has lots of holes in its logic which we let it get away with for a bit of fun. One of the tropes of <a href="http://rationalfiction.io/story/rational-fiction">rationalist fiction</a> is to deny the author that tool, requiring characters to have realistic motives (rather than doing things like <a href="http://yudkowsky.tumblr.com/writing/level1intelligent">giving up just to fit the scene</a>) and a world with a consistent physics (so there can be magic or other fantastic elements, but it has to follow definite rules). Somewhat similarly, hard sci-fi requires everything to be realistic to the best of scientific knowledge.</p><p>An interesting thing about this is that our understanding of the real world <em>isn’t</em> generally consistent. At any given time, there tend to be a number of known inconsistencies between theory and data. Could there be a genre of fiction which specifically emphasized being <em>more consistent than our best view of reality?</em> You’d play off of the known inadequacies of a theory, filling in details as consistently as possible, but without trying to be like reality. The aim would be to look like something a consistency-obsessed alien would come up with, having access to only our theories divorced from observation. This would likely be easiest in the softer sciences, such as economics and sociology, where the result would be a kind of other-world anthropological visit to the society which would result if the simplified models were true. A somewhat harder case would be biology; I have next to no idea what strange things would be predicted by biology if its theoretical inconsistencies were resolved without regard for reality. The grand challenge, of course, would be physics: describing the strange alternative worlds where quantum field theory holds but general relativity doesn’t, and making a <em>plot</em>? Carrying out the implications of our best partial unified theories? Surely there exist people who could write this, but not I.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9303e8d74dbc" width="1" height="1" alt=""><hr><p><a href="https://weird.solar/super-scientific-realism-9303e8d74dbc">Super-Scientific Realism</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Communication Protocol]]></title>
            <link>https://weird.solar/communication-protocol-8b8632211df0?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/8b8632211df0</guid>
            <category><![CDATA[information]]></category>
            <category><![CDATA[belief]]></category>
            <category><![CDATA[communication]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[cognitive-bias]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Wed, 28 Sep 2016 19:39:48 GMT</pubDate>
            <atom:updated>2016-10-03T02:24:30.404Z</atom:updated>
<content:encoded><![CDATA[<p><a href="https://wiki.lesswrong.com/wiki/Information_cascade">Information cascades</a> and <a href="https://econ.duke.edu/uploads/assets/People/Kuran/Availability%20cascades.pdf">availability cascades</a> are a set of mechanisms by which mass belief shifts (or apparent belief shifts) can occur in a winner-takes-all manner. The subject is complex, and I will not attempt to summarize it here (although I’d like to discuss it further in later posts). The very basic idea is that a group of people can start repeating each other’s beliefs, take each other’s belief as further evidence, repeat the belief more strongly, and quickly converge to a strong self-reinforcing group belief. How can we avoid this problem?</p><h4>Belief Propagation</h4><p>Fortunately for us, communication protocols which minimize the risk of harmful information cascades have been heavily studied in the context of Bayesian networks. A Bayesian network is a set of variables connected by conditional probability distributions. These are used for statistical inference: you add data to the network, and ask the network to tell you new probability distributions for all the variables. In order to make inference efficient, computer scientists wanted the variables to “talk to each other”: rather than using all the evidence that’s been added to the network, each variable can only individually “hear” its “neighbor” variables in the network. A variable listens to all its neighbors, forms a new belief about its own probability distribution, and then tells that new belief to its neighbors. We let the whole network talk to itself until all the information has propagated around the network.</p><p>This algorithm, if poorly designed, would lead to the same problem as we saw in information cascades. If X is correlated with Y, and X starts out slightly leaning in one direction, Y could hear this and slightly update itself in the same direction. X hears Y’s new belief, and updates more in the same direction. Y hears X’s belief has gone further, and nudges itself a bit further also. In the end, X and Y could become very confident based on only a little evidence. This problem is called double-counting of evidence.</p><p>It turns out there are many solutions to this problem. The oldest and simplest to understand is called <em>belief propagation</em>. In belief propagation, we still make the simplifying assumption that our friends are independent sources of information. However, we make sure to not create <em>direct</em> feedback loops with our friends, by removing the influence they’ve had on our belief when talking to them. X tells Y its current belief, <em>dividing out any influence from Y</em>. Similarly, Y tells X its current belief, dividing out influence from X. (When I say “dividing out”, I am literally referring to dividing belief functions by each other; but since we are not going into mathematical details here, it’s best to think of it more loosely.)</p><p>Imagine Sal and Alex are talking about a controversial large cardinal axiom that’s been in the news lately. They’ve been friends for a while, and they always talk about the latest results in set theory. Sal asks Alex: “So, old buddy old pal, do you think it’s true?”</p><p>According to the belief propagation algorithm, when Alex responds to Sal, Alex should attempt to <em>factor out the influence that Sal has had on Alex’s current beliefs</em>. Suppose Sal has been talking for the past 15 minutes about the raw beauty of the new axiom. Alex should <em>not take this into account</em> when answering Sal. Otherwise, Alex runs the risk of causing Sal to double-count evidence: if Alex nods vigorously due to the raw beauty of the new axiom, Sal may become overconfident when no new evidence has been put on the table. Instead, Alex would seek to communicate <em>other</em> information, not already mentioned by Sal.</p>
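<p>For the mathematically curious, here is a minimal sketch of the “dividing out” rule on a two-variable network, written in Python. Everything in it (the probabilities, the evidence, the variable names) is invented for illustration; it’s the standard textbook message rule, not anything specific to this post:</p><pre>import numpy as np

# Two variables, X and Y, connected by a conditional distribution.
# All numbers are made up for illustration.
p_x = np.array([0.5, 0.5])            # prior P(X)
p_y_given_x = np.array([[0.9, 0.1],   # P(Y | X=0)
                        [0.2, 0.8]])  # P(Y | X=1)
evidence_y = np.array([0.3, 0.7])     # soft evidence observed at Y

msg_x_to_y = np.ones(2)               # messages start out uninformative
msg_y_to_x = np.ones(2)

for _ in range(3):                    # converges immediately on a tree
    # X's belief: its prior times everything Y has told it.
    belief_x = p_x * msg_y_to_x
    # Key step: before talking to Y, X divides Y's influence back out,
    # so Y never hears its own evidence echoed back at it.
    msg_x_to_y = (belief_x / msg_y_to_x) @ p_y_given_x
    # Y does the same in the other direction.
    belief_y = evidence_y * msg_x_to_y
    msg_y_to_x = p_y_given_x @ (belief_y / msg_x_to_y)

print(belief_x / belief_x.sum())      # posterior P(X | evidence at Y)
print(belief_y / belief_y.sum())      # posterior P(Y | evidence at Y)</pre><p>If the nodes instead multiplied in each other’s full beliefs, skipping the division, the loop would re-apply the same evidence every round and drive both posteriors toward certainty. That is the double-counting failure in miniature.</p>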
<p>Imagine Sal and Alex are talking about a controversial large cardinal axiom that’s been in the news lately. They’ve been friends for a while, and they always talk about the latest results in set theory. Sal asks Alex: “So, old buddy old pal, do you think it’s true?”</p><p>According to the belief propagation algorithm, when Alex responds to Sal, Alex should attempt to <em>factor out the influence that Sal has had on Alex’s current beliefs</em>. Suppose Sal has been talking for the past 15 minutes about the raw beauty of the new axiom. Alex should <em>not take this into account</em> when answering Sal. Otherwise, Alex runs the risk of causing Sal to double-count evidence: if Alex nods vigorously due to the raw beauty of the new axiom, Sal may become overconfident when no new evidence has been put on the table. Instead, Alex should seek to communicate <em>other</em> information, not already mentioned by Sal.</p><p>Of course, this only works if everyone knows that belief propagation is the communication protocol currently in play. If Sal were expecting Alex to be totally won over by the argument and instead receives that relatively cold answer, Sal may update in the <em>opposite</em> direction. In the real world, Alex should make the intention clear by saying “Well, before you talked to me I was thinking…”, “If you hadn’t won me over, I would have said…”, or similar phrases.</p><p>There is an analogous problem dealing with <a href="https://en.wikipedia.org/wiki/Preference_falsification">preference falsification</a>. Preference falsification is a big topic, but the core idea is that people’s stated preferences are edited based on the social context. People may not differentiate between personal preference and what they think the group should do based on everyone’s preferences; they may even purposefully obscure the difference due to social incentives. This warps the group consensus in much the same way as feeding back beliefs does. The effect is reduced if everyone can be clear, <a href="http://lesswrong.com/lw/jis/tell_culture/">tell-culture</a> style, about what they would want if they hadn’t heard the others’ preferences yet.</p><p>Belief propagation happened to be the first solution computer scientists tried. In a large interconnected network, belief propagation is not guaranteed to do the right thing, but <a href="http://www.merl.com/publications/TR2001-22">does well surprisingly often</a>. Many other algorithms have been proposed since then, such as <a href="http://event.cwi.nl/uai2010/papers/UAI2010_0125.pdf">tree-reweighted belief propagation</a>. Unfortunately, these largely seem too difficult to use as a communication protocol for humans. Are there other practical communication protocols we can apply to do even better?</p><h4>Attaching Arguments</h4><p>Belief propagation does not fully avoid double-counting of evidence, but it tends to <a href="http://www.cs.huji.ac.il/~yweiss/cbcl.pdf">double-count evidence equally</a>. This leads to a certain kind of overconfident belief structure.</p><p>If all of your friends say the same thing, a belief-prop node will treat each report as independent evidence and become overly confident. If everyone follows this policy, a group can lock into overconfident beliefs despite avoiding double-counting in pairwise relationships.</p><p>We probably can’t avoid this problem fully, but how do we mitigate it?</p><p>We could <a href="http://lesswrong.com/lw/qw/principles_of_disagreement/">require beliefs to be backed by arguments</a>, so that if our twelve friends give the same argument for their belief, we know that it’s not independent evidence; there’s only one argument’s worth of evidence. We might <em>trust</em> our friends’ beliefs, but in some sense we can’t fully accept them if we’re unable to go through the argument ourselves. This style of thinking will be familiar to mathematicians, who tend to feel they don’t really know something until they know the proof.</p>
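<p>A toy sketch of the difference, again in illustrative Python of my own (the argument label and numbers are made up): if each report carries an identifier for the argument behind it, a careful listener can count each distinct argument once, no matter how many friends repeat it.</p><pre>def update_odds(prior_odds, reports):
    # Each report is a pair (argument_id, likelihood_ratio).
    # The naive listener multiplies in every report; the careful
    # listener multiplies in each distinct argument only once.
    naive = careful = prior_odds
    seen = set()
    for argument_id, likelihood_ratio in reports:
        naive *= likelihood_ratio
        if argument_id not in seen:
            seen.add(argument_id)
            careful *= likelihood_ratio
    return naive, careful

# Twelve friends all repeat the same argument, worth a likelihood ratio of 2.
naive, careful = update_odds(1.0, [("raw-beauty", 2.0)] * 12)
print(naive)    # 4096.0 -- twelve repetitions treated as independent evidence
print(careful)  # 2.0    -- one argument's worth of evidence, counted once</pre>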
<p>This is helpful, but we can’t just ignore everything with insufficient citation (or track down citations indefinitely). Realistically, we don’t always find out the reasons our friends believe what they believe. Instead, we attach a little <strong><em>[citation needed]</em></strong> sign to the belief in our heads if the information didn’t come with a source. (When we forget to attach the sign, sometimes <a href="http://lesswrong.com/lw/k5/cached_thoughts/">bad things happen</a>.)</p><p>This might be counter-intuitive to those who have internalized the idea that <a href="http://lesswrong.com/lw/jl/what_is_evidence/">beliefs should be contagious</a>:</p><blockquote>Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are <em>not</em> contagious — that you believe for private reasons which are not transmissible — is so suspicious. If your beliefs are entangled with reality, they <em>should</em> be contagious among honest folk.</blockquote><blockquote>If your model of reality suggests that the outputs of your thought processes should <em>not</em> be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality. You should apply a reflective correction, and stop believing.</blockquote><p>This is also related to <a href="http://www.overcomingbias.com/2007/01/we_cant_foresee.html">Aumann-style can’t-agree-to-disagree arguments</a>.</p><p>However, all of this could be modelled formally. Attaching an argument (or a citation) to a belief reduces our uncertainty about which beliefs offer independent evidence, allowing us to integrate different information sources with higher confidence.</p><h4>The Structure of Your Uncertainty</h4><p>We have to be really careful that the arguments we attach are not rationalizations. An argument written after the conclusion has been decided <a href="http://lesswrong.com/lw/js/the_bottom_line/">does not provide any additional evidence</a>. You’ve got to attach your <a href="http://lesswrong.com/lw/wj/is_that_your_true_rejection/">true reasons</a> for forming the belief! Otherwise it’s just noise.</p><p>(Actually, this is not quite true — let’s take a small tangent to examine the claim. If I <em>know</em> that you’re rationalizing a belief, cherry-picking arguments in its favor, then I will still be convinced if you find very strong arguments, such as a mathematical proof. The thing is, I can also be justified in updating <em>against</em> what you are arguing for; if you can only find relatively weak arguments, that is evidence that stronger arguments don’t exist. Note, however, that <a href="http://grognor.livejournal.com/1223.html">this can also be a failure mode</a>.)</p>
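<p>This tangent can be made precise with a toy Bayesian model (my own illustration; all the probabilities are invented). Suppose that if the claim were true, a motivated advocate would probably manage to surface a strong argument, and if it were false, probably not. Then hearing only weak arguments from a known cherry-picker is itself evidence against the claim:</p><pre>def posterior(prior, p_strong_if_true, p_strong_if_false, found_strong):
    # Bayes' rule on whether a motivated search surfaced a strong argument.
    like_true = p_strong_if_true if found_strong else 1 - p_strong_if_true
    like_false = p_strong_if_false if found_strong else 1 - p_strong_if_false
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

print(posterior(0.5, 0.8, 0.2, found_strong=True))   # 0.8: a strong argument still persuades
print(posterior(0.5, 0.8, 0.2, found_strong=False))  # 0.2: only weak arguments, so update against</pre>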
<p>The main point I want to drive home is that good communication strives to convey the <em>exact structure of its uncertainty</em>, in as much detail as is convenient given other constraints. There’s a little leverage to be had on the receiving end, in being more aware of these things: trying to infer how much evidence is in a communication, and figuring out how to integrate beliefs coming from different sources. There’s a lot more to be gained on the <em>communicator’s</em> end: being proactive in telling the audience how to update on the information, and being careful to state the amount and type of evidence being conveyed.</p><hr><p><a href="https://weird.solar/communication-protocol-8b8632211df0">Communication Protocol</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Judgement as Fake Explanation]]></title>
            <link>https://weird.solar/judgement-as-fake-explanation-da0483dc8a82?source=rss-6a2d7d0b9f79------2</link>
            <guid isPermaLink="false">https://medium.com/p/da0483dc8a82</guid>
            <category><![CDATA[affirmations]]></category>
            <category><![CDATA[cognitive-bias]]></category>
            <category><![CDATA[critical-thinking]]></category>
            <category><![CDATA[understanding]]></category>
            <category><![CDATA[judgement]]></category>
            <dc:creator><![CDATA[Proof Of Logic]]></dc:creator>
            <pubDate>Wed, 21 Sep 2016 20:01:01 GMT</pubDate>
            <atom:updated>2016-12-06T18:04:36.773Z</atom:updated>
            <content:encoded><![CDATA[<p>Since writing <a href="https://weird.solar/descriptive-before-normative-b42dd871d7e8#.asc1e3sn5">Descriptive Before Prescriptive</a>, I’ve thought a bit more about the general pattern I’m trying to point at. A big part of what goes wrong is that a value judgement becomes a <a href="http://lesswrong.com/lw/ip/fake_explanations/">fake explanation</a>, stopping curiosity. If an atheist writes off a religious belief as “just stupid”, it <em>feels</em> very much like a sufficient explanation. Really, though, it has very little content.</p><p>This is closely related to the idea in <a href="http://celandine13.livejournal.com/33599.html">Errors vs Bugs and the End of Stupidity</a>. An “error model” treats mistakes as mostly random, dependent only on broad traits like skill level, intelligence, and so on. A “bug model” instead postulates cause-and-effect mechanics which lead to a mistake. An error model may be useful as a first approximation, but a bug model is typically much more informative. Unfortunately, possession of an error model can keep you from looking for a bug model.</p><p>Although we could think of this as purely a descriptive problem (involving aggregate statistical models versus more detailed causal models), error models have a strong prescriptive component. I think that’s part of the reason why they can be such effective thought-stoppers: judging something by an error model makes it “good” or “bad”, which doesn’t just serve as a fake explanation; it serves as a fake solution, too.</p><p>Another example: I’m not poly myself, but I once advised a friend, who was frequently cheating on romantic partners and seemed unable to stop, to try polyamory. When I told this story to another friend, I got the reaction “this just sounds like a terrible person”. I think “terrible person” serves as both a fake explanation and a fake solution: it hides the details in an error model, and it creates a visceral sense that you should just get away from such a person, stop associating with them, not try to help them. (This <em>can</em> be a good solution — but I’m pointing at the process.)</p><p>I’m not denouncing judgement (as delicious as that contradiction might be!). Judging is necessary at some point, to make decisions. However, whenever there’s an opportunity for better understanding, judgement has to take a back seat to that. Otherwise, it’s all too easy for prescriptive thinking to block its own descriptive foundations from being built.</p><hr><p><a href="https://weird.solar/judgement-as-fake-explanation-da0483dc8a82">Judgement as Fake Explanation</a> was originally published in <a href="https://weird.solar">Solar Panel</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>