
System 1 ↔ System 2 Relations

My current theory on S1/S2 relations:

One needs to give both S1 and S2 full decision-making power; terrible things happen if either half is neglected.

If my S2 comes up with a goal like “exercise for an hour every day,” then I need to act on that, or else my model of my goals becomes a free node: it’s unconstrained by reality; it can have any value it wants and nothing changes. My beliefs about my goals are no longer paying rent. And so I’m likely to end up with a bunch of fake goals that I don’t actually care about, and my plans for achieving even my real goals will be silly and have substantive downsides. This seems pretty terrible. I’m advocating something like the strategy Nate advocates in Deregulating Distraction: “I know that once I’ve settled on a goal, I’m going to move towards it… it’s just the nature of goals. I can change the goal and I can drop the goal, but I can’t hold the goal and not pursue it.”

Habitual resistance of S1 impulses, on the other hand, seems (at least in me) to create Red Queen race-like dynamics. Say you have some proto-cheetahs and proto-gazelles 50 million years ago. They’re both kind of slow and clumsy; the proto-cheetahs tend to catch about the slowest 10% of the gazelles, which is just enough to keep a small population of predators extant, though the slowest proto-cheetahs die. Over time, gazelles get faster and faster. Cheetahs get faster and faster. Fast forward 50 million years, and both animals are super fast – and actually spending a ton of energy and resources on physiology, muscles, etc. to make sure they’re super fast. But cheetahs still only catch the slowest 10% of gazelles: they’re working a lot harder for their dinner, but only getting the same amount of food.

So analogously, say I’m bouncing off of some work project and I have an urge to ask someone for help, but I resist because it feels like that’d interfere with my goals around becoming stronger. I often find my S1 immediately starts flailing more on the project, feeling more helpless, and visualizing asking for help more vehemently. This time I resist in part because now it seems like I’m a bit triggered, and so whoever I ask will have to navigate that, too. Three minutes later, I’m completely freaking out, and also resisting with every fiber of my being the impulse to desperately fling myself at the feet of any random person, because I know that will end terribly. So I’m spending a bunch of energy flailing and a bunch of energy trying to do the thing, but getting no results. (Maybe even more closely analogous than the cheetahs is the story about Pax and Reavers in the Harry Potter/Firefly crossover fanfic Browncoat, Green Eyes.)

In the cases of both S1 and S2, I think there’s a really unfortunate missed opportunity to update and calibrate around both a) what sorts of plans actually work, and b) what I actually want. Even if the goal or impulse is mistaken, keeping myself from acting on it is kind of like having a kid that you never let make any mistakes – skinning their knee, failing to do their homework, forgetting to water a plant. They’re not going to learn about the world, and they’re also not going to learn how to judge risks. Again, I think Nate has this mostly right (see Productivity Through Self-Loyalty). Another related story, from the Game of Thrones books: a group of four people, one of whom is supposedly a prince but also quite young, are on the run, trying to figure out which way they should go. The prince is very frustrated, because it seems like the more experienced group members are ignoring his opinion, even though he technically outranks them. He keeps sulkily daydreaming about nice things in the south, and resenting the others for advocating going north instead. But suddenly they turn to him and say, “Hey, we’re going to follow whatever you decide; you’re in charge. You’ve heard the considerations; think carefully, and then whatever you tell us, we’ll do.” The young prince chooses to go north. As above, he’d been thinking of his “opinion” as a free node that didn’t actually impact reality; as soon as it seemed like it mattered, he thought more carefully and came to a better decision.

So that’s the basic model. An obvious objection to this is that you can’t just give S1 and S2 total control; you have to have some strategy for dealing with the situations where they disagree. Mostly I suspect this should be handled via listening to whichever one seems to feel most strongly about the outcome. But here’s another hypothesis, which is maybe more tenuous but also more elegant: S2 picks goals, and S1 picks actions/tactics. The sources of hypotheses, though, are inverted: S1 provides hypotheses for goals, and S2 provides hypotheses for actions/plans/tactics.

When this is working right, S1 is putting forth a rich and colorful array of tempting and delightful ideas: wouldn’t it be amazing to go skydiving, to be the sort of person who can learn new languages in a week, to have this accomplishment, or that narrative. And S2 gets to explore all these things like walking through the fabric store, running its fingers along the bolts of cloth, putting different ones next to each other, choosing this and that and a bit of a few other things to craft together into a coherent goal structure.

And S2, once it picks the goals, puts its mind to the task of creating a bunch of plans and suggested actions – TAPs (trigger-action plans), systems, affordances – so that as S1 is trying to actually navigate the world in real time, it gets these dialogue boxes that say “Hey! Just in case it’s helpful, I thought a bunch of this through in advance for you, and using my excellent analytic skills and best model of what you’d need and want, made this plan for what you can do right now.” At its best, this comes as a huge relief to S1! Like being really stressed out because you’re planning a big party, and you’ve never done it before and don’t know how and aren’t even sure where to start – then having a perfect-for-you party planning checklist suddenly drop out of the sky and into your hand.

Disclaimer: Of course, YMMV. And I am totally not implementing this psychology right now. But it does feel like the thing I would be doing if I weren’t such an idiot / the next experiment I’d like to try running on myself.

4 thoughts on “System 1 ↔ System 2 Relations”

  1. If I am understanding S1/S2 properly, it seems to me that many of my “good choices” come from S2 and many of my “bad choices” come from S1. Example: long-range planning says that I should go to the gym every day. S1 would say, “Let’s skip the gym and do X today instead.” Seems like some of that long-range planning would be to convince those impulsive choices to be more positive ones. But YMMV :)

    1. My take is more that S1 & S2 have different strengths/weaknesses/blind spots. Sometimes when my S1 is averse to something that looks like a good idea (like going to the gym), it’s because the plan I’ve made is too elaborate or too much of a hassle; it pattern-matches more to the sort of thing I think the sort of person I’d like to be would do, instead of what actually makes sense given the tradeoffs of my life.

      That said, sometimes the long-range plan does seem basically sound, and S1 is just not on board for whatever reason. My first tactic in such situations is to model my S1 as a kind of sub-agent in my mind, with its own belief structure, and to check whether the reluctance might be based on a false belief about what outcomes will follow from what actions. E.g., maybe my S1 doesn’t actually think that going to the gym will make me healthier. And then I see if there’s a way I can change its mind, perhaps kind of like I might try to teach a child. Though it’s pretty important to go in with a willingness to have your S2/long-range planning system change its mind instead, since sometimes S1’s objections turn out to be pretty sensible (which I suppose may also be good advice when talking to a child!).
