Do you remember the first time you realized an adult was wrong? The feeling in the pit of your stomach when you discovered that your parents not only didn’t know everything, but that they could actually be mistaken? And maybe even be lying to themselves about what they knew and didn’t know?
I think it’s safe to say that thousands of social psychologists felt that way at 5:03 PM on January 29th, when renowned researcher Michael Inzlicht tweeted:
Inzlicht is one of the co-authors of a study coming out next month in Perspectives on Psychological Science. The “RRR of ego depletion” to which he is referring has mustered the greatest minds in the field for over two years, and more resources than any other paper, all to answer a question that had been settled science since 1998. In fact, until a few rogue researchers started poking around five years ago in old studies that everyone thought were sacrosanct, this question was simply unaskable:
“Do acts of self-control have a mental cost?”
Since Roy Baumeister, Ellen Bratslavsky, Mark Muraven, and Dianne Tice published their original paper Ego Depletion: Is the Active Self a Limited Resource? in 1998, there has been little doubt that acts of will have what Baumeister calls “a psychic cost.” There have been hundreds of peer-reviewed studies on Ego-Depletion1 and epic meta-analyses demonstrating what has become settled fact: acts of will seem to deplete a mental resource, like fatiguing a muscle. Which is why this is often called the “Strength Model” or, more commonly, the “Resource Model” of willpower. And it has held firm as the dominant model for nearly 20 years.
Until that tweet.
“Sometimes I wonder if I should be fixing myself more to drink,” begins Inzlicht in his February 29th blog post Reckoning with the Past. “I’m in a dark place. I feel like the ground is moving from underneath me and I no longer know what is real and what is not.”
This Registered Replication Report, or “RRR,” is a big deal. It’s making top social psychologists (including one who helped on the study!) consider drowning themselves in booze and leaving the profession. It’s making us all reconsider how we know what we know. And it’s made Habitry look long and hard at how we recommend Motivators work with clients.
Fair warning: this is a long article (it’ll take a practiced reader about 42 minutes), because we think you deserve an in-depth analysis of how the Ego-Depletion RRR came about and what it means.
We’re writing this because every Motivator has the right to know how to keep people feeling motivated.
If, however, you’re seeking an article to post in a Facebook group to prove a point, you might as well stop reading. If you post this article with the headline "Ego-Depletion Isn't Real" or "Ego-Depletion Is Real", a kitten will die. Your daughter's kitten. In her arms. On her birthday.
This is an article about promise, discovery, nuance, fear, hope, and what it means to know a thing. It’s an article about science and the philosophy of science. It’s an article a cynic would tell us that coaches have no business reading.
But you’re not just a coach.
You’re curious. You’re passionate about helping the people who depend on you, and you strive to use the latest in psychological science to do right by them.
We wrote this for you because you’re not just a coach. You’re a professional. And most importantly, because you’re a Motivator.
Table of Contents
The Signs: The story of how we got from “willpower is a limited resource” to famous social psychologists announcing on their blogs that they don’t even know what’s real and what’s not.
What the Ego Depletion RRR is: Why this is no ordinary study, and how the whole field of social psychology came together in a way that has only been done two other times in history to finally answer the question of whether the Resource Model is true.
What the Ego Depletion RRR isn’t: With a finding this apocalyptic, it can be easy to get carried away. We’ll walk you through some conclusions that a lot of journalists have reached and why some of them are really stupid.
What this means for Evidence-Based Coaching: A practical look at how this impacts the field of coaching, as well as Habitry itself. If you just want to know how this tidal shift will affect you, read this.
The Future and the “Shifting Priorities Model”: Our vote for the new frontrunner for how willpower works, and how it might actually explain a lot more about the phenomena we see in other social psychology frameworks.
Conclusion: A final message from Habitry on what this means for Professionals and Motivators everywhere.
Additional Media: Other ways to read this story.
Further Reading: A collection of lay articles and blog posts about what’s going on.
References: Primary sources for further exploration.
The Signs

Day in and day out, science can look boring. But it rarely feels that way to the people steeped in it. Scientists, the people who make their living in the community of science, are competitive as hell. To keep their jobs and their reputations, even tenured scientists have to read studies, recruit participants, run experiments, analyze data, write papers, and, most importantly, get those papers published in major journals. This constant jostling to “publish or perish” has put a lot of pressure on scientists to do the kind of science that gets published: novel studies with moderate to large observed effects2.
Scientists have pointed out the potential problems with this structure and created systems and organizations to attempt to correct for it, such as the Reproducibility Project and the Center for Open Science. These non-profits have taken it upon themselves to do the hard work necessary for science to function: peer-reviewed replication of old studies, a practice which the current “publish or perish” environment has squeezed out of mainstream science.
“Great!” we all thought. “Now we’ll finally know the stuff we’re right about!”
We had no idea how wrong we could be. There had been warning signs, but like most early warning signs about something this apocalyptic, they were easy to dismiss:
In March of 2011, Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn published False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant, which showed the many ways psychology researchers can change the way they analyze data to produce statistically significant findings (a “p-value” of less than or equal to 0.05) even when no real effect exists3. This matters because a study needs a p-value of .05 or less to even get read, let alone published4. And in the dark part of our souls, we knew it was true. It’s pretty easy to manipulate data sets to get a sub-.05 p-value. In fact, the practice even has a name: “p-hacking.”
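To make the mechanics concrete, here is a minimal simulation of one well-documented form of p-hacking: optional stopping, i.e., peeking at the data repeatedly and stopping as soon as p dips below .05. This is our own illustration, not code from Simmons et al.; the helper names (`two_sided_p`, `one_study`) are invented. Even when the true effect is exactly zero, the false-positive rate climbs well above the nominal 5%.

```python
import math
import random
import statistics

def two_sided_p(sample, mu=0.0):
    """Large-sample z-test p-value for H0: the true mean equals mu."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = abs(statistics.mean(sample) - mu) / se
    return math.erfc(z / math.sqrt(2))  # P(|Z| > z) for a standard normal

def one_study(batch=30, peeks=4):
    """Simulate optional stopping: collect a batch of data, test, and
    stop as soon as p < .05 -- even though the true effect is zero."""
    data = []
    for _ in range(peeks):
        data += [random.gauss(0, 1) for _ in range(batch)]
        if two_sided_p(data) < 0.05:
            return True  # a false positive, reported as a 'finding'
    return False

random.seed(1)
rate = sum(one_study() for _ in range(2000)) / 2000
print(f"False-positive rate with peeking: {rate:.3f}")  # well above 0.05
```

With four peeks the family-wise false-positive rate lands roughly in the 10-13% range, more than double the 5% the researcher thinks they are running.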
“But surely,” we thought, “not enough of us are doing that to matter. Besides, the fraudsters will be caught when we eventually do replication studies. And until then, those of us doing honest science will balance it out.”
Then in November of 2013, Andrew Gelman and Eric Loken published The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. This paper outlined that if a researcher has an expectation for a given hypothesis (which is 100% of studies based on any theoretical framework), the researcher can unconsciously bias the results by the way they select which data-analysis procedure to use, even if they never p-hack. In short, you don’t have to be fraudulent. You can accidentally skew results simply by being well-versed in your field of study!5
“Well that’s embarrassing, but replication will eventually take care of it,” we thought. “And until then, we’ve done meta-analysis6 and shown moderate effect sizes7. Hagger et al. (2010), a meta-analysis of 198 published studies on Ego-Depletion, showed an effect size of d=0.62 [95% CI: (0.57, 0.67)]8. And they even used magical statistical things9 to show that bias couldn’t have possibly been a problem.”
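For readers who want the jargon unpacked: Cohen’s d is the difference between two group means measured in units of their pooled standard deviation, and the 95% CI comes from an approximate standard error. Here is a rough sketch of the computation (our own illustration with made-up task scores, not code or data from Hagger et al.):

```python
import math
import statistics

def cohens_d_with_ci(treat, control):
    """Cohen's d plus an approximate 95% CI (large-sample standard error)."""
    n1, n2 = len(treat), len(control)
    pooled_sd = math.sqrt(((n1 - 1) * statistics.variance(treat) +
                           (n2 - 1) * statistics.variance(control))
                          / (n1 + n2 - 2))
    d = (statistics.mean(treat) - statistics.mean(control)) / pooled_sd
    # Approximate standard error of d for two independent groups
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical self-control task scores; a negative d means the
# 'depleted' group scored lower than the control group
depleted = [3, 4, 5, 6, 7]
control = [5, 6, 7, 8, 9]
d, (lo, hi) = cohens_d_with_ci(depleted, control)
print(f"d = {d:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Note how wide the interval is with only five scores per group: the smaller the samples, the noisier the estimate, which is exactly why small-study effects can distort a meta-analysis.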
But then specific worries appeared. In July of 2014, Carter and McCullough published Publication bias and the limited strength model of self-control: Has the evidence for ego depletion been overestimated? which took a deep look at Hagger et al. (2010) and found that the magical statistical things the researchers did to account for bias and p-hacking…don’t work. “Our findings suggest that the published literature on the depletion effect is clearly influenced by small-study effects10, and as a result, overestimates the strength of the phenomenon. Furthermore, it would appear that this overestimation is likely due to publication bias” (Carter & McCullough, 2014; pg. 7).
“All you technically showed is they didn’t account for unpublished studies!” we yelled. “Technically that just shows they could be wrong and we don’t even know how likely that is but it’s gotta be pretty unlikely because reasons!11 Besides, all this will be taken care of when we eventually get around to doing replication studies one day.”
So Evan C. Carter, Lilly M. Kofler, David E. Forster, and Michael E. McCullough did their own meta-analysis, which included more unpublished studies12 and better controlled for bias and heterogeneity of effect sizes13. The result was A Series of Meta-Analytic Tests of the Depletion Effect: Self-Control Does Not Seem to Rely on a Limited Resource, which states pretty clearly: “We find very little evidence that the depletion effect is a real phenomenon, at least when assessed with the methods most frequently used in the laboratory. Our results strongly challenge the idea that self-control functions as if it relies on a limited psychological or physical resource” (pg. 1).
“Um… Ok. That’s interesting. But I guess we’ll never know unless we can do a replication study larger than the original Baumeister et al. (1998) or one a lot like it.”
And that’s exactly what the Ego-Depletion RRR is. It’s THE replication study. The one we’ve all been waiting for. The last stand. The straw that is breaking the Resource Model’s back.
What the Ego Depletion RRR is
Registered Replication Reports are created when scientists from all over the world agree to open the books and throw the kitchen sink at reproducing the result of a specific study to see if the effect size holds up to all the assumptions we make when we do studies with smaller populations. According to the Association for Psychological Science, “they are motivated by the following principles:
- Psychological science should emphasize findings that are robust, replicable, and generalizable.
- Direct replications are necessary to estimate the true size of an effect [Ed: which means meta-analysis isn’t enough].
- Well-designed replication studies should be published regardless of the size of the effect or statistical significance of the result” (Association for Psychological Science, n.d.; Mission Statement)
In a sense, RRRs are the pinnacle of what science should always be: collaborative, open-source, and openly discussed.
Here’s how RRRs work:
“Registered” means that the study authors, in this case Martin Hagger15, Alex Holcombe, Michael Inzlicht16, and scientists from 24 labs all over the world (whom we will call “Holcombe and Hagger (2014)” until the actual study attribution comes out of embargo at the end of the month17), pre-registered the study18. Pre-registering a study means that the researchers report their hypothesis and their methods before they do data collection and analysis. This transparency is the new gold standard in science because it is a big deterrent to p-hacking (malicious or unconscious) and opens the method selection to peer review before data collection even begins. It also means that researchers can’t stop data collection when they get the result they were looking for. Holcombe and Hagger (2014) pre-registered with the Open Science Framework (OSF)19 what study they wanted to replicate, how they were going to do it, and that they were going to recruit 2,000 participants. Then they issued a press release to recruit help.
“Replication” means that Holcombe and Hagger (2014) is a direct attempt to replicate the results of a single study. In this case, the authors chose to replicate Sripada, C., Kessler, D., & Jonides, J. (2014), entitled Methylphenidate blocks effort-induced depletion of regulatory control in healthy volunteers, because it was the most similar to the original Baumeister et al. (1998) but used “computerised versions of tasks to minimise variability across laboratories” (Holcombe & Hagger, 2016; Wiki Home). Since minimizing variability is the key, labs that wanted to participate all had to follow the same protocols to the letter, so it was all put up on the OSF and the labs got cracking.
“Report” means that all the labs reported the results in the same way to the OSF so that the data could be aggregated instead of having to undergo meta-analysis. That way, the study authors simply have to add it all together and can report on the effect size and the confidence interval.
What’s in this RRR
Well, that’s kind of a problem until the embargo is lifted. We don’t really know. The only neat summary of what they found is Inzlicht’s pretty damning one: “nothing, nada, zip. Only three of the 24 participating labs found a significant effect, but even then, one of these found a significant result in the wrong direction!” (Inzlicht, 2016). What Habitry believes we can assume from this (and from the discussions the authors have had in the press and on their blogs in the last month) is that the depletion effect was no more likely than chance in this massive replication. This is a very big deal because it means that as of January 29th, 2016, there have been no studies that controlled for selection bias and still showed a depletion effect. None. Nada. Zip. Since 1998, we simply haven’t been testing what we thought we were testing. What we thought was signal was noise. We were gamblers spending millions of dollars flipping coins, and really smart people were writing books and making predictions on what would happen next because we thought we had a “system.”
When it comes to Ego-Depletion, the Resource Model, the glucose-decision fatigue link20, we’re essentially back to square one. We now have no experimental data on which to base a model for what the hell is going on in our heads when we will ourselves to do stuff.
We are living at a time when two people can say, “willpower is limited” and “willpower is unlimited” and both of them are neither right nor wrong. Until we have more data untarnished by bias, it’s just nonsense. They’re saying the scientific equivalent of, “2 + 2 = fish.”
What the Ego Depletion RRR isn't
So that’s all scary as hell, but what’s much more interesting to talk about is what Holcombe and Hagger (2014) doesn’t say. However, we need to get some dumb arguments out of the way before we can get to the fun things.
It’s not a “single paper”
Some people will point out that this is a single paper and that we should never rely on a single paper. And I will congratulate them on remembering a single thing from 8th grade science class.
The community is freaking out about Holcombe and Hagger (2014) not because of what it says, but because of what it says in the context of a grander discussion about selection bias and reproducibility. We hope we’ve done a decent job telling you the story of why this paper is so important to social psychology, but if you want to dismiss it because it’s “a single study,” then I certainly hope you study hard and do better in 9th grade science class.
It’s not proof that psychology is crap
Psychology seems particularly prone to selection bias because of the small sample sizes in most psychology studies and the difficulty of recruiting heterogeneous populations. As Brian Resnick points out in his article for Vox, psychology is also newer than most sciences21, even other social sciences22. Psychology also gets a lot of scrutiny because it’s hip and gives people “insights” into their lives and the behavior of the people they care about.
However, this isn’t just a psychology problem. The problems of bias and replication are happening across all of science. The so-called “Replication Crisis” is happening in biology and even medicine. This is a big story inside an even bigger story. Motivators, we’re even having trouble replicating cancer studies. This is not a kinda big deal. It is simply and non-hyperbolically a big freaking deal.
It’s not proof that science is broken
It seems trite to say at a time like this, but this is how science is supposed to work. Being able to be wrong is the only way you can find out what’s right. “Any good science should always be looking at its methods, its statistics, but in a bigger sense, its institutions, the way it thinks about evidence,” to quote University of Oregon researcher Sanjay Srivastava (Resnick, 2016).
Habitry CEO Steven M. Ledbetter was a philosophy undergrad at the University of Chicago, where he specialized in the History and Philosophy of Science. The granddaddy of that field is Thomas Kuhn, whose 1962 work The Structure of Scientific Revolutions pretty much outlines what is happening now in the (relatively young) fields of social psychology and psychology, and even in older fields like biology and medicine.
Without commitment to a paradigm [Ed: you can read “paradigm” here as “theory,” “model,” or “framework.”] there can be no science... the study of paradigms is what prepares a student for membership in a particular scientific community. [People] whose research is based on shared paradigms are committed to the same rules and standards for scientific practice. That commitment and the apparent consensus it produces are prerequisites for normal science, i.e., for the genesis and continuation of a particular research tradition. …scientific revolutions are inaugurated by a growing sense that an existing paradigm has ceased to function adequately in the exploration of an aspect of nature (pg. 11).
To put it bluntly, we’re all witnessing a paradigm shift23 in how psychology is going to have to be done. And biology. And medicine.
According to UC Davis Psychology Professor Simine Vazire’s musings on this topic, it’s looking like “social/personality psychology research actually requires not 60 but more like 600 participants per study… [I] wish [I] could still do informative research with 60 participants… but we never could. [W]e just thought we could, but we were always wrong.”
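Vazire’s arithmetic checks out under a standard power calculation. Here is a back-of-the-envelope sketch using the normal-approximation formula for a two-group comparison; `n_per_group` is our own invented helper, and a real power analysis would use a dedicated tool like G*Power:

```python
import math

def n_per_group(d, z_alpha=1.96, z_power=0.84):
    """Approximate per-group sample size for a two-group comparison,
    detecting effect size d with alpha = .05 (two-sided) and 80% power.
    z_alpha and z_power are the standard normal quantiles for those targets."""
    n = 2 * ((z_alpha + z_power) / d) ** 2
    return math.ceil(round(n, 4))  # round() guards against float noise

# At the meta-analytic d = 0.62, small samples look adequate...
print(n_per_group(0.62))
# ...but if the true effect is small, hundreds of participants are needed
print(n_per_group(0.20))
```

At d = 0.62 this says roughly 41 participants per group suffice, which is why everyone thought their small studies were fine; at a small true effect of d = 0.20 it says roughly 392 per group, which is exactly the “not 60 but more like 600” territory Vazire describes.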
We can no longer use a model to make a prediction, recruit a few dozen college students to test it, stop recruiting when we get the result we want, analyze the data to get a p-value of 0.05, then report the effect size, and assume someone will get around to replicating our study eventually. We’re going to have to be more open with our methods, our data, and most importantly we have to decide as a community to replicate, replicate, replicate. And be even more open when we do those replications and nothing happens.
Now that we have Open Science Framework, we can’t go back. Science doesn’t work that way. But that’s why it’s science. It’s a community that stays focused on getting a little bit better every day, even when that means examining everything because we weren’t really making as much progress as we thought.
It doesn’t mean we don’t know what to do
When we sat down to bring all these stories together for you, we knew it was going to be hard, because we knew the thing you would be looking for most is certainty. But science doesn’t do certainty. We didn’t have it before Holcombe and Hagger (2014). We will never have it after. Karl Popper, another great philosopher of science, wrote in his final book, In Search of a Better World: “we cannot reasonably aim at certainty. Once we realize that human knowledge is fallible, we realize also that we can never be completely certain that we have not made a mistake” (pg. 4).
Science can never “know,” because all science does is test. And Holcombe and Hagger (2014) illustrates what Karl Popper also wrote: “Good tests kill flawed theories; we remain alive to guess again.”
So scientists know what to do. We must posit. We must predict. We must test. We must share. And most importantly of all, we must repeat. “Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the scientific game” (Popper, 2005; pg. 280).
And for those of us who are not scientists, we think you know what to do, too. As long as you keep in mind one thing:
It doesn’t mean the feeling of ego-depletion isn’t real
You read that right. No matter what is in the final publication, Holcombe and Hagger (2014) cannot conclude that the feeling of ego-depletion isn’t real. Ego-depletion was a theory derived to explain how a feeling arises in our minds. The theory of “how” that feeling gets there was just shown to be non-predictive, but that does not mean the feeling itself is not real. Frankly, we just don’t know how it happens anymore.
The physiological claim that willpower depletes a physical resource might not be real, but the feeling you have at the end of a long-ass day, when all you can muster the will to do is eat ice cream and watch Netflix, is very real. Feelings, as Stevo’s mother would remind us, are not facts. So no one’s “feelings” can be “wrong.”
And feelings, in the day-to-day experience of most people’s lives, are about as real as it gets. Pain. Joy. Motivation24. The feeling of being completely drained at the end of a long day. These may be just “in our heads,” but that does not mean they are not real. Imagine someone telling you the pain you feel when you lose a loved one is “only in your head.” However technically true, that statement does not change the experience of the feeling.
We want to be very clear about this distinction, because we practitioners have a nasty habit of projecting our knowledge of “how things work” onto people’s lived experience. It’s how we get reputations as being “aloof” and “not really listening.” Habitry is worried that instead of listening and helping people learn resilience, practitioners will go off half-cocked and tell people that their feelings of ego-depletion were “just proven wrong by science” or that “fatigue you feel at the end of a long day is just in your head.” Which is why if we learn you’ve done that, we’re going to be the ones killing your daughter’s kitten on her birthday. Because we want to watch while you explain to her that “it’s all just in your head.”
Søren Kierkegaard wrote, “immediate feeling selfishly understands everything in relation to itself” (Kierkegaard, 1993; pgs. 71-72). We help people by meeting them where they’re at. Knowledge does not excuse us from basic human empathy.
It doesn’t (currently) impact other theories
Holcombe and Hagger (2014) is a massive replication of a foundational test of ego-depletion. It is not a test of Self-Determination Theory, Habit Formation, or Social Cognitive Theory. But we should expect those tests to be coming down the pipe, because the Open Science Framework and the Replication Crisis mean nothing is “safe.” Vazire warns us, “it will happen to other effects as well, so pick your favorite effect and insert it here” (Vazire, 2016).
However, Habitry predicts that Self-Determination Theory (SDT) and Social Cognitive Theory will prove to be the most robust, because the studies upon which they are based were done 40 and 50 years ago and have been replicated many more times than Ego-Depletion (although not in the Open Science Framework). Interestingly, a lot of the people on this RRR (like Martin Hagger) are also top contributors to SDT, so we anxiously await some Self-Determination Theory RRRs.
We also predict SDT will be more robust because the micro-theories that comprise SDT primarily concern affect. SDT basically says, “people who report these feelings do better in these ways.” We absolutely need to check those studies (and pre-registering would be a great way to make sure the measures we’re using actually measure what we claim they do25), but as alluded to in the section above, it’s a lot harder to refute claims of “feeling” one way or the other26.
What this means for Evidence-Based Coaching
At Habitry, we think the truly great practitioners, the Motivators out there who hold themselves to a higher standard in any helping profession, should strive to be practicing their craft in an Evidence-Based manner. We’ve also taken pains to define what that means since a lot of people use the phrase, “Evidence-based coaching” to mean, “coaching like me”.
As Stevo writes in the Science Appendix to We Make Communities:
“Evidence-based” means you are making decisions based on something more than your intuition. Based on more than personal anecdote or your own collection of cognitive biases (which we all have). “Evidence-based” means you are not just working to eliminate your own doubt, but the doubts of like-minded people in your profession, and of outsiders looking in. It is not ethical a priori, but defendable. It means that if you were called before a jury of strangers, you could defend your decisions about what you did with a client under a cross-examination of your peers.
When it comes to ethics, I personally hold myself to a virtue standard rather than a defendable or “rule-based” standard. I care about looking in a mirror as much as a jury of my peers, but I consider “defendable” to be the lowest acceptable professional standard (pg. 159).
As we stated earlier, when it comes to willpower being limited or unlimited, we can’t really say anything at this point and refer to evidence. 2 + 2 = fish. Which is why Habitry is going to start pulling our references to willpower being a limited resource. We can’t defend that statement with evidence. What we can say is that “most people report feelings of fatigue after situations of high cognitive load, such as making lots of small decisions or deciding between a lot of similar options.” We will defend this more qualitative statement with the qualitative studies27 such as Huffman & Kahn (1998) and Malhotra (1982), which Vohs et al. (2008) cite as their inspiration for researching “Decision Fatigue” in the first place.
The Future and the “Shifting Priorities Model”
Where does this leave Willpower? In need of a fundamental rethink, for sure. Luckily, Michael Inzlicht (yeah, same guy. The one who wants to drink more), Brandon J. Schmeichel, and C. Neil Macrae might have started doing just that with their prescient opinion piece in 2014 Why self-control seems (but may not be) limited.
The key word, the word you absolutely have to understand, is “seems.” Why self-control seems limited. In their reviews, Inzlicht et al. (2014) and Inzlicht et al. (2015) start with the neurobiology, with what we have actually measured, and build a theory outward from that data and from observed phenomena like the feelings of fatigue that people have after periods of great cognitive load. This feeling we usually observe as “depletion” is what Inzlicht et al. (2015) calls a “motivational shift.”
“Another way of conceiving of this motivational shift is that it may lead people to increasingly prefer “want” over “should” goals; with “wants” offering greater immediate utility than “shoulds,” but with “shoulds” offering greater total utility over time (Milkman, Rogers, & Bazerman, 2008). Thus, depletion may lead people to prefer “wants” that are immediately gratifying (e.g., watching an action movie, eating ice-cream) as opposed to “shoulds” that are not as instantly gratifying even if they have higher total future utility (e.g., watching a documentary film, eating salad) (Milkman, Rogers, & Bazerman, 2009). Depletion may focus people on their present leisure-seeking selves, ignoring the need for cognitive labor that would benefit their future selves” (Inzlicht et al., 2015; pgs. 19-20).
In short, the longer we do things we feel we should do, the more we compensate by thinking of the things we want to do. This “feels” like work, because we never really wanted to do it in the first place and stuff we’d prefer doing keeps popping up in our head. Nothing is getting depleted, but we sure as hell feel drained.
Inzlicht et al. (2015) posits the feeling of fatigue is an evolutionary mechanism designed to protect us. Our ancestors, who survived long enough to pass on their genes, were able to successfully make trade-offs between labor and leisure. That balancing act is not easy or obvious to do, especially in a complex environment like the one our ancestors found themselves in. The fact that we can exercise self-control to serve our future-selves doesn’t mean we should at the risk of our present-selves. It’s possible that we evolved to feel fatigue so that we have no choice but to stop laboring on a task that has no guarantee of paying off. You can afford to think of your future only when you have enough food to eat. The Shifting Priorities Model makes the claim that once people feel tired, their minds will automatically move their attention and motivation away from wanting to spend effort on goals that will pay off in the future. Instead, their attention and motivation will compel them to engage in things that help them relax and recover (Schmeichel, Harmon-Jones & Harmon-Jones, 2010). So fighting that feeling of wanting to relax will only make people more exhausted. This explains why willpower feels limited.
[Shifting Priorities Model] suggests that this flagging of effort is due to changes in motivation and priorities, and not due to some internal limit on how long effort can be sustained. This seems to fly against conventional wisdom and subjective experience. Yet, modern theories of fatigue suggest that this is precisely what happens (Hockey, 2013). (Inzlicht & Schmeichel, 2016; pgs. 19-20)
Habitry is partial to this model because it aligns with our coaching experience, Habit Formation, and Self-Determination Theory. It specifically jibes with Organismic Integration Theory, which states that, over time, people begin to “internalize” external regulations of their behavior into their identity and find previously non-self-determined motivation increasingly less controlling (Deci & Ryan, 2002). In addition to that awesomeness, the Shifting Priorities Model:
“is consistent with the view of self-control as a decision about whether or not to exert effort” (Inzlicht & Schmeichel, 2016; pg. 18).
“is consistent with, and is indeed based on, modern theories of fatigue that make no reference to energy or resources, but instead are based on motivation (Hockey & Earle, 2006; Hockey, 1997, 2013)” (Inzlicht & Schmeichel, 2016; pg. 18).
However, as much as we like this new model, there is hardly a bandwagon on which to jump. The shortcomings are plenty.
“A… drawback of [Shifting Priorities Model] — and one that we think confounds some people — is that self-control feels limited. The [Shifting Priorities Model] suggests that self-control is limitless if motivation and desire is high, but this seems to collide with common sense” (Inzlicht & Schmeichel, 2016; pg. 19).
“[Shifting Priorities Model] makes strong claims about self-control failure being the product of changing motivational priorities (and not depleted resources), yet there is little direct evidence in support of this view” (Inzlicht & Schmeichel, 2016; pg. 19).
The Shifting Priorities Model has a long road of testing ahead of it, but it’s great to see that theories with promise were bubbling up even before the final nail was in the Resource Model’s coffin.
Conclusion

First of all, I’d like to congratulate you for reading this sentence. If you’re reading this sentence, it most likely means you read all the other ones above it, which means we totally owe you a high-five. Email coachstevo [at] habitry [dot] com with the subject headline “I read it!” and Stevo will totally give you a digital high-five.
Secondly, as you can probably tell, this stuff is dense. It’s nuanced. And there are not a lot of places for easy answers. The failure to replicate the depletion effect means that we are almost back to square one for understanding willpower, but the Shifting Priorities Model does give us hope that a new model might soon take its place. Science without a paradigm is a vacuum, and nature abhors a vacuum.
What this means for you as a Motivator, though, is a different story. There’s going to be a lot of “stuff” coming out about this, and it’s going to be hard to stay on top of it28. However, the thing that makes you a great Motivator is not this knowledge; it’s your curiosity itself that makes you great at what you do. Habitry will do our best to keep you up to date29, and we will keep our source code and the Habitry Professionals up to date with the latest on how this affects the Habitry Method.
Confidence, however, is not something we can give you in an article. Or even in our Training Courses. Because confidence is not something that comes from knowledge. Or a system. Or certainty. Confidence comes from wisdom. And that wisdom we earn by embracing our curiosity despite the uncertainty, and trusting the slow elimination of doubt that begins with the phrase, “I don’t know. But here’s how I think we can find out together.”
What follows is a collection of lay articles and blog posts about what’s going on. If you want primary sources, see References.
Books on Ego Depletion
Well, I guess these are outdated as far as the science goes, but the “self-help” parts might still be valuable.
- Willpower: Rediscovering the greatest human strength by Roy Baumeister and John Tierney
- The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It by Kelly McGonigal
On the Ego Depletion Crisis
There are other articles out there, but these are currently the only ones we recommend for the right balance of story and nuance.
- Brian Resnick’s fantastic March 14th, 2016 article for Vox
- Daniel Engber’s breaking post for Slate from March 6, 2016
On the Replication Crisis in Psychology
Psychology in general is getting hit hard by replication problems.
- Michael Pettit has a timeline of what’s happened.
- Katie M. Palmer’s blow-by-blow of how the Open Science Collaboration’s first RRR was received by psychology. Hint: not well.
- Christie Aschwanden has a good summary up until August 2015.
- Barbara Spellman thinks the causes might be demographic.
On the Replication Crisis in Science
And so are other fields.
- A super-deep dive into medicine by John P. A. Ioannidis.
On the Philosophy of Science
If you are interested in learning more about the Philosophy of Science, here are some more resources.
Aschwanden, C. (2015, August 27). Psychology Is Starting To Deal With Its Replication Problem. Fivethirtyeight. Retrieved from http://fivethirtyeight.com/datalab/psychology-is-starting-to-deal-with-its-replication-problem/
Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74(5), 1252.
Becker, B. J. (2005). Failsafe N or file-drawer number. Publication bias in meta-analysis: Prevention, assessment and adjustments, 111-125.
Deci, E. L., & Ryan, R. M. (2002). Overview of self-determination theory: An organismic dialectical perspective. Handbook of self-determination research, 3-33.
Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7(6), 555-561. Retrieved from https://www.researchgate.net/profile/Christopher_Ferguson/publication/258180082_A_Vast_Graveyard_of_Undead_Theories_Publication_Bias_and_Psychological_Sciences_Aversion_to_the_Null/links/0c96053041198978c6000000.pdf
Gailliot, M. T., Baumeister, R. F., DeWall, C. N., Maner, J. K., Plant, E. A., Tice, D. M., Brewer, B. J., & Schmeichel, B. J. (2007). Self-control relies on glucose as a limited energy source: Willpower is more than a metaphor. Journal of Personality and Social Psychology, 92(2), 325–336. doi:10.1037/0022-3514.92.2.325
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University. Retrieved from http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
Hagger, M. S., & Chatzisarantis, N. L. (2014). It is premature to regard the ego-depletion effect as “Too Incredible”. Frontiers in psychology, 5, 298. Retrieved from http://journal.frontiersin.org/article/10.3389/fpsyg.2014.00298/full
Hagger, M. S., Wood, C., Stiff, C., & Chatzisarantis, N. L. (2010). Ego depletion and the strength model of self-control: a meta-analysis. Psychological bulletin, 136(4), 495. Retrieved from http://archive.is/aK7Vn
Heath, C., & Heath, D. (2010). Switch: How to Change Things When Change Is Hard. New York, NY: Broadway Books.
Hockey, G. R. J. (1997). Compensatory control in the regulation of human performance under stress and high workload: A cognitive-energetical framework. Biological Psychology, 45(1–3), 73–93. doi:10.1016/S0301-0511(96)05223-4
Hockey, G. R. J. (2013). The Psychology of Fatigue. Cambridge University Press. Retrieved from http://www.cambridge.org/ca/academic/subjects/psychology/cognition/psychology-fatigue-work-effort-and-control
Hockey, G. R. J., & Earle, F. (2006). Control over the scheduling of simulated office work reduces the impact of workload on mental fatigue and task performance. Journal of Experimental Psychology. Applied, 12(1), 50–65. doi:10.1037/1076-898X.12.1.50
Holcombe, A. O., & Hagger, M. S. (2014, October 20). RRR- Ego Depletion (Sripada et al.). Retrieved from osf.io/jymhe
Huffman, C., & Kahn, B. E. (1998). Variety for sale: Mass customization or mass confusion?. Journal of retailing, 74(4), 491-513.
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLOS Med, 2(8), e124. Retrieved from http://doi.org/10.1371/journal.pmed.0020124
Ioannidis, J. (2008). Interpretation of tests of heterogeneity and bias in meta‐analysis. Journal of evaluation in clinical practice, 14(5), 951-957.
Inzlicht, M. (2015). A Tale of Two Papers. [Blog post]. Retrieved from http://sometimesimwrong.typepad.com/wrong/2015/11/guest-post-a-tale-of-two-papers.html
Inzlicht, M. [minzlicht]. (2016, January 29). Big news: RRR of ego depletion reveals no effect. Nada. Zip. Nothing. @ME_McCullough called it first #spsp2016 [Tweet]. Retrieved from https://twitter.com/minzlicht/status/693238008305176576
Inzlicht, M. (2016, February 29). Reckoning with the Past. [Blog post]. Retrieved from http://michaelinzlicht.com/getting-better/2016/2/29/reckoning-with-the-past
Inzlicht, M., Berkman, E., & Elkins-Brown, N. (2015). The neuroscience of “ego depletion” or: How the brain can help us understand why self-control seems limited. Social Neuroscience: Biological Approaches to Social Psychology, 1-44. Retrieved from https://www.researchgate.net/profile/Elliot_Berkman/publication/273805571_The_neuroscience_of_ego_depletion_or_How_the_brain_can_help_us_understand_why_self-_control_seems_limited/links/550ddb6f0cf2128741675f8e.pdf
Kierkegaard, S. (1993). Upbuilding discourses in various spirits. H. V. Hong, & E. H. Hong (Eds.). Princeton, NJ: Princeton University Press.
Kool, W., & Botvinick, M. (2014). A labor/leisure tradeoff in cognitive control. Journal of Experimental Psychology: General, 143(1), 131. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3739999/
Kuhn, T. S. (2012). The structure of scientific revolutions. Chicago, IL: University of Chicago press.
Ledbetter, S. M. (2015). We Make Communities. Emeryville, CA: Habitry, Co.
Leek, J. T., & Peng, R. D. (2015). P values are just the tip of the iceberg. [Blog post]. Retrieved from http://www.nature.com/news/statistics-p-values-are-just-the-tip-of-the-iceberg-1.17412
Lehrer, J. (2010, December 13). The Truth Wears Off. The New Yorker. Retrieved from http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off
Malhotra, N. K. (1982). Information load and consumer decision making. Journal of Consumer Research, 419–430.
Milkman, K. L., Rogers, T., & Bazerman, M. H. (2008). Harnessing Our Inner Angels and Demons: What We Have Learned About Want/Should Conflicts and How That Knowledge Can Help Us Reduce Short-Sighted Decision Making. Perspectives on Psychological Science, 3(4), 324–338. doi:10.1111/j.1745-6924.2008.00083.x
Novella, S. T. (2015). P-Hacking and Other Statistical Sins. [Blog post]. Retrieved from http://theness.com/neurologicablog/index.php/p-hacking-and-other-statistical-sins/
Nuzzo, R. (2014). Scientific method: Statistical errors. [Blog post]. Retrieved from http://www.nature.com/news/scientific-method-statistical-errors-1.14700
Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of educational statistics, 157-159.
Palmer, K. M. (2016, March 3). Psychology Is in Crisis Over Whether It’s in Crisis. Wired. Retrieved March 16, 2016, from http://www.wired.com/2016/03/psychology-crisis-whether-crisis/
Pettit, M. (2016, March 8). Replication in Psychology: A Historical Perspective. Retrieved March 16, 2016, from https://psyborgs.github.io/projects/replication-in-psychology/
Popper, K. R. (1996). In search of a better world: Lectures and essays from thirty years. Abington, UK: Psychology Press.
Popper, K. (2005). The logic of scientific discovery. Abington, UK: Routledge.
Quantitative research. (2016, February 24). In Wikipedia. Retrieved from https://en.wikipedia.org/w/index.php?title=Quantitative_research&oldid=706570969
Rahman, J. (2013, September 1). Cancer research in crisis: Are the drugs we count on based on bad science? Salon. Retrieved from http://www.salon.com/2013/09/01/is_cancer_research_facing_a_crisis/
Replication Crisis. (n.d.) In Wikipedia. Retrieved March 15, 2016, from https://en.wikipedia.org/wiki/Replication_crisis
Association for Psychological Science. (n.d.). Registered Replication Reports. [website] Retrieved on March 16th, 2016 from http://www.psychologicalscience.org/index.php/replication
Resnick, B. (2016, March 14). What psychology’s crisis means for the future of science. Retrieved March 16, 2016, from http://www.vox.com/2016/3/14/11219446/psychology-replication-crisis
Schmeichel, B. J., Harmon-Jones, C., & Harmon-Jones, E. (2010). Exercising self-control increases approach motivation. Journal of Personality and Social Psychology, 99(1), 162– 73. doi:10.1037/a0019797
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological science, 0956797611417632. Retrieved from http://opim.wharton.upenn.edu/DPlab/papers/publishedPapers/Simmons_2011_False-Positive%20Psychology.pdf
Spellman, B. A. (2015). A Short (Personal) Future History of Revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899. http://doi.org/10.1177/1745691615609918
Sripada, C., Kessler, D., & Jonides, J. (2014). Methylphenidate blocks effort-induced depletion of regulatory control in healthy volunteers. Psychological science, 0956797614526415. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4206661/
Sutton, A. J. (2009). Publication bias. The handbook of research synthesis and meta-analysis, 435-452.
Vazire, S. (2016, February). it's the end of the world as we know it... and i feel fine. [Blog post]. Retrieved from http://sometimesimwrong.typepad.com/wrong/2016/02/end-of-the-world.html
Vohs, K. D., Baumeister, R. F., Schmeichel, B. J., Twenge, J. M., Nelson, N. M., & Tice, D. M. (2014). Making choices impairs subsequent self-control: a limited-resource account of decision making, self-regulation, and active initiative. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.465.4357&rep=rep1&type=pdf
“Novel” meaning completely new ideas or ideas that push theories forward into new predictive power. “Large observed effects” means the strength of the phenomenon being observed is clear and powerful.
Using strength training as a metaphor, a program that increases your clients’ deadlifts an average of 1% has a “small effect size.” A program that increases their deadlifts an average of 30% has a “holy-crap-that’s-huge effect size.”
Interestingly, the P-value doesn’t even really mean that. Most researchers in psychology (who are not statisticians) are taught in Statistics 101 that a p-value of .05 “indicates a 5% probability that the data is due to chance, rather than a real effect” (Novella, 2015). But that’s not quite right because “P-values do not consider many other important variables, like prior probability, effect size, confidence intervals, and alternative hypotheses” (Novella, 2015). If we actually ask the question, “what is the chance that we could do this study again and get results of the same effect size,” and take into account all the other variables... the answer might be as low as 50% (Nuzzo, 2014).
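One way to feel this distinction is a quick simulation: even when there is no real effect at all, about 5% of studies will still come up “significant” at p < .05. Here is a minimal, self-contained Python sketch (invented numbers, a simple two-sided z-test), showing that the p-value is a long-run false-positive rate, not the probability that a given result is a fluke.

```python
# When the true effect is zero, how often does a study reach p < .05?
# Illustrative simulation using a simple two-sided z-test (sd known = 1).
import math
import random

random.seed(2)

def p_value(sample):
    """Two-sided z-test p-value against a true mean of 0, sd = 1."""
    z = (sum(sample) / len(sample)) * math.sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 4000
false_positives = sum(
    p_value([random.gauss(0, 1) for _ in range(50)]) < 0.05
    for _ in range(trials)
)
print(false_positives / trials)  # close to 0.05, despite zero real effect
```

Run it and roughly one “study” in twenty clears the .05 bar on pure noise, which is exactly why a single significant result tells you so little on its own.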
And we need to really stop here and say, p-value is not the same as effect size, despite the fact that it’s become a lazy short-hand almost everyone is guilty of using (Leek & Peng, 2015).
If you want a good description of how this is possible, check out Michael Inzlicht’s blog post “A Tale of Two Papers” (Inzlicht, 2015).
“Meta-analysis” is when researchers take the results of a whole bunch of tests, weigh them based on bunch of factors like population size and heterogeneity, and report a “weighted average” of the effect sizes within a confidence interval. The idea here is that pooling studies can yield a “common truth” and “balance out” mistakes and biases made in individual studies.
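As a concrete sketch of the pooling idea, here is a minimal fixed-effect meta-analysis in Python. The three studies and their variances are invented for illustration, and real meta-analyses also model heterogeneity, which this toy version skips.

```python
# Fixed-effect meta-analysis sketch: pool per-study effect sizes (d)
# with inverse-variance weights, so more precise studies count for more.
# All numbers below are invented for illustration.
import math

studies = [
    {"d": 0.62, "var": 0.04},  # hypothetical study 1
    {"d": 0.30, "var": 0.09},  # hypothetical study 2
    {"d": 0.75, "var": 0.02},  # hypothetical study 3
]

weights = [1.0 / s["var"] for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)

# 95% confidence interval around the pooled estimate
pooled_se = math.sqrt(1.0 / sum(weights))
ci = (pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se)
print(round(pooled_d, 3), tuple(round(x, 3) for x in ci))  # 0.654 (0.443, 0.865)
```

Notice how the precise study (variance 0.02) drags the pooled estimate toward its own effect size; that weighting is the whole point, and also why a few biased large studies can skew the “common truth.”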
Moderate effect sizes are actually huge in psychology, where most effect sizes are smaller than Ant-Man and get even smaller as you increase the population size of the study.
This is called a “confidence interval” and can be read as: “Were this exact procedure repeated on many samples of the same population, 95% of the intervals constructed this way would be expected to contain the true effect size.” Here, that interval runs from 0.57 at the lower bound to 0.67 at the upper bound.
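One way to internalize that reading is to simulate it: repeat the same sampling procedure many times and count how often the interval captures the true effect. This Python sketch uses invented numbers and treats the standard deviation as known for simplicity.

```python
# What a 95% confidence interval promises: repeat the sampling procedure
# many times and about 95% of the intervals will contain the true effect.
# The "true" effect, sd, and sample sizes are invented for illustration.
import math
import random

random.seed(1)

true_effect, sd, n, trials = 0.62, 1.0, 100, 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(true_effect, sd) for _ in range(n)]
    mean = sum(sample) / n
    se = sd / math.sqrt(n)  # standard error, treating sd as known
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    hits += lo <= true_effect <= hi
print(hits / trials)  # close to 0.95
```

The subtlety the footnote is dancing around: the guarantee is about the procedure over many repetitions, not about any one published interval, which either contains the true effect or doesn’t.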
This “magic” is called “The failsafe N,” and it’s a way Orwin (1983) proposed to account for unpublished studies that show no or the opposite effect. It’s actually a neat idea: You see, researchers have known for a while that journals tend to only publish papers with positive effects. To correct for that problem, studies like Hagger et al. (2010) have used “the failsafe N” to calculate the number of unpublished studies that would have to exist in scientists’ file drawers (this is also called “The File Drawer Number”) for the effect size to be skewed outside of the Confidence Interval. Neat huh? The catch is... it’s pretty much a crap technique (Becker, 2005; Ioannidis, 2008; Sutton, 2009; Ferguson and Heene, 2012) that has allowed decades of meta-analyses to proceed unchecked for publication bias and p-hacking (Sutton, 2009).
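For the curious, Orwin’s fail-safe N is simple arithmetic: the number of zero-effect studies needed to drag an observed mean effect down to some trivial criterion. This Python sketch uses made-up numbers (100 studies averaging d = 0.62 is purely illustrative, not from any actual meta-analysis).

```python
# Orwin's (1983) fail-safe N: how many unpublished zero-effect studies
# would it take to drag a meta-analytic average down to a trivial level?
# The numbers below are illustrative, not from any actual meta-analysis.
def failsafe_n(k, d_observed, d_criterion):
    """Zero-effect studies needed to pull the mean effect size
    from d_observed down to d_criterion, per Orwin (1983)."""
    return k * (d_observed - d_criterion) / d_criterion

# e.g. 100 published studies averaging d = 0.62: how many file-drawer
# nulls would shrink that average to a trivial d = 0.10?
print(round(failsafe_n(k=100, d_observed=0.62, d_criterion=0.10), 1))  # 520.0
```

A number like 520 sounds reassuringly huge, which is exactly the rhetorical move the critics object to: it assumes the file-drawer studies average zero rather than negative, and says nothing about p-hacking in the published ones.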
This is the tendency for smaller studies to report larger effect sizes. Gelman & Loken (2013) offer one of many possible explanations.
These reasons came from Hagger and Chatzisarantis (2014), which looked at Carter & McCullough (2014) and concluded: “one would expect the effect sizes in the literature to be randomly distributed in both positive and negative directions about zero. If this is the case, then where are those negative findings? There are scant few ego-depletion experiments that have found opposite effects, i.e., an improvement in second-task performance after engaging in an initial self-control task, let alone null effects.”
For those of you keeping score at home, what Hagger and Chatzisarantis (2014) said was essentially, “sure, maybe we’re wrong. But there’d have to be so many studies sitting in file drawers that it’s crazy unlikely we are.”
Remember those studies in file drawers that Hagger and Chatzisarantis (2014) said couldn’t exist? Carter et al. (2015) actually went and found some.
They created 8 data sets and used 7 statistical techniques on each one.
This RRR is only the third one ever.
Yes, the same Martin Hagger.
Yes, the same Michael Inzlicht.
Don’t be scared: “Embargo” just means the agreed upon publication date. It’s a PR thing.
The reason that the year on the reference is “2014” is because Holcombe and Hagger pre-registered the study on October 20th, 2014.
Gailliot et al. (2007) proposed that the resource being depleted was actually blood glucose available to the brain, but this has proven to be the weakest link in the Resource Model, and it was Carter’s failure to replicate the original study that led him and McCullough to write their 2014 paper.
Funnily enough, Stevo wrote about this problem with the glucose-fatigue link for his newsletter “UNSEEN DEGREES” back in 2013!
Darwin was cruising around on the HMS Beagle writing what would become On the Origin of Species 30 years before Freud was even born.
Adam Smith’s The Wealth of Nations, the book that inspired modern economics, was published 200 years before Baumeister et al. (1998).
Kuhn’s The Structure of Scientific Revolutions is actually where we get the term “paradigm shift.”
Yes, motivation is a feeling, not a fact.
Most SDT studies use the same handful of questionnaires to test for participants’ perception of their Basic Psychological Needs. This is clearly a weak link and we predict that the BREQ, the GCOS, BPNS, and the SDS will need a hard look with a pre-registered replication.
However, we might find after a hard, pre-registered look at the questionnaires that the constructs we were measuring were not what we thought we were measuring. We might have thought we were measuring autonomy only to discover we were measuring something closely related, but not the same. That might not seem like a big deal, but it would be a huge deal to SDT.
Qualitative Research: studies that have no hypothesis but rather are “the examination, analysis and interpretation of observations for the purpose of discovering underlying meanings” (Qualitative Research, n.d.; Overview).
Even for us, and there are more people with psychology graduate degrees at Habitry than at any other company in this space.
Our team is actually in the Facebook Group where all these scientists are hashing things out. Yes, it’s 2016 and even social psychology researchers use Facebook Groups.