Effective Self-Help

Research

Our work

Effective Self-Help conducts thorough reviews of scientific literature, alongside other forms of valuable evidence, to identify the most effective practices individuals can implement to improve different aspects of their wellbeing and productivity. 

As a new organisation, founded in October 2021 through a grant from the Effective Altruism Infrastructure Fund, we currently have limited capacity, so our research is not yet as comprehensive as we would like.

Our key reports up to now can be found below.

Our research process

An explanation of our research process, including the principles that direct our work, a step-by-step outline of how we build our reports, and a brief discussion of what we consider good evidence.

Principles

Our research process is motivated by a set of principles that we think are key to providing the best information possible.

Maximising impact

All of our work aims directly at doing the most good possible. Effective Self-Help is inspired by the effective altruism community and its rigour in figuring out the most impactful ways to improve the world.

Our aim is to bring the same attitude to self-help advice, focusing every aspect of our work on producing the most useful information and then presenting it in the most actionable ways possible.

Truth-seeking

Many existing self-help resources are based on individuals’ opinions or experiences, rather than a meticulous attempt to find out what is true. At Effective Self-Help, we try to adopt a scout mindset, aiming to understand things as they are rather than as we wish them to be.

When we’re uncertain, we use Fermi estimates and weighted models of our subjective impressions to get closer to the truth in a rigorous, falsifiable way. Wherever possible, we look to evaluate our work through empirical tests of impact, rather than based only on our subjective judgements.
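To make this concrete, here is a rough sketch of what a simple weighted model could look like in code. The factors, weights, and scores are purely hypothetical illustrations, not figures from our reports.

```python
# Hypothetical sketch of a weighted model: combine subjective 0-10 scores
# on several factors into a single weighted estimate. The factors and
# weights below are illustrative only, not taken from our research.

FACTOR_WEIGHTS = {
    "evidence_strength": 0.4,
    "expected_effect": 0.4,
    "ease_of_implementation": 0.2,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted average of subjective impressions for one intervention."""
    return sum(FACTOR_WEIGHTS[factor] * score for factor, score in scores.items())

# A quick Fermi-style estimate for a made-up intervention
print(round(weighted_score({
    "evidence_strength": 7,       # several meta-analyses, moderate quality
    "expected_effect": 6,         # medium effect sizes reported
    "ease_of_implementation": 9,  # cheap and quick to try
}), 1))  # -> 7.0
```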

Reasoning transparency

We try as much as possible to make it clear how we make our decisions and why. Through a strong commitment to reasoning transparency, we hope to provide information that is more trustworthy and easier to understand. Research should not be a black box.

Prioritisation

Many self-help articles provide lists of actions to take to improve a certain aspect of mental health and wellbeing without any discussion of which actions are most useful.

Some charities are more than 100x as effective as others. In a similar way, our research suggests that some self-help interventions are more than 10x as effective as others. All of our recommendations are ranked, helping you to concentrate on the actions that will bring the greatest benefit to your life.

How we build our reports

In the spirit of reasoning transparency, what follows is a quick step-by-step breakdown of how we carry out our research into new wellbeing or productivity topics.

  1. Define the outcome we’re looking to maximise (e.g. reducing stress; improving sleep quality).
  2. Conduct a search of high-quality scientific literature that provides an overview of the topic. We use academic search engines like Google Scholar, PubMed, and Elicit to find systematic reviews and meta-analyses with high numbers of citations from well-respected journals.
  3. Build a longlist of potentially high-impact interventions for the topic. We then apply the same process as above to review high-quality research into each intervention, using this to produce an evidence table.
  4. From this research, we rank our interventions by effect size, using the studies we’ve looked at to develop more subjective judgements of the risks, external benefits, and cost-effectiveness of each recommendation. Put together, these factors allow us to form our overall view of each intervention’s value (see the sketch after this list).
  5. At this point, we conduct a more general search of evidence on the topic, assessing a mix of popular and high-quality blogs, books, podcasts, and other resources. This forms the basis of our general discussion section in each report, and helps us to ensure we haven’t missed any obviously important interventions.
  6. With the above completed, we write up a research report on the topic, summarising our findings and directing people to the interventions that appear most worthwhile.
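As an illustration of step 4, the sketch below shows one way a ranking like this could be expressed in code. The interventions, effect sizes, and notes are invented placeholders rather than findings from our reports.

```python
# Illustrative sketch of step 4: rank a longlist of interventions by the
# effect sizes found in the literature. All values here are invented
# placeholders, not findings from our reports.

from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    effect_size_d: float  # Cohen's d from the studies reviewed
    notes: str = ""       # subjective judgements on risks, costs, external benefits

longlist = [
    Intervention("Intervention A", 0.30, "low cost, minimal risk"),
    Intervention("Intervention B", 0.65, "moderate cost"),
    Intervention("Intervention C", 0.15, "large external benefits"),
]

# Rank by effect size, largest first; the subjective factors are then
# weighed up qualitatively when forming an overall view of each intervention.
for item in sorted(longlist, key=lambda i: i.effect_size_d, reverse=True):
    print(f"{item.name}: d = {item.effect_size_d:.2f} ({item.notes})")
```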

What constitutes good evidence?

Systematic reviews and meta-analyses

Wherever possible, we concentrate our research on compiling evidence from systematic reviews and meta-analyses. These studies sit at the top of the evidence hierarchy, providing the most reliable evidence available on a given subject.

Both systematic reviews and meta-analyses have a number of key features that improve the quality of the evidence they present. These include:

  • Reviewing many studies (often dozens) at once, providing a much larger base of empirical evidence than individual trials.
  • A systematic process of selecting studies to review, minimising risks of bias in how the evidence is collated.
  • A high bar for evidence quality, with most systematic reviews and meta-analyses excluding studies that lack a control group, are not randomised, or have poorly defined methods for measuring outcomes.

Control groups and RCTs

A control group in an experiment is treated identically to the intervention group, apart from the one variable we are looking to study. By comparing a control group to an intervention group, we can have confidence that any effect we find is caused by the intervention we are testing, rather than by an external factor.

Randomised Controlled Trials (RCTs) allocate participants between the control and intervention groups at random. Random allocation avoids large biases in who is selected for each group, so the characteristics of participants in the two groups are similar on average.
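As a minimal illustration of random allocation (not code from any study we reviewed), the sketch below assigns a set of placeholder participants to the two groups at random.

```python
# Minimal sketch of random allocation in an RCT: each participant is assigned
# to the control or intervention group purely at random, so the two groups
# end up similar on average. The participant IDs are placeholders.

import random

participants = [f"participant_{i}" for i in range(1, 21)]

random.shuffle(participants)                  # random order removes selection bias
midpoint = len(participants) // 2
control_group = participants[:midpoint]       # does not receive the intervention
intervention_group = participants[midpoint:]  # receives the intervention being tested

print("Control:", control_group)
print("Intervention:", intervention_group)
```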

Effect size

We rank each intervention by the effect size (Cohen’s d) reported in the studies of its effectiveness that we reviewed. By using a consistent and well-validated measure, we can make accurate comparisons between interventions and draw conclusions about which recommendations offer the greatest value.

For an explanation of Cohen’s d and the value of effect sizes, we recommend this article.
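For readers who prefer code to prose, the sketch below shows how Cohen’s d is typically calculated: the difference between two group means divided by their pooled standard deviation. The outcome scores are made up for illustration.

```python
# Sketch of Cohen's d: the difference between the intervention and control
# group means, divided by the pooled standard deviation. Data are invented.

from statistics import mean, stdev

def cohens_d(intervention: list[float], control: list[float]) -> float:
    n1, n2 = len(intervention), len(control)
    s1, s2 = stdev(intervention), stdev(control)
    # Pooled standard deviation across the two groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(intervention) - mean(control)) / pooled_sd

# Hypothetical sleep-quality scores for an intervention and a control group
print(cohens_d([6.1, 7.2, 6.8, 7.5, 6.9], [5.4, 6.0, 5.8, 6.3, 5.9]))
```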