Three tools to help think clearly

Part of our job at a research lab is to clarify an ambiguous area and create a clear path to navigate it. To do this effectively, we need to dedicate significant time to thinking. I've found mental models, or tools for thinking, to be extremely useful in this process. I'd like to share three that I use quite often, in the hope that others might also find them beneficial.


Thought Exercise

While there are numerous well-known exercises, the one I've found most useful involves hypothetical scenarios, or "what ifs". The basic idea is to construct a hypothetical scenario that differs from current reality, then explore the problem within that alternative reality. Here are a few examples:

  • What if I could discard the current system and rebuild it from scratch? What would I do differently?
  • What if I could, hypothetically speaking, "fire" the entire current team, regain the headcount, and rehire one by one? Who would I want on the new team?
  • What if I were the leader of a particular team (say, the whole Meta company)? How would I make decisions differently?

Ninety-nine percent of the time, you won't derive directly actionable outcomes from these thought exercises, simply because the hypothetical scenarios are unlikely to occur. However, the process naturally helps you avoid common pitfalls.

Imagining that you're starting from scratch, whether in system building or team building, helps you avoid falling prey to the sunk cost fallacy and confirmation bias, since there are no past costs and no pre-existing points to prove.

Imagining yourself in a different role, responsible for decision-making, prompts the question, "What would I need to know in this role?" This, in turn, can reveal your blind spots, which can be a challenging realization. It can also foster a more empathetic view of the current state of affairs, acknowledging that not every problem has an immediate solution.


Socratic Questioning 

Socratic Questioning is a method designed to stimulate critical thinking by posing a series of probing questions about a particular belief. The goal is not to drive to a conclusion, but to surface what we know and don't know, and identify what is consistent and what is self-contradictory. It is particularly useful when dealing with an ambiguous research space, where we cannot assume we fully understand everything. A good starting point might be a chain of questions like:

  • What is the definition of X?
  • Is A an example or instantiation of X?
  • Is B not an example or instantiation of X?
  • Is C an attribute of X?
  • Is D not an attribute of X?

Based on the responses, you can follow up with more detailed questions to sharpen the picture. I've written many Socratic questioning docs to help myself gain clarity when starting new projects.

Socratic questioning is easy to practice alone since people aren't typically offended by their own questions. However, it can be more challenging when used with others, as it might sound provocative if a question challenges someone's fundamental beliefs. Despite this, having a few trusted peers who can withstand intense debate and regularly practice this method can be incredibly beneficial for strengthening ideas and fully understanding issues.


Causal vs. Statistical Thinking

Both are mental models employed to understand why certain events happened in the past or to make predictions about future outcomes.

  • Causal thinking asserts deterministic links between events. Simply put, A happened because of B, or doing C will cause D to happen. Here are some examples:
    • A project was successful because its leader was brilliant.
    • A system integration failed because a certain team failed to deliver a complex component.
  • Statistical thinking, on the other hand, treats the final outcome as a joint probability of all the events or components along the way. This model embraces uncertainty: A increases the likelihood of B happening, or doing C decreases the likelihood of D happening. Recast this way, the same examples read like this:
    • A strong leader greatly increased the chances of the project's success.
    • The complex component's failure greatly decreased the chance of a successful system integration.

When trying to comprehend an ambiguous, unsolved problem, I've found it more useful to employ statistical thinking instead of causal thinking. A few concrete examples:

  • Reduce the surprise factor: Having brilliant people working on things increases the chances of success, but they might still fail, and that's okay.
  • Maximize the chances by doing more: Instead of assuming that doing one thing well will lead to final success, we should also consider doing all the visible things that increase the chances of success.
  • Better understanding of monolithic vs. distributed designs and teams: A system with four independent components, each with an 80% chance of success, has only a roughly 41% chance of overall success. The same principle applies to teams: multiple teams working in parallel on the same project may each be likely to succeed individually, yet the chance that the overall project succeeds can be much lower, because every piece has to land. The sketch below works through the numbers.
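
To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original examples); the helper name joint_success is hypothetical, and treating the components as independent is an assumption.

    # Minimal sketch: joint probability of independent successes.
    # Assumes each component succeeds or fails independently of the others.
    def joint_success(probabilities):
        """Chance that every independent component succeeds."""
        result = 1.0
        for p in probabilities:
            result *= p
        return result

    # Four independent components, each with an 80% chance of success.
    print(joint_success([0.8, 0.8, 0.8, 0.8]))  # 0.4096, i.e. roughly 41%

    # A single monolithic component with a 70% chance of success still fares
    # better overall, even though 70% is lower than any individual 80%.
    print(joint_success([0.7]))  # 0.7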
