I love to seize opportunities.
For years, I was the type of person who disregarded risks and was interested only in the exciting new prospects life had to offer.
It was all about embracing the unknown.
Well, it was reckless.
I surfed the opportunity wave for a while, but soon enough, disasters occurred, and I was the one to blame.
Fortunately, these misadventures happened early in my career, and the damage was limited. Still, I knew I had to adjust my approach to assess risks and opportunities better.
I soon set out to master risk-analysis matrices, from SWOT to more complex tools. My team and I would spend – too much – time assessing impact severity and the likelihood that a particular risk would materialize.
We ignored “Low Impact/Low Probability” risks and scrutinized “High Impact/High Probability” ones.
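That triage rule can be sketched in a few lines of code. This is a hypothetical illustration only: the risk names, the 1–3 scores, and the thresholds are my own assumptions, not figures from any real project.

```python
# Illustrative sketch of impact/probability triage. Scores run 1 (low)
# to 3 (high); the example risks and cutoffs are invented for clarity.
risks = [
    {"name": "key supplier delay", "impact": 3, "probability": 3},
    {"name": "minor UI glitch",    "impact": 1, "probability": 1},
    {"name": "budget overrun",     "impact": 3, "probability": 2},
]

def triage(risk):
    """Bucket a risk by its impact and probability scores."""
    if risk["impact"] >= 3 and risk["probability"] >= 3:
        return "scrutinize"   # High Impact / High Probability
    if risk["impact"] <= 1 and risk["probability"] <= 1:
        return "ignore"       # Low Impact / Low Probability
    return "monitor"          # everything in between

for r in risks:
    print(r["name"], "->", triage(r))
```

The long "monitor" middle bucket is exactly where the approach broke down for us, as the problems below show.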
This approach had significant problems, though:
- First, assessing impact and probability always felt like guesswork, and we were often wrong. I also learned that nothing is static: these factors must be revisited over time.
- Second, all that guessing made it hard to prioritize which risks were worth acting on. We ended up generating rather long lists of risks to monitor.
- Third, and most importantly, the very language of risk management had a negative psychological effect on the team. Suddenly, the focus was on risks, not on opportunities anymore. We had a hard time finding the right balance between the two.
It took me some time to realize that this “risk management” language was getting in the way.
That’s when I started introducing a new paradigm in our team discussions. The focus would now be on discerning assumptions from facts. Then, once clear on these assumptions, we’d look into testing them.
It’s a critical notion, especially when you start a new project: you generally have more assumptions than facts. So, we made sure we’d identify them correctly.
Then the challenge was to categorize these assumptions correctly. We’d identify which assumptions were most critical to the project’s success and assess how much evidence we had for them. Tools can get quite sophisticated, but a simple 2×2 matrix often did the job: Anecdotal vs. Critical on one axis, No Evidence vs. Much Evidence on the other.
The assumptions that would impact our business most, and for which we had the least evidence, were the ones we’d test first.
One of the most powerful questions I remember asking myself was: “What would need to happen to prove this assumption wrong?” It helped reduce confirmation bias and keep a degree of objectivity.
Eventually, we could make more informed decisions based on enough evidence.
The key was to embrace the dynamic nature of our knowledge of the environment for a given project. The continuous assessment of facts vs. assumptions, their categorization, and testing helped us gradually gain more certainty and make the right decisions.
I still love this world of possibilities. And I’m embracing the unknown more than ever, but I’ve learned to help my team articulate assumptions and handle them effectively to maximize our chances of success.