I have a confession: I built a Monte Carlo simulation tool because I was tired of making terrible decisions with my own money. Well, technically the company's money, but when you're a co-founder, the distinction gets blurry.
It started with a question that had me lying awake at 2 AM: Should we spend €15,000* on a computer vision conference?
I'd recently devoured "How to Measure Anything" with obsessive enthusiasm. I was deep in my quantification phase, that period where you want to measure everything, including whether measuring everything is actually worth measuring. (It's also a weakness, but we'll get to that.)
The book planted two ideas in my brain: first, every gut feeling is actually a measurement, just a spectacularly bad one. Second, to get better estimates, you need to dismantle your assumptions using the Fermi method.
So instead of trusting my gut feeling of "this feels expensive but probably worth it," I decided to actually measure it.
On paper, it seemed straightforward. Previous conferences had generated 2-4 serious leads, roughly 10% converted to customers, and new customers were typically worth €100,000 in the first year. My trusty spreadsheet confidently proclaimed: 3 leads × 10% conversion × €100k value = €30,000 return on €15,000 investment. Clean math. Respectable ROI. Decision made.
Except my stomach was doing backflips about it.
The Problem with Point Estimates
Here's the thing about spreadsheets: they're confident liars. "3 leads," my Excel sheet declared with unwavering authority, as if the universe had signed a contract to deliver exactly that number.
But my brain knew better. Last conference we got exactly 1 lead. The one before that? 6 leads. Our conversion rate has bounced between 5% and 20% with all the predictability of a cat choosing where to nap.
Customer value? Some clients want a tiny proof-of-concept. Others decide they want to rebuild their entire computer vision pipeline and suddenly we're talking enterprise money.
The Fermi method was screaming at me to break this down:
- How many humans actually wander past our booth?
- What percentage aren't just there for the free stickers?
- Of those, how many can actually write checks?
- What forces determine whether someone wants a €30k project or a €200k one?
But Excel just stared back at me with its little rectangular cells, demanding single, confident numbers. It didn't want to hear about uncertainty.
I love probabilities because they capture uncertainty and make it quantifiable. The difference between being 60% confident and 90% confident isn't just academic; it's the difference between "let's do this" and "let's definitely do this."
Time to build something that spoke my language.
Building a Better Model
I decided to build a tool that could handle both the Fermi breakdown and the uncertainty. Instead of Excel's demand for "3 leads," I'd decompose it:
- Booth visitors: 200-400 people (based on previous foot traffic)
- Serious conversations: 15-25% of visitors (the rest are hunting for free USB sticks)
- Qualified leads: 30-50% of serious conversations (people who can actually make decisions)
- Total leads: somewhere between 1 and 8, with 3 as my best guess
For each variable, I'd model the uncertainty based on our actual experience. Deal size became its own breakdown:
- Small projects: €30k-60k ("Let's see if this AI thing works")
- Medium projects: €80k-120k ("OK, we're believers, let's scale this up")
- Large projects: €150k-250k ("We want ALL the computer vision")
The Monte Carlo simulation would run thousands of scenarios, each time rolling the dice within these ranges. But here's the key: not all outcomes are equally likely. Deal size might follow a log-normal distribution (most deals are small, but a few are huge), while conversion rates might be more normally distributed around our historical average.
Some runs would be disasters (1 lead, 5% conversion, tiny deal). Others would be unicorn scenarios (6 leads, 20% conversion, enterprise customers). Most would cluster in the middle.
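If you're curious what that looks like in practice, here's a stripped-down sketch in Python with numpy. The ranges come from the breakdowns above; the specific distribution choices (a triangular distribution for leads, a clipped normal for conversion, a weighted small/medium/large mixture standing in for the log-normal deal sizes) are simplified stand-ins I'm picking for illustration, not the tool's actual internals, so don't expect it to reproduce the exact figures below.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000       # simulated conferences
COST = 15_000    # what the conference costs us, in euros

# Leads: the funnel above (visitors x serious-conversation rate x qualified rate)
# boils down to "somewhere between 1 and 8, most likely 3", so sample that directly.
leads = np.rint(rng.triangular(left=1, mode=3, right=8, size=N)).astype(int)

# Conversion: roughly normal around our historical ~10%, clipped to the 5-20% we've seen.
conversion = np.clip(rng.normal(loc=0.10, scale=0.04, size=N), 0.05, 0.20)

# How many leads actually turn into customers in each simulated conference.
customers = rng.binomial(leads, conversion)

# Deal size: a small/medium/large mixture, weighted toward small deals -- a crude
# stand-in for the "log-normal-ish, most deals are small" shape described above.
def sample_deals(n_customers):
    tier = rng.choice(3, size=n_customers, p=[0.50, 0.35, 0.15])
    low = np.array([30_000, 80_000, 150_000])[tier]
    high = np.array([60_000, 120_000, 250_000])[tier]
    return float(rng.uniform(low, high).sum())

revenue = np.array([sample_deals(c) for c in customers])
returns = revenue - COST   # net outcome of each simulated conference
```

That's the whole trick: every input that was a single confident cell in Excel becomes a distribution you can be honest about.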
What I discovered made me question everything I thought I knew about conferences.
The Real Question
After running 10,000 simulated conferences, the results painted an interesting picture:
- 35% chance of losing money
- 40% chance of 2-3x return
- 25% chance of 5x+ return
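Buckets like these fall straight out of the simulated outcomes. Continuing the sketch above, they're just the revenues binned by multiple of the €15,000 investment (the exact bucket boundaries here are my simplification):

```python
# Express each simulated outcome as a multiple of the investment
# (e.g. €30k of revenue on a €15k booth = 2x).
multiple = revenue / COST

p_loss = np.mean(multiple < 1)
p_2_3x = np.mean((multiple >= 2) & (multiple <= 3))
p_5x   = np.mean(multiple >= 5)

print(f"lose money: {p_loss:.0%}   2-3x: {p_2_3x:.0%}   5x+: {p_5x:.0%}")
```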
But here's where my quantification obsession paid off. I expected lead generation to be the main uncertainty driver. After all, if nobody visits our booth, we're just paying €15,000 to stand around looking hopeful.
I was completely wrong.
Deal size was absolutely dominating everything. Whether we generated 2 leads or 5 leads barely moved the needle compared to whether those customers wanted €30,000 projects or €180,000 ones.
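You can see this kind of thing directly in the simulation output. A crude way, continuing the same sketch, is to correlate each sampled input with the simulated return; it's no substitute for a proper sensitivity analysis, but it makes the dominant lever obvious:

```python
# Only runs that landed at least one customer have a defined average deal size.
won = customers > 0
avg_deal = revenue[won] / customers[won]

inputs = {
    "leads": leads[won],
    "conversion rate": conversion[won],
    "average deal size": avg_deal,
}
for name, values in inputs.items():
    r = np.corrcoef(values, returns[won])[0, 1]
    print(f"{name:18s} correlation with return: {r:+.2f}")
```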
Suddenly my entire mental model flipped. Instead of asking "Should we spend €15,000?" I was asking "How do we become the kind of company that attracts larger deals?"
From Paralysis to Action
This revelation transformed our conference strategy. Instead of optimizing for lead quantity (bigger booth, flashier demos), we focused on lead quality.
We started targeting enterprise customers with large-scale computer vision challenges. We prepared materials showcasing our work with automotive Tier 1 suppliers rather than smaller tech companies.
Did it work? We generated exactly 3 leads and converted only 1 into a customer, but that one customer signed a €180,000 deal instead of our usual €60,000. The tool didn't just help me make a better decision. It helped me realize I was asking the wrong question entirely.
The Meta-Lesson
Building decision-making tools for yourself turns out to be oddly addictive. The Fermi method forces you to be honest about what you actually know versus what you're assuming. Monte Carlo simulations then show you what happens when those assumptions vary.
Instead of being paralyzed by uncertainty, you start dancing with it. You begin asking questions that actually move needles instead of just satisfying your need to feel decisive. You stop pretending you know things you don't.
Sometimes the answers are still beautifully uncertain. That's life refusing to be solved. But at least you know which uncertainties actually matter.
Now I use this tool for every decision that involves more than choosing lunch: Should we hire a contractor or employee? Is this marketing spend worth it? What's our realistic runway given different growth scenarios? Each time, I'm surprised by what matters most.
The funny thing is, I thought I was building a tool to help me make better decisions. Turns out I was building a tool to help me systematically question my assumptions.
And honestly? That might be the most valuable thing I've ever coded.
You can try the Monte Carlo decision tool I built here. Fair warning: it might change how you think about uncertainty.
*All numbers in this post are completely made up to protect the innocent (and our competitive secrets). But the methodology? That's 100% real, and embarrassingly effective.