Writing

Thoughts on AI, decision-making, and technology

The Reasoning Illusion: Can LLMs Actually Think? (Part 1)

AI machine-learning intelligence LLM research

The Next Token Prediction Game – At their core, large language models (LLMs) are pattern-matching systems. They predict the next word in a sequence based on statistical patterns learned from billions of text examples. There's no explicit module for logic or mathematics; everything emerges from a single mechanism: analyze input, compute probabilities for possible next tokens, and select the most likely one.

Read more →
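The single mechanism described above can be sketched in a few lines. This is a toy illustration, not an actual LLM: `toy_probs` stands in for a model's forward pass, and the hard-coded table and the `generate` helper are hypothetical names invented here.

```python
# A minimal sketch of greedy next-token selection, assuming a toy
# "model" that maps a context string to a next-token distribution.
# toy_probs and generate are illustrative stand-ins, not a real API.

def toy_probs(context: str) -> dict[str, float]:
    """Stand-in for an LLM forward pass: context -> token probabilities."""
    table = {
        "the cat": {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
        "the cat sat": {"on": 0.8, "down": 0.2},
        "the cat sat on": {"the": 0.9, "a": 0.1},
    }
    return table.get(context, {"<eos>": 1.0})

def generate(context: str, max_tokens: int = 5) -> str:
    for _ in range(max_tokens):
        probs = toy_probs(context)
        token = max(probs, key=probs.get)  # select the most likely token
        if token == "<eos>":
            break
        context += " " + token
    return context

print(generate("the cat"))  # -> "the cat sat on the"
```

Real models sample from the distribution (temperature, top-k, nucleus sampling) rather than always taking the argmax, but the loop is the same: score, pick, append, repeat.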

The Intelligence Measurement Problem: Are LLMs Statistical Parrots or Emerging Scientists?

AI machine-learning intelligence computer-vision research

Two wildly different AI takes appeared on my feed within minutes of each other. As someone working in computer vision and physics, I've seen LLMs fake competence brilliantly—but also watched them crumble when I actually know the domain. The real puzzle? How do you measure intelligence when you might need a more intelligent system to do the judging?

Read more →

How I Learned to Love Uncertainty

decision-making monte-carlo tools uncertainty

I built a Monte Carlo simulation tool because I was tired of making terrible decisions with my own money. Here's what I learned about uncertainty, assumptions, and asking better questions.

Read more →