Halfwrong
The biggest shifts in technology have always surprised us. Not because the signs weren’t there, but because it’s hard to imagine what happens after something changes everything.
When Sandeep and I started thinking about Halfwrong, it wasn’t about predicting what AGI would look like. It was about the aftermath. What the world might feel like when AGI isn’t just some distant possibility but part of the everyday. How do we build for that?
We kept coming back to this idea that the most interesting products in the post-AGI world won’t be about AI itself. They’ll be about the cracks it leaves behind, the weird gaps, the new problems, the human stuff AI can’t quite touch.
Thing is, nobody knows what that world actually looks like. And anyone who claims to is probably half-wrong.
So instead of guessing, we decided to build our guesses. Halfwrong is our sandbox. It’s where we take wild hunches about a post-AGI world and turn them into actual products. Some might flop. Some might reveal something deeper. Either way, each one’s an experiment.
Our first project? We asked a simple question: what’s still hard for AI? And started from there.
Because that’s the fun part. Not waiting for AGI to “arrive” but exploring the weird messy space between now and then.
If you’re into that kind of thing (future-leaning, sometimes speculative, sometimes surprisingly useful), stick around. We’ll keep building, breaking, and seeing what’s possible.
After all, when it comes to the future, being half-wrong is probably the closest anyone ever gets to being right.
February 25, 2025
Experiments: findthatessay.com