The Question is Everything

January 29, 2026 | By Luis Sanchez

On AGI, the bubble, and why asking beats answering in a world of cheap intelligence.

The Inversion

For most of human history, answers were expensive. You needed years of training to solve differential equations, mass capital to sequence a genome, deep expertise to write production code. The person with answers held power. Schools tested answers. Interviews tested answers. Promotions rewarded answers. We built entire economies around the scarcity of people who could produce them.

That's over now.

In 2026, answers are approaching commodity pricing. Claude writes your code. GPT drafts your legal brief. Agents ship your MVP while you sleep. The cost curve for generating correct, useful answers has collapsed so fast that we haven't updated our intuitions. We're still optimizing for a game that no longer exists.

Here's what I keep coming back to:

In a world where the cost of answers is nearing zero, the value of the question becomes everything.

The scarce resource flipped. And most of us are still trained for the old scarcity.

What This Looks Like Right Now

I spend my days watching AI coding agents do things that would have taken me hours. Cursor, Claude Code, Devin-style systems: they're not perfect, but they're good enough. The answer layer is automating. The delta between "I can code" and "I can ship" is compressing toward zero.

Everyone's asking if software engineers are being replaced. Wrong question. The job isn't disappearing. It's migrating upstream. The tasks that involved translating known requirements into working code? Those are evaporating. What remains is the part that was always actually hard: figuring out what to build, why it matters, and what tradeoffs to accept.

The bubble follows the same pattern. Capital is flooding into answers: foundation models, compute clusters, inference optimization. Billions chasing the ability to generate more answers, faster, cheaper. But the alpha isn't there. The alpha is in questions: What problem actually benefits from AI? What wedge creates a moat? What user pain is severe enough to pay for?

I've watched this movie before. At Zigsaw, we built cool technology. We won hackathons. We had "answers" nobody else had. But we kept pivoting, because pivots are just confessions that you asked the wrong question. The market didn't care about our answers until we found the right question: who has budget, urgency, and a problem that our specific answers solve better than alternatives?

The Anatomy of a Good Question

Not all questions are equal. The ones that matter have specific properties:

  • Surface area. Good questions open territory. They create space for exploration, multiple valid answers, productive disagreement. Bad questions close things down, lead to a single right answer, end conversations.
  • Expensive inputs. Good questions require things AI can't cheaply generate: lived experience, domain taste, exposure to real pain points, pattern recognition across fields that don't usually talk to each other.
  • Upstream leverage. A good question, answered well, makes dozens of downstream decisions easier. A bad question burns cycles on things that don't matter.

Consider two questions that look similar:

"How do I add AI to my product?" Cheap question. Everyone's asking it. The answers are commoditized. You'll end up with a chatbot that doesn't differentiate you.

"What problem does my user have that AI solves better than any existing alternative, and why hasn't someone already built it?" Expensive question. Requires customer empathy, competitive awareness, technical taste, and honesty about your own capabilities.

The crypto bubble was a question failure. The entire industry asked "how do we decentralize everything?" without seriously asking "what actually benefits from decentralization?" Billions burned because the upstream question was wrong. The answers were technically correct (yes, you can put a database on a blockchain) but the question was broken.

AGI discourse has the same problem. Everyone asks "when will it be smarter than us?" or "how do we align it?" Almost nobody asks "what do we actually want to use general intelligence for?" We're optimizing for capability benchmarks while being confused about purpose.

What "Real Engineering" Means Now

I spent months building an operating system from scratch. Kernel threads, syscalls, virtual memory, file systems: the whole stack. It was painful and slow and I loved it. Not because I'd ever ship a production OS, but because it taught me where costs hide.

That knowledge is a question-generating asset. When I see an architecture, I can ask: where's the latency? What's the memory story? What happens under contention? These aren't questions Claude can ask for me, because they require intuition built from watching systems fail in specific ways.

AI agents flatten the answer curve. Anyone can ship code now. But shipping the right code requires judgment that doesn't come from language models. It comes from scar tissue. From debugging a race condition at 3am. From watching a "simple" feature balloon into a maintenance nightmare. From understanding that every abstraction is a bet, and some bets lose.

The "real engineer" in 2026 isn't defined by typing speed or syntax knowledge. It's defined by:

  • The questions you refuse to let the agent skip
  • The constraints you impose because you know what happens without them
  • The problems you decide are worth solving in the first place

The "10x engineer" of the past was a 10x answer-generator. The 10x engineer of the future is a 10x question-asker. The leverage moved.

The Personal Reckoning

I'll be honest: this shift makes me anxious. I've spent years getting good at producing answers. School rewarded it. Internships rewarded it. The job market rewarded it. Now I'm watching the thing I optimized for get commoditized in real-time.

My finances are tied up in bets about where this goes. Some crypto, some tech equity, some belief that the skill of building still matters. Every week I question those bets. Not because I think AI will replace me tomorrow, but because I'm not sure I'm asking the right questions about my own future.

The honest version: I'm a builder in a moment when building is cheap. What's my value? I think it's the questions I can ask, shaped by weird experiences, specific pain points, combinations of context that don't exist in training data. But I'm not certain. Nobody is.

What I do know is that doubling down on answer-production is the wrong move. The returns there are diminishing. The returns on question-generation are compounding.

The Uncomfortable Implication

Here's what I don't see people talking about enough: most of us, including most engineers, were trained to answer, not ask. The entire educational system is answer-optimization. Tests have right answers. Homework has right answers. Even "critical thinking" curricula usually mean "arrive at the answer we expect through a slightly longer path."

Interviews are answer-production rituals. Leetcode is answer-production. System design rounds are answer-production with extra steps. We've built hiring pipelines that select for the skill that's being commoditized fastest.

If questions are now the scarce resource, most of us are optimized for the wrong game. And the retraining isn't technical. You can't take a course on "asking better questions." It's epistemic. It's about cultivating curiosity, comfort with ambiguity, willingness to say "wait, why are we doing this?" in rooms where everyone else is busy answering.

That's a different skill. It's slower to develop. It looks like inefficiency from the outside. It requires the courage to pause when everyone else is shipping.

So What Do You Do?

I don't have a framework. I have hunches:

  • Talk to users more than you talk to models. AI can synthesize what's in training data. It can't tell you what the person across the table is frustrated about, or what they'd pay to make go away.
  • Study adjacent fields. The best questions come from collision points, where your weird specific knowledge meets a domain that's never seen it before. Breadth generates questions. Depth generates answers.
  • Be suspicious of your own answers. They're cheap now. If you arrived at it quickly, so can everyone else. The valuable stuff lives in the questions that are hard to even formulate.
  • Build taste by building. Ship things. Watch them fail. Develop intuition for why. That scar tissue becomes the question-asking substrate that models can't replicate.
  • Protect your uncertainty. The pressure to have answers is immense. Resist it. The person who says "I'm not sure that's the right question" is often more valuable than the person who confidently answers the wrong one.

The moat, if there is one, is in the questions only you can ask. The ones shaped by your specific experiences, your particular failures, your unique vantage point. AI can answer any question you pose. It can't pose the questions that haven't occurred to it, the ones that require your exact configuration of context and confusion.

That's the asset now. Not what you know. What you wonder.