The Voice Unlock

April 27, 2026

We still mostly type at the model. A whole lifetime of writing emails and Slack messages and texts has wired us to clean up our thoughts before sending them anywhere — structure, edit, make them presentable for the human on the other end. But the model isn't a human, and it's actually better at handling mess than polish.

I read about an engineer who broke his arm and had to redo his whole workflow around voice prompting because he couldn't type. Forced experiment, basically. Turned out it didn't just work as a workaround — it unlocked a level of productivity he hadn't found while typing.

That stuck with me because it kind of inverts the assumption. We treat voice as a fallback. Slower, less precise, worse than typing — at least that's the instinct. But for working with AI it might actually be the better modality. You spew out the full mess of what's in your head, half-formed thoughts, tangents, the part where you change your mind mid-sentence, and the model handles the structuring. Counterintuitively, the more raw context you give it, the better the result. The unstructured dump is the feature, not the bug.

Spew it all out next time. Have the model distill and parrot it back to you before you proceed. You save time on the typing, you probably gain some clarity on what you were actually trying to say, and the answer you get back is better because the model has more to work with.
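If you want to wire this into a script, here's a minimal sketch of the distill-and-play-back step. The `distill_prompt` helper and the prompt wording are my own illustration, not anything from the post — the point is just to show what "have the model parrot it back before you proceed" looks like as a prompt:

```python
# Sketch: wrap a raw, messy voice transcript in a prompt that asks the
# model to distill it and play it back before doing any actual work.
# The helper name and prompt text are illustrative assumptions.

def distill_prompt(raw_transcript: str) -> str:
    """Build a prompt asking the model to restate a messy dump clearly."""
    return (
        "Below is an unedited voice transcript: half-formed thoughts, "
        "tangents, mid-sentence reversals. Before doing anything else, "
        "distill it and play back what you think I'm actually asking for.\n\n"
        f"Transcript:\n{raw_transcript}"
    )

# A deliberately messy dump, exactly as spoken — no cleanup.
raw = (
    "okay so I want the report thing, no wait, first the data cleanup, "
    "the CSV import keeps choking on dates, actually maybe both"
)
print(distill_prompt(raw))
```

Whatever model or API you then send that prompt to is up to you; the unstructured transcript goes in verbatim, and the structuring is the model's job.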

And I think this is just the start. Voice is one new door. Meta's glasses, OpenAI's audio devices, those always-listening table pucks, Optimus robots — all of it is about embedding the model deeper into daily life, in modalities that have nothing to do with sitting at a keyboard. Each one expands the surface area for how we communicate with these systems, and how productive we can be alongside them.