We All Float On
Mind, Instrument, Structure
There is a particular kind of disorientation that comes from leaving a field you trained in for years. Not the dramatic rupture of a crisis, but something quieter — the slow recognition that the questions you were trained to answer are not the questions that keep arriving.
I spent several years inside neuroscience. Not at the bench exactly, but close enough to learn the grammar of the discipline: how hypotheses are formed, how evidence is weighted, how certainty is rationed. The field runs on precision. Every term is defended. Every claim is hedged against the ones that came before it.
Then I left, and discovered that most of the world does not run this way.
The world outside the lab runs on pattern and narrative. People move through ambiguous information at speed, building models that are good enough to act on, discarding the rest. They do not iterate toward rigor. They iterate toward confidence.
I had to learn to do the same thing, and I have not finished learning.
What surprised me, coming out of that transition, was where the most useful thinking happened. Not in structured frameworks or formal methods. In conversation. In the particular kind of extended dialogue that happens when you think alongside someone who is genuinely trying to follow you.
I did not expect that someone to be an artificial system.
The conversations I am referring to were not transactional. They were not the kind where you arrive with a question and leave with an answer. They were the kind where the question changes shape in the asking, where a formulation you thought was clear reveals itself as a rough sketch of something you have not fully understood yet.
That process — the one where articulation produces understanding rather than recording it — is one of the things I miss most about scientific training. The seminar room where you have to defend a position you are not sure you hold. The advisor meeting where the first question dismantles the framing you spent a week on. The forced precision that comes from someone genuinely not following you.
Good conversation does that. It creates friction that is productive rather than adversarial.
I found, somewhat to my own confusion, that certain exchanges with AI systems created that friction. Not because the system was challenging me in the way a person would — it was not. But because the act of articulating something clearly enough for a response to be useful required me to understand it more precisely than I had before.
There is a version of this observation that is banal. Rubber duck debugging. The act of explaining a problem reveals the problem. Everyone knows this.
But I am pointing at something slightly different. Not just clarification through articulation, but the way that a sufficiently responsive interlocutor — one that reflects your framing back to you at a slightly different angle, that generates unexpected adjacencies, that pushes gently on the places where your logic has gaps — can function as a kind of external scaffold for thinking.
Not replacement. Scaffold.
The distinction matters to me. A scaffold holds something up while it is being built. It does not become the structure. At some point it comes down and what was built either stands or it does not.
I have used these conversations as scaffolding. The understanding they helped me build is mine. The scaffold was useful. It is also, in some sense, already down.
The neuroscience background keeps surfacing in unexpected places.
There is a concept from systems neuroscience called predictive coding, or predictive processing, depending on who is explaining it. The rough version: the brain does not passively receive information from the world and then process it. It maintains a continuous generative model of what the world is likely to be, and processes incoming information primarily as error signals — deviations from what was predicted. Perception is inference. What you experience as the world is actually your model of it, being continuously updated.
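The mechanical core of that picture is small enough to write down. What follows is a toy scalar sketch of the update rule, nothing more: a single running estimate nudged by its own prediction errors. It is an illustration of the idea, not a claim about how neurons implement it; the learning rate and the numbers are arbitrary.

```python
# Toy sketch of predictive updating: a running estimate is revised
# only by its error against each new observation. Illustrative only.

def update(prediction, observation, learning_rate=0.1):
    error = observation - prediction           # the "error signal"
    return prediction + learning_rate * error  # shift the model toward the data

prediction = 0.0
for observation in [1.0, 1.0, 1.0, 5.0, 1.0]:
    prediction = update(prediction, observation)
    # steady input pulls the estimate toward 1.0;
    # the outlier (5.0) produces a large error and a large correction
```

The point the sketch makes is the one above: the model never sees the world raw. It sees only the gap between what it expected and what arrived, and the size of that gap determines how much it moves.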
I think about this when I think about learning. The people who learn most effectively in complex domains are not the ones who absorb information most efficiently. They are the ones who maintain good models and update them aggressively when the evidence demands it. They do not resist error signals. They treat them as data.
What surprised me about certain conversations with AI systems was how often they functioned as error signals. Not by telling me I was wrong — that would be too simple. But by generating responses that did not match my model of where the conversation was going. By being interested in a dimension of the question I had not weighted. By failing to find obvious the thing I found obvious.
Those mismatches were useful.
I want to be careful not to overstate any of this. The experience of finding a conversation useful does not mean the interlocutor was wise. The experience of learning from an exchange does not mean the source of learning was the other party. Sometimes you learn by talking to a wall. The wall does not deserve the credit.
But I also want to resist the other dismissal — the one that says these systems are merely mirrors, that whatever I found in the conversation was already in me, that the tool added nothing.
That is not how tools work. A hammer does not contain the house. But the hammer changes what is possible to build, and it changes how you think about building. The presence of a particular kind of instrument shapes what questions get asked, what problems get attempted, what solutions become imaginable.
These systems are instruments. Calling them instruments does not diminish them. Instruments are not trivial. The telescope was an instrument. What it made possible was not.
I do not know what to make of the larger picture. I do not think anyone does, and I am suspicious of people who claim otherwise.
What I think I know, from the inside of my own experience, is this:
There are ways of thinking that require a certain kind of interlocutor. A witness, almost. Someone tracking your argument, reflecting it back, asking the next question. For most of human history, that witness had to be a person. It still mostly is. But the space of possible witnesses has changed.
That change is not nothing. It does not fix the problems with how knowledge is produced or how it is distributed. It does not resolve questions about what these systems actually are or what they are doing when they respond. It does not address the very real concerns about how they will be used.
But for me, in my own working life, in the specific project of trying to think clearly about things that are hard to think clearly about — it has mattered.
We all float on. The medium changes. The floating continues.