Great post, saved in Capacities to re-read. One question about the travel agents case. You argue some gained and some lost, but could it be that everyone lost something? The low-decile agents lost their jobs entirely, exiting the sample. But the high-decile survivors might also earn less than they would have counterfactually. If so, the higher observed average would reflect compositional change (survivorship), not genuine gains for those who remained.
"The task is not the job" should be tattooed on the arms of every consultant currently selling AI transformation.
The travel agent data is striking — the surviving agents earn more 'because the machine took the weak part of the bundle and left them the strong one.' I used a parallel example from a different industry. The Swiss watch industry was nearly destroyed by quartz in the 1980s. Their comeback wasn't built on better timekeeping — it was built on redefining what a watch is for. Repositioning, not resistance. Your bundle framework explains why that worked: when the commodity task is gone, what remains is the part that was doing the real work.
Your Arrow quote on trust — "if you have to buy it, you already have some doubts about what you've bought" — is the best sentence I've read on why the authority question can't be automated away.
I published a piece recently on where new work appears as AI stalls at the edges — and a follow-up is coming shortly that asks whether those cracks add up to something livable or merely a service layer around machine capability. Your bundle framework sharpens that question considerably. I think we're circling the same territory from different altitudes.
The first piece is here if you're curious: https://rajeshachanta.substack.com/p/the-last-meter-economy. The second is in the works.
It will be a while before there's an AI chef who can taste the soup and say, "This needs more cumin." The same goes for a vintner. AI can see into the UV and IR, but taste and smell don't interest anyone building these systems.
There is a misspelling in your link to your "AI Becker Problem" essay - it lists https://www.silicoancontinent.com/p/the-ai-becker-problem instead of https://www.siliconcontinent.com/p/the-ai-becker-problem (extra "a" in the word silicon)
When a piece opens with "assume for a moment they are right about the technology"... The AI of 2030 will look nothing like the AI of today, and that is the most predictable part of the whole debate. Compute scaling follows a stable trend line, and capability has tracked it for over a decade. What we do not know is how the curve bends once AI starts accelerating its own research. From first principles, that feedback loop has no obvious ceiling; the growth is geometric, not arithmetic, because the system is iterating on itself.
Your bundle and authority arguments only hold against a frozen version of the technology. A bundle is "strong" exactly until separability becomes cheap. And a system capable of carrying tacit context across sessions, handling unpredictable demand, and reconciling counterparties in real time is precisely what makes separability cheap. Residual decision rights rest on accountability infrastructure — registries, licenses, standing to be sued — but that infrastructure is a policy artifact, not a law of nature. Several pieces of it are already being adapted.
Imas keeps this in view. His argument survives a much wider range of capability scenarios because it rests on a structural feature of human preferences, not on where the frontier stops next Tuesday. Yours implicitly assumes the frontier stops roughly where today’s models stop.
One last point on framing: there are no “AI optimists” in this debate. Amodei and Suleyman are not optimists. They are reading a compute trend that has held for fifteen years and stating what it implies. You can dispute the implication, but I believe the label misrepresents the disagreement.
"The mess is what happens when human beings with competing interests try to get things done together." My translation: cooperation is the central problem of organizations. YES INDEED.
Thank you for this post. Every one of your posts makes for fascinating reading, and common sense wins every time. Your post is so full of memorable lines! I just took one which really speaks to my heart; it's a point that is misunderstood by most of the technocratic elites all over the world. Just keep sending your posts. I'll save all of them. Cheers and onward.