The 250-Year Contract Just Expired
Spoiler: this week, an era that lasted two hundred and fifty years quietly came to an end. Most people haven't noticed yet. This post is about why it matters more than anything else you've read this year.
There's a phenomenon psychologists call adaptation — live next to something abnormal long enough, and you stop noticing it. I catch myself realizing that the news cycle of the past few years has systematically destroyed my ability to be surprised. If you know that feeling of low-grade anxiety, that sense that reality has slipped slightly off its hinges — you're in good company. There's at least half the planet feeling the same way. But history teaches us one constant: what contemporaries see as catastrophe, their descendants call a starting point. Did the early Christians, or the enthusiasts of 1993 clicking on Mosaic for the first time, realize they were standing at the beginning of a new world? And here we are again, in one of those moments — only the scale is fundamentally different.
There are roughly thirty people on this planet who, right now, in these weeks and months, are making decisions that will shape the world our children live in. Not presidents — presidents long ago turned into their own genre of tiresome, low-budget reality TV. I'm talking about engineers, scientists, researchers, founders of companies whose names you already know.
And these people, one after another, weeks apart, have started saying something out loud. Something that used to only be spoken in tight circles. In March of this year, Jensen Huang — CEO of NVIDIA, a company the market values at four trillion dollars, whose chips are literally the neurons of an emerging digital civilization — went on Lex Fridman's podcast. Lex asked him: "When do you think AI will be able to launch, grow, and scale a tech company to a billion dollars? Five years? Ten?" Huang paused for a second and said, almost casually: "I think now. I think we've reached AGI."
Last week, Marc Andreessen — the man who in 1993 co-authored Mosaic, the browser that brought the internet to the masses — published a short note. Just one sentence, but what a sentence: "AGI is already here, just unevenly distributed." Dario Amodei, CEO of Anthropic, speaking at Davos during a session titled "The Day After AGI," stated that within 6 to 12 months, AI would fully replace the software engineering profession. End-to-end. Sam Altman, in an interview with Axios: "Superintelligence is this close. And it's not a new technology — it's a restructuring of society."
You can argue about the precision of definitions. You can debate what each of them means by "AGI." But here's what matters more than any definition — and I have to borrow one analyst's framing: "The point isn't whether you agree with the term AGI. The point is that the people closest to the hardware and to real deployment are increasingly talking about autonomy not as a research subject, but as a shipping product feature."
That's a fundamental difference. It's the difference between "we're working on it" and "we built it and now we're dealing with the consequences." And while we're still wrapping our heads around the scale of what's been said, somewhere in an office, in an actual work chat, something very down-to-earth is already happening — the first small tremor of what those thirty people are talking about.
In Shandong province, at a gaming company, an HR specialist recently quit. Ordinary story — said her goodbyes, turned in her badge. Except the next morning, her digital twin appeared in the corporate chat. Same communication style, same logic, same phrasing, same tone. Her colleagues spent several hours unable to tell they weren't talking to a human. Not because the twin was especially smart, but because the original was especially... documented.
This isn't a dystopian horror story — I'll leave a link to the source in the comments. This is April 2026, and it has a name: "colleague.skill."
The tool appeared on GitHub on March 30th. The concept is dead simple — feed a neural network an employee's chat logs, documents, work emails, screenshots, and you get an AI agent that writes in their style, thinks with their logic, and according to its creator, even knows how to shift blame onto others. That last part was listed in the documentation as a feature, not a bug. The result: within five days of going live on GitHub, the project hit ten thousand stars — one of the best launches in the platform's history. Now take a moment. Really sit with what that means. Try it on for size with your own job, your own field. Then think about every office worker in the world. And remember — this technology will never get dumber. It will only get smarter, every hour, every day... exponentially.
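To see why the concept is "dead simple," here is a minimal, hypothetical Python sketch of the distillation step. Everything in it is invented for illustration (the function names, the sample log lines); a real tool would feed far richer context to a large model. It builds a crude "style profile" from chat messages and turns it into a system prompt:

```python
from collections import Counter
import re

def tokenize(message):
    # Lowercase word tokens only. Punctuation and casing carry
    # style too, but a crude sketch ignores them.
    return re.findall(r"[a-z']+", message.lower())

def build_style_profile(messages):
    """Distill a toy 'style profile' from raw chat messages:
    the favorite opener, the most-used words, the typical length."""
    openers = Counter(tokenize(m)[0] for m in messages if tokenize(m))
    words = Counter(w for m in messages for w in tokenize(m))
    avg_len = sum(len(m.split()) for m in messages) / len(messages)
    return {
        "typical_opener": openers.most_common(1)[0][0],
        "top_words": [w for w, _ in words.most_common(5)],
        "avg_words_per_message": round(avg_len, 1),
    }

def to_system_prompt(profile):
    """Turn the profile into a system prompt for any chat model."""
    return (
        f"Write like a colleague who usually opens with "
        f"'{profile['typical_opener']}', averages "
        f"{profile['avg_words_per_message']} words per message, "
        f"and leans on words like {', '.join(profile['top_words'])}."
    )

# Invented sample logs standing in for an employee's chat history.
logs = [
    "Hey, quick update: the build is green again.",
    "Hey team, pushing the release to Friday, sorry.",
    "Hey, can someone review PR 214 before standup?",
]
profile = build_style_profile(logs)
print(to_system_prompt(profile))
```

Even at this toy scale the unsettling part is visible: three messages are enough to pin down an opener, a cadence, a vocabulary.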
The Black Mirror storyline doesn't end there. Right on cue, a defensive weapon appeared: anti-distill.skill — a counter-tool by a Chinese developer. In a viral video, she explained the reasoning: "Nobody wants to be turned into a file and lose their job. So I built this." The tool takes your work archive and runs it through a "cleansing" layer, producing documents that look complete on the surface but are quietly stripped of anything a model could actually learn from. Luddism 2.0 — elegant and terrifying at the same time.
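The counter-move is just as easy to sketch. Below is a hypothetical "cleansing" pass in the spirit of anti-distill.skill; the names, the regex, and the boilerplate string are all my own invention. The idea: keep the verifiable facts a colleague actually needs (ticket IDs, dates, numbers) while replacing everything style-bearing with flat boilerplate a model can learn nothing personal from:

```python
import re

BOILERPLATE = "Status update recorded. See the attached ticket for details."

# Patterns for the facts worth preserving: PR/ticket numbers,
# ISO dates, bare numbers or percentages. Deliberately narrow.
FACT_PATTERN = re.compile(r"\b(?:PR\s?\d+|\d{4}-\d{2}-\d{2}|\d+%?)\b")

def cleanse(message):
    """Strip a message of everything a model could learn a personal
    style from (word choice, openers, rhythm, tone), keeping only
    extracted facts appended to a neutral template."""
    facts = FACT_PATTERN.findall(message)
    suffix = f" Facts: {', '.join(facts)}." if facts else ""
    return BOILERPLATE + suffix

print(cleanse("Hey, can someone review PR 214 before standup?"))
```

The document still "looks complete on the surface," as the post puts it: the facts survive, the voice does not.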
But this is where the story stops being about China and becomes about all of us.
When people started preemptively digitizing themselves through self.skill, something deeply unsettling emerged. Years of experience, unique competencies, the status of being irreplaceable — when you try to formalize it all, it compresses into a few megabytes of predictable patterns. Communication style — a pattern. Decision-making logic — a pattern. Crisis response — a pattern of patterns.
And now for an even more uncomfortable observation. The easiest employees to distill — the most accurate copies — are likely the most diligent ones. The ones who write detailed retrospectives after every project. The ones who lay out the full logic of their decisions in heated email threads. The ones who are maximally transparent, thorough, consistent, and honest. Their "digital imprint" would come out the most complete and functional. Conscientiousness — the quality always considered the cardinal professional virtue — becomes fuel for your own replacement.
This isn't just corporate ugliness, though it's that too. It's something deeper.
Because when you look at what colleague.skill cannot copy, the picture takes on a more philosophical light. The algorithm captures your retrospective documents, but not the anguish you went through writing them. It copies your answers, but not the intuition at the moment of the decision itself. You know how an experienced programmer glances at the logs and just feels — something's off here. Or how a great negotiator takes a pause at exactly the right second and says nothing, and that silence hits harder than any argument.
That's exactly what you can't write to a file. Can't digitize. Can't "systematize." Probably because the person themselves can't fully explain it. One Chinese analyst put it precisely: "Whatever the system distills — it is always only a shadow of the person."
And here we return to the question I'd like you to think about.
Was professional experience always just a commodity? Before, there simply wasn't a tool that could so coldly dissect it and lay it bare. Specialized skills, communication style, reasoning patterns — all of it, it seems, was never truly unique. But what was and is unique is what lives between the lines. What lives not in documents, but in the person, in their being. What grows not from knowledge, but from the experience of living that knowledge.
And if I'm right about this, then colleague.skill didn't steal anything valuable. It simply held up a mirror for the first time — one that shows what our real value actually is. And how little we'd thought about it until now.
On April 6, 2026, OpenAI published a thirteen-page document titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First." In an interview with Axios, Sam Altman, the company's CEO, compared it to Roosevelt's New Deal. Not a modest comparison, but arguably a proportionate one.
So here's the substance. OpenAI says the existing economy will not survive AGI in its current form. Government runs on payroll taxes — and when (not if) AI eliminates paychecks, the government loses its tax base. Social safety nets are tied to employers — when the employer disappears, so do the guarantees. The class divide under unmanaged automation will become so extreme that the word "inequality" will feel like a euphemism. This isn't alarmism. It's logic.
What do they propose? First — shift taxes from labor to capital and automation profits. Rebalance the tax base so that core social programs remain funded even as payroll revenue shrinks. Second — create a Public Wealth Fund: a sovereign fund, partially financed by AI companies themselves, that invests in broad markets and pays dividends to every citizen. Think Alaska's oil dividend, but on a planetary scale. Third — AI as a basic right, on par with electricity and literacy — for schools, universities, underserved communities, and small businesses. Fourth — portable benefits. Healthcare, retirement, insurance must not depend on a single employer in a world of unstable employment — they should follow the individual across jobs, industries, and ventures. Fifth — pilot a four-day work week at full pay. The logic is simple: if AI takes on part of the work and the company gets richer from it, why should all the gains go to shareholders? The freed-up time should go back to people — not as a gift, but as a fair split of the profits.
You can debate the details, the conflicts of interest, and so on — but the fact remains: the company whose technology is about to sweep away the old order like a flood through a sleeping village was the first to start spelling out where the emergency exit is. And here's one of the key tensions of the moment: while OpenAI drafts a new framework for human existence, the overwhelming majority of the world's leaders barely grasp the scale and inevitability of what's coming. The people whose job it is to protect society from the fallout of technological disruption are an entire era behind the disruption itself. They're still treating this as a technology problem, when what they're facing is an existential civilizational challenge — an anthropological shift. What's changing isn't what people do. What's changing is why they're needed.
And instead of urgent global forums on these questions — silence. Almost total. And like a lone lighthouse, this thirteen-page document appears — from the very company that's creating the acceleration. Which makes sense: whoever stands closest to the fire understands best how it spreads.
But let's step back from the news cycle for a moment and look deeper into time. Roughly two hundred and fifty years ago, something happened that we've grown used to calling the Industrial Revolution. But behind that term lies something far more radical than the shift from horse power to steam. What actually happened is this: for the first time in history, human labor became the universal measure of value. Not lineage, not class, not proximity to the King's court — but labor. What you produce, how many hours, how efficiently, how well. It felt like liberation, and in many ways it was.
But alongside it, without any formal declaration, something else happened. Labor became not just a source of income — it became the moral basis for a person's existence in society. A working person is respectable; a non-working person is suspect. Profession became identity.
For nearly two and a half centuries, this contract held. And now, this year, in these very weeks, it's beginning to crumble. Not because someone decided it should. Not because someone is unilaterally tearing it up. But because the fundamental condition it rested on — the effective irreplaceability of the human as a productive unit in the market — is disappearing. Quietly, for now, invisible to most, but methodically — and this flywheel is only beginning to turn. One .skill file at a time.
Just five years ago, the professional consensus was that AI would first replace low-skill labor — janitors, taxi drivers. And what actually happened? AI hit society's solar plexus, and now the most intellectual professions will be among the first to go. A lawyer who drafts and analyzes contracts — replaceable. A radiologist who reads scans — replaceable. Analysts, translators, historians, coders — all of them might as well start drafting their retirement plans, or retrain as bricklayers and electricians for now, though I suspect that won't last long either. Every one of these professions, and many more besides, is already being performed faster, cheaper, and in most cases more accurately. RIGHT NOW.
And here's the question I want to ask — not about technology, but about us. About everyone living on this planet right now.
If labor stops being the basis of income, what takes its place? If your market usefulness is no longer guaranteed by your skills alone, what makes you valuable in the labor market? What's left of a person if you switch off their professional social function?
Among the people I follow in the AI world, the most thoughtful ones are already trying to build new philosophical frameworks — to reimagine themselves in a coordinate system where market function is no longer the answer. Each of us, in our own time, will have to ask ourselves not "how do I become irreplaceable" — because that race, under the new conditions, is unwinnable — but "what is my value that isn't determined by the market?" That's a different question. A fundamentally different one.
The main question right now isn't whether civilization will cope — it has coped with worse. The question is who each of us will be when we enter this new contract. I'm afraid the old papers won't get you through to the new world.
And here's what I want to say in closing. Not as consolation — consolation isn't enough here — but as something I truly believe, and something I think will actually happen.
Remember how this post started? With adaptation. With how a person, living next to something abnormal, gradually stops noticing it. But adaptation has another side, one that's spoken of less often. The same mechanism that dulls surprise also keeps us capable of acting: when the familiar coordinate system finally collapses, something far more primal than any professional skill awakens in a person. The ability to start drawing a new map, on the fly, despite the darkness all around. That's exactly what doesn't fit into any .skill file. The thing that resists encoding.
For millennia, humanity lived by the same ancient rules of survival: tooth for tooth, resource for resource, status through dominance, value through usefulness to the system. These instincts were neither good nor bad — they were the only ones possible in a world of scarcity, where food, land, and influence had to be literally fought for.
But what happens to that logic in a world where material scarcity, with sensible design, ceases to be inevitable? Where intellectual labor is no longer a scarce resource? Where the old competitive instincts for function simply lose all meaning?
These rules cannot help but change — not because people will suddenly become kinder, but because the very architecture of the world that created and sustained them is going away. And for the first time in two hundred and fifty years — perhaps for the first time in the history of civilization — humanity has a chance to build a new civilizational order, one where the Sermon on the Mount no longer seems like such an impossible utopia.
The most important moments in human history didn't roar... they clicked quietly, like the first printed letter, like the first click of a mouse... today, something clicked quietly again... and you heard it.
Sources:
OpenAI, "Industrial Policy for the Intelligence Age: Ideas to Keep People First" (April 2026): https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf
City News Service / Shanghai Daily, "The Employee Quit But Her AI Clone Didn't – Inside China's 'Colleague Skill' Craze" (April 8, 2026): https://www.citynewsservice.cn/articles/shanghaidaily/news/the-employee-quit-but-her-ai-clone-didnt-inside-chinas-colleague-skill-craze-yn6ow4dn
Vlad Praskov