N.B.: This piece is less about Restless Egg itself, and more about the rolling legitimacy crises AI is triggering across sectors. In education, students are already adapting to what AI implies. And in doing so, they may be pointing more clearly than anyone else to what education is actually for now—not what it was for, and not what we wish it would be for.
There’s been a slow shift in the tone of education commentary: less reform, more decay. Students are disengaged. AI is everywhere. And the rituals that once made school feel necessary now feel optional, or worse, ornamental. From Graham Burnett’s “Will the Humanities Survive Artificial Intelligence?” to “We Have to Really Rethink the Purpose of Education,” Ezra Klein’s conversation with Rebecca Winthrop, to New York Magazine’s “Everyone Is Cheating Their Way Through College,” the subtext is consistent: the academic project isn’t just under threat; it’s losing coherence.
If, as Winthrop argues, education is meant to prepare students for society and to help them discover what they care about, then the situation becomes easier to explain. Society has changed. The signals have changed. The students noticed. And so, what looks like disengagement might just be an accurate reading of incentives.
Maybe this isn’t a crisis in education. Maybe it’s just a synchronization error. The values have shifted, but the system hasn’t caught up. We’re still running schools as if we’re preparing clerks for a paper economy. Meanwhile, students have become fluent in an entirely different set of signals. When did we stop teaching swordsmanship? Not after a careful debate about its declining relevance. Reading and calculation may be next, not because they’ve failed, but because they’ve lost their scarcity and necessity. And educational systems built to train scarce and necessary skills tend not to survive once that scarcity and necessity are gone.
Maya Angelou said, “When someone shows you who they are, believe them.” People love this quote, most often when it confirms their prior beliefs. They love it less when it suggests those beliefs may be outdated. But let’s try it: the kids are showing us who they are. Believe them.
They’ve grown up with AI, ambiently and pervasively. Their conclusions are straightforward. Reading doesn’t matter. Calculation is automated. The highest-prestige career among their peers isn’t lawyer or engineer, but influencer. Noticing this isn’t laziness. It’s optimization. “Securing the bag” may sound flippant, but it’s functionally accurate: maximize leverage, minimize wasted effort.
Traditional office jobs are evolutionary dead ends. Pushing papers feels less like success and more like a failure of imagination, or a vestigial organ. When students cheat, they’re not rejecting discipline; they’re rejecting the premise. The assignment isn’t hard; it’s irrelevant. The skill isn’t useless, but the ritual designed to cultivate it is. And if you can get the skill some other way, one that’s faster, funnier, and comes with social upside, it’s not irrational to do so. It’s just adaptive behavior in a system that has itself declined to adapt.
This message isn’t subtle. Students see a different future than the one school is preparing them for. The current system trains for stability, hierarchy, and paper trails. The world they’re navigating rewards flexibility, distribution, and scale. When those don’t line up, students don’t rebel; they reroute. Not because they don’t care, but because they’ve done the math, and the cost of playing along exceeds the expected return.
In some cases, the problem isn’t just lack of incentive—it’s active disincentive. What if the reading works? What if you start to enjoy it? Now you’ve misallocated effort relative to the dominant reward function. There’s no meaningful market signal for falling in love with James Baldwin. But there’s a robust one for mastering TikTok editing, engagement funnels, and parasocial resonance. The system doesn’t merely ignore deep reading. It treats it as a kind of misalignment error—an expensive indulgence with no clear ROI.
Students aren’t staging a protest. They’re just quietly demonstrating that the values underlying education aren’t timeless. They’re contingent. We used to teach swordsmanship and scholastic theology, not because they were morally superior educational content, but because they equipped people with the traits needed to survive in their time. Then they didn’t, so we stopped. AI hasn’t broken that pattern. It’s just speeding it up. It hasn’t broken the education system either. It’s revealed that the system no longer knows what it’s for.
Education has never been primarily about content. It’s about form, the cognitive scaffolding that is built through certain kinds of engagement. Reading wasn’t important because books were, but because reading trained memory, abstraction, and interiority. But those functions aren’t exclusive to reading. Oral storytelling did it. So did rote memorization. Even within reading, the cognitive effects have changed: monastic repetition gave way to narrative immersion, each at one point considered the apex of intellectual development.
However, the idea that the education system as currently constituted is under threat from AI smuggles in the assumption that its current form must be preserved. Not because it’s optimal, but because it’s familiar. When a system has been around long enough, people start defending it on the basis of its existence alone, confusing survival with justification.
Students have already adjusted to the fact that the curriculum no longer maps to the world they’re inheriting, so the discomfort in the headlines isn’t really about them. It’s about the adults. The fear isn’t that the next generation is ill-prepared for the adult world; it’s that they are well prepared for something else. Something the current system didn’t predict, doesn’t recognize, and does not yet know how to evaluate.
If reading and calculation are no longer valued for their own sake, but we still care about the skills they once cultivated—language, abstraction, systems thinking—then the question isn’t whether we keep them in their extant forms, but how we replace what they provided. In other words, the proxies may change, but the functions remain. In a world of AI agents and coordination problems, for example, the relevant training might look less like silent reading and more like real-time resource orchestration à la StarCraft: less library, more command center.
At some point, you do have to design for the world you have, not the one you wish you had. If students won’t read, the question isn’t how to make them, but how to preserve the benefits of reading through other means. You can moralize as you wish, but systems don’t run on aspiration. They run on behavior. And if turning the page is no longer the behavior, then cognitive upskilling has to come from somewhere else.
Motivation has always been a constraint. Students will engage if they love what they’re learning, or if they believe it will lead somewhere useful. In the first instance, this is mostly a question of exposure: given enough inputs, something will catch. The second requires a harder adjustment. It means aligning educational systems with the structure of the jobs that actually exist, rather than the ones we designed curricula for in 2003. That, in turn, means thinking ahead of the technical curve. Not asking for sanitized, school-safe versions of existing AI models, but developing AI tools that themselves train for relevance: coordination, judgment, synthesis. Filters won’t fix the problem. New primitives might.
Recently, Cluely made headlines by more or less pitching itself as an AI platform to “cheat on everything.” The reaction was mostly moralistic: AI shouldn’t do this. But by now, that line of argument feels less like a position and more like an unprocessed emotional response. Moralizing AI use in schools is a dead end. The tools are inevitable, and students are adaptive. Tell them not to use it, and they’ll route around the restriction. That’s how desire paths work.
The far more interesting question isn’t whether Cluely should exist (it already does) but what still matters, given that it does. Deep Blue didn’t kill human game-playing. It forced people to rethink what they were actually doing when they played. Cluely does something similar for education. If AI can complete the assignment, ace the test, and generate the paper, then what’s the point of assigning, testing, or paper-writing? Not rhetorically, but operationally.
Cluely doesn’t answer the question of what AI is for, because “cheat on everything” isn’t a goal; it’s a stress test. That’s exactly what makes Cluely so useful. Systems reveal their underlying logic most clearly under failure conditions. If AI can bypass the entire educational apparatus, then it forces the obvious question: what was that apparatus actually for? And once cheating is trivial, what remains worth doing that can’t be automated?
Caution is fine, as long as it still buys you time. But once something like AI becomes ubiquitous and frictionless, caution stops functioning as strategy and starts resembling denial. Cluely isn’t notable because it was inevitable; it’s notable because its inevitability turned out to be a low bar.
The real challenge isn’t resisting the shift. It’s building the thing that renders the shift beside the point. In a system where inevitability usually wins, irrelevance is most often the only thing that beats it.