How I'm Planning to Win Over the Next 100 Years
We're probably living through one of the most transformative periods society has seen in centuries, but I don't need to tell you that; everyone talks about it. What's more, we knew these things were coming (I'm speaking of AI and how it would automate jobs away); it's just that now we can see them in action, especially in how well AI can code and how rapidly it can replace entire teams of software engineers.
Everyone is scared about the future of jobs, about what comes next. What do humans do when AI takes over all of our tasks? People are looking for a blueprint for how to operate in this post-AGI world, since the previous one (go to school, get a degree, find a job, retire) doesn't seem to work anymore.
I'm not here to give advice; I'm writing this for myself. But if you happened to stumble upon my writing, whose primary goal is to help me understand my own thinking better, then here's my blueprint. (You can skip the next few paragraphs if you only want the blueprint.)
First and foremost, let's talk about what has already happened and what we know to be mostly true.
LLMs became widely used at the beginning of 2023, and most users' use cases back then were getting information faster, writing essays, compressing information, and coding (mainly explaining code or navigating documentation). Still, they weren't used anywhere near as heavily as they are today, because they were slow, prone to hallucination, and generally less capable. Despite that, a lot of people saw the exponential curve (each model was far better than the previous one), so we began thinking about adapting to a world where intelligence becomes a commodity.
Fast forward to today: one of the biggest impacts AI has had is in software engineering. Not because it's better at software engineering than any other area, but because software engineers are naturally early adopters of technology. Despite the fact that AI could be used for a variety of other jobs—such as customer support, content creation, copywriting, journalism, advocacy, consulting, and so on—software engineers adopted it first. It's clear that many more jobs will soon be affected by AI as well.
For the moment, let's look at how AI has actually impacted the job of a software engineer. A single person can now build in a few days what used to take a team of engineers months. This is achieved by carefully guiding the model through the project you're building: laying down the core steps, adding guardrails, and checking its outputs. Because organizations weren't structured for this level of output and productivity, companies find themselves needing fewer and fewer employees. Ironically, it feels like software engineers were so good that they automated their own jobs away.
To draw a parallel, I like to think about how, contrary to what the media often portrays, the differences between the rich and the working or middle class are shrinking in certain dimensions. Your average person today has access to better medicine than any multi-millionaire had in the early 1900s. They probably drive a better car and live in a safer world. Even comparing modern-day billionaires with average people, they have relatively similar access to technology, the same internet platforms, and similar opportunities to make themselves heard. Of course, money amplifies these advantages, but the point is that the delta between the middle class and the upper class is shrinking in some respects.

Why make this parallel? Because the newest thing being democratized is the workforce. Almost anyone can now have something akin to a digital employee working for them 24/7. It still costs some money, but the costs are dropping, and they're nothing compared to what a human employee would cost.
Another thing to mention is that jobs aren't a universal construct; they're a human one. To think that humans need jobs for survival—and not the other way around—is the result of indoctrination shaped by how modern society operates. The post-industrial educational system taught you to specialize in something and made that thing take up the biggest part of your life. That's why entrepreneurs were often laughed at, yet they were the most prepared group for such a time. Being an entrepreneur isn't about what you do; you cannot strictly define it. It's a way of moving through life, of solving problems. It works precisely because it's vague and allows you to adapt and stretch.
So back to our software engineer. What happens to them? Do they just get discarded? Better yet, what happens to software? At the current pace, individuals empowered with autonomy will outperform large teams. Most likely, larger organizations will need to readapt to smaller teams and fewer engineers, since the demand for code hasn't grown as much as the productivity of engineers using AI, which can range anywhere from 10x to 100x. As code becomes commoditized, that shouldn't necessarily worry engineers: their job doesn't represent their identity, and AI may in fact liberate them from certain burdens. What happens now on platforms like X is that thousands of builders who were previously employed are becoming one-person companies and building tools that make them more money than their previous jobs did. That won't be the path for most people. As the technology becomes easier to use, the volume of shipped code grows, and AI-generated noise will flood GitHub. If the top 20% of engineers used to produce 80% of the usable code, you might now see 5% producing 90% of the truly used code.
I still haven't fully answered the question, because I don't know specifically what will happen to those people in particular. But I have a sense of what jobs humans will occupy in general—or rather, what AI cannot do.
AI is a tool that allows you to get things done faster. But it lacks the core human ability to want something. The meaning of an AI system is given by its user; you are the "meaning maker." AI does democratize intelligence in a significant way, but we still haven't answered the question of what intelligence actually is. Intelligence can be defined as the capacity of a system to acquire information, build internal models of the world, reason about those models, and adapt behavior to achieve goals across varying environments. AI helps us with the cognitive load of processing and explaining information. What it doesn't do is set goals or intentions for us.
Some people argue that you can just define a general goal and let a series of agents refine and delegate it. But AI only does what you ask it to do; it looks where you tell it to look. It's blind to what you don't know you don't know. By overly promoting the idea of democratized intelligence, we risk misleading people into thinking AI will think for them. Decisions are often made through intuition built upon deep understanding, which goes beyond merely holding key facts. True understanding is what separates a power user from everyone else. Having a detailed view of the playing field—a knowledge map—and understanding things deeply allows you to guide AI successfully. In the end, humans with strong understanding of the world will be needed more than ever. Software engineers have an advantage because they've learned to reduce problems to fundamental components. Their jobs will change, but humans will be in demand—not for a single task, but for setting missions.
The role of humans will be to solve unsolved problems, to act as pioneer entrepreneurs. The way to solve something unknown was best portrayed by Feynman when he spoke of triangulation—using our existing understanding to infer pieces of knowledge we don't yet have.
In a sense, everyone could become extraordinarily empowered, regardless of whether they work in tech or not. You'll have the ability to create and add value to society more than ever. Jobs may go away, and that could be a good thing. What you work on from now on is what fulfills you. You don't find meaning in life; you give meaning to life.
So, nothing is entirely clear, except that humans will still be needed. What, then, is the blueprint? I thought about what the highest-leverage thing I could work toward might be. In software, many people adopted the paradigm of "don't build for the models of today, but for those of tomorrow," because the landscape is changing so rapidly. The same could be said about any other industry. In my case, I will build for the societies of tomorrow—for the Moon and Mars colonies. That could mean tools for learning that prepare such a society, or entirely new ways to build.
So the way to win over the next 100 years is to:
- Focus on larger timescales
- Accumulate understanding across many fields
- Accumulate as many diverse experiences as possible
In this way, we focus on big missions and big problems, and we develop the intuition required to solve them.
Let's unpack each point.
Why focus on larger timescales? Wasn't that always good advice? Yes. But when the cost and time required to produce what once took months approach zero, everyone will be building apps and websites. Competition will be fierce, and without strong distribution or an audience, you likely won't survive. So while long-term thinking was always wise, it becomes almost a prerequisite in a post-scarcity world. As Thiel said, competition is for losers.
The accumulation of understanding goes deeper than you think. AI can make you feel like you understand something simply by explaining it well. The reasoning may be sound, the arguments compelling. But even a correct statement delivered by AI can create the illusion of understanding. These models are excellent at constructing logical flows. Even when a teacher explains a concept, the explanation alone doesn't create deep understanding. To truly understand something, you must understand why the wrong way is wrong.
Since AI makes shallow knowledge so accessible, many people will feel smarter simply because answers are available instantly. But without an internal map—without intuition about where to go next—they won't ask the right questions and will make beginner mistakes repeatedly. Positioning yourself as someone with deep, cross-domain understanding is one of the best ways to ensure your success in the coming years. The difference between knowledge and understanding will become more obvious. Knowledge without understanding may have sufficed when most tasks were repetitive. But unknown problems require new solutions, and discovering new knowledge requires deep understanding. Knowledge is found through triangulation. You need deep expertise in multiple domains and enough correlations to orchestrate AI effectively.
One clear effect AI has on us is pushing us one level higher in abstraction—not just in code, but in life. You have to think more meta. The best example is shaping your environment so that it shapes you in your desired way. An environment is the accumulation of experiences within a medium. So seek as many beneficial experiences as possible, since experiences are part of what makes you unique.
That's it. Now back to work.