Now that AI Can Code Well, a Scientific and Economic Revolution is Poised to Come Sooner than Most Expect -- IF Serious Obstacles Don't Slow or Derail the AI Train
- Mar 20
AI's newfound ability to code will greatly accelerate its progress. And the timeline for profound scientific and economic change may be shorter than many expect. But history is also littered with technologies that failed to deliver on expectations. And even successful breakthroughs don't always scale cleanly. This article looks at both the case for rapid change and the reasons AI may stall or stop. It then examines the data on what's happening right now. And it looks at what we're likely to see in the future -- if AI does succeed.

(Usual disclaimer: I'm just an investor expressing my personal opinion and am not an attorney, accountant nor your financial advisor. Consult your own financial professionals before making any financial decisions. Code of Ethics: To remove conflicts of interest that are rife on other sites, I/we do not accept ANY money from outside sponsors or platforms for ANYTHING. This includes but is not limited to: no money for postings, nor reviews, nor advertising, nor affiliate leads etc. Nor do I/we negotiate special terms for ourselves in the club above what we negotiate for the benefit of members. Info may contain errors so use at your own risk. See Code of Ethics for more info.)
In Part 1, I explained why years of AI disappointment had made me skeptical — and why Claude Code’s latest performance changed my mind. I also described how this technology marks the end of the computer programming profession as we know it today -- and marks the birth of the new profession of the AI-enabled builder.
This second article talks about the bigger question of what follows from that.
If Claude Code's improvement in a key AI area is a fair indication of what the future holds, then the implications for jobs, businesses, and the economy could be much larger than many people realize.
The revolution isn’t coming. It’s already here.
Almost overnight, AI has gone from laughable to usable. And in the process, it's become dramatically better at performing many tasks traditionally associated with white-collar knowledge work.
Anthropic has published charts illustrating how AI capability has expanded across different categories of work -- and where it has room to grow:

The red area represents tasks AI can perform today, and the blue area is what it could theoretically do. Everything in white is a "no go" area for AI, beyond even its theoretical capabilities -- these things require physical labor, human presence, etc. Because this chart comes from an AI model company, it should be taken with a grain of salt. At the same time, the results are consistent with what independent parties are also reporting. And in a very short time, AI's gotten a lot better not only at programming -- but also at mathematics, bookkeeping, certain forms of legal analysis, and other subjects, too.
For example, New Scientist published this article a few weeks ago: "Mathematics is undergoing the biggest change in its history" and said:
The speed at which artificial intelligence is gaining in mathematical ability has taken many by surprise. It is rewriting what it means to be a mathematician. “A couple of years ago, they were basically useless for even solving high school math problems, and now they can sometimes solve problems that really appear in the research life of a mathematician,” says Litt, who is at the University of Toronto. This progress is faster than many had predicted, with mathematicians warning that their profession is undergoing one of the fastest evolutions the field has ever seen. “We are running out of places to hide,” wrote Jeremy Avigad at Carnegie Mellon University in Pennsylvania in a recent essay. “We have to face up to the fact that AI will soon be able to prove theorems better than we can.”
In the life sciences, MIT reported a medical breakthrough in August 2025 that came from AI -- in the article "Using generative AI, researchers design compounds that can kill drug-resistant bacteria" which explained:
With help from artificial intelligence, MIT researchers have designed novel antibiotics that can combat two hard-to-treat infections: drug-resistant Neisseria gonorrhoeae and multi-drug-resistant Staphylococcus aureus (MRSA). This approach allowed the researchers to generate and evaluate theoretical compounds that have never been seen before — a strategy that they now hope to apply to identify and design compounds with activity against other species of bacteria.
What will happen next?
Due to its recent success with coding, AI is now poised for faster-than-ever growth.
"Software will eat the world"
AI companies targeted computer programming as an early milestone for an important reason.
Once AI can reliably write code, it can improve its own software. And then upgrades and new versions can be cranked out exponentially faster. This facilitates much faster progress across the other areas -- and allows the red areas on the above chart to spread more quickly.
So coding is the key that unlocks faster improvements everywhere else.
Dario Amodei is the CEO of the prominent AI firm Anthropic. In January, he said:
"The mechanism whereby I imagined it would happen, is that we would make models that were good at coding and good at AI research, and we would use that to produce the next generational model and speed it up to create a loop that would increase the speed of model development. ... I have engineers within Anthropic who say I don't write any code anymore. I just let the model write the code, I edit it, I do the things around it. I think, I don't know, we might be 6 to 12 months away from when the model is doing most, maybe all of what SWEs (Software Engineers) do end-to-end."
So it's already starting to happen and only likely to accelerate.
So the next question becomes: What happens if AI succeeds in its lofty goals?
A New Scientific Revolution
If it does, it's plausible we could see some amazing technologies, including:
1) Health, longevity, and well-being breakthroughs:
Per a detailed study by Alation:
AI is reshaping foundational areas of healthcare such as imaging, diagnostics, robotics, and personalized treatment. In 2025, the momentum has not only continued—it has accelerated. New discoveries in genomics, drug design, clinical decision support, and regenerative medicine are pushing medicine into territory that once felt like science fiction.
And in October 2024, Anthropic CEO Dario Amodei wrote an essay entitled "Machines of Loving Grace," in which he predicted that within the next ten years we'll achieve "Doubling of the human lifespan". And more speculatively there's the concept of escape velocity:
Once human lifespan is 150, we may be able to reach "escape velocity," buying enough time that most of those alive today will be able to live as long as they want, although there's certainly no guarantee this is biologically possible.
That idea may or may not be possible. But either way, AI is already making important health discoveries ...and is on pace to dramatically accelerate this.
2) Breakthrough technologies
Fusion power holds out the promise of much cheaper and more abundant energy. The technical challenges are formidable -- and the field is already reaping benefits from AI.
The Princeton Plasma Physics Laboratory (PPPL) said on February 3rd:
A project led by a U.S. Department of Energy (DOE) national laboratory is pioneering ways to speed up the design of twisty fusion facilities known as stellarators by using artificial intelligence (AI) to sift through data more quickly. Even with the most powerful supercomputers, calculating a feature like plasma turbulence — eddies and whirls in plasma that remove fusion-sustaining heat — is difficult and can take a large amount of time. But by training computer programs with large amounts of data from past simulations and experiments [i.e. AI], scientists can get a probable answer much more quickly, reducing the calculation time from days to milliseconds. “We can quickly determine how much turbulence a particular stellarator design might have by asking an AI program,” said Churchill. “That means we can sift through lots of possible configurations quickly and find the few that have the properties we want.”
And like many, Rob Roy -- founder and CEO of Switch -- believes AI will dramatically speed up fusion research and implementation:
“AI… is going to speed up fusion by… two or three decades.”
The U.S. Department of Energy (DOE) feels the same way. And it's making a large push -- with Genesis Mission -- to use AI to accelerate scientific breakthroughs:
This project aligns with the DOE’s new Genesis Mission, a major DOE initiative to accelerate scientific discovery and enhance national security using AI, as well as with the DOE’s recently released Fusion Science and Technology Roadmap, which outlines a national strategy to accelerate the development of commercial fusion energy over the next decade by aligning public and private investment.
And fusion isn't the only tech breakthrough that AI accelerates. Quantum physics is also benefiting from AI's golden touch. In July 2025, EurekAlert! reported on some of this progress in an article called "Human-AI ‘collaboration’ makes it simpler to solve quantum physics problems":
At the forefront of discovery, where cutting-edge scientific questions are tackled, we often don’t have much data. Conversely, successful machine learning (ML) tends to rely on large, high quality data sets for training. So how can researchers harness AI effectively to support their investigations? Published in Physical Review Research, scientists describe an approach for working with ML to tackle complex questions in condensed matter physics. Their method tackles hard problems which were previously unsolvable by physicist simulations or by ML algorithms alone.
In addition, and in combination with quantum computing, AI techniques are also bringing us closer to many other technological breakthroughs -- from wondrous new materials to a deeper understanding of the human body, new drugs, and much more.
So AI is already accelerating many research fields -- and poised to revolutionize them.
The Potential for Mass Job Disruption
At the same time, some of the things a successfully AI-enhanced world might bring about... are not so pleasant.
Many white-collar professions rely heavily on the types of cognitive tasks AI is beginning to automate (the blue area in the chart above).
And if the technology improves enough (i.e. the red area expands), existing jobs could be threatened or lost across a wide range of professions.
And depending on the timing, severity, and replacement options, that could also profoundly affect the economy and our society as a whole.
"Don't worry... be Happy?"
Some people respond to this concern with a historical reassurance.
They argue that previous technological revolutions eventually created more jobs than they destroyed. Workers displaced from one industry eventually moved into new industries, and overall economic prosperity increased.
Economists often point to several historical examples where technological revolutions initially destroyed jobs, but ultimately created brand new jobs and industries and higher overall prosperity:
The First and Second Industrial Revolutions: At the turn of the 19th century, mechanization displaced large numbers of artisans and manual laborers in industries such as textiles. And then again at the turn of the 20th, mass production streamlined jobs even further. But industrialization ultimately created vast numbers of new jobs -- first for machine operators, factory workers, and builders of the steamship and railroad, and then additional positions in management, engineering, urban services, and more. The clerks and middle managers produced by the Industrial Revolutions formed the beginnings of the modern middle class.
Agricultural Mechanization: Around 1900, roughly 40% of Americans worked in agriculture. Today it is less than 2%, largely because tractors, combines, and modern farming techniques replaced human labor. Yet the workers who left farms moved into other jobs in manufacturing, construction, services, healthcare, and education — helping build the modern economy.
The Computer Revolution: Beginning in the 1980s, computers automated large numbers of clerical jobs such as typists, filing clerks, and switchboard operators. At the same time, entirely new professions emerged — including software developers, IT specialists, cybersecurity experts, and data analysts.
The Internet Economy: The rise of the internet reduced and/or eliminated some traditional jobs such as travel agents, newspaper classified staff, and video rental workers. But it also created massive new industries including e-commerce, digital advertising, cloud computing, app development, and social media platforms.
And in Part 1, I talked about the birth of a new profession -- the AI-enabled Builder. So this dynamic is already underway.
At the same time, reasonable people can also come up with many legitimate reasons to worry about job disruption.
Jobs don’t always come roaring back: A more recent example is the automation of U.S. manufacturing between 2000 and 2010. During that decade, the United States lost roughly 5.7 million manufacturing jobs — about one-third of the workforce in that sector. Some of those jobs disappeared due to offshoring. But research suggests that roughly 30% to 50% of the losses were directly caused by automation. Even today, more than two decades later, many of those workers have never regained comparable wages.
And that disruption continues to influence social and political tensions today.
Looking back on history is different than living it: The transition periods for the two Industrial Revolutions lasted decades and produced enormous hardship for many workers. And the upheaval was so severe that it triggered widespread protests and even violent uprisings such as the Luddite revolts.
If we were to see mass job loss, similar sentiment and disruptive events might happen.
Speed and breadth: Technological change has been accelerating over time, because new technologies build on and amplify the ones that came before. As an example -- someone born today sees more improvements in a few years than all of humankind saw for a thousand years in our early history. So the AI change is likely to be more rapid than past revolutions.
And AI has the potential to disrupt a much broader set of occupations than previous technological revolutions. This could make it harder for workers to adapt (versus narrower disruptions).
All of these things could make the AI transition more intense and disruptive than some are expecting (or hoping). And if so, then dealing with it may require a different kind of response than what worked in the past (to assist those who are displaced).
And we'll talk about the different proposals for doing that in a minute. But for the moment, the point is that there is disagreement over what the ultimate AI job disruption will look like. And ultimately only time will tell. For now, let's take a look at what employment data is telling us.
Looking at the jobs data
On November 13, 2025, researchers at Stanford published: "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence"
And they looked closely at AI-exposed occupations like software development and customer service. Here's what the data showed for programmers:

Clearly something important happened in mid-2022. Before then, early-career programmer employment was growing at the same rate as that of more senior programmers. At that point, the two suddenly diverged, with early-career employment dropping sharply (and it has continued to do so ever since).
The drop started in mid-2022 -- about a quarter before OpenAI released ChatGPT (in November 2022). So some have claimed this timing disproves that AI was the cause -- or at least the sole cause -- and that changing interest rates or something else drove the drop-off. At the same time, earlier AI models like GPT-3 and its coding tool, Codex, were already available when the decline began. These could generate code from plain English, autocomplete functions, and handle many of the simpler tasks often assigned to junior developers. And the researchers could find no correlation with interest rates (interest-rate-sensitive and non-sensitive industries moved the same way) -- nor any other correlation besides AI tool adoption.
Also interesting is that higher level programmers continued to grow strongly -- even while juniors were being phased out. This will be a key metric to watch over the coming year.
Let's turn now to customer-service head-count:

Again, something important happened in late 2022. The drop starts a little later (versus programmers). And interestingly, the effect is broader, reaching more senior staff than it did among programmers.
This could be read as a sign that different industries are likely to react differently to AI job disruption.
So in November 2025, the researchers concluded that:
We find substantial employment declines for early-career workers in occupations most exposed to AI, such as software development and customer support. ... young workers experienced a 16% relative employment decline in the most exposed occupations. ... Entry-level employment has declined in applications of AI that automate work, with muted effects for those that augment it. ... While our main estimates may be influenced by factors other than generative AI, our results are consistent with the hypothesis that generative AI has begun to affect entry-level employment significantly
And we will continue to watch these numbers very closely this year.
How to cushion the shock of AI job-loss
As discussed above, previous technological revolutions caused job shocks that led to severe economic hardship for the displaced -- as well as societal anger, backlash, rioting, and revolts. How might we avoid these repercussions from AI job replacement? Many ideas are being floated. These include AI dividend payments tied to productivity gains, public ownership stakes in AI infrastructure, and a Universal Basic Income (UBI) funded by some portion of AI-created profits.
One interesting proposal was put out a few weeks ago by Gina Raimondo, a former Governor of Rhode Island and Secretary of Commerce.
She points out that we've seen mass job-disruption before. And sometimes we've handled it very well -- like after World War II and after the Covid crisis:
After World War II, the G.I. Bill and land grant universities sent millions of veterans to school while public research funding seeded advancements in manufacturing, aerospace, semiconductors and computing. Decades later, the financial crisis and Covid sparked new growth industries with millions of new jobs in clean energy, fintech and health care.
On the other hand, we've also collectively fumbled the ball -- after the U.S. manufacturing job crash in the 1980's:
When the Bulova watch factory in Providence, R.I., closed in the 1980s, my father faced an abrupt end to his 30-year career. ... There were no effective public or private initiatives to help him or millions of other Americans transition to new jobs in the new economy, leaving many American cities hollowed out and helping produce the politics of division that plague us today.
Raimondo proposes for AI a new, multi-faceted public–private “grand bargain” -- which would also transform universities to better deal with the new reality.
Employers: Define the skills they need and provide real-time demand and hiring data.
Government: Funds new educational programs to retrain displaced workers with new skills, and provides transition support. Pays schools only for actual job outcomes (not mere enrollment). Uses tax credits to reward effective employer training, hiring, and retention.
Education: Shifts focus from multi-year degrees (where these produce obsolete or non-employable skills) to short, employment-linked credentials. Emphasizes speed, modularity, and lifelong learning. Ties school funding to job-placement performance.
Workforce model: Scales apprenticeships (“earn while you learn”) for fast-changing fields.
Taken together, proposals like this outline a credible path to cushioning the impact of large-scale job displacement — and show that the outcome is not at all predetermined. The open question is: can we all collectively agree on what to do (and when to do it)? This may end up being a defining moment of our times.
Is AI inevitable?
But now...let's take a step back and look at the alternate possibilities.
All of the previous scenarios rely on a critical assumption: that AI continues to scale, deploy, and reshape the economy at the expected pace.
And that raises the next vital question: is the AI train an unstoppable, inevitable force?
The quick answer is: No, it's not.
And the promise of AI could plausibly fail to be fulfilled (or end up taking much longer to deliver) for a number of reasons:
1) Difficulties with duplicating advanced aspects of human intelligence
The human brain is complicated and we don't understand how most of it works.
As the Nobel Prize-winning computer scientist and "Godfather of AI" Geoffrey Hinton said:
"We know very little about our own minds."
So designing a computer to replicate something we don't understand ourselves is delving into a lot of uncertain territory.
And there easily could be multiple aspects of intelligence that end up being a lot more difficult to recreate than AI companies are hoping.
And if so, this could push the goal out of reach for years, decades or longer.
2) Over-reliance on "emergence" comes back to bite researchers
It's a bit stunning to think about this. But we still don't fully understand how large AI models have developed complex abilities like advanced programming in the first place.
That's because these skills weren't explicitly programmed into them by humans.
Instead, the AI models were merely trained on massive datasets that included large amounts of human-created materials. And the capabilities just "emerged" on their own as researchers scaled up the size of the models, the training data, and the amount of compute used during training. The paper "Emergent Abilities of Large Language Models" by Google researchers (dryly) describes it like this:
“We consider an ability to be emergent if it is not present in smaller models but is present in larger models ... In many cases, these abilities cannot be predicted by extrapolating from smaller models.”
The larger the model, the more such capabilities emerged. And these included impressive higher-order reasoning skills:
Improved multi-step reasoning
Chain-of-thought explanations
Translation between languages without task-specific training
Limited tool usage
and more...
And since we don't fully understand how those happened in the first place, it's not unreasonable to imagine that there may be upcoming limitations that we also don't yet understand.
AI developers counter that the models have been extremely successful so far (despite this concern being raised for about a decade). And going forward, they'll have the increased brainpower of the AI itself on the job -- helping build things out further.
So we will see how these two opposing dynamics play out.
3) AI scaling stops working
AI's scaling behavior (i.e., the improvement that comes from adding more compute, training data, and parameters) has been one of its most amazing properties and biggest strengths. And it's also a potential weakness and a reason it might fail in the future.
Back in 2020, AI researchers discovered something unusual about their AI models. Usually when things are scaled up, improvements hit a plateau. But they found that when they scaled AI up, not only did its performance consistently improve, but even more surprising... it did so in a predictable way.

This is an incredibly powerful dynamic. It means that models automatically kept getting better as they got bigger. As mentioned above, researchers had already noticed that new abilities appeared as size increased. But this showed that there was a real correlation, and AI was gaining these skills -- reasoning ability, coding ability, language fluency, factual recall, and multi-step planning -- simply through larger scale.
| Model | Estimated Training Compute | Breakthrough |
| --- | --- | --- |
| GPT-3 (around 2020) | ~3×10²³ FLOPs | General-purpose language tasks |
| GPT-4 (around 2023, est.) | ~10²⁵ FLOPs | Reasoning + coding |
| Next-gen frontier models | Likely far higher | Agents, advanced coding, multimodal reasoning |
So AI companies have used this dynamic to justify their massive data center buildouts (even while hugely unprofitable, with break-even years away at best). The problem is... not everyone agrees scaling will continue indefinitely. Skeptics argue that:
Internet training data may run out
Returns could diminish
Advanced reasoning might require new architectures
For example, in the paper "Will we run out of data? Limits of LLM scaling based on human-generated data", researchers argue that data exhaustion is likely to occur relatively soon:
"If trends continue, language models will fully utilize this stock between 2026 and 2032...likely be exhausted before 2030."
If so, that could cause a major setback or even stall-out. And even the AI model companies themselves -- while they're pushing scaling -- acknowledge uncertainty about how far the approach can go.
But their current bet is this: if scaling worked to produce the last decade of breakthroughs, it may produce the next ones, too. A few weeks ago, Anthropic CEO Dario Amodei claimed that the company does not see scaling “hitting a wall” in the near future. And he said:
"I think this year is going to have a radical acceleration that surprises everyone."
Again, we'll see what happens.
4) AI Scaling Gets Too Expensive and/or Difficult
Another possibility is that scaling could continue to work in principle, but that AI progress slows (or stalls) because it becomes too difficult or expensive to scale in practice.
The main challenge is that AI scaling follows a power law. This means it does scale (which is good). But it also means that each incremental improvement requires disproportionately more compute (which is bad).
So this is very different from the exponential growth many of us are used to from the computer industry. Under Moore's Law, processing power doubled roughly every 18 months -- meaning easier and easier gains over time. Moore's Law drove rapid tech growth for decades, and the remarkable breakthroughs that transformed computers from room-sized behemoths into tiny devices that fit in everyone's pocket. AI scaling doesn't work that way. Every new step is much more difficult than the last and is hard-won.
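To make the contrast concrete, here's a toy sketch. The constants are made up purely for illustration (not real benchmark numbers), but the shapes are right: exponential growth compounds effortlessly, while under a power law each equal improvement demands a multiplicatively larger compute budget.

```python
# Toy contrast: exponential (Moore's-law-style) growth vs. power-law AI scaling.
# All constants below are illustrative assumptions, not real benchmark data.

def moores_law_speed(years, doubling_period=1.5):
    """Exponential: processing power doubles every ~18 months."""
    return 2 ** (years / doubling_period)

def scaling_law_loss(compute, a=10.0, b=0.05):
    """Power law: model loss falls as compute**-b (a tiny exponent => slow gains)."""
    return a * compute ** -b

# Each 100x jump in compute shaves only a modest slice off the loss...
for c in [1e21, 1e23, 1e25]:
    print(f"compute {c:.0e} FLOPs -> loss {scaling_law_loss(c):.3f}")

# ...so *halving* the loss requires multiplying compute by 2**(1/b): a huge factor.
factor_to_halve_loss = 2 ** (1 / 0.05)
print(f"compute multiplier needed to halve the loss: {factor_to_halve_loss:.1e}")
```

With these illustrative numbers, each 100x increase in compute trims the loss by only about 20%, and halving the loss would require roughly a million times more compute -- the "every step is hard-won" dynamic described above.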
In October 2025, Sapien published a report called "When Bigger Isn’t Better: The Diminishing Returns of Scaling AI Models". It describes:
In 2025, researchers studying advanced reasoning systems found that ... returns from scaling AI models are slowing, while the costs to run them are accelerating across the board. ... In the tests, each extra round of reasoning improved accuracy by a few percentage points, but every gain came at an enormous cost. This pattern repeats across every benchmark. Adding more context or more examples helps at first, but the benefits taper off quickly
The AI model companies' solution to this problem has been to throw increasingly large amounts of hardware at it. And they argue this has worked for the past decade. However, some analysts are reporting new challenges with continuing this strategy:
Reuters recently reported in "US AI Boom faces electric shock" that AI expansion plans are already “likely to be hobbled by severe power-infrastructure bottlenecks.”
One industry analysis noted in "Scaling AI In the Real World": “the real limits to scaling AI are increasingly physical,” with “power, cooling, and facility design” determining how fast systems can expand.
According to the Morgan Stanley report "Energy Markets Race to Solve the AI Power Bottleneck": "The boom in energy consumption comes after years of insufficient investments in electric grids that’s left data-center developers concerned they could face power shortages, particularly in 2027 and 2028."
More broadly, the World Economic Forum warned in "The AI-energy nexus will determine AI’s impact" that the relationship between AI and energy “will dictate how AI progresses.”
So there is a risk that scaling could still work technically, but become economically impractical.
Data centers may not grow large enough, plentiful enough, or fast enough to keep up with what's needed. Chip supply or manufacturing capacity could become a bottleneck. And other infrastructure challenges, such as power availability, networking limits, and/or cooling requirements, could make it difficult to continue increasing capabilities at the pace seen over the past decade.
5) AI Financing Bubble Bursts
As we discussed, the foundations of AI financing aren't the most solid. And something could go wrong (potentially taking down one or more AI firms -- or even the entire stock market and economy along with it). A recent S&P Global report called "Where Are AI Investment Risks Hiding?" reiterated many of the same risks we discussed in Part 1.
A healthy skepticism toward the AI landscape is emerging in the North American technology sector due to debt-fueled spending for massive AI infrastructure investment and technological breakthroughs. As the U.S. economy and stock market become increasingly tethered to AI’s perceived potential, the risk of bubble behavior from both AI participants and investors grows. ... The rising tide that lifted AI investments over the past two years is transitioning into a fiercely competitive environment. ... Challenges to the AI trajectory include physical limits and reliance on the capital markets to support a higher-leverage AI ecosystem. Key credit factors within the AI ecosystem include the massive timing gap between funding frontier AI labs and the potential return on those investments as well as the long term profitability profile of Big Tech incumbents given their significant investment.
All of the above factors could stand in the way of AI succeeding.
And if that happens -- then depending on the severity of the obstacle and the timing -- we could see a severe stock-market and economic crash.
Will AI succeed?
At the same time, history shows that revolutionary technological breakthroughs have occurred often -- and in greater frequency over time.
And AI has had a lot of success to date -- and the current pace is accelerating (as we saw above). AI also has a lot of very smart minds and unprecedented amounts of money behind it -- all cooperating to push further and make it happen.
So AI overcoming these challenges and becoming a success is quite plausible too.
So now -- let's go back to looking at that side of the coin. And we'll ask the question: "What happens in a world where AI does succeed?"
The strangeness of exponential growth
If AI succeeds...then things could get very weird, very quickly.
Our brains evolved in the natural world to evade predators, where virtually everything around us happens at a linear rate. So we understand that rate very well.
Say you see a lion hiding 8 feet away on the savanna. The next second he jumps at you and is only 6 feet away. Your brain is very good at recognizing a linear rate of movement: he'll be 4 feet away in one second, and then 2 feet away in another. And you can use that intuition to dodge or escape.
But when things grow exponentially, that takes us by surprise.
Lily pads: A good example is lily pads growing in a lake. The number of lily pads doubles each day. At this exponential rate, and starting with a single lily, a pond surface that takes 30 days to be covered will nonetheless be only a quarter full on day 28 and only half full on day 29. The sudden jump from “barely noticeable” growth to total coverage is difficult for us to intuitively grasp and anticipate.
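The lily-pad arithmetic can be sketched in a few lines of Python (a hypothetical illustration; the function name and 30-day assumption come from the example above, not from any real library):

```python
def coverage(day: int, full_day: int = 30) -> float:
    """Fraction of the pond covered on a given day.

    Coverage doubles daily and reaches 100% on `full_day`,
    so working backward, day d covers 1 / 2**(full_day - d).
    """
    return 1 / 2 ** (full_day - day)

for day in (28, 29, 30):
    print(f"Day {day}: {coverage(day):.0%} covered")
```

Running it shows the counterintuitive jump: a quarter covered on day 28, half on day 29, and total coverage on day 30.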

Human Genome Project: This had a deadline of 14 years. At the halfway mark (year 7), only a small percent had been sequenced. Critics pounced and argued it would take centuries. However the speed of genome sequencing was increasing exponentially during the project. (And that has continued… with costs collapsing from roughly $100 million in 2001 to about $1,000 today.) As a result, the project finished on schedule. And futurist Ray Kurzweil later cited the episode as a classic example of exponential technological progress.
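The cost collapse mentioned above can be turned into a rough back-of-the-envelope calculation (the dollar figures are from the article; the exact end year is an assumption for illustration):

```python
import math

# ~$100 million per genome in 2001, ~$1,000 today (figures cited above).
start_cost, end_cost = 100_000_000, 1_000
years = 2024 - 2001  # approximate span; the endpoint year is an assumption

factor = start_cost / end_cost              # total cost reduction (100,000x)
halvings = math.log2(factor)                # how many times the cost halved
months_per_halving = years * 12 / halvings  # implied halving cadence

print(f"{factor:,.0f}x cheaper; cost halved roughly every "
      f"{months_per_halving:.0f} months")
```

Under these assumptions, the cost of sequencing halved roughly every year and a half for over two decades -- a sustained exponential that linear intuition badly underestimates.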
The stunningly rapid improvement of Claude Code (in just 6 months) suggests we are already in a visibly accelerating phase of AI development -- and exponential growth. And if AI can improve its own software, that acceleration is likely to compound further.
So the next question is: What would the result of this be for jobs?
The case for AI killing lots of jobs
Here’s what one researcher and top AI firms are projecting:
Citrini Research posted a thought experiment about what could happen with AI that recently shook markets. It was stated as just a scenario and not a prediction. But it envisioned significant job market disruption occurring within two years -- including 10.2% unemployment.
Mustafa Suleyman (co-founder of DeepMind, one of the first major AI labs) wrote a fantastic and prophetic book about AI back in 2023: “The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma.” It’s still very topical, and I feel it should be required reading for everyone. Suleyman is now CEO of Microsoft AI, and in a recent Fortune Magazine interview he predicted that “most, if not all” white-collar tasks will be automated by AI within the next 12 to 18 months (i.e. by end of 2027).
Additional AI leaders mentioned in the "Fortune" article linked above have suggested that AI replacements will include half of entry-level white collar jobs, or half of all white-collar jobs within similar time spans.
Dario Amodei is the CEO of Anthropic and publishes occasional essays and articles. I feel all of them are “must reads.” He projects that: “By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering. It will be able to use all the senses and interfaces of a human working virtually—text, audio, video, mouse, keyboard control and internet access—to complete complex tasks that would take people months or years, such as designing new weapons or curing diseases. Imagine a country of geniuses contained in a data center.” (Bolding added.)
To the average person, these projections are alarming.
The case for AI killing tasks -- not whole professions
At the same time, both Suleyman and Amodei cited replacement of "tasks" and not "jobs." And some argue this could be a very important distinction. Lawyer Ralph Losey specializes in AI and quantum technology, and he also foresees AI taking over many or most of the current tasks in AI-exposed fields. But he argues most jobs will not be turned over to AI, because AI can't do the whole job and humans will still be needed to oversee it.
In "The Human Edge: How AI Can Assist But Never Replace" he argues:
AI can process immense datasets and identify correlations far beyond human capacity, but it lacks the context to understand what those correlations mean. It cannot ask, “Why does this matter?” or “Should this be done?” These are uniquely human questions that reflect our morality, empathy, and sense of purpose. AI excels at pattern recognition but stumbles when asked to innovate beyond existing paradigms. For instance, AI can compose music in the style of Mozart because it analyzes patterns in his work. But could it invent jazz? Could it disrupt an entire genre? Humans possess the unique ability to innovate—to break patterns, challenge norms, and envision entirely new possibilities.
Some would make the point that Losey is accurately describing the limitations of AI -- as it exists today. But emergence (see Part 2) makes it quite plausible that AI will attain higher-level reasoning skills in the future. At the same time, we also discussed in detail why that outcome certainly isn't guaranteed (again, see Part 2).
So we will have to wait and see what happens.
Losey also points out that most jobs require a human to take responsibility for what is said or done. If lawyers get something wrong, they can be sanctioned or disbarred. If AI errs, it can't be held accountable in the same way.
I'll add that Anthropic's own chart (shown again below) backs up this concept.
Notably, it shows that even the most AI-penetrable industries (like computer programming and office administration) contain areas beyond even the theoretical capabilities of AI -- an AI "no-go" zone. See the yellow highlighted areas below:

So this suggests that even in the most disrupted fields, humans will still be needed.
A third way?
Others suggest a third outcome: something in between the mass job disruption forecast by the creators of AI and the minimal disruption predicted by other professionals. The AI backers know that whether we believe the hype or the doom-and-gloom, believing in AI's disruptive power buoys their stock prices and the significant investments riding on them. And the professionals expressing doubt have a vested interest in staying employed. We will see what happens.
Driving responsibly
Many of the people quoted above make the point that AI's great power comes with great responsibility, especially during this transition (or "adolescent" period as Amodei calls it). My impression is that AI is already incredibly impressive, but that doesn't mean it's ready to drive down the highway alone.
According to Losey:
"The greatest short-term danger is... over-delegation. It is professionals putting systems on autopilot. It is institutions adopting tools without supervision, audit trails, and verification. The solution is not panic. It is disciplined integration. Trust but verify."
In other words, if AI is in its teenage years, we humans still need to be very careful parents. We may choose to let it take over a lot of work, but we should do it slowly and carefully and only after a lot of practice runs.
"I'm sorry Dave. I'm afraid I can't do that."
No discussion of AI is complete without discussing an uncomfortable subject.
Some fear AI will ultimately take over humanity or wipe it out.
And I wish I could say this is just a fantastical science-fiction trope. But it's not.
In 2023, the CEOs of leading AI companies—including OpenAI, Google DeepMind, and Anthropic—signed a statement warning that:
“Mitigating the risk of extinction from AI should be a global priority…”
In other words, even the people building these systems (who stand to profit the most from them) acknowledge that extreme outcomes—while uncertain—can't be ruled out.
The Mustafa Suleyman book discussed above -- which Bill Gates says is his favorite book on AI -- goes further. In “The Coming Wave,” Suleyman argues that keeping AI in check will require massive effort and changing a lot of what we do now at many levels of society. And he believes it will require walking a very "narrow path" where missteps could easily lead to dystopian outcomes.
Okay, so why don't we just shut the whole thing down?
Once people understand that AI might threaten humanity, they logically ask:
“Why don't we all just agree to stop developing AI?”
Again, Mustafa Suleyman's book “The Coming Wave” answers this question in detail.
Time and time again, those who failed to adopt transformative technologies have found themselves at an enormous disadvantage against those who did. Many villages, cities, kingdoms, countries and empires have ended up on the losing end of the blade (after the invention of the knife and sword), of cannon fire (after the invention of fireworks), or of the smart bomb (after the invention of electronic sensors, GPS, etc.). And even if every country says “no” to AI, it only takes one rogue actor to push ahead and do it anyway. So useful technologies have almost always proliferated rather than being voluntarily abandoned by humanity. And if AI actually works, it's safe to assume it will almost certainly be used.
Bottom line on the Uncomfortable Truths of AI
My perspective is that the negative potential of AI is deeply concerning -- and it should concern everyone. I hope that we collectively learn to take the risks seriously, outgrow our technological adolescence, and properly control and/or contain AI.
At the same time -- if any worst-case scenario does happen, then none of my investing choices will end up mattering.
So I've brought up the topic for completeness. But for the purposes of this article series, it makes a lot more sense to be optimistic on this issue and assume we will manage AI successfully.
So with that point of view in mind... let's move to the next question: How does an investor respond to all of this?
Part 3: Investing in an AI-Driven Economy
That's the subject of Part 3, where we'll talk about various investment ideas for navigating an economy powered by AI. Click here for Part 3: How to invest in the AI Economy.
-----------------
Discussing AI and more
Private Investor Club members have been discussing AI in depth. And there's also been detailed discussion on thousands of sponsors and deals (including due diligence and real-life investor experiences).
If you're already a member, click here to discuss further:
If you're not yet a member, then joining is free.
To protect all members and keep the conversations useful and confidential, all applicants are required to verify that they're solely investors (and not investment sponsors, platforms or their affiliates).
Click here to learn more or to join.

