
“My Claude-Code revelation”: How the 2026 update turned a long-time AI skeptic into a believer — and why a major shift in jobs and the economy could now be closer than you think


So far, the AI boom has been an expensive disappointment, involving massive hype, even larger spending and very little to show for it. But the latest release of Claude Code delivered a stunning and dramatic improvement in a surprising and crucial area. The risk of an AI bubble remains real. But if AI continues to improve at this pace, the effects on jobs and the economy could be substantial — and might arrive sooner than many expect.


(Usual disclaimer: I'm just an investor expressing my personal opinion and am not an attorney, accountant nor your financial advisor. Consult your own financial professionals before making any financial decisions. Code of Ethics: To remove conflicts of interest that are rife on other sites, I/we do not accept ANY money from outside sponsors or platforms for ANYTHING. This includes but is not limited to: no money for postings, nor reviews, nor advertising, nor affiliate leads etc. Nor do I/we negotiate special terms for ourselves in the club above what we negotiate for the benefit of members. Info may contain errors so use at your own risk. See Code of Ethics for more info.)



There are many good reasons to feel that AI financing is currently in a precarious, bubble-like state.

AI spends like a sailor on leave…

The dollar amount of investment pouring into artificial intelligence over the past several years is staggering. Estimates suggest that AI datacenter spending alone could reach roughly $400–$450 billion annually. And total AI infrastructure investment by major tech firms is expected to approach $650 billion in 2026.

That is an almost unbelievable figure, equal to roughly 1%–2% of U.S. GDP. Very few industries in history have absorbed that much capital so quickly.

…while earning money like a pauper.

At the same time, the companies leading this spending are not just deeply unprofitable; they are burning cash at levels unprecedented in corporate history. OpenAI (one of the most prominent AI firms) is expecting massive losses from 2023 through 2028 (losing $14 billion in 2026 alone and $44 billion in total) before hoping for a profit in 2029. Its burn rate dwarfs every existing record and easily qualifies as the largest in corporate history:

Anthropic (another prominent AI firm) is also losing billions every year (and as of last November is aiming for profitability by 2028).

 

On top of this, startups often take longer to reach profitability than their early projections suggest. And technology booms have often followed a pattern where optimistic forecasts fail to materialize.

Circle of trust? Or circular firing squad?

Even more concerning is how the current AI boom is being financed. A growing number of deals involve complex circular financial relationships between the same small group of companies. These arrangements often involve cloud providers, chip manufacturers, and AI startups investing in each other while simultaneously serving as each other’s largest customers.

 

Bloomberg described the phenomenon in its March 11, 2026 report on “Circular Deals in the AI Boom”:


“The result is an increasingly interconnected web of dependencies between technology manufacturers and AI startups. The risk with these ‘circular’ deals is that they can create skewed incentives that may lead to bad decision making and magnify losses if demand for AI fails to match today’s lofty expectations. The stakes are high as the AI boom has sucked in gargantuan sums of money from debt and equity markets and buoyed multiple industries.” (bolding added)

Et Tu, Stock Market?

This also raises serious questions about whether today’s unusually generous stock market valuations are out of sync with reality.

Since 2020, most of the wealth created in the S&P 500 has come from just seven companies — the so-called “Magnificent 7,” all of which are deeply tied to the AI boom.

And periods of extremely narrow market leadership like this are historically unusual. Similar episodes occurred during the “Nifty Fifty” era of the early 1970s and the internet boom of the late 1990s. And in both cases, those periods were eventually followed by market crashes of roughly 50%.

 

That doesn’t mean a crash is inevitable. But it does mean the market has become unusually dependent on a very small number of companies. And that kind of concentration is rarely healthy.

FOMO (“Fear of Missing Out”) leads to billion-dollar corporate AI disappointment

Worse still is what AI has actually delivered so far.


Many companies live in fear of falling behind. So together they spent billions on AI in 2025, with the average large company investing a whopping $110 million. So what do companies have to show for this unprecedented spending spree? Studies have consistently found the answer is “very little”:

  1. In October 2025, Boston Consulting Group reported that 95% of companies had derived no meaningful value from AI initiatives.

  2. Around the same time, a Gartner survey found that 72% of firms had achieved no positive return on investment from their AI deployments.

  3. And in February 2026, a National Bureau of Economic Research survey of roughly 6,000 executives found that more than 80% of companies reported no productivity gains from AI despite billions of dollars in spending.

 


“Tell me about it”

Many of us have had similar, disappointing personal experiences with AI.

Anyone who has used AI extensively has likely seen it confidently deliver answers that are completely wrong. They've seen it fail at tasks as basic as elementary school math. And they've seen it frequently produce explanations that sound convincing, but collapse under closer scrutiny.

These experiences became so common that this humorous meme became very popular on social media:

This perfectly expresses the frustration that so many of us have felt when interacting with AI tools.

"AI can’t program"

Before my “second career” as an entrepreneur, I was a professional software developer. I spent decades honing my craft and eventually advanced into senior developer and project leadership roles. Later, when I started my own companies, I often served as the IT manager as well.

 

About six months ago, I decided to test the AI hype myself. And I used it to build a Monte Carlo simulation program to analyze our personal finances.
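For readers unfamiliar with the technique: a Monte Carlo simulation stress-tests a financial plan by running thousands of randomized "what if" market histories. The author's actual program isn't shown, so here is a minimal sketch of the core idea; the return, volatility, and withdrawal figures below are my illustrative assumptions, not advice and not the author's numbers:

```python
import random

def simulate_retirement(start_balance, annual_spend, years, n_paths=10_000,
                        mean_return=0.07, volatility=0.15, seed=42):
    """Estimate the probability a portfolio survives `years` of withdrawals.

    Each path draws a random market return per year (normal distribution here
    for simplicity; real tools often use historical or fat-tailed samples).
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_paths):
        balance = start_balance
        ok = True
        for _ in range(years):
            balance -= annual_spend            # annual withdrawal first
            if balance <= 0:                   # ran out of money this path
                ok = False
                break
            balance *= 1 + rng.gauss(mean_return, volatility)  # random market year
        survived += ok
    return survived / n_paths

# Hypothetical example: $1M portfolio, $40k/year spending, 30-year horizon
rate = simulate_retirement(1_000_000, 40_000, 30)
print(f"Estimated success rate: {rate:.1%}")
```

The useful output isn't a single forecast but a distribution: how often the plan survives across thousands of simulated market sequences.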

The experience was excruciatingly frustrating.

On one hand, the AI occasionally did things that were genuinely impressive. For example, it could refactor sections of code — something that is error-prone for human developers and can take weeks — in just a few seconds.

But most of the time, it felt like working with a genius who had the memory of a goldfish.

It constantly forgot what it had just done a few minutes earlier. I had to re-upload the same files hundreds of times. It would forget important instructions and introduce unnecessary mistakes that broke working code.

It struggled to maintain architectural consistency. It ignored established protocols. And once the codebase became moderately large, it had major difficulty extending or debugging the program without creating new problems.

From the perspective of a professional programmer, all of this was beyond unacceptable.

After a few months of struggling with workaround after workaround, I finally gave up.

And once again, AI had fallen far short of the enormous hype surrounding it.


 

Writing on the wall

 

So as recently as December 2025, my view was that AI would likely follow the same path as many other overhyped technologies that ran into unexpected limitations.

 

History is full of similar examples:


  1. In 2016, Elon Musk said Tesla would demonstrate full autonomy with a coast-to-coast drive:

    “We'll be able to do a demonstration drive of full autonomy all the way from LA to New York… dropping you off in Times Square… and then have the car go and park itself by the end of next year.” (i.e. 2017)

    That demonstration never occurred. And nearly a decade later, fully autonomous consumer vehicles still don’t exist.

  2. Since the 1930s, nuclear fusion has famously been described as perpetually out of reach, “just a few decades away.” As one widely repeated saying among physicists puts it: “Fusion is the energy of the future — and always will be.” Even today, commercial fusion power plants remain years or decades from reality.

And by late 2025, I felt that the AI boom would end in a similar way, and that a stock market crash was inevitable.


Revelation: 2026 Claude Code changes everything

 

A few weeks ago, I tried the latest version of Claude Code. An acquaintance said it had “gotten a lot better.” So I sighed and tried it… yet again.

This time, the results were so surprising, they rocked my world.

The first test I gave it was simple. I gave it my partially completed Monte Carlo program — the one the earlier AI tools had struggled with for months. And I asked it to finish it.

To my great surprise, it fully completed it within a few hours.

Even more surprising, the resulting code looked very clean. No messy spaghetti code, unlike what had accumulated during earlier attempts. The architecture was consistent. And the program worked.

Also, Claude Code didn’t repeatedly forget the files or instructions I had given it. The constant “goldfish memory” problem had largely disappeared.

And this time, the result actually looked usable.

Okay. So I decided to test it again, on something a lot more difficult.

 

In 2024, I hired a developer to build a custom program to track airplane flights (to help us with potentially purchasing a lot). It was fairly complex because it involved tying into multiple APIs from different companies (Google maps, airplane tracking data, etc). I found a great programmer online and he finished it in just six weeks, costing me about $3,500. That price was a bargain because he lived overseas. And domestic development would have cost me a lot more. So in the end, I was very happy with the developer, the quality of the work and the price.

Now I wondered, how would Claude Code do with this challenge?

To make the comparison fair, I gave Claude the exact same requirements and simply let it go to work. It asked me some questions and I answered them.

Well, Claude created the entire program in about five hours. And it cost roughly $0.69 (calculating the cost of 5 hours out of my $100 monthly Claude Code subscription).

And the quality was remarkable.


  1. Claude required far less detailed instruction than the human developer had needed. It quickly figured out complex API integrations and even identified an error in the API documentation that I myself had not noticed.

  2. It also produced a more polished user interface than the original version.

  3. And it added several useful features I had not thought to request (and never would have requested from a human developer because of the cost).


And it worked great.

Map view

Flight view


So I was completely blown away.


And I could clearly see a fairly easy path to massive improvements on this as well. The only reason it took as long as 5 hours is that Claude couldn’t see my desktop (where I ran the code). So I spent that time as an intermediary between the AI and my computer — running the code, capturing errors, and feeding them back to Claude.

Once AI tools can directly view and interact with a user’s desktop — which is something already being worked on (see conclusion of article) — the debugging process will become automated too. And if that happens, the same project could probably be completed in something closer to 15-30 minutes. That would imply a cost closer to three to seven cents.
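The cost figures above are simple proration, which I'll make explicit (my assumption is that the flat monthly fee is spread over roughly 720 wall-clock hours in a month):

```python
MONTHLY_FEE = 100.0          # $100/month Claude Code subscription
HOURS_PER_MONTH = 30 * 24    # ~720 hours: prorating a flat fee by wall-clock time

def prorated_cost(hours):
    """Share of the flat monthly fee attributable to `hours` of use."""
    return MONTHLY_FEE * hours / HOURS_PER_MONTH

print(f"5 hours: ${prorated_cost(5):.2f}")     # the ~$0.69 project
print(f"15 min:  ${prorated_cost(0.25):.3f}")  # roughly three cents
print(f"30 min:  ${prorated_cost(0.5):.3f}")   # roughly seven cents
```

This understates nothing and overstates nothing only if usage fills the month evenly; the point is the order of magnitude, not the exact cents.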

 

This was just astounding.

Remember, programming is something I spent decades learning. It took years of experience to develop the skills and procedures to design reliable software systems. And yes, I took pride in how I was able to do this much better than others (and it allowed me to make a good living for many years).

Yet, here was a machine performing the same work almost as well as I could have done — but doing it dramatically faster and at virtually zero cost.

So on one hand it was a bit shocking and even sad. But it also filled me with wonder at the new world of exciting possibilities this creates for entrepreneurs and for everyone who wants or needs to build software.

And if the improvement continues at this pace, it won’t be long before the code produced by AI matches that made by the very best human developers.


The revolution isn’t coming. It’s already here.

What does this mean?

Almost overnight, AI has become dramatically better at performing tasks traditionally associated with white-collar knowledge work. If you're not seeing it, it's probably because you're using one of the free versions built into search engines and browser tools, and these free public versions are a good year behind the paid ones.

Anthropic has published charts illustrating how it claims AI capability is expanding across different categories of work:

The red area represents tasks AI can perform today, and the blue area is what it could theoretically do. Everything else, in white, is beyond current theoretical capabilities. This is, I imagine, in part because some things require physical labor or face-to-face service. And while this chart should be taken with a grain of salt (it comes from an AI company), the result looks at least roughly consistent with what I'm seeing when playing with the latest models. In other words: AI hasn't only gotten a lot better at programming, but also at mathematics, bookkeeping, certain forms of legal analysis, and other subjects. And there's a lot of room for potential growth.



The Rosetta Stone to AI Growth

There's a reason AI companies targeted computer programming as an early milestone.  

Once AI can reliably write and improve its own software, it can iterate much faster. And this will allow much faster progress across the other areas (and allows the blue and red areas on the chart above to spread more quickly). So coding is the key that unlocks faster improvements everywhere else. And if AI succeeds, it's plausible we could also see some amazing technologies, including breakthrough new medicines, extensions of our lifetimes to points I had not thought possible, and much more.

At the same time, the implications for existing jobs could also be profoundly disturbing. Many white-collar professions rely heavily on the types of cognitive tasks AI is beginning to automate. If the technology continues to improve, there could be job disruption across multiple ranges of professions.


“Don’t worry, be happy”

 

Of course, many people respond to this concern with a familiar reassurance:

They argue that every previous technological revolution eventually created more jobs than it destroyed. Workers displaced from one industry eventually moved into new industries, and overall economic prosperity increased. So: don’t worry and it’ll all be great.

And there is some truth in that argument.

Economists often point to several historical examples where technological revolutions initially destroyed jobs, but ultimately created new industries and higher overall prosperity:


  1. Agricultural Mechanization: Around 1900, roughly 40% of Americans worked in agriculture. Today it is less than 2%, largely because tractors, combines, and modern farming techniques replaced human labor. Yet the workers who left farms moved into other jobs in manufacturing, construction, services, healthcare, and education — helping build the modern economy.

  2. The First and Second Industrial Revolutions: At the turn of the 19th century, and again at the turn of the 20th, mechanization displaced large numbers of artisans and manual laborers in industries such as textiles, and then mass production streamlined jobs even further. But industrialization ultimately created vast numbers of new jobs: first for machine operators, factory workers, and the builders of steamships and railroads, and then additional positions in management, bookkeeping, engineering, and more.

  3. The Computer Revolution: Beginning in the 1980s, computers automated large numbers of clerical jobs such as typists, filing clerks, and switchboard operators. At the same time, entirely new professions emerged — including software developers, IT specialists, cybersecurity experts, and data analysts.

  4. The Internet Economy: The rise of the internet reduced/eliminated some traditional jobs such as travel agents, newspaper classified staff, and video rental workers. But it also created massive new industries including e-commerce, digital advertising, cloud computing, app development, and social media platforms.

At the same time, there could be reasons to worry.

  1. The curse of “May you live in interesting times”: Technological revolutions rarely feel comfortable while they’re happening. The transition periods for the two Industrial Revolutions lasted decades and produced enormous hardship for many workers. And the upheaval was so severe that it triggered widespread protests and even violent uprisings such as the Luddite revolts.

  2. Jobs don’t always come roaring back: A more recent example is the automation of U.S. manufacturing between 2000 and 2010. During that decade, the United States lost roughly 5.7 million manufacturing jobs — about one-third of the workforce in that sector. Some of those jobs disappeared due to offshoring. But research suggests that roughly 30% to 50% of the losses were directly caused by automation. Even today, more than two decades later, many of those workers have never regained comparable wages.


    And that disruption continues to influence social and political tensions today.

  3. Speed: Technological change has been accelerating over time.  Someone born today sees more improvements in a few years than all of humankind saw for a thousand years in our early history. So the AI change could be more rapid than past revolutions. And that may make the transition more disruptive and require a different kind of response.

  4. Breadth: AI has the potential to disrupt a much broader set of occupations than previous technological revolutions. This could make it harder for workers to adapt (versus narrower disruptions).


 

Is this inevitable?

 

No.

The promise of AI could fail to be fulfilled (or end up taking much longer to deliver) for a number of reasons:

Unexpected technical problems

The human brain is complicated and we don't understand very much of how it works. So designing a computer to replicate human intelligence is delving into a lot of unexplored territory. And there easily could be aspects of intelligence that end up being a lot more difficult to recreate than AI companies have yet realized. If so, this could push it off for years, decades or longer.

Overreliance on "emergence" comes back to bite researchers

We still don't fully understand why large AI models develop complex abilities like advanced programming in the first place.


These skills weren't explicitly programmed into them by humans. Instead, the AI models were merely trained on massive datasets that included large amounts of human-created materials. And the capabilities just "emerged" and improved on their own as researchers scaled up the size of the models, the training data, and the amount of compute used during training. The larger the model, the more such capabilities emerged.

Even though researchers hadn't specifically engineered the AI for these things, the models developed impressive higher reasoning skills on their own, including: improved multi-step reasoning, chain-of-thought explanations, translation between languages without task-specific training, and limited tool usage. And the AI's improvement in general reasoning is very useful because this skill transfers across many domains.

So since we don't fully understand how it all happens in the first place, it's not unreasonable to imagine that there may be upcoming limitations that we can't yet see. The counterargument by AI creators is that researchers have been extremely creative and have overcome every obstacle so far (for about a decade). And going forward, they're likely to have the increased brainpower of the AI itself on the job, helping build it further. So we will see.


What if AI scaling stops working?

AI's scaling behavior (i.e., the result from building it bigger, adding more computer power, training data and parameters) has been one of its most amazing properties and biggest strengths. And it's also a potential weakness and reason it might fail in the future. Back in 2020, AI researchers discovered something unusual about their AI models. Usually when things are scaled up, improvements hit a plateau. But they found that when they scaled AI up, not only did its performance consistently improve, but even more surprising... it did so in a predictable way.   

This is an incredibly powerful dynamic. It means that models automatically kept getting better as they got bigger. As mentioned above, researchers had already noticed that new abilities appeared as size increased. But this showed that there was a real correlation, and AI was gaining these skills -- reasoning ability, coding ability, language fluency, factual recall, and multi-step planning -- simply through larger scale.
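The "predictable way" refers to empirical power laws: as training compute grows, test loss falls along a smooth curve you can extrapolate in advance. A sketch of the shape of such a law follows; the constants below (`l0`, `c0`, `alpha`) are illustrative placeholders chosen for demonstration, not published fitted values:

```python
def predicted_loss(compute_flops, c0=3e23, l0=2.5, alpha=0.05):
    """Power-law scaling sketch: loss ~ l0 * (c0 / compute)^alpha.

    Smaller loss = better model. The exponent alpha controls how much
    improvement each 10x of compute buys; all constants here are
    illustrative assumptions, not real fits.
    """
    return l0 * (c0 / compute_flops) ** alpha

# Each ~100x jump in compute shaves a predictable slice off the loss
for flops in (3e23, 1e25, 1e27):
    print(f"{flops:.0e} FLOPs -> predicted loss {predicted_loss(flops):.3f}")
```

The practical consequence is what made the buildout investable: labs could estimate, before spending billions, roughly how much better the next model would be.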



Model                        | Estimated Training Compute | Breakthrough
GPT-3 (around 2020)          | ~3×10²³ FLOPs              | General-purpose language tasks
GPT-4 (around 2023, est.)    | ~10²⁵ FLOPs                | Reasoning + coding
Next-gen frontier models     | likely far higher          | Agents, advanced coding, multimodal reasoning

So AI companies have used this dynamic to justify those massive data center buildouts (even while hugely unprofitable and with break-even years away at best). The problem is... not everyone agrees scaling will continue indefinitely. Skeptics argue that:


  1. Internet training data may run out

  2. Returns could diminish

  3. Advanced reasoning might require new architectures


And even the companies themselves, while they're pushing scaling, acknowledge uncertainty about how far the approach can go. But their current bet is simple: if scaling worked to produce the last decade of breakthroughs, it may produce the next ones too. A few weeks ago, Anthropic CEO Dario Amodei claimed that the company does not see scaling “hitting a wall.” And he said "I think this year is going to have a radical acceleration that surprises everyone."

Too Difficult to Scale

Another possibility is that scaling could continue to work in principle, but that AI progress slows because it becomes too difficult or expensive to scale in practice. The required data centers may not grow large enough, or plentiful enough, fast enough. Chip supply or manufacturing capacity could become a bottleneck. Or other infrastructure challenges, such as power availability, networking limits, or cooling requirements, could make it difficult to continue increasing training compute at the pace seen over the past decade. So scaling could still work technically, but become economically impractical.


Financing Bubble Bursting

As we discussed, the foundations of AI financing aren't the most solid. Something could go wrong here, as well.


The strangeness of exponential growth

But if success does happen, then things might get very weird very quickly.

Our brains evolved in the natural world to evade predators, with virtually everything around us happening at a linear rate. So we understand that rate very well.

Say you see a lion hiding 8 feet away on the savanna. The next second he jumps at you and is only 6 feet away. Your brain is very good at knowing he'll be 4 feet away in one second, and then 2 feet away in another. And you can use that intuition to dodge or escape.

But when things grow exponentially, that takes us by surprise.


  1. Lily pads: A good example is lily pads growing in a lake, where the number of lily pads doubles each day. At this exponential rate, and starting with just one single lily, a pond surface that takes 30 days to be covered will nonetheless be only a quarter full on day 28 and only half full on day 29. The sudden jump from “barely noticeable” growth to total coverage is difficult for us to intuitively grasp and anticipate.


  2. Human Genome Project: This had a deadline of 14 years.  At the halfway mark (year 7), only a small percent had been sequenced. Critics pounced and argued it would take centuries. However the speed of genome sequencing was increasing exponentially during the project. (And that has continued… with costs collapsing from roughly $100 million in 2001 to about $1,000 today.) As a result, the project finished on schedule. And futurist Ray Kurzweil later cited the episode as a classic example of exponential technological progress.
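The lily-pad arithmetic is easy to verify: with daily doubling, the fraction of the pond covered on day d is 2^(d − 30) when the pond is full on day 30.

```python
def pond_fraction(day, full_day=30):
    """Fraction of the pond covered on `day`, doubling daily, full on `full_day`."""
    return 2 ** (day - full_day)

for day in (25, 28, 29, 30):
    print(f"Day {day}: {pond_fraction(day):.1%} covered")
# Day 25 is ~3% covered — five days before total coverage, the growth
# still looks "barely noticeable", which is exactly the trap.
```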

 

In my opinion, the stunningly rapid improvement of Claude Code suggests we are already in a visibly accelerating phase of AI development (and exponential growth). And if AI can improve its own software, that acceleration could compound further.

Here’s what some AI researchers are projecting (and a quick warning: these are entirely speculation and range into dystopia and utopia):

 

  1. Citrini Research posted a thought experiment about what could happen with AI.  And while not intended to be a prediction, it envisioned significant job market disruption occurring within two years.

  2. Mustafa Suleyman (co-founder of DeepMind, one of the first major AI labs) wrote a fantastic and prophetic book about AI back in 2023: “The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma.” It’s still very topical, and I feel it should be required reading for everyone. Suleyman is now CEO of Microsoft AI, and in a recent Fortune Magazine interview he predicted that “most, if not all” white-collar tasks will be automated by AI within the next 12 to 18 months (i.e., by the end of 2027). Some other AI leaders mentioned in the same Fortune article have suggested that AI will replace half of entry-level white-collar jobs, or half of all white-collar jobs.

  3. Dario Amodei is the CEO of Anthropic and publishes occasional essays and articles. I feel all of them are “must reads.” He projects that: “By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering. It will be able to use all the senses and interfaces of a human working virtually—text, audio, video, mouse, keyboard control and internet access—to complete complex tasks that would take people months or years, such as designing new weapons or curing diseases. Imagine a country of geniuses contained in a data center.” (Bolding added.)


To the average person, these projections sound almost absurd.


But you may notice both Suleyman and Amodei said "tasks" and not "jobs." This is an important distinction. Lawyer Ralph Losey, who specializes in AI and quantum technology and who advocates for responsible, supervised use of AI for legal tasks, writes that he believes most jobs will not be fully turned over to AI, at least in the field of law. And instead AI will always be used as just another tool in their toolbox. He points out that these jobs require a human to take responsibility for what is said. If lawyers get something wrong, they can be sanctioned or disbarred; if AI errs, it cannot.


Maybe the truth will be something in between what the creators of AI forecast (buoying AI's stock) and what lawyers in the AI field believe will happen in practice (spoken with a vested interest in staying employed).

History does show that humans consistently underestimate exponential technological progress. And even if progress continues at a slower, linear pace, it's difficult to imagine AI not having a major economic impact within a longer time frame, say the next five to ten years.


All of the people quoted above make the point that AI's great power comes with great responsibility. Especially during this transition (or "adolescent" period as Amodei calls it), AI is incredibly impressive, but not quite ready to drive down the highway alone:


According to Losey: "The greatest short-term danger is... over-delegation. It is professionals putting systems on autopilot. It is institutions adopting tools without supervision, audit trails, and verification. The solution is not panic. It is disciplined integration. Trust but verify."


In other words, if AI is in its teenage years, we humans still need to be very careful parents. We may be letting it replace a lot of jobs, but we should do it slowly and carefully and only after a lot of practice runs.


 

Will we choose to just stop it?


Many people who fear the changes that a successful AI would bring have been asking: “Why doesn’t society simply decide to stop developing AI?”

 

The book “The Coming Wave” by Mustafa Suleyman, mentioned above, explores this question in detail.

Historically, technologies that provide meaningful advantages have almost always spread once they're invented. Also, there are powerful geopolitical incentives driving adoption.

Countries that failed to adopt transformative technologies have often found themselves at a disadvantage against those that do. And many villages, cities, kingdoms, countries and empires have found themselves on the losing end of the blade (after the invention of the knife and sword), the cannon fire (after the invention of fireworks) or the smart bomb (after the invention of electronic sensors and GPS etc.) And even if every country says “no” to AI… it only takes one rogue to push it forward and do it anyway.  So useful technologies have almost always proliferated rather than being voluntarily abandoned. And if AI actually works, then it will probably be used.


“This isn’t your daddy’s AI”

So that's why I’ve gone from being an AI skeptic to an AI believer.

 

And I now believe AI may become one of the most economically consequential technologies of the next several years. If so, it will likely also have an effect on investing strategy.


If you looked at AI as recently as six months ago and felt it was awful – I highly recommend that you take the time to download the latest models and see what's been happening more recently.

I've also found different models can behave very differently than others. For example, I've found Claude Code to be particularly strong at programming tasks, but awful for research. And ChatGPT performs better than Claude with research, but its coding isn't so great. So it's a mistake to try just one model and conclude "AI can't do that"... because another model just might already be doing it.

For people who want to follow the industry closely, I also recommend subscribing to “The Information,” a Silicon Valley news service. The subscription is expensive — roughly $749 per year. But their reporting on the AI industry is consistently deeper and earlier than most other publications.

 


Investing to catch the AI wave

Let's say AI does end up working. What would an investment strategy look like? I'll be fleshing this out as this series progresses (and welcoming your ideas and input). For now, here's what I'm exploring:

  1. AI itself:

    1. Public stocks: The "Magnificent Seven" (Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta Platforms, Tesla)

    2. Infrastructure: Semiconductors and data centers

    3. Venture capital: Stakes in AI startups

  2. Productivity enablers (the companies helping others use AI well)

    Not the model builders, but the businesses that help companies actually plug AI into day-to-day operations safely and effectively. If firms trim even 5–10% of white-collar roles, they’ll need better systems to manage the automation, data, and security.

    1. Workflow automation (including implementing AI agents)

    2. Cybersecurity (AIs such as ClawdBot are already increasing attack risk…)

  3. AI labor-scarce / physical sectors

    If white-collar roles get squeezed, money and talent could shift toward areas AI struggles with: hands-on, physical work (at least until capable robots arrive, which seems much further away).

    1. Skilled trades

    2. Infrastructure maintenance

  4. Education & retraining

    When technology reshapes jobs, people need new skills. That tends to create demand for practical training.

    1. Vocational training platforms

    2. Trade schools

    3. Certification programs, Apprenticeship models

  5. Legal / compliance / governance

    Every big technology shift creates new rules and oversight. AI likely won’t be different:

    1. RegTech

    2. Compliance service providers

    3. Audit infrastructure

    4. AI liability insurance (could see enormous growth)

    If AI becomes more autonomous, oversight likely becomes more important.


  6. Energy beyond data centers

    Everyone is already talking about data centers. But if AI keeps growing, the real bottleneck could become power and infrastructure.

    1. Grid upgrades

    2. Transformer manufacturing

    3. Nuclear / small modular reactors

    4. Natural gas infrastructure

    5. etc.

    Power could quietly become one of the most important parts of the AI story (and a "picks and shovels" play).


  7. Dislocated white-collar asset plays

    Periods of change usually create both winners and stressed sellers. If certain sectors struggle, there may be opportunity for those with appetite for more risk:

    1. Buyouts of businesses slow to adopt AI

    2. Consolidation in service sectors

  8. Human-centric premium brands

    Sometimes tech growth increases demand for things that feel "human". If AI makes things more automated and generic, people could end up valuing the "human touch" more.

    1. Luxury goods

    2. High-touch services

    3. Experiences

This is just a start.

Please add your comments and suggestions in the Private Investor Club forum discussion on AI investing. (If you're not a member, you can join for free and just have to confirm you have no business connection with any sponsors or their affiliates). And I'll be fleshing out this list as this article series progresses.

For now, let's dive into one that looks like low-hanging fruit and is particularly interesting: "#3 AI labor-scarce / physical sectors".

Rise of the "AI-Resistant investment"


The last major technology revolution was the internet. And that created Amazon and caused the retail apocalypse.

Sears, Toys R Us and other retail chains that had been around for decades were wiped out.

At first there was a huge panic and a flight from all retail real estate, because many people thought it would completely disappear. But it quickly became clear that not all retail could be Amazoned. By 2017-2019, the big focus in retail was on "Amazon-resistant" properties. And that strategy ultimately performed very well over the following years for many investors:



So I thought...what would an "AI-resistant" investment fund look like?


Jobs and businesses that are hardest to automate probably share three characteristics:

  1. Physical work in unpredictable environments

  2. Human trust / empathy

  3. Local service delivery


And research often shows that skilled trades, healthcare services, education, and hands-on service jobs are among the least vulnerable to AI automation.


Home services

  • Plumbing

  • HVAC

  • Roofing

  • Electrical

  • Landscaping

Property maintenance

  • Fire safety inspections

  • Elevator maintenance

  • Pest control

  • Parking lot maintenance

Local service providers

  • Childcare centers

  • Assisted living

  • Physical therapy

  • Veterinary clinics

  • Spa/Beauty/Hair Salons

Industrial services

  • Equipment repair

  • Specialty manufacturing

  • Waste handling

Why these work:

  • Local monopolies

  • Skilled labor shortages

  • Hard to automate

  • Hard to scale nationally


These are often called “small boring businesses that print money.”


And if AI companies do succeed, these would be a good place to be. If not, they're still not a bad place to invest. This is also the type of business that search funds traditionally buy, so they're an easy and logical entry point into the strategy.

The micro-cap company strategy

If you've never heard of them, search funds are a niche asset class that has performed very strongly, including in past vintage-year groups containing recessions.


They focus on buying micro-cap companies with strong growth, recurring revenue streams and stable free-cash-flow generation. This puts them somewhere between typical venture capital (VC) and private equity (PE) funds: the target companies are more established and stable than VC-backed startups, but too small for a typical PE fund.


Historical performance has been strong, with 30%+ gross IRR and 5x+ gross ROI over multiple decades and cycles since 1984. And they've had positive performance in multiple vintage-year groups containing a recession.
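As a quick sanity check (this is my own back-of-the-envelope arithmetic, not a figure from the search-fund studies), the two headline numbers are roughly consistent with each other, since a gross multiple and a gross IRR together imply a holding period:

```python
import math

# If money compounds at a 30% gross IRR, how long does it take
# to reach a 5x gross multiple? Solve 1.30 ** t = 5 for t.
irr = 0.30       # 30%+ gross IRR (headline figure)
multiple = 5.0   # 5x+ gross ROI (headline figure)

t = math.log(multiple) / math.log(1 + irr)
print(f"Implied holding period: {t:.1f} years")
```

This works out to roughly six years, which is in line with the multi-year hold periods these funds are generally understood to target.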


A search fund sponsor creates a fund that invests in multiple search funds (for added diversification).


And in my recent portfolio update, I described how I've invested in three new search fund sponsors since last time (bringing my total to six). Performance has been good, and one even produced a massive early distribution (a nice surprise).

So in the next week, I'm going to approach the top search fund sponsors and propose they create an "AI-Resistant" search-fund for Private Investor Club members (and others who are probably also wanting this).


You can discuss the topic of AI-Resistant investments in the Private Investor Club forum discussion on AI investing. (If you're not a member, you can join for free and just have to confirm you have no business connection with any sponsors or their affiliates).


What’s up next


I am in the middle of a deep dive on AI (and this is only the beginning). I’ll share what I learn in further articles.

The next article will probably focus on the revolutionary new development of desktop AI agents. These systems move beyond chatting with an AI in the cloud and instead operate directly on your computer to perform real tasks. One example is OpenClaw, an open-source project created by an independent developer.

In just three weeks, it became the most downloaded open-source project in history, a record that took the former champion (Linux) three decades to achieve.


As Jensen Huang (founder and CEO of NVIDIA) said a few days ago:

“Mac and Windows are the operating systems for the personal computer. OpenClaw is the operating system for personal AI. This is ... the beginning of a new renaissance in software.”

So there are lots of interesting and exciting developments to explore.

Conclusion

For the past several years, I’ve been skeptical of the AI hype.

And a lot of that skepticism is still justified. The industry has spent extraordinary amounts of money, most companies have seen very little real-world value, and many of the predictions being made today sound almost absurd.

But my experience with Claude Code changed my perspective.

For the first time, I saw a clear example of AI performing complex professional work at a level that would have seemed entirely impossible only a few months ago. And it did so far faster, cheaper, and better than any human developer could have managed.

If that improvement was a one-time leap, then AI may still end up following the path of many past overhyped technologies.

But if it represents an exponential improvement curve, similar to what we saw with genome sequencing, computing power or the internet, then the implications could be enormous.

And if exponential growth takes hold, the future could be here sooner than we expect.


 

Discussing AI and more


Private Investor Club members have been discussing AI in depth.  And there's also been detailed discussion on thousands of sponsors and deals (including due diligence and real-life investor experiences).

  • If you're already a member, click here to discuss further.

  • If you're not yet a member, then joining is free.

    • To protect all members and keep the conversations useful and confidential, all applicants are required to verify that they're solely investors (and not investment sponsors, platforms or their affiliates)

    • Click here to learn more or to join.

 

 


 

 

 

 
 
About Ian Ippolito

Ian Ippolito is an investor and serial entrepreneur. He has been interviewed by the Wall Street Journal, Business Week, Forbes, TIME, Fast Company, TechCrunch, CBS News, FOX News, USA Today, Bloomberg News, Realtor.com, CoStar News, Curbed and more.

 

Ian was impressed by the potential of real estate crowdfunding, but frustrated by the lack of quality site reviews and investment analysis. He created The Real Estate Crowdfunding Review to fill that gap.
