Welcome to this week’s edition of Cape May Wealth Weekly. If you’re new here, subscribe to ensure you receive my next piece in your inbox. If you want to read more of my posts, check out my archive.
Ever written a blog post that crashed the stock market?
If you’re reading any financial publication on a regular basis, chances are that The 2028 Global Intelligence Crisis (or an article mentioning it) made it across your desk. In the article, Citrini Research outlines a scenario in which the AI bull case of a rapid increase in agent capabilities turns out to be true. But rather than bringing the economy to new heights, it turns out to have fatal consequences for the wider economy. The article rattled the market, with some companies mentioned in it (such as DoorDash, American Express, or Uber) falling by as much as 10%.
More than one client sent the article to us, asking for our view on the potential impact of AI - and, of course, on their portfolios. But even before the article, few things have been more ‘top of mind’ than AI agents. One client shared how he had planned to hire ten developers for his new company, but made it work with two AI-savvy ones and a number of agents. And at an investor lunch that we hosted last Friday in Stuttgart, more than one family officer - typically not the most tech-savvy individuals, even if they invest in tech - lauded the substantial productivity gains from the latest version of Anthropic’s Claude.
So in today’s newsletter, let’s talk about the potential impact of AI on investor portfolios. First, we will take a look at the Citrini article, sharing our thoughts on what we think is realistic, and where we disagree. Secondly, we will consider how investors should assess AI’s impact on their portfolios, and how they should approach risk and asset-class positioning.
Let’s dive in!
If you’ve already read the Citrini article, you can skip from here…
The Global Intelligence Crisis: A Recap
Let’s begin by taking a look at the original article. If you’ve already read it, or as one client said, ‘had AI summarize it for you’, feel free to skip to the end of this section. I’ve still proudly summarized this for you manually - no AI involved.
The Citrini article is not a research report, but rather a hypothetical Macro Memo from the future (June 2028), detailing the progression and fallout of the Global Intelligence Crisis. It is broken down into various sections, beginning with a hypothetical job numbers print from June 2028, before looking back on how this hypothetical scenario came to be, and what more to expect.
How It Started
In 2026, agentic coding tools start to take off. Enterprise procurement managers wonder why they should accept yet another 5% price increase for their SaaS products, instead wondering if they can just ‘vibecode it’ themselves - forcing enterprise software firms to accept substantial discounts on their renewal rates.
SaaS doesn’t go away, but as agentic coding costs see a race to the bottom, more and more competitors for any sort of software solution pop up. As companies lay off employees due to AI-driven efficiency gains, the number of ‘seat-based’ software licenses drops. To quote the article, the same AI-driven headcount reductions that were boosting margins at their customers were mechanically destroying their own revenue base.
Software firms do the only thing they can do and drive AI adoption further, leading to more job cuts. Individually, such cuts make sense, but collectively, they result in a loop: headcount losses at software firms’ customers drive down enterprise software revenues, requiring further job cuts. And so on.
When Friction Went to Zero
By 2027, LLM usage has become the default even among regular individuals. Every phone and every computer has an agent running locally.
As everything becomes agent-driven, the ‘rent-extraction layer’ of intermediation starts to go away. Agents don’t care about preferences and design - they care about radically optimizing their owner’s life, for example by independently cancelling unused subscriptions or renegotiating them (think consumer apps or insurance policies), or by comparing prices directly across product providers rather than on aggregation websites (think travel booking or even food delivery).
Eventually, agents go beyond optimizing interactions with companies and even optimize how they interact with each other - for example by moving from traditional payment networks (2.5% merchant fees) to crypto (0.01% transaction fees).
In other terms: Any business relying in any way on friction rather than actually adding value didn’t have a moat anymore.
From Sector Risk to Systemic Risk
Initially, this AI impact, including layoffs, was seen as a sector story focused on industries such as software or consulting. However, few realized the significance of the white-collar services economy for the overall US economy: while white-collar workers represented 50% of employment, they drove 75% of discretionary consumer spending - and were now losing their jobs.
Unlike traditional ‘booms and busts’ of overbuilding in capex and a subsequent downturn due to a supply overhang, AI had no ‘natural brake’. As AI got better, companies needed fewer workers, leading to layoffs. And as those laid-off workers cut back their spending, companies saw their profits decline - requiring them to invest further in AI efficiency initiatives, which made even more white-collar workers obsolete, and so on. Or as written in the article: interest rate cuts and debt repurchasing programs by the government won’t change the fact that a Claude agent can do the work of a $180,000 product manager for $200/month.
Those countries that were home to AI infrastructure winners (like South Korea or Taiwan) saw their economies flourish. But on the other end, countries that previously had been winners of the outsourcing trend, such as India, were crushed as the competitive advantage of low-cost workers was wiped out by AI coding agents.
The Intelligence Displacement Spiral
In 2027, the real impact becomes visible. White-collar workers lose their jobs and struggle to find employment with similar wage prospects (think Senior Product Manager becoming Uber driver). The remaining white-collar workers have to work twice as hard to keep their jobs, with few to no prospects of wage increases or promotions.
While normal recessions see a broad distribution of job losses, the actual loss of jobs in this scenario is comparably low - but the impact is much more meaningful: the top 10% of earners make up 50% of consumer spending, and even a ‘small’ decline of 2% in white-collar employment results in a multiple of that in lost discretionary consumer spending.
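To make the memo’s concentration argument tangible, here is a back-of-the-envelope sketch. All numbers beyond the 50% spending share are illustrative assumptions (the 10% employment share of top earners, and that lost jobs remove spending fully), not data from the article:

```python
# Back-of-the-envelope sketch of the concentration effect.
employment_share_top = 0.10  # assumption: top earners are ~10% of workers...
spending_share_top = 0.50    # ...but drive 50% of consumer spending (per the memo)

# Each top earner therefore accounts for ~5x the average worker's spending:
relative_spending = spending_share_top / employment_share_top  # 5.0

# A 2% employment decline concentrated in this group removes roughly
# 5x as much spending as the same decline spread evenly across all workers:
employment_decline = 0.02
spending_hit = employment_decline * relative_spending  # 0.10, i.e. a ~10% hit

print(f"Spending impact: {spending_hit:.0%} from a {employment_decline:.0%} job loss")
```

The point is not the exact figures, but that concentration turns a small employment number into a much larger spending number.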
The impact also, deceptively, takes longer to play out - while a laid-off blue-collar worker might immediately reduce their spending, white-collar workers hold out longer before giving up their mortgage payments or their restaurant visits.
The Daisy Chain of Correlated Bets
As software companies really start to reprice to levels more fitting of their new situation, the pools of capital behind them start to come apart. PE-backed companies are written down, and software loans start defaulting - especially for the largest, most ‘generalist’ software companies (think Zendesk - nobody needs customer service software if there are no customer service agents).
While this private credit crisis might not directly affect banks like in 2008, it affects another source of funds: pools of permanent capital - and most notably, the life insurers that PE had bought up in the prior decade. As losses become real, it’s not just institutional investors who take the hit, but also the ‘Main Street’ individuals who see their annuities wither away.
Most importantly, the hits to consumer spending start to take their toll on the mortgage market. As laid-off white-collar workers deplete their savings and become unable to pay their home loans, asset prices in tech hubs (think San Francisco, Seattle, Austin) see double-digit declines. The key assumption of the mortgage industry - that most borrowers will retain their income level over the lifetime of their loan - has become structurally impaired.
The Battle Against Time
Soon, the feedback loop of the real economy (AI capabilities improve, payroll shrinks, spending softens, margins tighten, driving more AI investment) turned financial. Income impairments affected mortgages, and banks saw credit losses.
In the hypothetical future, the government struggles to respond - simply finding itself unable to react to a crisis that isn’t temporary, but structural. Government transfers to households skyrocket at the same time as one of its key sources of funding, income tax receipts, drop substantially. Or as the article says, AI capability is evolving faster than institutions can adapt.
Proposed responses differ between parties, ranging from a tax on AI inference compute to a public claim on AI-generated outputs. Social unrest reaches a level akin to the GFC (Occupy Silicon Valley) as the few beneficiaries see their wealth skyrocket.
The Intelligence Premium Unwind
The hypothetical scenario outlined in the memo is driven by a key change to a core assumption of modern history: that human intelligence, not capital or resources, was the true scarce input. Even with technological progress, humans could adapt - analyze, decide, and create faster than the machine. This “intelligence premium” was now being unwound.
However, in the final lines of the memo, Citrini describes this not as a collapse, but a repricing. As technological progress produces fewer rather than more jobs, the existing economic framework no longer works - and needs to be rethought.
… to here.
AI Productivity Risk
So let’s take a closer look at a number of the theses outlined in this hypothetical scenario. Where do we agree? Where are we sceptical? And are there cases where the scenario outlined here doesn’t go far enough?
Let’s begin with the obvious: I think that the AI productivity ‘risk’ to white-collar jobs is absolutely real. As the article says, there will be no going back once a Claude agent can do the work of a $180,000 product manager for $200/month. There are two main categories where I personally see real risk of jobs going away:
Jobs that could be done by technology already today, but haven’t had to be automated (yet). Think controllers or assistant tax accountants. You can already automate a lot of their day-to-day tasks, but you still need a human in the loop for the edge cases. If AI can handle those edge cases too, the need for a human employee doing just that work (data aggregation, data analysis, report preparation, etc.) with little ‘value-add’ goes away.
‘Pure-play’ corporate jobs. Think all the jobs that simply exist because there’s a need to coordinate employees across a large enterprise. Project managers, data analysts, or the occasional mid-level manager with a Napoleon complex.
However, where I am more sceptical about the expected job losses is more complex work. That comes in two forms:
Complex work aggregating technical and ‘soft’ knowledge. Think lawyers, tax advisors, or maaaybe even a good wealth manager. Yes, that technical knowledge is technically out there for anyone to find and implement. But you rarely see individuals defend themselves in court, or see them do their own tax return for more complex fortunes. Even if AI gets to replace them on the technical side, I think most individuals still want a human in the loop to sign things off from both a legal (i.e. liability) and psychological perspective.
Complex work combining technical and ‘business model’ knowledge. Think of a product manager or developer building a highly specific piece of technology for a certain industry or business model. Can your AI agent or vibe coding tool build you a personalized piece of CRM, ERP, etc., in the near future? Likely yes. But do you really know everything you want or need? Don’t you sometimes buy a piece of software (or service) and end up surprised by functionalities you didn’t know you wanted, but do need? Furthermore, especially for very specific sectors, there is the question of whether you can even find this specialized information - a client of ours active in a technology application for real estate wanted to use AI, but realized that even the most advanced models simply don’t have any information on his industry to draw on. Maybe using fax and never having digitized your industry will suddenly become a competitive advantage.
So while I do think that we will see some job losses (or at least a lack of new job postings) from AI, I think the impact may not be as widespread as outlined.
Risk to Software Firms & “Friction Beneficiaries”
With my humble technical expertise, I do think that AI poses some risk to software firms - in the future, companies might indeed be able to build software themselves more easily. But does that mean they will all go away? Once again, I don’t fully agree, and see a few winners that will persist despite AI:
Software firms that are built on technical and ‘business model’ knowledge. Think software tools such as ERP or CRM made specifically for a certain industry. Once again, you could likely build such a tool through vibe coding - but do you, as an industry (but not software!) expert, have all the knowledge required to build it? Or wouldn’t you rather outsource this so you can focus on your core business? If anything, such companies will be winners of AI, as better software development capabilities make it even easier to build highly specialized software. The losers, on the other hand, will be those without such differentiation.
Software for processes with zero risk tolerance. No problem if AI vibecodes you a CRM that forgets to automatically add a client note here and there. But what about software that helps you run core processes 24/7, with service level agreements that have you pay penalties if things are not up and running? Or ‘sources of truth’ like your ERP or your banking software with zero tolerance for a number being off? That also reminds us of one key reason why we buy software (but also services): So that we don’t need to have ownership and full (legal) accountability for everything, but to pass that off to a trusted outside party who takes on this responsibility and risk in return for the prospect of a financial return. I don’t see agents reaching this level anytime soon.
While the consequences of the shift away from large-scale, undifferentiated enterprise software might have big economic implications, once again I don’t find it bad from a ‘societal’ perspective. Just imagine how much more efficient we all could be if we had software actually tailored to our particular needs rather than struggling with SAP or Salesforce - also fueling the small businesses behind these bespoke software solutions rather than massive corporates with bloated overhead.
Moving on now to the second group of endangered companies, the ‘friction beneficiaries’. Think the business equivalents of the white-collar jobs above: companies benefitting from the reluctance of individuals and firms to search the internet themselves for cheap flights or cheap printing paper.
The clear losers, once again, will be those companies that don’t provide actual value, or in the past have even created negative value. Think companies that charge money to provide services that could technically be accessed for free (like charging for access to a database of technically free government records), or companies that aggregate data across the web but don’t add any additional value on top (like a simple price comparison website). Those companies were the winners of Web 1.0 and Web 2.0, but as AI drops the cost of slightly more complex busywork (of aggregating data) to a record low, they simply get disrupted away.
There’s also clear risk to the (pardon my language!) enshittifiers: companies that started out with a clear value proposition, often fueled by very cheap (venture) capital, but that are now quickly declining in quality for their customers and partners amid a shift to profitability. Think Airbnb, which has moved from a true hotel alternative to something more expensive (and more of a hassle) than hotels, Amazon Marketplace, which has over the years actively taken steps to increase its profits at the expense of the businesses selling on it, or ‘gig work’ businesses that are making it harder and harder for the workers they rely on to make a living. (In other words, a clear risk to the technofeudalists - also something I don’t think is bad for our society, or the market.)
The winners are those that aggregate data and build additional benefits at a fair price. Think Airbnb and Amazon before enshittification - companies that give small businesses the chance to build a better business by drawing on services and software-enabled efficiency from a platform. Or businesses that make it their relentless goal to profit not by maximizing revenue but by minimizing costs: while not a software business, one firm that comes to mind is asset manager Vanguard. It has chosen to relentlessly drive down the prices of its products, keeping competitors on their toes - and perhaps also making itself AI-resilient by simply offering such products much cheaper than even AI could build them for you.
And Citrini might even be missing a winner: all other, non-technical businesses. While Citrini talks about contagion beyond the software industry, and even mentions a few winners (AI hardware producers or energy-related companies), there’s little talk of most of the ‘real economy’. And those firms could actually be winners as well: your old-timey manufacturing firm is likely not who you would look to for software innovation, but it could now have a few savvy engineers rapidly rework bottlenecks across white-collar functions, or perhaps even allow for more rapid iterations on the hardware side. It’s also these companies that often struggle to find talent - AI-enabled efficiency might simply allow them to do more with the same employees rather than the same with fewer. A win all around for shareholders and employees.
If certain software is no longer needed at the same level as before, the software companies selling it will suffer - that’s simple business. But the idea that software companies can increase their recurring revenues per seat into perpetuity, as outlined by large software PE firms such as Vista or Thoma Bravo, didn’t just stop being true with AI - it already started breaking in 2022: after the first ~10-15 years of most large-cap enterprise SaaS businesses, the economy saw its first rough patch since the financial crisis, and many companies looked to cut back, or at least stabilize, their software spend. I don’t think that AI will make all “off the shelf” software obsolete, as outlined above, but it will likely further increase competition and price pressure.
I also don’t want to ‘beat the dead horse’ of private credit - enough people are doing that these days. But as with other industries in the past (Citrini draws a comparison to the energy sector in the 2010s), it is not surprising that the very high valuation expectations for software should one day come down. Paying 25x EBITDA for an enterprise software business, partially financed with 5-7x of debt, is simply a lofty valuation, requiring confidence in substantial growth or stability into the future. And the key word here is confidence: even if those businesses continue to be growing, profitable firms (which I think some will be, further fueled by AI), AI is simply adding a degree of uncertainty to the 10+ year growth timeline underlying the DCF. And that uncertainty will be reflected by investors in valuations. Nothing unhealthy - more of an overdue repricing.
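The leverage math behind that repricing is worth spelling out. A minimal sketch, assuming a 6x debt load (the midpoint of the 5-7x mentioned) and a hypothetical compression of the valuation multiple from 25x to 15x - the exit multiple is my assumption for illustration, not a forecast:

```python
ebitda = 100.0          # illustrative EBITDA
entry_multiple = 25.0   # purchase price of 25x EBITDA (per the text)
debt_multiple = 6.0     # assumed midpoint of the 5-7x debt financing

enterprise_value = ebitda * entry_multiple   # 2,500
debt = ebitda * debt_multiple                # 600
equity_at_entry = enterprise_value - debt    # 1,900

# Hypothetical repricing: the multiple compresses from 25x to 15x (-40%),
# while the debt stays fixed - so the equity absorbs the full decline:
exit_multiple = 15.0
equity_at_exit = ebitda * exit_multiple - debt   # 900

equity_loss = 1 - equity_at_exit / equity_at_entry
print(f"A -40% multiple compression wipes out {equity_loss:.0%} of the equity")
```

Because the debt is senior and fixed, a 40% decline in the valuation multiple translates into a roughly 53% loss on the equity - which is why PE write-downs and loan defaults arrive together in the scenario.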
What is more troubling is the potential effect on mortgages, especially in software hubs. If lots of white-collar workers lose their jobs, that might indeed affect their ability to pay mortgages on million-dollar homes. That, in turn, might lead to fire sales, driving down prices, or even larger-scale defaults, also affecting the financial industry. But once again, maybe that is also an overdue repricing of a very overheated housing market: there are countless ideas out there on what’s wrong with the global housing market, and how to fix it. I like the thesis outlined in Abundance the best, which says that housing markets were driven to such lofty levels because the government wrongly incentivized the demand side (by providing very cheap long-term mortgages) rather than the supply side (i.e. enabling more housing construction). More demand simply drove up prices over time, which over the ZIRP era reached unsustainable levels - crowding out those without high-paying white-collar jobs. Of course, such a repricing would be painful, but it might’ve happened one day anyway if we eventually managed to build more housing.
The Macro-Level Impact
Just as I am not a technologist, I am also not really an economist. But it goes without saying that the prospects of AI, even if less dire than outlined here, will have an impact on not just individuals, but countries as a whole.
I think there is merit in the idea that AI might reshape the world away from the relentless globalization of recent years. Firms will no longer look for an ever-cheaper destination for software engineers and other white-collar support staff (think China to India to the Philippines to wherever it might be next), but actually reverse that step towards technological reshoring as ‘local’ AI becomes cheaper, and more efficient, than even the cheapest worker anywhere in the world. The prior winners of globalization, like the India mentioned in the article, will likely suffer. But it might not just be them: I think it could also compound the threat of unfavorable demographics (think Western Europe or Japan) as older employees, often the least tech-savvy, are laid off or simply retired earlier.
The fiscal impact on countries is harder to assess. There will likely be some sort of effect on labor policy as employees are laid off or companies simply hire fewer new employees, making do with the same, more efficient workforce. In turn, this requires more educational measures to make unemployed individuals “AI-ready”, or to simply move them to another, perhaps blue-collar profession. In terms of budgets, it remains to be seen if the impact is large enough to actually make a dent in the already stretched budgets of major economies amid rising costs of healthcare, unfunded pension obligations, and of course, newly-required defense budgets.
But I think it does make it necessary for society to talk about how AI winners give back. I am a cautious supporter of capitalism, although with a clear responsibility of the government and the ‘economic winners’ (think large corporations and their owners) to do their part. If AI really results in a shift from human labor to ‘agent labor’, we may need to consider whether it would be fair to also treat digital workers as taxable individuals. But of course, there are clear challenges in that: one, it would be an unfavorable treatment of the most efficient firms, effectively putting a tax on winners while not (or less heavily) taxing those less efficient at rolling out AI. Two, and perhaps more importantly, there might simply not be anyone to tax at the prior level: a €15,000/month product manager might’ve paid almost half of that in payroll taxes and social charges. But what if their work is replaced by a €200/month Claude agent? Would we put a €7,500/month ‘tax’ on that AI worker? That doesn’t quite seem straightforward. More clear-cut, of course, is the continued fair taxation of large enterprises and their various tax avoidance strategies (tax havens, IP transfers, and so on). But even then, those gains might ‘only’ be taxed at ~20-30% rather than the prior 40-50%, leaving a fiscal hole. Either way, it gives stretched government budgets another funding gap to address.
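To put rough numbers on that fiscal hole - a minimal sketch, where the 50% tax-and-social-charges rate and the hypothetical 100% ‘agent tax’ on the subscription are my assumptions for illustration:

```python
monthly_salary = 15_000.0   # the product manager from the example above
tax_rate = 0.50             # "almost half" in payroll taxes and social charges
monthly_tax_take = monthly_salary * tax_rate   # 7,500 per month

agent_cost = 200.0          # monthly cost of the replacing agent
agent_tax = agent_cost * 1.00   # even a hypothetical 100% tax on the subscription

# Annual fiscal gap per replaced role, despite fully taxing the agent:
annual_gap = (monthly_tax_take - agent_tax) * 12   # 87,600
print(f"Annual tax shortfall per replaced role: €{annual_gap:,.0f}")
```

Even under the most aggressive ‘agent tax’ imaginable, the state loses the better part of €90,000 per year per replaced role - which is the core of the funding-gap argument.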
Preparing for the Intelligence Crisis (Portfolio Level)
We’ve taken a look at the Citrini paper’s hypothetical future, and already touched upon a few winners, losers, opportunities, and risks that we see. But if we see this hypothetical future as somewhat likely - how should we prepare ourselves? How can we protect our portfolios against the looming threat of superintelligence?
Before we talk about specific asset classes, we should always take a step back and assess our risk tolerance as an investor. Whether it’s the looming threat of AI reaching new levels, geopolitical crises (as we’re seeing right now in the Middle East), or other events, we should first think about how much risk we are willing, and able, to take in our investments - and also consider what the easiest ways are to reduce our risks.
This is particularly true in the current, more turbulent market environment. Amid geopolitical troubles and high valuations in many asset classes (especially US equities, a major building block in most portfolios), we actually prefer to run client portfolios at lower risk levels than intended over the long term. We would consider rebalancing to a higher risk level (i.e. higher expected risk, higher expected return) during a market correction, or if we feel like the market sentiment is calming down a little bit - hard to imagine in a time of ‘polycrises’, but not impossible.
To make this a bit more tangible: many of the tech entrepreneurs we work and talk with come to us with portfolios that consist mostly of equities, and at best, a little bit of gold and crypto. They spend a lot of time thinking about how they should position within those asset classes (esp. equities) to protect themselves from AI-driven threats - but forget that the much easier way to reduce their overall risk is simply reducing equities in favor of less risky asset classes, in particular bonds. Or in other words: if the threat of a 50-70% drawdown in an equity-heavy portfolio is too much for your risk tolerance, maybe you should just hold fewer equities. It sounds simple, but after many years of strong equity markets, it can feel almost counterintuitive.
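A minimal sketch of why the equity quota is the blunt but effective lever - the per-asset-class drawdown figures are illustrative assumptions, not forecasts:

```python
# Assumed worst-case drawdowns per asset class (illustrative only):
equity_drawdown = -0.60   # within the 50-70% range mentioned above
bond_drawdown = -0.10     # assumed far milder for investment-grade bonds

# First-order portfolio drawdown for different equity weights:
for equity_weight in (1.0, 0.8, 0.6):
    dd = equity_weight * equity_drawdown + (1 - equity_weight) * bond_drawdown
    print(f"{equity_weight:.0%} equities -> ~{dd:.0%} portfolio drawdown")
```

Under these assumptions, moving from a 100% to a 60% equity allocation cuts the worst-case drawdown from roughly -60% to roughly -40% - no view on AI winners required.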
Preparing for the Intelligence Crisis (Asset-Class Level)
Perhaps the ‘portfolio talk’ is not enough for you - you want specific examples. Let’s run through the different asset classes:
First, fixed income. Amid an environment of likely stable interest rates that are above historic inflation rates, we think that bonds can be a great diversifier for equity-heavy portfolios. We would be mindful to diversify widely, but also pay particular attention to tight credit spreads and duration risks. We also continue to like inflation-linked bonds to hedge against higher inflation rates amid increased government spending, and continue to see opportunities in more ‘complex’ but investment-grade structured credit, such as AAA CLOs. Lastly, if you think that the AI crisis could cause severe consequences, you can also think about adding some duration to your fixed income portfolio in expectation of interest rate cuts.
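On ‘adding duration’: the first-order sensitivity of a bond’s price to rates is price change ≈ -modified duration × yield change. A minimal sketch with assumed durations, to show why longer-duration bonds benefit more from rate cuts:

```python
def bond_price_change(modified_duration: float, yield_change: float) -> float:
    """First-order approximation of a bond's price move: dP/P ~ -D_mod * dy."""
    return -modified_duration * yield_change

# Hypothetical 100bp rate cut (yield_change = -0.01):
short_duration_gain = bond_price_change(2.0, -0.01)    # ~ +2%
long_duration_gain = bond_price_change(10.0, -0.01)    # ~ +10%

print(f"2y-duration bonds: {short_duration_gain:+.0%}, "
      f"10y-duration bonds: {long_duration_gain:+.0%}")
```

The same sensitivity cuts both ways, of course: if rates rise instead, the longer-duration portfolio loses proportionally more - which is why we flag duration risk above.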
We do share some of the scepticism around private credit, which might turn out to not be as stable as promised amid AI disruption and tighter credit provision, especially to software companies. However, I personally think that there might be opportunities in certain oversold vehicles that primarily invest in software debt (otherwise certain smart hedge fund investors wouldn’t offer to buy out illiquid stakes, I think).
Second, equities. Frequent readers know that we don’t make bets on individual companies. However, it goes without saying that if the aforementioned scenario comes true, there will be a much wider dispersion of return outcomes than in recent years, which could make active management attractive. We would continue to be mindful of concentration risks in large global indices, esp. when it comes to US tech stocks. While some of the heavily-weighted ‘Mag 7’ might continue to be winners, their fate could also change rapidly. On a pure valuation basis, we see opportunities elsewhere, e.g. in Europe or Emerging Markets. Ironically, ‘boring’ Europe and its asset-heavy companies could actually benefit if asset-light software and service businesses (which make up a lot of the US indices) struggle due to AI disruption. Lower valuations also provide a natural buffer if the macro outlook worsens, unlike the ‘priced-to-perfection’ US stock market. Once again, if you feel uncomfortable with the level of equity risk in your portfolio, consider simply cutting back - but be careful about making active bets, e.g. pivoting your portfolio entirely towards perceived ‘AI winners’.
Third, commodities. This is where we see the biggest opportunity for diversification. One, gold, which could see a continued price rally as investors stay cautious around increased government debt and uncertainty (for further reading, check out Gold: A Primer). But broader industrial metals and other commodities could profit too: while intelligence will no longer be scarce, and while capital seems to rally relentlessly towards anything with an AI sticker, one thing that will likely remain scarce for the foreseeable future is physical materials. Even with most white-collar workers obsolete, we can only mine so many of the materials required to build more chips and data centers. And especially the huge demand increase for energy infrastructure and more advanced chips will be hard to match with mining capacity, which takes years, if not decades, to ramp up after well over a decade of underinvestment.
Fourth and last, alternatives. While readers know me as cautiously bullish on private equity, I would understand if investors want to wait out the next months to see where AI will really go. I continue to be optimistic about smaller (non-tech) businesses, which are less prone to being disrupted by global trends and have fewer workers and tasks, perhaps making AI enablement easier than for a large, established firm. The same scepticism applies to venture, where I would go as far as to wonder whether an AI future as outlined in the memo might be a death blow to investing in conventional high-tech, asset-light start-ups - they simply might not need external funding anymore.
However, where we see a big opportunity is real assets: as with commodities, we only have, and can only build, so many houses. While there is the outlined risk of asset price depreciation, there will likely be opportunities in scarce assets (as one AI-savvy client told me: we can build many data centers, but we can’t just build another quaint Tuscan old town). The same applies to energy infrastructure, which might see almost limitless demand for the foreseeable future - think renewable energy and energy storage.
Finally, you might wonder: Those are all fairly generic bets. What about the specific bets, like trying to find the next OpenAI, the next NVIDIA?
That is a valid thought, and one that we have as well (as another client said: Jan, let me know if you find a small-cap Chinese chip producer that can be the next NVIDIA). If I knew which companies could be the next NVIDIA, I likely wouldn’t be writing this newsletter. As a wealth manager, we don’t quite see it as our job to find the next outlier stock, but rather to make sure that clients stay on track without ever having to ‘hit it big’ beyond their initial liquidity event. But we definitely see the attraction of finding such an opportunity. So if you want to make a few bets, make sure to size them correctly: properly size your Aspirational Bucket that can hold such bets, and properly size them as individual, tactical trades.
Unfortunately, none of us have a crystal ball. Perhaps AI will fizzle out and stay at the current level. Perhaps we will all be obsolete workers next year and live according to the rules of Fully Automated Luxury Communism. Or more likely, somewhere in the middle. AI is without a doubt one of the biggest opportunities and risks of our lifetime. But we should not lose sleep until things are clearer - and be informed investors, with a clear plan on how to react in different scenarios.
Hopefully, it won’t be an Intelligence Crisis, but rather an Intelligence Boom.
Liked what you read? If you enjoyed this piece, make sure to subscribe by adding your email below. I write about topics covering the world of family offices, asset allocation, and alternative investments.


