Key points
- The release of DeepSeek’s model, with its cutting-edge performance, shoestring development cost, and open-source availability, challenged assumptions about which country is the AI leader.
- The White House unveiled an ambitious plan and adjusted its chip export policy in an attempt to secure an edge in the AI race.
- The growth of alternative financing for data centers reveals the scale of capital now required to fuel the AI boom – and the risks associated with it.
- AI’s potential is immense, but too much may be expected of it in the short term.
Chinese AI enters the stage … with a bang
On Jan. 29, 2025, a little-known Chinese tech company, DeepSeek, released
an AI model, R1, that shook the industry. R1’s capabilities appear to put it on par with models created by U.S. leader OpenAI, the maker of ChatGPT. But what unsettled the industry most was its
shoestring development cost – just $6 million, a fraction of what comparable
U.S. models required.
Barred from importing state-of-the-art chips from the U.S. due to the
export restrictions imposed by the former Biden administration in October
2022, DeepSeek had to rely on older hardware. To compensate, the company
pushed efficiency to the limit, making careful tradeoffs between accuracy
and computing power, replicating this method at scale, and fine-tuning
every other aspect of performance. This approach kept development costs
remarkably low while still producing a highly capable model.
Adding to the excitement, DeepSeek-R1 was released as an open-source model – i.e., its model weights are publicly available at no cost. Users can download and run it on their own computers or servers, keeping data private, and they can retrain, modify, or adapt it for their own needs. This means anyone, from
individuals to large companies, can build tools or applications without
seeking permission or paying for expensive access.
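As an illustration of what “download and run it on their own computers or servers” means in practice, the sketch below loads an open-source model locally with the Hugging Face transformers library. The specific model identifier (a small distilled DeepSeek-R1 variant) and the generation settings are assumptions chosen for the example, not a recommended configuration.

```python
# Minimal sketch: running an open-source model locally with the Hugging Face
# transformers library. The model identifier below (a small distilled DeepSeek-R1
# variant) is an assumption for illustration; any open-source model published with
# downloadable weights would follow the same pattern.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed example identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what an open-source AI model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation runs entirely on the user's own hardware: no API key, no usage fee,
# and no data sent back to the model's creator.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because everything in this sketch runs on the user’s own machine, the privacy and cost advantages described above follow directly.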
Open- vs. closed-source
Open-source models stand in contrast to most of today’s leading AI
models, such as OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s
Gemini, which are closed-source. It wasn’t always so. OpenAI initially
produced open-source models, but the San Francisco-based lab shifted to
a closed approach in 2018.
Closed-source model creators share only parts of the code, or describe
some of the training process but keep the full details private, much
like a secret recipe. Users must rely on those companies to maintain the
entire AI system, and are thus vulnerable to changes in model creators’
prices or access rules.
For instance, in the case of ChatGPT, OpenAI is responsible for training
the model, maintaining the code that runs the model, running the model
on powerful servers to enable users to access ChatGPT instantly, and
adding features. If OpenAI were to stop doing all that, users could not
run it themselves.
DeepSeek’s decision to make its cutting-edge AI model both cheap and
widely available gave China an unexpected competitive edge, as its model
could spread and be adapted far more widely and cheaply than those of its
Western rivals. This shook the U.S. tech ecosystem to its core. Until
then, the U.S. had been confident in its position as the world’s leading
AI force.
Chinese AI: More to it than DeepSeek
DeepSeek may have captured the attention of the West, but China’s AI
achievements go far beyond a single company. That should come as no surprise: millions of engineers and scientists graduate from Chinese universities every year, the country has the spare grid capacity needed to run power-hungry AI models, and its permissive planning laws allow data centers to be built swiftly.
Still, China faces a critical constraint: it lacks a sufficient domestic supply of cutting-edge chips. Huawei, the country’s hardware
champion, cannot yet produce top-end chips in sufficient quantities.
Nevertheless, the combination of these favourable conditions, remarkable
ingenuity, and a relentless effort at squeezing as much efficiency as
possible from older-generation chips has enabled Chinese firms to
release frontier AI models (i.e., highly capable general-purpose AI
models that can perform many tasks matching or exceeding the
capabilities of today’s most advanced models).
In July 2025 alone, Alibaba, one of China’s largest tech and e-commerce
companies, released Qwen3, a model approximately one-quarter the size of
the most prevalent AI models, making it significantly more
energy-efficient while maintaining comparable performance. Meanwhile,
Moonshot AI, a lesser-known AI company, unveiled Kimi K2, one of the largest open-source models released up to that point. Kimi K2 excelled in benchmarks like MATH-500, which tests mathematical reasoning, outperforming frontier-class U.S. models from OpenAI (GPT-4) and Anthropic, according to VentureBeat, a publication that focuses on technology news.
Despite this progress, China still lags the U.S. in
productization – turning AI models into agentic tools, or fully integrated
tools that can autonomously assist users in complex workflows. Agentic
tools take initiative, make decisions, and complete multistep tasks for
the user, such as reading incoming customer queries, selecting the
urgent ones, drafting replies, and escalating complex issues to a human.
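To make the idea of an agentic workflow concrete, here is a minimal sketch of the customer-support example above. Every function name (classify_urgency, draft_reply, escalate_to_human) and the escalation rule are hypothetical placeholders for calls to an underlying AI model and ticketing system; the point is the multistep, decision-making structure, not any particular product.

```python
# Hypothetical sketch of an agentic customer-support workflow.
# The helper functions stand in for calls to an AI model and a ticketing system;
# they are illustrative only.
from dataclasses import dataclass

@dataclass
class Query:
    customer: str
    text: str

def classify_urgency(query: Query) -> str:
    """Placeholder for a model call that labels a query 'urgent' or 'routine'."""
    return "urgent" if "outage" in query.text.lower() else "routine"

def draft_reply(query: Query) -> str:
    """Placeholder for a model call that drafts a response."""
    return f"Hello {query.customer}, thank you for reaching out about: {query.text}"

def escalate_to_human(query: Query, reason: str) -> None:
    """Placeholder for handing the ticket to a human agent."""
    print(f"Escalating to human agent ({reason}): {query.text}")

def handle_inbox(queries: list[Query]) -> None:
    # The agent takes initiative across several steps: read, prioritize,
    # draft, and decide whether a human needs to step in.
    for query in queries:
        urgency = classify_urgency(query)
        reply = draft_reply(query)
        if urgency == "urgent":
            escalate_to_human(query, reason="flagged as urgent")
        else:
            print(f"Sending automated reply: {reply}")

handle_inbox([
    Query("Ana", "Your service outage is blocking my business."),
    Query("Ben", "How do I update my billing address?"),
])
```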
America’s response
The rivalry in technology between China and the U.S. has been ongoing for
years. Both countries see winning the AI race as a strategic advantage – not
only will the winner be able to extend their geopolitical influence
through the supply of AI systems to other countries, but there are also
important implications for military applications.
In 2025, the White House initiatives to secure an edge in the AI race
included releasing its AI Action Plan in July and changing chip export
restrictions to China.
Winning the race: America’s AI Action Plan
Early in his second term, U.S. President Donald Trump directed his
administration to develop an AI strategy for the country. The AI Action
Plan is built on three core pillars: accelerating innovation, expanding
data center infrastructure, and promoting American technology abroad.
According to the Brookings Institution, an American think tank, the plan
can be lauded for focusing on advancing and democratizing basic and
applied AI research, and addressing the need to develop an AI-ready
workforce. However, the think tank raised a few concerns, including that adequate safety measures may be compromised by the plan’s strong emphasis on accelerating U.S. AI innovation and global competitiveness, and that insufficient oversight – particularly in financial AI applications – could pose significant systemic risk. Despite these misgivings, Brookings scholars think that, overall, the plan’s initiatives could strengthen regional innovation ecosystems in the U.S. so long as the federal government provides adequate support and funding.
President Trump’s plan for the U.S. to achieve “unquestioned global
technological dominance”
Key initiatives of the U.S. AI Action Plan
| Initiatives | Goals |
|---|---|
| 1. Accelerate AI innovation | |
| Deregulate | Roll back regulations seen as obstructive to AI innovation |
| Support open-source AI | Improve access to AI compute and datasets through initiatives such as the National AI Research Resource |
| Facilitate adoption | Accelerate AI use in government and defense |
| Upgrade workforce and manufacturing | Expand AI literacy and retraining programs, invest in robotics and next-generation manufacturing |
| Foster scientific advancement | Invest in AI-enabled laboratories and innovation test beds |
| 2. Build American AI infrastructure | |
| Streamline permitting | Accelerate approvals for data centers, energy projects, and semiconductor facilities while safeguarding national security |
| Strengthen the power grid | Modernize infrastructure for generating and distributing electricity |
| Reshore semiconductor manufacturing | Expand domestic chip production |
| Set up secure data centers | Build high-security data centers for military and intelligence uses |
| Develop the workforce | Train a skilled labour force to maintain AI infrastructure |
| Establish cybersecurity resilience | Ensure “secure by design” AI systems are widely used |
| 3. Lead in international diplomacy and security | |
| Focus export strategy | Promote U.S. AI hardware, software, and standards abroad while tightening enforcement and closing loopholes in regulations for AI compute and semiconductor export |
| Counter authoritarian influence | Push back against Chinese influence in international AI governance forums such as the United Nations |
| Assess security risks | Proactively evaluate cutting-edge AI systems for national security vulnerabilities including chemical, biological, and nuclear threats |
Source – Executive Office of the President, July 2025
Export controls: Chips as a geopolitical weapon?
The Trump administration is also using export controls, a strategy employed in Trump’s first term and continued by former President Joe Biden, to respond to the Chinese threat. This time, however, the controls have sent confusing signals.
In April 2025, the Trump administration banned exports of NVIDIA’s H20
chips to China over concerns the technology could strengthen Beijing’s
defense industry. After strong lobbying from the semiconductor industry,
the ban was lifted in July. A few weeks later, Washington announced a new
framework: NVIDIA and Advanced Micro Devices would be granted export
licenses to sell specific chips to China so long as they shared 15 percent
of their revenue from these chip sales with the U.S. government.
U.S. chip export controls span three administrations
Timeline of U.S. strategy for semiconductor exports to China since 2018
April 2018: Initial restrictions
Trump administration blocks sales of advanced chips to Chinese telecom
firm ZTE. (R)
June 2020: Expanded restrictions
Trump administration extends prohibition of advanced chips sales to
Chinese conglomerate Huawei. (R)
2020–2021: Pressure on allies
Trump administration pressures allies to impose similar restrictions
(e.g., the Dutch government restricts ASML from selling its most
advanced semiconductor equipment to China). (R)
October 2022: Continued export ban
Biden administration imposes broad restrictions on export of high-end
chips (e.g., H100) to all Chinese entities based on national security
concerns. (D)
November 2023: Export-compliant chip unveiled
NVIDIA unveils the H20 chip, which complies with U.S. export
restrictions.
April 2025: Further expansion of restrictions
Trump administration bans export of H20 chips, believed to be powerful
enough to support Chinese defense industry. (R)
May 2025: Further expansion of restrictions
Export ban expanded to include newer models (e.g., cutting-edge H200
chip) to curb China’s technological advancement. (R)
July 2025: Partial reversal
U.S. government reverses ban on H20 chips; H100 and H200 export bans
remain in effect. (R)
August 2025: Revenue sharing agreement
NVIDIA and AMD agree to pay 15% of their China chip revenue to the U.S.
government in exchange for export licenses to resume sales to China. (R)
R = Republican government initiative; D = Democratic government
initiative
Source – RBC Wealth Management
This policy U-turn points to the difficulty of calibrating the security
and economic interests of the U.S., and exposes tensions between China
hawks in the U.S. government pushing for tighter export controls and
businesses eager to access the world’s second-largest economy.
The Trump administration seems to have adopted NVIDIA CEO Jensen Huang’s
view that providing China access to NVIDIA’s AI chips could serve both the
company’s interests and U.S. strategic goals by creating Chinese
dependence on American technology. NVIDIA supplies not just the chips, but
also the hardware and infrastructure that support entire data centers. If
major Chinese AI firms such as Alibaba, ByteDance, and Tencent build their
data centers around U.S. hardware, it could give Washington a geopolitical
advantage and greater leverage in any future negotiations with China. The
logic is that if U.S. technology were completely banned from China,
Beijing would likely accelerate efforts to develop its own AI
infrastructure.
H20: NVIDIA’s export-compliant chip for China
NVIDIA had developed the H20 chip in 2023, specifically for the Chinese market and to comply with the Biden administration’s 2022 export controls. It was designed mainly for AI inference – the process by which trained models generate insights and suggest decisions – and lacks the power needed to train new models. By offering a chip that China could use, but one markedly weaker than the export-restricted H100 and H200, Washington sought to maintain China’s dependency on American hardware while limiting its ability to advance in frontier AI.
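For readers less familiar with the distinction, the toy PyTorch sketch below contrasts inference (a single forward pass) with training (forward and backward passes plus weight updates, repeated over many batches). It is a generic illustration of why training demands far more computing power; it does not describe the H20 or any specific NVIDIA product.

```python
# Toy illustration of why training demands more compute than inference.
# Generic example only; not specific to any particular chip.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
data = torch.randn(32, 512)
labels = torch.randint(0, 10, (32,))

# Inference: a single forward pass, with no gradients stored.
with torch.no_grad():
    predictions = model(data)

# Training: a forward pass plus a backward pass and a weight update,
# repeated over many batches, which multiplies the arithmetic and memory traffic.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(10):  # in practice, millions of steps over enormous datasets
    optimizer.zero_grad()
    loss = loss_fn(model(data), labels)
    loss.backward()
    optimizer.step()
```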
Many observers worry that engineering a financial payout for the U.S.
government has now taken precedence over national security. Reversing the
H20 export ban, they argue, may be a strategic mistake, effectively
providing China with the hardware it needs to surge ahead in AI.
But Chris Miller, acclaimed author of *Chip War* and professor of international history at Tufts University, offers a more nuanced perspective, drawing on his expertise in the global semiconductor industry and geopolitics. He believes that despite the 15 percent financial arrangement, national security is still at the heart of Trump’s policies. He points out that, even though the administration has criticized Biden’s CHIPS Act, it continues to disburse grants promised under the 2022 law to semiconductor companies and
research institutions. Moreover, the U.S. government has taken a 9.9
percent equity stake in Intel, the only U.S. semiconductor firm that both
designs and manufactures leading-edge logic chips. Miller believes the
U.S. administration sees Intel as relevant to the broad future of U.S.
technological leadership.
Whether the strategy of making China “addicted” to U.S. tech, in Commerce Secretary Howard Lutnick’s words, will prove successful remains to be seen. In August, China urged local companies to avoid using NVIDIA’s H20 processors, particularly for government-related purposes, according to Bloomberg. Then in September, it banned its largest tech firms from buying NVIDIA’s AI chips in an effort to foster domestic production.
China’s technological rise: A longstanding U.S. concern
Slowing China’s technological progress has long been a preoccupation of
U.S. administrations, and it is instructive to look at how U.S. export
control policy evolved over time.
The first Trump administration realized that:
- Semiconductors were key not only to a wide range of day-to-day technologies, such as smartphones and computers, but also to military applications and to winning the AI race.
- The U.S. had a real competitive advantage in designing cutting-edge semiconductors.
- By blocking semiconductor exports, the U.S. could slow China’s technological advancement.
As a result, its strategy was two-pronged:
- The administration tried to attract some of the largest foreign semiconductor manufacturers to the U.S. (hence TSMC, the Taiwanese behemoth that produces most of the world’s cutting-edge semiconductors, started its multibillion-dollar investment in advanced semiconductor manufacturing operations in Arizona).
- The White House also imposed a ban on chip exports to Huawei, the Chinese technology giant, and pressured U.S. allies to enforce similar restrictions.
Under the Biden administration the strategy evolved as follows:
- The CHIPS Act funded and strengthened the domestic semiconductor industry (the administration recognized that while designed in the U.S., most cutting-edge semiconductors were manufactured abroad, leaving the U.S. vulnerable to supply chain disruptions).
- Biden also tightened export controls on semiconductor equipment and imposed broad restrictions on the export of AI chips to China one month before the release of ChatGPT.
New sources of financing
Another key AI development in 2025 has been the shift in funding sources
for the substantial investment needed to build the infrastructure to
support AI models.
According to McKinsey, a consulting company, global data centers will need
between $3.7 trillion and $5.2 trillion by 2030 to meet demand for AI
computing power, including hardware, processors, memory, storage, and
energy.
Much of this will be shouldered by Big Tech. Companies like Alphabet
(Google), Amazon, Microsoft, and Meta Platforms (Facebook) – also referred
to as “hyperscalers” – are building large data centers to support their
cloud services and AI initiatives. Traditionally, they preferred to self-fund these investments and were able to do so while maintaining robust balance sheets with minimal debt. But this is changing due to the scale of the financing needs.
Other tech companies are boosting the demand for data center financing as
well. OpenAI has formed a joint venture with Oracle and SoftBank to invest
up to $500 billion in AI infrastructure across the U.S. in the next four
years. Property developers are also increasingly building data centers,
further fueling the financing demand.
Financing requirements are so large that companies are turning to
different sources of funding. Debt is gaining in popularity:
investment-grade borrowing by U.S. tech firms was up 70 percent year over
year in the first half of 2025, according to Bloomberg, with Alphabet
issuing bonds for the first time in five years in April 2025. Smaller or
fast-growing firms, such as CoreWeave, are even turning to borrowing
arrangements that use graphics processing units (GPUs) – specialized chips
that accelerate AI computations – as collateral.
Debt securitization – whereby data center-related borrowing is pooled and sold to investors in tranches, much like mortgages – is also growing. The
data center-related debt securitization market, virtually negligible five
years ago, has grown rapidly and is now valued at around $30 billion,
according to AInvest.
Private capital is playing an important role too, with large private
equity firms increasingly acting as direct lenders to businesses and
infrastructure projects, alongside their traditional equity investments.
In August 2025, Meta finalized a $29 billion deal for its Hyperion data center project, including a $26 billion debt portion led by PIMCO, the investment management firm.
Data center lending and investing carry additional risks beyond cost
overruns. Overcapacity from rapid capital investment can leave assets
underutilized. For example, in the late 1990s, U.S. telecom companies laid
more than 80 million miles of fiber optic cables across the country after
overestimating future demand. Prices plummeted and many companies entered
bankruptcy proceedings.
Technology risk is also substantial. Much of the current spending goes to
data centers built to train powerful AI models, but as demand shifts
toward running those models, the need for computing power could drop,
lowering the value of these assets. Newer, higher-performance chips could also make older facilities less useful; some of these chips may even require innovative cooling systems, rendering existing data centers obsolete.
Hyperscalers are diversified enough to weather these challenges, in our
view, though they now carry substantial infrastructure and capital
commitments – they are no longer asset light. We believe smaller investors
and lenders will need to be particularly vigilant.
Superintelligence around the corner?
Some observers are optimistic that progress in AI will be swift, raising
hopes that Artificial General Intelligence (an AI model with human-like
cognitive abilities) and even Artificial Superintelligence (an AI model
with an intellectual scope beyond human intelligence) could be achieved
within their lifetimes. On July 30, 2025, Mark Zuckerberg, CEO of Meta
Platforms, perhaps driven by ambition and competitive positioning, stated
that “developing superintelligence is now in sight.”
Such enthusiasm is understandable. The pace of AI progress has been
remarkable. OpenAI’s GPT-2, released in 2019, could write coherent
paragraphs but often lapsed into meaningless output, while by early 2023,
the company’s GPT-4 model had advanced enough to pass the U.S. bar exam,
scoring in the top 10 percent of test takers, as reported by Reuters.
Remarkably, this leap in performance was achieved without fundamentally changing the science behind AI, but largely by feeding the models more data and using more powerful GPUs.
Yet others warn against too much enthusiasm. Rodney Brooks – robotics pioneer, former director of the MIT Computer Science and Artificial Intelligence Laboratory, and founder of the company that developed and supplied the search-and-rescue robots used at Ground Zero after the 9/11 attacks – offers a clear-eyed take on AI’s current hype.
In a February 2025 interview with Newsweek, he
emphasized that while AI models can use language fluidly, they are
essentially pattern recognizers, really good at spotting and repeating
patterns in data. That, in his view, is not the same as truly
understanding or thinking for themselves.
He believes change will come more slowly than is generally expected
because rolling out new technology almost always runs into practical
hurdles like cost, integration with other systems, and regulation. And
because today’s AI models are pattern-matchers, they still need a great
deal of careful oversight – they are far from being plug-and-play as the
hype often suggests. Finally, he emphasized that corporate adoption of the
new technology will be based on return on investment, not “glitziness.”
On that front, a July 2025 report from the Massachusetts Institute of
Technology revealed that 95 percent of generative AI (GenAI) pilot
programs in enterprises yielded no measurable return on investment,
despite the $30 billion–$40 billion enterprise investment in GenAI so far.
The authors conceded that over 80 percent of organizations surveyed had explored or piloted tools such as ChatGPT and Copilot, with 40 percent reporting that they had purchased an official large language model subscription.
Interestingly, they found that workers at more than 90 percent of
companies surveyed say they use personal AI tools like chatbots for work,
but the report highlighted that these tools seem to primarily enhance
individual productivity, not profitability.
Promise … and challenges?
2025 has been a pivotal year for the AI industry, and 2026 will likely be
as eventful. Some of the significant developments to watch in the next
year include:
- OpenAI rolling out its first in-house AI chip, potentially reducing third-party hardware dependence;
- The evolution of the OpenAI-NVIDIA relationship now that the latter has taken a $100 billion stake in the former – a move that could reshape the balance of power in AI hardware and model development;
- NVIDIA’s new Rubin AI chip, which holds promise to be even more efficient;
- Meta’s massive multi-gigawatt AI data center, called Prometheus, being built in Ohio, which illustrates the escalating scale of investment needed to support cutting-edge AI.
Beyond these developments, which will likely keep enthusiasm high, we
believe investors should also keep an eye on whether such investments and
the application of AI in business are generating adequate returns. As has
almost always been the case in the past, the risk remains that investors
may overestimate what the new technology can deliver in the short to
medium term.
The material herein is for informational purposes only and is not directed at, nor intended for distribution to or use by, any person or entity in any country where such distribution or use would be contrary to law or regulation or which would subject Royal Bank of Canada or its subsidiaries or constituent business units (including RBC Wealth Management) to any licensing or registration requirement within such country.
This is not intended to be either a specific offer by any Royal Bank of Canada entity to sell or provide, or a specific invitation to apply for, any particular financial account, product or service. Royal Bank of Canada does not offer accounts, products or services in jurisdictions where it is not permitted to do so, and therefore the RBC Wealth Management business is not available in all countries or markets.
The information contained herein is general in nature and is not intended, and should not be construed, as professional advice or opinion provided to the user, nor as a recommendation of any particular approach. Nothing in this material constitutes legal, accounting or tax advice and you are advised to seek independent legal, tax and accounting advice prior to acting upon anything contained in this material. Interest rates, market conditions, tax and legal rules and other important factors which will be pertinent to your circumstances are subject to change. This material does not purport to be a complete statement of the approaches or steps that may be appropriate for the user, does not take into account the user’s specific investment objectives or risk tolerance and is not intended to be an invitation to effect a securities transaction or to otherwise participate in any investment service.
To the full extent permitted by law neither RBC Wealth Management nor any of its affiliates, nor any other person, accepts any liability whatsoever for any direct or consequential loss arising from any use of this document or the information contained herein. No matter contained in this material may be reproduced or copied by any means without the prior consent of RBC Wealth Management. RBC Wealth Management is the global brand name to describe the wealth management business of the Royal Bank of Canada and its affiliates and branches, including, RBC Investment Services (Asia) Limited, Royal Bank of Canada, Hong Kong Branch, and the Royal Bank of Canada, Singapore Branch. Additional information available upon request.
Royal Bank of Canada is duly established under the Bank Act (Canada), which provides limited liability for shareholders.
® Registered trademark of Royal Bank of Canada. Used under license. RBC Wealth Management is a registered trademark of Royal Bank of Canada. Used under license. Copyright © Royal Bank of Canada 2025. All rights reserved.
Managing Director, Head of Investment Strategy
RBC Europe Limited
