What to do on AI governance?
To lead or not to lead in the boardroom on AI: that is the dilemma facing the Berkshire Hathaway board and its shareholders.
PwC research estimates that global GDP could be almost 15% higher by 2030 as a result of artificial intelligence (AI), adding the equivalent of $15.7 trillion in economic growth and output to current GDP levels. PwC describes this impact as “the biggest commercial opportunity in today’s fast changing economy.”
The advancements being made in non-biological intelligence, i.e., AI, and the injection of this technology into business processes to replace or augment human intervention, decision-making, and action, are a big thing for business value propositions. That makes them a big thing for investor interests.
But is it big enough to force corporate boards to evolve beyond their entrenched governance-as-usual mindset and antiquated oversight policies? Should AI be driving a new wave of boardroom transformation? One where corporate boards refresh director expertise and change how they organize themselves and their activities to strengthen their role as a control within the AI system, capable of understanding not only how AI works, but also how AI is being used throughout the company and what its implications are?
Berkshire investor Tulipshare Capital LLC thinks so and has proposed that Berkshire Hathaway transform its corporate governance approach to AI by adding an AI committee to the Berkshire board. The issue is on the Berkshire proxy now and will be highlighted at the annual BRK general meeting taking place in Omaha on May 3.
In describing the reasoning behind their shareholder proposal for a BRK AI committee, Tulipshare says:
RESOLVED: Shareholders request that Berkshire charter a new committee of independent directors on Artificial Intelligence (“AI”) to address risks associated with the development and deployment of AI systems across its own operations, portfolio companies, and new investments. The committee charter shall authorize the committee to meet with employees, customers, suppliers, and other relevant stakeholders at the discretion of the committee, and to retain independent consultants and experts as needed.
Supporting Statement: Shareholders support the responsible use of AI to drive growth, improve efficiency, and maintain competitiveness within Berkshire and its portfolio. However, AI technologies also pose regulatory, societal, and human rights risks that require proactive management. The White House Office of Science and Technology Policy’s ethical guidelines for AI stress the importance of safety, transparency, algorithmic fairness, and human oversight, and the National Institute of Standards and Technology established a “Risk Management Framework” outlining a proper approach to AI risk which evaluates harm to people, organizations, and ecosystems, all of which are increasingly relevant as AI usage expands across industries.
AI systems, if not responsibly governed, can cause significant harm, as seen when Amazon, Berkshire’s portfolio company, scrapped a biased hiring tool and Alexa spread false claims about the 2020 US election, highlighting risks to fairness, public trust, and democracy. In 2024, Glass Lewis and ISS supported a shareholder proposal to Apple, another Berkshire portfolio company, arguing that improved transparency would allow shareholders to better evaluate the risks associated with the use of AI and would not be overly burdensome on the company.
Berkshire’s substantial investments in AI-driven companies amplify the need for strong governance. As Warren Buffett has warned, the irreversible nature of AI development demands robust oversight to mitigate significant risks associated with its misuse and a lack of understanding. Without it, Berkshire risks falling behind in a rapidly evolving market, especially as institutional investors like Norges Bank publicly set its expectations regarding governance of AI by its portfolio companies, and Legal & General Investment Management has also promulgated its expectations for AI adoption and publicly supported the AI proposal to Apple alongside Norges.
Berkshire and its portfolio are increasingly exposed to financial, legal, and reputational risks as AI systems become more complex, fail to function as intended, or yield problematic outcomes. Companies failing to implement ethical AI governance face growing legal challenges, including lawsuits for discrimination and violations of privacy laws. An independent AI committee would help Berkshire anticipate risks, ensure regulatory compliance, avoid legal battles, and protect its reputation and consumer trust by establishing clear ethical AI guidelines.
We urge shareholders to support the creation of this independent AI committee to better manage the risks and opportunities of AI, ensuring the long-term value and reputation of our Company so that Berkshire remains at the forefront of responsible corporate governance in an increasingly AI-driven world.
While AI presents new and unique challenges for boardrooms around the world, it is not the only information technology challenge that boards have had to recently face. Boardrooms are also facing challenges in governing cybersecurity and in the not-too-distant past they have been confronted with other information technology developments such as social media, cloud computing, IoT, and even the advancement of the internet itself.
Despite these prior encounters, and despite the new challenge of effectively governing AI, the boardroom has remained remarkably slow to adapt and evolve as an oversight control within the digital business systems that power the companies it serves, given the pace, magnitude, and scope of digital disruption over the last few decades. To be effective, boardrooms need to be as adaptive as these issues are disruptive; most are showing that they are not up to the challenge. That leaves investors bearing more risk than they should have to, and footing the bill when those risks become reality.
Boards have a responsibility to govern both the positive, value-creating opportunities of these technologies and their negative consequences. Yet most U.S. public company boards have done little to nothing to strengthen their ability to govern these issues, other than actively resisting common-sense board reforms that would serve the interests of investors and other stakeholders. Some boards are leading, by adding directors with the relevant expertise to understand these technologies and by changing how they organize themselves to bring more focus to these issues, but they are the exception, not the rule.
The mistakes that boards have made with cybersecurity oversight range from viewing it as a general risk like other enterprise risks, to believing that effective oversight can be fulfilled within the construct of an antiquated legacy governance model. This belief has prevented the addition of director cyber experts onto the board and relegated cybersecurity and even AI oversight to an audit committee afterthought. Neither policy strengthens the boardroom as a control within the cybersecurity system, and this stubborn inaction might actually weaken the company’s overall cyber risk profile.
This raises a question: is the corporate boardroom the primary source of America’s chronic private-sector cybersecurity weaknesses? And will it also become the source of American underperformance on the use and risks of AI?
Independent Board Director and Former Fortune 50 CISO Joanna Burkey, DDN.QTE explains it this way:
With the rapidly evolving complexity of digital systems, and every company’s increasing reliance on them, it is not sustainable for the audit committee to continue to be the sole repository for technology risk conversations. Technology, especially AI and cybersecurity, introduce more and different types of risks and opportunities that demand a dedicated governance structure to effectively oversee the far-ranging impacts of these issues to an enterprise. It’s not an either/or situation either — a technology committee can still liaise with an audit committee to enhance information sharing as appropriate, but the audit committee is not the appropriate place for in-depth discussions pertaining to technology, digital transformation and their related risks.
By this view, boardrooms across America have failed on cybersecurity and seem destined to fail on AI as well. One has only to look at the current prevalence of audit committee responsibility for these issues to see how slow boards are to evolve to meet the challenge of the moment with AI and cybersecurity. Deloitte’s Audit Committee 2025 report indicates that AI governance appears as a top-10 audit committee challenge in both 2024 and 2025.
The report points to the same weakness in cybersecurity governance: 62% of non-financial-services respondents assign primary responsibility for cybersecurity oversight to their audit committee, with cybersecurity reflected as the top audit committee priority for both 2024 and 2025. Audit committee responsibility usually misaligns director skills with these issues, and it crowds these complex topics into a committee whose primary responsibility is financial reporting and whose agenda is already full, relegating AI and cybersecurity to an afterthought.
The failure of boardroom leadership to evolve beyond this status-quo, antiquated mindset, imposed in 2002 after the financial reporting crisis that spawned the Sarbanes-Oxley Act, fails investors on these issues and handicaps America’s path to the digital future. A governance model anchored in 2002 is not sufficient for the AI-driven present of 2025, let alone what the digital future has in store beyond it. With AI disruption upon them, boards should be looking in the mirror: maybe the problem is coming from within the boardroom.
The Berkshire enigma reflects a lack of leadership on this transformative issue as it brings the question into the boardroom of one of America’s most iconic companies, led by 94-year-old Warren Buffett. The Berkshire board, which Mr. Buffett chairs, made this recommendation to shareholders about the Tulipshare Capital LLC AI committee proposal:
THE BOARD OF DIRECTORS UNANIMOUSLY FAVORS A VOTE AGAINST THE PROPOSAL FOR THE FOLLOWING REASONS:
Berkshire’s Board recommends a “no” vote on this proposal. The Board does not believe that chartering a new committee of independent directors on Artificial Intelligence is necessary or in the best interests of shareholders.
The Board periodically receives updates on the major risks and opportunities of Berkshire’s operating businesses. Berkshire manages its operating businesses on an unusually decentralized basis and has minimal involvement in these businesses’ day-to-day activities. The creation of a new, independent Board committee focused on Artificial Intelligence would be inconsistent with Berkshire’s culture and is unnecessary.
Consistent with Berkshire’s decentralized management model, Berkshire places the obligation to assess and manage risk on its subsidiaries; the subsidiaries are required to regularly assess and review their individual operations and compliance risks and document an annual risk assessment that captures the compliance risk areas set forth in Berkshire’s Prohibited Business Practices Policy (publicly available at https://berkshirehathaway.com/govern/pbpp-2024dec.pdf). This risk assessment is required to take into consideration the management of emerging risks to ensure compliance with applicable laws. Risks related to the use of new technologies such as Artificial Intelligence are specifically included in the scope of external risks examined by the subsidiaries.
Berkshire’s Governance, Compensation and Nominating Committee develops and recommends corporate governance guidelines applicable to the Company and its Audit Committee reviews how the Company assesses and manages the company’s exposure to risk. The Board believes that this governance structure, combined with the risk assessment obligations placed on its subsidiaries related to the use of Artificial Intelligence, provide an appropriate level of oversight at this time and an independent Artificial Intelligence committee is not needed. Accordingly, the Board recommends that our shareholders vote against this proposal. (Source: BRK Def 14A 2025)
In short, “shareholders would not be served by changing how we’ve done things in the past.”
It should be noted that the Berkshire audit committee also has responsibility for cybersecurity oversight, a leading example of poor governance practice as well. The Berkshire Prohibited Business Practices Policy mentioned in this statement reflects a compliance-focused risk assessment model. Given that there are few to no U.S. regulatory requirements in place governing the use of AI, a compliance-driven model will not go far enough to properly understand and assess AI risk, because there are no requirements to comply with.
Mr. Buffett is quoted as saying, “risk comes from not knowing what you are doing.” The Tulipshare proposal attempts to make sure that does not happen in the Berkshire boardroom. It also encourages Berkshire to set the AI tone-at-the-top for its portfolio companies and to lead against a significant unknown, which is when leadership is needed most.
I’d like to encourage Berkshire to set the tone-at-the-top for American business and boardrooms by leading in the boardroom on AI. But that requires greater boardroom effectiveness than the current BRK governance status quo will deliver, and it requires a vote FOR the Tulipshare shareholder proposal. Fortunately, investors can save the day and vote for their interests.
I chose AI boardroom leadership as I voted my shares today FOR the proposed BRK AI committee. I invite other BRK investors to join me in voting for BRK boardroom leadership on AI as well by casting their vote FOR the proposed AI committee.
There’s far more to gain than to lose with a FOR vote. Join me on this AI boardroom leadership journey.
I vote for AI boardroom leadership