Regulating AI does not equate to slowing innovation

That last response, just four words, has echoed in my mind through policy discussions and technology summits. These teenagers intuitively understood what many experts miss. They recognized that artificial intelligence isn’t an autonomous force with inevitable outcomes. It’s a human creation whose impact hinges on human choices, choices currently being made in rooms where most of us have no seat at the table.

From rural communities to corporate boardrooms, AI is reshaping how we live, work, and learn. It already informs decisions about health care, education, credit, and justice. Yet the vast majority of people affected by these systems have no way to see how they function or to influence how companies build them. Some systems replicate bias in hiring, automate the denial of insurance claims, and make flawed assessments in the criminal legal system. These are not anomalies. They are symptoms of a deeper misalignment between technology and public accountability, and the trajectory of AI’s impact on society won’t be determined by algorithms alone but by the governance decisions we make today.

We’ve seen this pattern before. The Industrial Revolution promised abundance but delivered 80-hour workweeks in dangerous factories until labor movements secured the weekend, workplace safety laws, and child labor prohibitions. These weren’t inevitable outcomes but the result of deliberate governance choices. The internet democratized access to information but also created a surveillance economy that commodified personal data, prompting privacy laws like Europe’s General Data Protection Regulation to establish new boundaries. Social media gave voice to millions but also eroded public trust in institutions and accelerated polarization. Each time, the technology arrived before the rules, and the gap between them determined who benefited and who bore the costs.

Now AI raises the stakes: deeper entanglement, faster decisions, and increased opacity in areas that affect individual lives. What’s at issue is no longer just convenience or productivity. It is the structure of our institutions, the distribution of opportunity, and the credibility of the systems we rely on. To close the dangerous gap between AI’s advancement and societal readiness, we must prioritize education, transparency, and meaningful inclusion.

AI literacy must become foundational. That doesn’t mean turning every student into a programmer. It means teaching people to understand how algorithms shape their lives and how to interrogate the systems around them. Finland’s “Elements of AI” program is one model. In the United States, the AI Education Project, which receives funding from my organization, is helping schools integrate accessible AI curricula.

Transparency is the second requirement, and we cannot rely on companies to self-regulate. Policymakers must require high-impact AI systems to include public documentation explaining what data they use, how they function, and how they are monitored. A public registry of such systems would give researchers and journalists the tools to hold them accountable.

Inclusion must be a requirement, not a slogan. That means putting power in the hands of the people most affected by AI systems. Organizations like the Algorithmic Justice League already model what community-driven innovation can look like. Procurement policies and regulatory standards should reward that kind of leadership. Corporate boards should oversee AI deployment with the same rigor they apply to financial audits. Investors can require disclosure of social outcomes. Policymakers can create incentives for responsible development and long-term thinking.

Counterintuitively, democratizing AI governance does not equate to slowing innovation. It prevents technological dead ends. When Wikipedia adopted a decentralized editing model, it improved both breadth and accuracy faster than traditional encyclopedias could. The pattern is consistent: Technologies that distribute decision-making tend to be more adaptive, resilient, and ultimately more valuable. But while it’s possible to align technological development with the public interest, we haven’t yet created the rules that would make this happen.

All the same, we are beginning to see early examples of what inclusive AI governance looks like in practice. The Global Digital Compact calls on the United Nations to build more participatory multilateral structures for sharing best practices and scientific knowledge. Here in Massachusetts — long a hub for progressive tech policy — the Berkman Klein Center has launched community workshops to enable nontechnical stakeholders to evaluate algorithmic fairness.

If you’re concerned about these issues, the most immediate step is to join local oversight efforts. Contact your city council about whether AI systems are being used in municipal services. Ask your employer about its AI evaluation practices. Connect with local organizations that support citizen engagement in tech governance, such as Tech Goes Home in Boston, which is also funded by my organization. These local actions help establish the precedent that AI systems should be evaluated not just on efficiency but on their broader societal impacts.

The students I spoke with intuitively grasped what many decision-makers overlook: Creators embed their values into technological systems. As AI reshapes our institutions, the question isn’t whether it will advance quickly but whether it will advance justly. Those students were right: We cannot let AI’s tools write our future. That’s up to us.




