Real progress with AI isn’t flashy; it’s built in the quiet middle ground of steady experimentation.
A few months ago, over coffee, a CIO from a Fortune 100 manufacturer asked me a simple but telling question: “Where are we, really, in the adoption of AI?” I told him the truth: we’re all in the same place.
Despite the headlines and hype, most organizations today are standing in the same zone, the quiet, unglamorous middle ground of experimentation. And that’s exactly where real progress begins. We’ve moved beyond the pilot phase, when every company needed an “AI strategy” to signal relevance. But few have reached full-scale integration. What’s happening now is something more important. We’re learning what it means when human intelligence and synthetic intelligence actually work together.
And that, quietly, is a very good thing.
The Middle Is Where Real Progress Happens
Across industries including manufacturing, logistics, healthcare, and education, the pattern is strikingly consistent. Organizations have stopped treating AI as a shiny object and started treating it as an operating habit. They’re building the muscles of experimentation by assessing, testing, learning, building, and scaling. No moonshots. No hype decks. Just the disciplined practice of iteration. It’s what I call the architecture of adoption.
When my collaborators and I built the Open Talent framework years ago, we argued that brilliance is abundant but opportunity is scarce. That still holds true in the AI era. Intelligence, human or synthetic, isn’t what’s scarce anymore. Architecture is. The way you connect people, partners, and machines now defines competitive advantage.
Experimentation Is the Architecture of Adoption
At Harvard’s Digital, Data & Design Institute, where I work with colleagues studying Human plus Synthetic systems, we’re seeing this shift everywhere.
The companies that are quietly pulling ahead don’t frame their work as “AI projects.” They treat it as experiments in orchestration, exploring how humans and machines share context, tasks, and responsibility. They’re building what I call a Work Operating System, a coordination layer of workflows, events, and guardrails that lets humans and AI agents operate from the same playbook.
In these systems, AI agents aren’t faceless tools. They’re treated like team members. They’re onboarded, supervised, evaluated, and eventually retired. Human workers don’t compete with them. They learn to collaborate with them.
Forward-thinking companies are also creating skills passports, dynamic records of what employees actually learn through these experiments, not just what’s printed on their résumés. It’s not glamorous work. It’s configuration, governance, and iteration. But as Peter Drucker might remind us, architecture always eats execution for breakfast.
What Good Experimentation Looks Like
The best organizations don’t experiment everywhere. They experiment deliberately. They pick a few meaningful workflows, run structured tests, measure results, and, most importantly, build reusable patterns that others can learn from.
One of the clearest examples is Coursera. When CEO Jeff Maggioncalda began experimenting with ChatGPT in late 2022, he didn’t greenlight a dozen flashy initiatives. Instead, he launched Project Genesis, a disciplined portfolio of experiments organized around three metrics: value, cost, and ease.
That focus produced real results. Translations that once cost nearly $10,000 per course now cost around $20, opening 4,400 courses in 21 languages. Coach, an AI-powered learning assistant, improved student quiz pass rates by about 10 percent. Course Builder lets educators assemble new curricula in hours instead of weeks. In less than a year, Coursera turned AI from a pilot into an operating advantage, lowering costs, expanding reach, and accelerating learning. None of these wins came from one big breakthrough. They came from small, structured experiments, each modest enough to fail safely but rigorous enough to learn from. That’s what real experimentation looks like: steady, cumulative progress.
Human Plus Synthetic in the Wild
This learning loop of testing, measuring, improving, and repeating is showing up across sectors. At the Mayo Clinic, radiology teams now operate hundreds of AI models while employing 55 percent more radiologists than they did in 2016. AI didn’t replace expertise. It scaled it. By instrumenting workflows and embedding AI as a collaborator, Mayo turned synthetic intelligence into a teammate, not a threat.
The architecture of adoption isn’t just technical. It’s social. It’s about how humans and machines share context and responsibility. Across industries, the organizations that treat experimentation as infrastructure, not as a one-off project, are the ones gaining traction. They’ve made experimentation part of their operating model.
Why the Middle Feels Messy
Executives often tell me this middle phase feels awkward, and they’re right. Experiments rarely deliver instant ROI. Governance questions multiply. HR teams wonder how to recognize employees who supervise AI agents instead of managing people. But that discomfort is part of the process. Discomfort is data. It means your organization is learning faster than its governance can keep up.
We’re learning new forms of teamwork, between humans and between humans and machines. In earlier work, I called this the Human Intelligence + AI organization. It’s not about automating away the human layer; it’s about orchestrating it. The real challenge isn’t automation. It’s coordination. If your human systems and your AI systems can’t talk to each other, no amount of model horsepower will save you. The middle feels messy because it’s where architecture gets built.
HR Is the New R&D
One of the most surprising developments in this phase is who’s actually leading it. It’s not always the CTO. It’s often the CHRO. Human Resources is becoming the outcome integrator of the AI era. HR owns the skills taxonomy, the incentive systems, and the culture change that make human-agent collaboration sustainable. When HR treats experimentation as a learning engine rather than a compliance exercise, adoption accelerates. People stop fearing AI and start shaping how it’s used. In the Human plus Synthetic era, HR doesn’t just manage people. It designs systems of learning.
Making the Ordinary Heroic
What stands out most in this phase isn’t the flash. It’s the steadiness. The real heroes of this moment aren’t the teams chasing viral demos or billion-dollar valuations. They’re the teams wiring their organizations for continuous learning, building feedback loops, adjusting incentives, and refining workflows one experiment at a time.
That’s what the Open Talent movement has always been about, turning work into a living, adaptive system that evolves as fast as the world does. Now, with AI in the mix, that system includes teammates we can’t see but can measure. It’s tempting to call this a revolution. I prefer to call it a renovation. Progress in the Human plus Synthetic era won’t come from one big leap. It will come from thousands of small, well-run experiments, each one ordinary but collectively transformative.
Where We Really Are
So when that CIO asked me where we really are in the adoption of AI, I’d give him the same answer today. We’re right where we should be, in the middle, tinkering, testing, learning, and redesigning our architectures so that human and synthetic intelligence can flow together. It’s not glamorous. It’s not headline-worthy. But it’s the foundation of what comes next. When the history of this era is written, the defining story won’t be about the first AI that passed the bar exam or the fastest model to reach a trillion parameters.
It will be about the moment when work itself became legible, when every process could be read, improved, and shared by both humans and machines. That’s when AI becomes truly useful. That’s when intelligence, human or synthetic, turns into progress. And it starts here, in the quiet middle ground, with the simple, unglamorous habit of experimentation.
