The Danger of AI’s Soft Landing – Charles T. Rubin

I appreciate the attention given to my article by my interlocutors. I regret that the skeptical and anti-utopian thrust of my original essay was not clearer to them, Rachel Lomasky in particular. The last words of James Pethokoukis’s essay will serve as a good way to start clarifying. Although he concedes that there is a dystopian streak in the thought of some of the most enthusiastic promoters of AI, he denies that their vision will wholly guide the development of this technology. Instead, Pethokoukis argues, the radicalism of the techno-utopians “is hardly a unanimous view and, further, an unlikely scenario on any relevant timescale.”

But here is the problem: I think the same thing could have reasonably been said of the Bolshevik program in 1916. That did not stop the Bolsheviks or their many fellow travelers around the world. It did not stop decades of misery and oppression in the Soviet Union and Eastern Europe, or the echoing consequences in today’s Russia. It may be impossible to achieve a goal, and yet a great deal of damage might be done in the attempt. I don’t know any more than Pethokoukis whether those seeking the end of work and governance are a minority or majority of Silicon Valley types, but I do know their voices are loud, influential, well-funded, and not exactly bucking existing trends. So I think it is worth taking their arguments seriously, in hopes of avoiding the “screaming eagle” return to reality I mentioned in my original essay.

Yet having read the responses, I’m now rethinking what a “tap on the shoulder” would look like. I’m not sure it can be that gentle. It might be different if the not-merely-Christian anthropology of human uniqueness that Ray Nothstine adduces were more widely appreciated in scientific, technical, and commercial circles. It might be different if our political and social culture were healthier than it is today; then I would be more impressed by his examples of how AI can support federalism and citizen engagement. But we are where we are. My critics believe in an AI soft landing; I remain a skeptic.

Since I started teaching, I have regularly taught from Adam Smith’s Wealth of Nations, particularly the story of a young boy who used his ingenuity to put himself out of a job. Happy with his newfound freedom, he goes off to play with his fellows. In recent years, I’ve asked my students what they suppose happened when this bright young thing got home, flush with his play and achievement. I think he got a beating. After all, times were tough, and he was helping to support his family. 

Today we value, rightly, a world in which more and more children and young people can live lives of leisure and may devote themselves to play and education. I have no problem imagining a world where AI contributes to more of the same. The new normal does not have to mean fewer and fewer people working at all. It might mean people retiring earlier, or yet further delay in entering the workforce, or a much shorter work week. It might mean a host of jobs that we can’t imagine today. Maybe “work” can be liberated from “making a living.” These are examples of the sort of soft landing I think my critics have in mind. Such a future of “creative destruction” would more or less replicate our past experience with mechanization and automation. It is worth remembering, however, that the initial impact of mechanization and automation in the West was not pretty. There was over a century of terrible factory labor, as well as the displacement or destruction of traditional communities.

I want to look at the kind of soft landing that may arise when AI brings creative destruction to medicine and higher education. Pethokoukis, for instance, seems to think that regulatory requirements will keep physicians in business in the face of developments in AI. I want to spend some time on this example because it strikes me as symptomatic of an important, broader dynamic.

Note first that his point is that regulatory constraints will keep doctors in business, not, say, that doctoring properly understood requires professionally trained skills and abilities, and relationships with patients, that are uniquely human. It is worth noticing, then, that the regulatory requirements that protected physicians were eroding before AI came on the scene. For example, medical encounters are increasingly with nurse practitioners or physician assistants, because doctors today are less diagnosticians than gatekeepers for ordering the tests that, in all but the most complex cases, will lead to a diagnosis and justify treatment. Knowledge and skills are needed to order tests, but not the level of knowledge and skill that requires over a dozen years of expensive schooling. The promise of “personalized medicine” is ever more precision in diagnostics based on more such testing. Finally, if a doctor is affiliated with a hospital, her treatment decisions are already determined not by her own years of experience, training, and informed intuitions but by standard operating procedures and expert systems, to say nothing of the systems behind what insurance companies will or will not pay for.

These three ways in which existing regulatory requirements are already failing to protect autonomous, MD-centered medical practice make me think that regulation will be a weak reed when it comes to AI. I can easily see it yielding to the following lower-cost scenario. You’ll have a nice chat about your symptoms with a sympathetic-sounding AI who will “see you” immediately; it will even make house calls! You may have to wait a little for the techs who will hook you up to the appropriate machines for the tests the AI orders (a human job soon to be eliminated by more sophisticated devices). You’ll quickly get an AI-validated treatment plan that might include a robotic surgeon, or an AI therapist, or some new AI-inspired drug regimen, the drugs provided by an automated pharmacy with a drone delivery option. You’ll pay extra to see a human MD.

But why would you see a human, if you are quickly getting the right personalized care from the automated system? Apparently, AI is already better at reading mammograms than human radiologists. Don’t we want to save more lives? Of course! Just because there has been a radiology specialization in the recent past does not mean there must be one in the future. We should grab at the chance to do better with AI, and we will if those results are validated. And we will grab at the next chance, and the next one, and the next one.

Nonetheless, I think the case of physicians suggests what will likely be a general tendency as use of even the limited AI we now have grows: actual human relationships in what are sometimes called the “caring professions” will increasingly come under pressure or be displaced entirely. In a world where loneliness is already a widely acknowledged problem, does this point have to be belabored? Think of virtual shopping, AI “friends,” “lovers,” and therapists, universities keeping online classes going long after Covid, human customer service agents insulated behind layers of automation. The best soft landing runs into the hard reality of human beings who become increasingly ungrounded in human relations. There is a dialectic here: AI reduces the need for human relationships, but we are ready to accept AI substitutes because, independently of AI, social and cultural changes are leaving people with diminished capacity for or interest in such relationships (see the movie Her).

Something similar seems to be happening in higher education. I am referring to the apparently widespread use among students in colleges and universities of AI to do their assignments. If one steps back the slightest bit, it is amazing that this kind of cheating (it is already telling that some are loath to call it that) should so quickly have become rampant. No student would buy a robot, take it to the gym, put it on an exercise machine, and believe that he or she was going to become fit. Why, then, turn your schoolwork over to an AI and expect to become an educated human being, that is to say, someone with some regard for and experience of the true and the beautiful, or of “the best that has been thought and said,” or with the capacity to live in the world as a free human being? All of these goals require the exercise and stretching of mental capacities.

But it is far from clear that becoming an educated human in any of these formative senses is the goal of higher education today. Students are seen and see themselves as “consumers,” in effect buying a degree, or being credentialed, or (at best) trained. Many faculty equate education with indoctrination to one point of view, or “exposure” to many points of view. AI is quite conducive to any or all of these goals, some perhaps valid in their own way, but hardly education in any classical sense.

The general tendency here was already anticipated by those who, even before AI, argued that the Internet was making us stupider. How much more so, if AI spares students all the trouble of reading, thinking, and writing? Or even merely of learning how to understand the accent of that TA whose English is sub-par? There is already much high-sounding discussion of “learning how to use AI critically” and “helping students develop the cutting-edge skills necessary for the 21st-century workplace.” But the reality will be students with flaccid muscles using a robot to lift weights. The reductio ad absurdum of this situation has likely already happened, since it is widely reported that, obscenely, some faculty use AI to grade student papers. The future of higher education is one where one AI grades the work of another.

To sum up: more than my critics, I think we should worry about the false promises of those who advocate for a world without work and governance, because even if their goals are ultimately impossible to achieve, the attempt could lead to terrible things. But we agree that those goals are very unlikely to be achieved on their own terms. We also agree that there are ways in which AI could support, and perhaps already is supporting, local government, federalism, and citizen engagement. But at the very least, it should be acknowledged that these admirable use cases will exist alongside deeply problematic ones, in which AI feeds into and encourages existing toxic trends that undermine human relationships and intellectual abilities.

But what else is new? Much of what we call technological progress involves the abandonment or degradation of once highly valued human capacities. It is true that the elimination of some kinds of work may be a net gain for human dignity; the elimination of any single sort of labor is unlikely to threaten it. But most importantly, humans have a great capacity for complacency; we often don’t know what we’ve lost even when it’s gone. Already, for some, the classical ideals of education I mentioned above look as out of place in the modern world as knowing how to make a Folsom point.

Perhaps at some future date, people who have grown up with the constant presence of assistive AI guides will look back with some mix of wonder and contempt at the generations that did without them, just as today’s rising generations have no real sense of how people once got by without cell phones. Perhaps also the idea of being touched by a human healthcare worker will seem repellent to them, as leeches do to us. Perhaps some not very bright undergraduate of the future will ask an AI to explain why, for a couple of centuries, concerted efforts to make sure everyone knew the “3 Rs” of reading, writing, and arithmetic were a “progressive” ideal. Poor devils, stuck with books! How did they learn anything without multisensory simulations?




