AI: Hi, I’m Grishma, and this is not my voice.
JL: Wait, what?
GJ: Yep. What you just heard was an AI-generated clone of my real voice. It’s available right now, with tools that anyone can use to clone or distort their voice. Even the music you heard earlier was generated by AI.
JL: That is crazy. I mean, AI technology has really taken off, and it’s getting pretty hard to tell the difference between what’s real and what’s fake, especially when it comes to audio. We’ve been introduced to the concept of deepfake videos, but now, AI voices are being used for all sorts of things, from music to podcasts.
GJ: Exactly! It’s like, all of a sudden, creativity isn’t just for those with a fancy studio or a unique vocal talent. AI is making it possible for anyone to create or remix content, from writing a song to narrating a book or even mimicking someone else’s voice to produce a completely new sound.
JL: It’s like the floodgates are open; the potential for remixing and artistic reinterpretation has never been more accessible. Take artist 50 Cent, for example. He’s using AI to remix his own music. Some of these AI remixes are getting crazy good reactions from fans who didn’t even realize the audio elements were AI-generated until he said so.
GJ: It’s honestly pretty cool, but not everyone is sold on this idea. Some people feel that this is crossing a line when it comes to originality. Like, when does a remix stop being a reinterpretation and start being an AI imitation, or just “cheating”? Some can’t shake the feeling that it takes away from the real artistry. And since AI draws on already existing work, some argue it’s stealing from the artists who made those beats.
JL: And even with all the buzz about creative freedom, there’s also a huge gray area here — we definitely can’t dismiss the unsavory side of AI voice technology. Think about how easy it is now to scam people with deepfakes or voice clones.
GJ: Right. For example, a few months ago, a woman in Florida was tricked into giving away $15,000 after hearing what she thought was a cry for help from her daughter. It was actually a completely AI-generated voice on the phone, like mine.
JL: That’s terrifying. It’s one thing to use AI for something cool or creative, but when it’s used to deceive, the stakes are way higher. That’s a major concern for politics and law enforcement. And beyond personal scams, AI is infiltrating institutions. Take what’s happening in courtrooms, for example. AI-generated audio is being presented as evidence, and that raises serious questions about how to evaluate it. Courts and judges are even using AI as a decision-making aid, helping assess fairness in sentencing and case outcomes.
GJ: Not to mention that AI-generated voices are now being used for political messaging. It’s already happening with political ads, and there’s talk about politicians using AI in their recorded speeches. If AI is being used to sway public opinion, where does that leave us? It’s one thing for a private citizen to use AI for fun, but if it’s being used to influence voters or shape political discourse, that’s a whole different level of concern.
JL: And let’s not even get into the privacy implications. AI voice tech is showing up in schools, in surveillance, and in job interviews. A recent example is a New York school district that installed AI-powered surveillance systems in classrooms. The microphones are meant to monitor students’ behavior, but they’re also collecting and analyzing speech data without the students’ consent. Civil liberties groups, including the ACLU, have raised alarms, calling it a serious violation of student privacy. And let’s be real, once that data is out there, it could be used in ways we don’t even understand yet.
GJ: That’s honestly terrifying. And when that data is repurposed, like when AI is used to replicate a voice, your speech patterns could end up in an AI system without you ever agreeing to it. For example, making that cloned voice of mine took only five minutes, and it didn’t cost a thing. It’s that easy to replicate someone’s voice, which makes it a huge privacy issue. Honestly, it feels like we’re walking a tightrope when it comes to our own voices.
JL: And that doesn’t even touch on the cultural implications. One of the biggest risks is that voice cloning could erode linguistic diversity. Most AI voice models default to a neutral, Western tone, which, when you think about it, is problematic. Non-Western speech patterns often register as “unclear” to these systems, which means accents, dialects, and regional speech could eventually be flattened, stripped of culture, and homogenized.
GJ: Talk about the future getting really weird, right? It shows just how blurry the line can get. So, what do we do moving forward?
JL: Well, for starters, we need to seriously consider ethical regulations. Things like watermarking AI-generated voices, implementing voice verification systems and strengthening consent laws are a good place to begin.
GJ: Definitely. And along with that, we need to promote digital literacy by teaching people how to recognize when something is fake. We’re going to need new laws and regulations in places like the government, courtrooms and even classrooms if we’re going to keep pace with this technology.
JL: Well, I guess that’s something we’ll have to figure out together. I’m Joyce.
GJ: And I’m Grishma.
JL: Thanks for tuning in to our conversation about AI voices. And for the record, that last line is Grishma’s real voice.
