AI voices have gotten genuinely good. They sound like real people, they read what you give them, they don't get sick, and they cost a fraction of paid talent. They also don't belong everywhere on a radio station. Knowing the difference is the whole game.
If you haven't listened to a current-generation AI voice in the last year or two, you're working with an out-of-date impression. The robotic-sounding text-to-speech voices most of us remember from GPS units and screen readers have been replaced by neural-network voices that — on the right script, in the right context — are indistinguishable from human voiceover work to the average listener. Inflection. Breath. Pauses. Emphasis. The new generation gets most of those right most of the time.
That changes the conversation. Three years ago, the question was whether AI voices were good enough for radio. They mostly weren't. Today the question is more interesting: where they belong, where they don't, and how to use them without undermining the part of the station that actually matters.
There's a category of station content that is, by design, supposed to sound consistent rather than personal. Utility content. The same words, in the same order, hour after hour, day after day. Listeners aren't looking for a personality in this kind of audio — they're looking for the information. AI voices fit this work perfectly.
This is where AI voices get most interesting for small stations. There are kinds of content such stations historically couldn't produce at all because they didn't have the talent for them. AI puts those within reach.
For a small station with limited budget and limited staff, AI voices have done something the technology rarely does: made it possible to do more, not less.
Now the other side. There's an equally clear category of station content where AI voices cause more problems than they solve, and the stations that run into trouble with AI usually do so by ignoring these boundaries.
Morning shows. Personality-driven midday hosts. Talk shows. Interview formats. Anything where the listener is tuning in for the host as much as the content. AI voices can read words. They cannot be a person the audience comes to know and trust over years of listening, and any attempt to fake that is felt by the audience even when they can't articulate why.
The station's connection to its community lives in the moments where the voice on the air is unmistakably somebody who's actually here. A live shoutout to a local restaurant. A real reaction to last night's high school game. A casual aside about the weather that everyone in town actually noticed. These are the moments that make local radio local. An AI voice reading "the high school football team won last night" is technically correct and emotionally empty.
News about a local tragedy. A condolence. A correction. An apology. Anything that requires the audience to feel that a real human being is on the other end of the microphone. An AI voice reading sensitive content doesn't just fall flat — it can come across as deeply wrong. The technology isn't the problem. The choice to use it in the wrong place is.
Live sports calls. Breaking news. Listener interactions. Live remotes. Anything where the script is being written in real time. AI voices need scripts; the moment the content has to react to the moment, AI is out of its depth.
Here's the part that's evolved most quickly. A few years ago, listeners could spot an AI voice in two seconds. Today, with current-generation neural voices, the casual listener often can't distinguish AI from human at all on short utility content. Long-form content is harder to fake; over a sustained read, the small inconsistencies of pacing and inflection start to add up. But for a 15-second weather check or a 30-second sponsor read, the average listener cannot tell.
The tells that remain, even in good AI voices: local names pronounced wrong, emphasis landing on the wrong word, and the small inconsistencies of pacing and inflection that accumulate over a sustained read.
Most of these can be improved with care. Pronunciation guides for local names. Script tweaks to put emphasis where the AI will respect it. Mixing in human-recorded variants. The tools are getting better, and the gap is closing. For now, the people who notice the AI most are people in radio.
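The pronunciation-guide fix mentioned above can be as simple as a find-and-replace pass over the script before it goes to the TTS service. A minimal sketch, assuming a hand-built table of phonetic respellings (the place names and respellings below are hypothetical examples; build the table from your own market):

```python
# A pronunciation-guide pass applied to a script before it is sent
# to a TTS service. The names and respellings are hypothetical --
# populate the table with the local names your AI voice gets wrong.
import re

PRONUNCIATIONS = {
    "Boise": "BOY-see",
    "Spokane": "spoh-KAN",
    "Puyallup": "pyoo-AL-up",
}

def apply_pronunciations(script: str) -> str:
    """Replace each known name with its phonetic respelling.

    Matches whole words only, so a name embedded inside another
    word is left alone.
    """
    for name, respelling in PRONUNCIATIONS.items():
        script = re.sub(rf"\b{re.escape(name)}\b", respelling, script)
    return script

print(apply_pronunciations("Sunny skies in Spokane and Puyallup today."))
# -> Sunny skies in spoh-KAN and pyoo-AL-up today.
```

Most TTS engines read a phonetic respelling like "spoh-KAN" far more reliably than the written name; some services also accept formal phoneme markup (e.g. SSML), which is worth checking in your provider's documentation.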
Should stations tell listeners when a voice is AI-generated? The honest answer is that the industry hasn't settled on a norm yet. The FCC has not, as of this writing, set a general disclosure rule for AI voices outside of political advertising. Some stations disclose openly. Some make no mention. Some split the difference with "computer-generated weather sponsored by..." language that signals the synthetic origin without making a fuss about it.
Our own view: err on the side of transparency. Not because regulation requires it — it doesn't, mostly — but because listener trust is the only thing a small station has that the big ones can't out-spend. A station that gets caught running undisclosed AI voices in places listeners assumed were live talent loses some of that trust permanently. A station that uses AI openly, in clearly utility roles, doesn't.
Worth separating: synthetic voices and voice cloning are two different things, and they raise different questions.
Synthetic voices are AI-generated voices that don't correspond to any specific real person — the voices that ship with services like ElevenLabs, OpenAI's TTS, Google's WaveNet, and others. They're generic in the same way a stock photography model is generic. There's no consent issue and no impersonation issue.
Voice cloning is taking a recording of a specific real person's voice and training an AI model to produce new audio in that voice. This is technically straightforward today — many of the same services offer voice cloning as a feature — and it raises real questions. With consent and compensation, voice cloning can be a legitimate tool: a regular voice talent who clones their own voice for show prep or vacation coverage, for example. Without consent, it's a legal and ethical minefield. Several states have already passed or are considering laws against unauthorized voice cloning. Federal legislation is on the way. Don't clone someone's voice without their explicit, written, ongoing permission.
For a small or mid-sized station weighing whether and how to use AI voices, here's a sensible path: start with clearly utility content, where consistency matters more than personality. Keep personality-driven, live, local, and sensitive content human. Disclose the synthetic voices you use. Never clone a real person's voice without explicit, written, ongoing permission. And revisit the boundary as the tools improve, because it will keep moving.
AI voices in radio aren't a replacement story. They're an augmentation story. The stations that do best with this technology are the ones that use it to do more — more languages, more dayparts covered, more utility content done well, more local programming made possible by freeing up staff from the work AI handles. The stations that get into trouble are the ones that try to use AI to fake what humans have always been better at: showing up, paying attention, and being unmistakably from here.
Used well, AI voices give small stations capabilities they couldn't afford a decade ago. Used badly, they erode the listener trust that makes a small station worth listening to in the first place. The technology doesn't make the choice. The station does.
For more on the broader question of how to add voices to a station — community contributors, voice banks, AI, and the rest — see our companion piece, More Voices on Your Station.
TuneTracker is professional radio automation for the Mac — built to handle whatever audio you produce, however it was produced. Free version available, no time limit. Try it for yourself.
Download TuneTracker Free