In 1981, MTV launched with a song that (cheekily) declared "Video Killed the Radio Star." They played it again this past New Year’s Eve, when MTV’s dedicated music channels shut down.

The premise was simple: a new technology had arrived, and it was going to make the old one obsolete. Radio was done. Over. Replaced.

Except... it wasn't.

Radio didn't die. It adapted. It found new formats, new audiences, new reasons to exist. Video didn't kill radio — it forced radio to get better at being radio. 

I've been thinking about that song a lot lately as I hear a question with increasing frequency: What's the point of a compliance training company like Rethink in the age of AI?

It's the same assumption, dressed in new clothes. A powerful new technology arrives — platforms that can generate good content instantly — and our first thought is what it might immediately render obsolete. It's tempting to imagine generative AI as a magic bullet: input your policy, output great training, move on to the next thing.

So it's reasonable for compliance leaders to wonder: If we have these tools at our fingertips, why work with a training partner at all?

Here's what we're learning as AI moves from hype to reality: AI isn't replacing compliance training companies anytime soon. But it is forcing us — and helping us — to get better at what we're actually here to do.

What AI Does Remarkably Well

Let's start by being honest about what's changed.

AI excels at speed and scale. It can draft a bribery prevention scenario in thirty seconds. It can adapt that scenario for five different regional or industry contexts. It can generate discussion questions, knowledge checks, and facilitator guides — all before you've finished your morning coffee.

The productivity gains are real. Tasks that once took days now take hours. Work that required specialized skills is now accessible to generalists. And capabilities that seemed impossible just two years ago — like instant translations or mass personalization — are quickly becoming reality.

This creates genuine pressure on the compliance training industry. And that pressure is healthy. It forces all of us to ask: What value do we actually provide that a smart compliance professional with good AI tools couldn't produce themselves?

What's Still Surprisingly Hard

Yet there’s still a gap between what AI can generate and what great looks like.

Take the new wave of tools that promise to transform your policies into avatar-narrated videos or AI-generated podcasts with a single click. The pitch is irresistible: upload your bribery policy, click a button, and receive a polished video featuring a photorealistic avatar delivering your content. It sounds like magic.

Here's what we've learned from testing these tools: the "easy button" versions consistently produce content that feels robotic or oddly off. The pacing is wrong. The emphasis lands in strange places. The avatar's expressions don't quite match the gravity of the topic. You can see that it's AI-generated — and worse, your employees can too.

With enough human intervention — adjusting scripts, fine-tuning tone, directing emphasis and pauses — these tools can produce something genuinely good. But that effort takes it from the realm of "magic" back to something more like video editing. You need to understand both the technology and good learning design to use them effectively.

Right now, we’re finding that the best learning experiences come from stacking multiple capabilities and tools: well-made AI-generated videos placed within a course learning experience, backed by a mechanism that captures useful data and analytics for program feedback. Getting there requires a range of technologies, each with its own learning curve.

Similarly, AI can be a great junior partner for drafting content, but its drafts still need a close review from an editor with compliance expertise.

The challenge isn't that AI produces bad content — it's that it produces content that looks impressive on the surface but falls apart under scrutiny: subject matter gaps, logical inconsistencies, slick but empty language. (We routinely find ourselves telling generative AI: “Try again. Right now, there’s no there there.”) 

In our own work, we find we have to put a lot of existing compliance expertise into our AI tools to get something useful out.

And even when AI produces a solid first draft, there are dozens of decisions required to turn that draft into effective training. 

For example, AI can produce a conflict of interest scenario, but it can't tell you whether that scenario will land as credible or patronizing with your specific workforce. It can't feel when something is tone-deaf given recent events. It can't recognize that your leadership team will balk at certain phrasing, or that your frontline employees will tune out if you lead with policy instead of story.

These aren't edge cases. These decisions are the work. And right now, AI can't make them — at least not reliably, and not without significant human oversight and expertise.

The reality most compliance teams are discovering is this: They're not starved for content. They're drowning in it. What they need isn't more content, faster. They need better content that actually works.

The Real Question

Which brings us to the question that actually matters.

It's not "Can AI do this?"

It's "What does effective compliance training actually require?"

Because if the goal is simply to produce content — to have something to deploy, to check a box, to show activity — then yes, AI can absolutely do that. Faster and cheaper than any human ever could.

But if the goal is to create training that employees don't tune out, that genuinely helps people navigate real ethical dilemmas, that produces measurable behavior change, and that holds up under regulatory scrutiny — then AI is one tool in a much larger ecosystem.

(To be continued)