
Friday Finds — The gap between training that looks finished and training that works


Spend 10 minutes. Walk away with actionable ideas you can use Monday morning in your L&D program.


Supported by iSpring

Where is L&D actually heading next? That’s the focus of iSpring Days 2026, a free virtual conference that brings together global L&D and HR leaders to explore what’s really working. Expect sessions on learning metrics, executive alignment, AI governance, and more. I’ll also be speaking as part of the lineup.

Register for free →

Most organizations have a plan for AI-generated training. Someone reviews it before it goes live. A manager signs off. Box checked.

Here's what that plan misses: a human who can't recognize a badly designed learning experience can't catch one that AI produced either. And right now, a lot of organizations are approving training that looks professional, reads clearly, and quietly produces no learning at all.


The people most excited about AI outputs are often the least equipped to question them

In 2025, researchers tested AI receptivity across six studies with a nationally representative U.S. sample. The finding was consistent every time:

Lower AI conceptual knowledge predicted higher willingness to use and trust AI outputs.

Why? People with less understanding of how AI works were more likely to experience it as magical.

Magic doesn't get questioned.

Nielsen Norman Group found the same pattern playing out in real usage. They identified a specific user type: the naive power user. Fluent at prompting. Comfortable with the tools. Largely uncritical of what comes back.

These aren't careless people. They're confident ones. And confidence without domain knowledge is exactly where errors get approved.

What an expert actually catches

Here's what that looks like in practice.

AI produces a module where all the practice questions cluster at the end — killing the spacing effect before it has a chance to work. It goes to review. It gets approved. Nobody flags it, because nothing looks wrong.

It uses humor. Genuinely funny humor. A character who's catastrophically overconfident, a scenario that makes people laugh out loud. The problem: the joke has nothing to do with the concept it follows. It encodes separately. It retrieves separately. It's decoration dressed as design. That distinction requires knowing why connected humor works — and what happens when the connection isn't there.

It front-loads every piece of information a learner needs before giving them anything to do with it. Maximum cognitive load at exactly the moment working memory is most constrained.

Looks thorough. Functions badly.

The module has objectives. A summary. A completion button.

What it doesn't have is a well-versed learning professional vetting it.

The reframe & the honest implication

"Human in the loop" catches factual errors. Wrong date, bad link, compliance gap.

"Expert in the loop" catches structural errors. The design decisions that determine whether learning actually happens.

Not the same job. And here's what that distinction actually requires: you have to be the expert. Not in title. In knowledge.

A lot of L&D practice runs on production skills, tool fluency, and institutional familiarity. Not on a working understanding of how memory consolidates. Not on why spaced practice matters, or what cognitive load theory says about sequencing. Those aren't required for most completions to go green. They're not in most job descriptions.

But they're exactly what AI can't replicate in its outputs. And exactly what a non-expert reviewer won't catch when they're absent.

Think about marketing. A legal reviewer clears copy for compliance. Only a marketer who understands persuasion, audience psychology, and the mechanics of attention can determine whether the copy will actually work. Legal can approve it. That doesn't make it good.

Your domain knowledge works the same way. It can't be substituted with oversight. It has to be built. Then applied.

The practitioners who will matter most in an AI-assisted L&D environment aren't the fastest tool users. They're the ones who have developed enough knowledge to know when something is wrong and why.

Also supported by Neovation

Policies are easy to create. Culture isn’t, and as companies grow, it’s often the first thing to blur. On March 18, join us for a live session on how AI can capture the real “how we do things here” and turn it into learning people actually use.

Save your seat for the “Protect Your Company DNA” webinar →


Worth your attention

Utilizing Generative AI for Instructional Design

This study finds that while generative AI tools like ChatGPT are powerful for overcoming “blank page syndrome” and reducing instructional design workloads, they still require human domain expertise. Instructional design knowledge remains essential to ensure the final output is reliable, feasible, and properly scaffolded for learning.

See what the research says →

Navigating the Jagged Technological Frontier

One finding that should be on every L&D professional's radar: when workers used AI on tasks beyond its capabilities, they performed 19 percentage points worse than those who didn't use AI at all — largely because they stopped questioning the output. The lesson: the value of the human reviewer depends entirely on whether that human can tell when AI is wrong.

Read the Harvard/BCG research →

The Impossible Backhand

This makes a point that's hard to argue with: AI produces outputs that are statistically plausible. A domain expert identifies outputs that are physically impossible. That gap is the whole argument. If a generation of junior professionals learns to use AI before developing independent judgment, they never build the pattern recognition that lets them spot when the model is wrong.

Read "The Impossible Backhand" →


The Bottom Line

AI raises the floor on training production. It does nothing for the ceiling.

The gap between something that looks like learning and something that is learning is real, and closing it requires exactly the kind of knowledge that has nothing to do with how well you write prompts.

The question isn't whether you have a seat at the table. It's whether you've built the expertise to earn it.

🎵 Today I'm listening to Ben Taylor
(James Taylor’s son, and how my son got his name.)

📍Where I'll be next

If today’s issue was useful, my book Think Like a Marketer, Train Like an L&D Pro goes deeper on designing learning that earns attention and drives action. And if you’ve read it, a short review helps more than you might think.

Friday Finds is an independent publication that I produce in my free time. You can support my work by sharing it with the world, booking an advertising spot, or buying me a coffee.


