

The Wall Street Journal’s piece, “The Boss Has a Message: Use AI or You’re Fired,” captures a growing pattern: companies are moving from AI curiosity to AI coercion. Performance reviews, staffing decisions, and even employment are being tied to how quickly employees adopt generative AI at work. For example, some organizations have “exited” employees who don’t get the hang of it; some firms rank people on AI usage and cut the bottom segment; others prioritize AI‑trained staff for plum assignments.
At the same time, the article shares Gallup’s findings that over 40% of non‑users don’t believe AI can help their work at all, and many AI initiatives aren’t delivering quantifiable value yet, in part because tools don’t learn from users’ context and workflows. Employees default to human colleagues for complex work instead, and back‑office functions show earlier ROI.
This “Use AI or else” approach doesn’t create capability; it creates compliance theater: anxiety spikes, shadow adoption grows, and value lags.
But designing the change so that employees see the point, feel safe experimenting, and can prove value in their real workflows sets everyone up for success.
There are three basic phases for optimal adoption, each with its own set of milestones.

Leadership needs to be fully aligned on AI commitments: how much to invest, and why. Researching what tools are out there, how they’re being used, who’s using them, and to what outcomes (drawing on clients, colleagues, and competitors) will give you a strong base to start from. Then align on which problems are worth solving first, and use that research to determine the best platforms for addressing them.
Once you have prioritized use cases, you can turn them into pilots through quick iteration. When building, it’s important to identify a few champions who are empowered to safely (and sometimes messily) test and refine everything before checking change readiness for launch.
Onboarding, training, cheat sheets, and ongoing reinforcement (FAQs, refreshers, incentives, and success highlights) are how AI becomes a day-to-day part of work. Keep the feedback and improvement loop continuous, while remembering to review and retire low-performing patterns.
At LOCAL, we started by focusing on creating more client value and making our work better by talking to the team about the monotonous, cumbersome tasks they would rather delegate, and then built agents around those tasks. Once people saw and felt the results, they were excited to find more tasks to automate. The fear evaporated and it became more about, "What else can I do?"
Treating adoption like a project — with a project lead, an objective, roles, a timeline, and measured outcomes — also helped us make it real, and kept everyone accountable.
Here’s an even more in-depth breakdown for getting started:
Instead of chasing AI everywhere at once, pick 8–12 real workflows where it can actually save time or lift quality right now. (Think: first-pass contract review for your legal team, tone-perfect customer replies for support, or auto-generated scenario commentary for finance.)
Map out each workflow step by step, then build small, complete "happy paths" with guardrails baked in. Treat these like mini-products by giving them real names, creating simple how-to guides, and showing people exactly when to reach for them.
Most importantly, measure what matters — how much faster things move, how clean the first draft is, how often you're redoing work — not how many times someone typed a prompt.
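To keep the focus on outcomes, here is a minimal sketch in Python, using made-up task records and field names, of what an outcome scorecard could compute: minutes saved per task, the rework rate, and how much of the first draft got rewritten, instead of prompt counts.

```python
# A minimal, illustrative scorecard. The records and field names are invented;
# in practice they would come from your own workflow tracking, not the AI tool.
from statistics import mean

tasks = [
    {"minutes_before": 90, "minutes_with_ai": 35, "redone": False, "draft_rewritten_pct": 20},
    {"minutes_before": 60, "minutes_with_ai": 50, "redone": True,  "draft_rewritten_pct": 70},
    {"minutes_before": 45, "minutes_with_ai": 15, "redone": False, "draft_rewritten_pct": 10},
]

avg_minutes_saved = mean(t["minutes_before"] - t["minutes_with_ai"] for t in tasks)
rework_rate = sum(t["redone"] for t in tasks) / len(tasks)
avg_rewrite_pct = mean(t["draft_rewritten_pct"] for t in tasks)

print(f"Avg minutes saved per task: {avg_minutes_saved:.0f}")         # speed
print(f"Rework rate: {rework_rate:.0%}")                              # redone work
print(f"Avg share of first draft rewritten: {avg_rewrite_pct:.0f}%")  # draft quality
```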
For the Skeptics: Lead with safety, accuracy, and policy; show controlled wins and human‑verified checkpoints.
For the Pragmatists: Lead with how many minutes/hours were saved per task, along with peer testimonials from those in similar roles.
For the Enthusiasts: Give them sandbox time for experimentation, opportunities to tackle more advanced prompts, and a pathway to become certified coaches. Then empower them to tell the story of their experience through internal communications.
Show your team what's possible with quick, concrete proof points. Run a "two-minute wins" micro demo series where an enthusiast shares a before-and-after of real tasks that used to take forever.
Provide examples side by side, and pull the successes together in a weekly "AI Value Roundup" that highlights small, measurable victories, including links to templates people can actually use.
Create real support structures so people can learn without fear. This might include regular office hours and drop-in clinics with trained AI coaches who can help on the spot, plus tracking frequent questions to build better training materials.
Give employees safe spaces to practice with synthetic or low-stakes data where mistakes won't hurt anyone's reputation, and create communities of practice so different teams and cohorts can learn from each other.
Celebrate responsible use rather than raw login counts, with team-generated badges that winners will want to brag about.
Build guardrails directly into your AI tools to make doing the right thing automatic. One-click policy overlays can apply the correct data handling rules, attribution tags, and logging based on what someone's actually doing.
Also make it dead simple to cite sources and flag AI-generated content directly inside the work. And when AI isn't confident about an answer, automatically route questionable outputs to a human reviewer so that no one has to guess whether something needs a second look.
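As an illustration only, here is a minimal sketch of that kind of routing rule. The confidence threshold, review queue, and labeling below are assumptions for the sake of the example, not any specific vendor's API.

```python
# A minimal, hypothetical sketch of confidence-based routing: drafts the model
# is unsure about (or that cite no sources) go to a human reviewer before shipping.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # assumption: tune per use case

@dataclass
class AIOutput:
    text: str
    confidence: float        # assumption: the tool exposes a confidence score
    source_links: list[str]  # citations captured alongside the draft

def route(output: AIOutput) -> str:
    """Label the draft as AI-generated, then decide where it goes next."""
    labeled = (f"[AI-generated] {output.text}\n"
               f"Sources: {', '.join(output.source_links) or 'none cited'}")
    if output.confidence < CONFIDENCE_THRESHOLD or not output.source_links:
        log_event("routed_to_human_review", output.confidence)
        return f"REVIEW QUEUE <- {labeled}"   # a person checks before anything ships
    log_event("auto_approved", output.confidence)
    return f"DRAFT READY <- {labeled}"        # still a draft; the owner signs off

def log_event(event: str, confidence: float) -> None:
    print(f"{event}: confidence={confidence:.2f}")  # stand-in for real audit logging

if __name__ == "__main__":
    print(route(AIOutput("Summary of the vendor contract...", confidence=0.62, source_links=[])))
```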
Stop measuring how much people use AI and start tracking where it actually improves their outcomes. Beyond the metrics, recognize the people who are building durable capability: reward employees who document reusable prompts, streamline team workflows, and help their peers excel.
Finally, make expectations crystal clear by publishing rubrics that show what "AI-enabled" work looks like at every level and function, complete with real examples people can learn from. This kind of "AI stewardship" matters more than one-off shortcuts.
Track what's actually working by measuring time saved, error rates, and whether people are happy with the results. Then be ruthless: invest more in the use cases delivering consistent wins and kill off those that aren't pulling their weight.
To keep momentum going, run quarterly "Prompt-to-Process" challenges with small prizes — but reward real process improvements, not just someone's clever prompt engineering.
Governance enforcement, privacy compliance, and safety are non‑negotiable. Mandate those. But for capability, Change Marketing™ can make the better way the easy way. When employees can see the value, try it safely with testing and learning, see each other’s successes, and get recognized for improving outcomes, adoption follows — and it sticks. Because you’re not just giving directions; you’re providing agency.
Bottom line: AI mandates create (com)motion. Our tried-and-true tactics create ownership — and measurable results.
Interested in revising or starting your own AI plan? Let’s talk.