We need to talk about something that's happening in conference rooms and home offices across America right now. According to recent surveys, 91% of managers are using ChatGPT to help write performance reviews.
Let me be clear: I understand why.
You're managing remote teams across three time zones. You have seven direct reports instead of the four you had two years ago. HR is asking for quarterly check-ins now instead of annual reviews. The performance review template hasn't been updated since 2019, and honestly, writing meaningful feedback for Bethany in accounting feels impossible when you've only seen her in person twice since she was hired.
So you open ChatGPT. You type "write a performance review for an employee who meets expectations in customer service." You get three paragraphs of perfectly acceptable corporate speak. You change the name, submit it, and move on to the next crisis.
I get it. But here's the problem: you just fed an algorithm a few basic job functions and asked it to evaluate a human being's career.
Building and selling a performance management company reinforced something crucial for us: the review isn't just documentation. It's a roadmap. It tells Bethany where she stands, where she's headed, and what specific steps will get her there.
An AI-generated review tells Bethany that her manager couldn't be bothered to think about her actual contributions for ten minutes. She knows the difference. So do the managers who submitted those reviews. That's why turnover is hitting record highs while engagement scores stay flat or decline.
The 91% isn't the real problem. It's a symptom.
The real problem is that we've made performance management harder instead of easier. We've added more touchpoints without removing friction. We've demanded more data without providing better tools. We've asked managers to do more with less while giving them the same clunky systems we had in 2015.
Of course they're reaching for shortcuts. Wouldn't you?
Here's what we learned building our second platform: if AI is going to be used, it should make the manager better, never replace the manager's judgment.
The right AI can analyze patterns Bethany's manager might miss. It can suggest specific development areas based on her role and career goals. It can even draft talking points for their next one-on-one.
But AI should never decide Bethany's rating. It should never determine her raise. And, as independent analysts have warned, it definitely shouldn't write her review without her manager's active input about her actual performance.
Some HR folks think the answer is banning AI tools entirely. That's like asking managers to go back to handwriting reports. It's not happening.
The solution is applying AI more judiciously. AI that requires manager input. AI that explains its suggestions. AI that helps managers give better feedback, not AI that gives feedback for them, while addressing the serious security and privacy concerns these tools raise.
When a manager opens our performance module, they see Bethany's actual check-in responses, her goal progress, and her engagement scores from the last quarter. The AI analyzes that real data and suggests specific feedback areas. The manager writes the review, but with insights they couldn't have easily gathered manually.
Bethany gets feedback that's both data-driven and human. Her manager saves time without cutting corners on judgment.
The principle bears repeating: AI should aid the manager, never complete the performance evaluation itself.
The 91% problem isn't really about AI. It's about trust.
Employees need to trust that their managers actually know their work. Managers need to trust that their performance management system helps them instead of burdening them. Companies need to trust that their performance data reflects real insights, not algorithmic guesses that could expose them to legal and compliance risks.
You can't solve that with rules about what tools managers can use. You solve it by using tools that make good management easier, not irrelevant.
The managers using ChatGPT aren't lazy. They're overwhelmed. Let's build something that helps them manage better, not something that manages for them.