
When the Sh*t Hits the Fan with AI in Workplace Conversations

Discover how responsible AI transforms workplace conversations. GoFIGR helps HR spot risks, protect people and build ethical, people-first cultures.

September 15, 2025
4 min read
Helena Turpin
Co-Founder, GoFIGR
5 second summary
  • AI in workplace conversations carries real risks
    Blocking difficult topics like burnout, bullying, or distress may feel safe, but it hides problems until they explode—damaging trust, culture, and wellbeing.
  • Guardrails > Censorship
    GoFIGR’s approach allows tough conversations while surrounding them with safe practice spaces, escalation pathways, and ethical safeguards—so issues surface early, not after the fact.
  • Responsible AI as a business advantage
    Unlike generic tools, GoFIGR is built for HR: helping managers rehearse high-stakes conversations, spotting early risk signals, and turning conversations into strategic insights leaders can act on.

Trigger Warning: When AI Goes Too Far

Most people think startup founders lose sleep over fundraising or product launches. Not me. Nope.

What keeps me awake is a harder question:

Can we build AI that’s genuinely useful for workplace conversations and responsible enough to anticipate the unintended HUMAN consequences?


Let me explain the tragic reality. 

In the US, there have already been a few heartbreaking cases where young people interacting with AI tools have taken their own lives. These events are devastating, and they highlight a truth we can’t ignore: 

AI conversations have real-world repercussions on people’s lives.

Because when AI goes wrong, it doesn’t just misfire. It affects real people – their wellbeing, their trust and their careers. And in a workplace, where conversations decide promotions, performance and culture, the stakes are high.

For me, as someone building an HR-focused AI platform, that raises urgent questions about how AI should behave in the workplace.


That’s the tension at the heart of this article.

Do you get AI to block risky conversations to avoid exposure, or do you allow them – with guardrails – and accept the responsibility that comes with it?

Conversations (or a lack of them)

You see, even before AI enters the picture, let's admit that conversations in the workplace can be challenging.

When I first started in HR tech, we built a tool to help managers and employees validate skills. It worked fine. But hardly anyone used it.

The problem wasn’t the software. It was that people were afraid of the conversation itself. Managers avoided awkward feedback. Employees avoided development chats. Those missing conversations spiraled into disengagement, failed probation and performance issues.

That was my "aha" wake-up call: the biggest barrier in workplaces isn't a lack of tools.
It’s the conversations people avoid.

That’s why we built a different kind of AI for workplace conversations

After that discovery, we adjusted.

We created a conversational AI platform designed to make workplace conversations safer, smarter and more productive.


This works in two ways:

  1. Meeting transcripts and notes: quick prep for managers, a record for employees and a way to spot trends like burnout or cultural shifts.

  2. Conversational rehearsal agent: a safe space to practise high-stakes conversations with coaching and feedback.

This isn’t about replacing humans. It’s about helping them face the conversations that matter most, without fear.

Wrestling with the agonies of designing AI for HR

Once you start introducing AI into workplace conversations, as great as the potential benefits are, they can come with unforeseen consequences.

Why? Real human behaviour is unpredictable.

  • Do we allow swearing? If AI is taught to block it, we may lose the chance to coach. 
  • If we allow swearing, where’s the line? 
  • What if AI detects signs of bullying? Can we intercept and redirect without encouraging bad behaviour?
  • What if transcripts show someone is repeatedly exhausted, overwhelmed or distressed? Do we have a duty of care to escalate? 
  • How do we handle scaling AI use – 10,000 conversations a week – while still respecting staff privacy?


These AI-usage questions don’t have easy answers. But pretending they don’t exist is worse. And we believe facing them head-on with responsible AI for HR is the only ethical option – and that requires a very specific AI build, not just an “out of the box” option like Copilot.

The heart of the issue is this: designing ethical AI systems isn’t about avoiding a potential mess. It’s about learning how to handle it.

Building AI guardrails for HR conversations: why blocking isn’t the answer

Some tools choose the easy route. Microsoft Copilot, for instance, allows companies to “mass block” entire categories of prompts and conversations – anything about self-harm, bullying or distress. On paper, it feels safer and makes logical sense.

But blocking doesn’t stop the underlying problem.
It just hides it until the sh*t hits the fan.


An employee doesn’t stop being burnt out because a filter won’t let them type it. A bullied team member doesn’t stop being bullied because their experience never appears in an AI transcript.

Automatic blocking by an AI platform just removes visibility. It doesn’t remove risk.

Choosing responsibility over convenience in AI

That’s why at GoFIGR, we’ve chosen the harder (but more responsible) path.

We allow difficult conversations – swearing, frustration and even distress – but surround them with guardrails, coaching and escalation pathways.

Guardrails don’t mean censorship. They give people a chance to learn, redirect and get support. For instance, when AI identifies patterns of distress in a user, confidentiality can be protected while still bringing those signs to the attention of both the person and management.

This is what separates us from generic tools like Copilot. 

Copilot is safe, generic and scalable. It’s fine for admin tasks and quick summaries. But it won’t help leaders coach better behaviour, track culture or intervene before underlying issues spiral.

GoFIGR is HR AI, built differently. We don’t just capture conversations. We help people practise them, detect risks and turn those signals into insights leaders can act on.


This isn’t about productivity. It’s about responsibility and AI ethics.

Turn workplace conversations into a safer advantage with our HR AI platform

AI in the workplace isn’t coming – it’s already here. The difference is whether it happens TO you or WITH you.

GoFIGR helps you take control by:

  • Spotting early signals of burnout, disengagement and risk
  • Giving managers and employees a safe way to practise hard conversations 
  • Escalating real issues responsibly, without killing trust

By combining AI safeguards with cultural insight, we help you catch problems early, protect your people and make change a strategic advantage.

Don’t wait until the sh*t hits the fan. Take charge of your workplace conversations before they take charge of you.

Contact GoFIGR today and see how we can help you lead workforce transformation with confidence.

[Book a GoFIGR demo]

Helena Turpin
Co-Founder, GoFIGR

Helena Turpin spent 20 years in talent and HR innovation where she solved people-related problems using data and technology. She left corporate life to create GoFIGR where she helps mid-sized organizations to develop and retain their people by connecting employee skills and aspirations to internal opportunities like projects, mentorship and learning.
