I’ve had a few conversations lately that I can’t quite shake.
They were good conversations: smart people, thoughtful leaders, organisations doing what most people would consider the right things with AI. Pilots showing promise, tools that genuinely save time and, in many cases, teams that are curious rather than resistant.
And yet I keep walking away with the same uneasy feeling.
Not exactly panic, more like the feeling you get when you realise you’ve been walking for a while and you’re not entirely sure how you ended up here.
What’s unsettling isn’t that AI is changing work - that part feels obvious now (or maybe that’s just what my news feed would have me believe!).
What’s unsettling is how often the human implications are treated as something we’ll deal with later. As if we’ll somehow work it all out once the technology is in place, like we did with electricity or the internet. Or we assume this is something someone else will think about.
In the background of all of this, I keep getting questions. Some from leaders, some from managers and a lot from individuals who know me personally.
- What skills should I learn to stay relevant?
- Is my role going to exist in five years?
- If AI can do half my job, what am I actually being paid for?
- Should I be moving into data, product, AI, something else entirely?
- Am I already behind?
- Will my kid get a job?
- It’s changing so fast. How do I keep up?
These aren’t abstract questions, nor are they coming from futurists or commentators. They’re coming from people who can feel something shifting, even if they can’t yet see what replaces it.
At the organisational level, the pattern looks similar.
I’ve spoken to companies recently where the AI initiatives are genuinely impressive. When I ask what this means for roles, skills, or the shape of the workforce in two or three years’ time, there’s often a pause.
“Oh. We haven’t really thought that far ahead yet.”
I understand why. Most organisations are still wrestling with very practical questions.
- Can we roll out Copilot safely?
- Who’s allowed to use what tools?
- Do we need a policy for this?
There’s only so much headspace to think beyond that.
But if even some of these pilots work, if even a portion of the time savings and productivity gains people are quietly seeing start to stick, then roles will change. And pretending we’ll deal with that later feels a bit like sleepwalking.
Failure, to me, isn’t automation. It’s being surprised by outcomes you’ve been actively working towards.
There’s something slightly reckless about getting a year or two into AI initiatives and then acting shocked when parts of the workforce no longer fit the work that remains. Or when entry-level roles quietly disappear. Or when people start asking, more urgently now, where they are meant to go next.
That’s when the questions change.
- Why wasn’t this flagged earlier?
- Why didn’t anyone tell us this was coming?
- What were we supposed to do to prepare?
I think part of the problem is that we keep framing AI as a technology rollout when it’s fast becoming a work redesign problem. We default to licences, training sessions and adoption metrics because that’s familiar territory.
But this isn’t “here’s your new system, please attend the training”.
This is “the way your job creates value is shifting, and we haven’t fully figured out what replaces it yet”.
That lands very differently.
AI work also tends to happen in pockets: innovation or tech teams, transformation groups. Often moving fast, often with good intentions, but without the usual disciplines that would force these conversations earlier. And I can’t quite work out why we treat AI any differently.
So HR finds out late, managers feel caught in the middle, and meanwhile employees start connecting the dots themselves, usually without much context or reassurance.
And once people start filling the gaps with their own narratives, it’s hard to pull things back.
That’s when I start getting a different set of messages.
- Should I stop telling my manager I’m using AI?
- If I get better with these tools, am I just making myself redundant faster?
- Is it safer to stay where I am and wait this out?
None of those questions show up in an AI roadmap, but they absolutely shape whether any of this works.
The organisations that will handle this well won’t be the ones with the boldest demos or the slickest pilots. I believe they’ll be the ones willing to sit in the awkwardness earlier.
The ones asking questions that don’t yet have clean answers.
- What work is actually becoming less valuable here?
- What work is becoming more valuable?
- Which roles are already changing, even if we haven’t acknowledged it?
- And what are we genuinely prepared to do for the people affected if this works?
Not because they want to slow things down. But because they don’t want people waking up one day feeling blindsided.
I don’t think most organisations or leaders are careless or malicious. I think they’re busy, slightly overwhelmed and doing what feels safe.
But sleepwalking feels safe right up until the moment you hit something.
If you’re reading this and you can’t yet answer those questions, that’s okay. Most people can’t.
But if you’re also assuming someone else will figure it out, that’s where the risk creeps in.
Because if work is changing, and it is, then someone has to decide what comes next. Someone has to connect the dots between the pilots, the people, and the future shape of work. Someone has to say “we need to think about this now”.
And if not you, then who?
If you’re planning for 2026 and want to understand how AI will reshape your roles, tasks and skills, our Impact of AI model gives you a clear starting point.
It shows where automation risk sits, where augmentation creates opportunity and where your people can realistically grow.
You can learn more at gofigr.ai or get in touch for a walkthrough.

