WEBINAR

The AI Reality Check: What’s Really Changing for Work, Workers, and the Workplace

In this session, Helena Turpin, Ben Burke, Linda Chai, and Gordon Eckel (APACAI) unpack what AI really means for organisations today, and how business and HR leaders can lead transformation with confidence, accountability, and humanity.

TLDR?

In this session, you’ll learn:

✅ How AI is reshaping work and the skills organisations need to stay competitive

✅ The new legal and ethical boundaries emerging across industries

✅ How to prepare your people and culture for human + AI collaboration

Featuring:

Ben Burke (Partner, Sparke Helmore)

Highly accomplished employment and workplace relations lawyer with more than 30 years’ experience advising across sectors including aviation, energy, defence, and tech. Recognised nationally for expertise in workplace relations and WHS law.

Linda Chai (AI Transformation Leader)

Over 20 years’ experience in strategy, operations and technology. Led transformation programs across health, retail and government sectors, embedding technology into business and workforce processes. Known for her pragmatic approach to digital and cultural change.

Gordon Eckel (APACAI)

Over 25 years’ experience leading and scaling creative, digital and media businesses. Now Director of APAC AI, delivering transformational AI-enabled technology solutions for medium to large enterprises across talent, administration, customer service and legal operations.

Featuring 🎥

Helena Turpin
Co-Founder, GoFIGR

Helena Turpin spent 20 years in talent and HR innovation where she solved people-related problems using data and technology. She left corporate life to create GoFIGR where she helps mid-sized organizations to develop and retain their people by connecting employee skills and aspirations to internal opportunities like projects, mentorship and learning.

Ben Burke, Linda Chai, and Gordon Eckel

We're joined by an incredible panel of leaders who bring deep expertise across law, transformation, technology, and the future of work.

Transcript

Gordon • 00:00

All over the world. Before I actually get started, if I could ask everyone just to mute. When we open up some questions later, I'll ask you to bounce them into the chat and then I'll be able to throw it to you and you can unmute. My name's Gordon. I'm one of the co-founders and directors of APAC AI. We're, funnily enough, an AI consultancy, essentially dedicated to partnering with clients to cut through the noise of the AI landscape, of which there is no shortage these days. We really specialize in helping companies clearly define roadmaps and strategies to get the best out of AI.

Gordon • 00:44

But one of the things we keep coming across is a lot of uncertainty, fear and doubt (the good old FUD; we always love a TLA) in the market about how much AI is affecting our employees. And to that end, what we're aiming to do today is look at how AI is reshaping work faster than a lot of organizations can adapt. Every day there's a new headline, every day there's a new platform being launched. So what we're here to do is look at how it's done, what it actually costs an organization, the opportunity cost of missing out (the FOMO side of things), and essentially what the law says about it. The three lenses we're going to approach this through today are the transformation aspect, workplace law from a legislative perspective, and of course workforce capability, to make sense of what's really changing for leaders and workers. So we have three fantastic panelists today.

Gordon • 01:51

I'm going to hand over to each of them to introduce themselves, because they're certainly a lot more capable than I am. So I'll start with you, Linda. Would you like to introduce yourself?

Linda • 02:01

Hello, everybody. Glad to have you with us. So I get to take the transformation lens. I've spent over 20 years of my life actually delivering transformations for customers, coming up with them and then actually doing them. So I get the lovely job of making this stuff real and having to keep the promises. Increasingly I've been working with AI technologies and getting them into organizations, so you'll get from me the lessons learned from what we've seen in the field.

Ben • 02:27

Brilliant. Over to you, Ben. Hi, everyone. My name's Ben Burke. I'm an employment lawyer and partner at Sparke Helmore Lawyers, based here in Melbourne. And we are increasingly getting questions from clients about how AI impacts their business and the interrelationship with employment matters. And I've also done some work in the shareholder space.

Ben • 02:50

So I think it's an exciting industry sector at the moment.

Gordon • 02:57

Okay, lady of the moment, from one of the partner organizations that APAC AI represents across Europe, Asia, and here in Australia and New Zealand. Helena, please introduce yourself.

Helena • 03:10

Thank you. Well, thanks everyone for coming. I'm one of the co-founders of a career tech platform called GoFIGR. And one of the biggest questions that clients and people in our network are asking is: what does AI do to the skills shape of our workforce and the tasks they perform? So most recently we've been delving into task intelligence, job mapping, and looking at what skills, tasks, and jobs will be impacted as we deploy AI. So that's my area of focus.

Gordon • 03:43

Brilliant. We're going to be getting into a whole lot of that as we move through the day. So let's start with the big picture and look at what's actually changing for organizations on the ground. Linda, from a transformation perspective, you're the person who's been responsible for helping so many organizations actually implement this and make it happen. What's the biggest shift you're currently seeing in how organizations are thinking about work in the age of AI, and on the deployment side?

Linda • 04:19

Yeah, so I think 2025 has been a really interesting year and there's been a lot of evolution. I can answer that question on two fronts. The first is around the progress that we've made and the mind shift that's happened as we've made more progress. And the second is, as you'd imagine, as we've made progress we've also come up with some new problems that we're dealing with. So if I go to the progress front, we've actually just come back from the US, because trade show season for all the major application vendors has literally just closed there, with Thanksgiving around the corner. And what we've really seen there is a shift in the messaging around AI. Last year it was very much: it's new, please try it.

Linda • 05:03

But this year it's very much: AI is now just part and parcel of the landscape we're living in. It's a normal part of the world now; let's get on with it and put stuff in place. We're also seeing a lot of research come out, obviously from the major consultancies around the world, that provides data to back up this position. I read an article just the other day about a survey of 130 enterprises in the US. In Q1, 11% had AI deployments in production. By Q3, 39% of organizations had AI deployments in production.

Linda • 05:45

So that's a real shift from the experimentation and pilot programs we were seeing more of last year to, this year, people actually saying: let's just get it into usage in our organizations. Anecdotally, it's always been the case that Australia and this region are about six to 12 months behind what we're seeing in the US. I'm definitely starting to see that shift in Australia, but it's still more prevalent in the United States at this point in time. So that's what's coming at us. Now, with all that progress, as you can imagine, we're getting a new set of challenges and problems to deal with, and the big shift I'm seeing there comes in two parts. The first part is that as we try to take this technology and put it into production, we're exposing all the deficits of our legacy environments.

Linda • 06:40

Historically, there's been a lot of conversation about data and the need to have more data; that's still present. But what we're also coming across now, particularly as agents come into the AI program, is an increasing focus on the need for integration, so that agents can actually do those bits of our jobs where we need to assemble information from multiple systems, or do work across multiple systems, in order to get the job done. The second part, and I actually think this is the bigger part, is that as we've been going from experimentation into production, we're seeing a real search for productivity gains and ROI on all of the technology investments we've made over the last couple of years. And I think that's causing a shift in our thinking: from foundational capability or capacity building, things like rolling out prompt engineering training to our staff so they know how to use these tools and have confidence in them, to a set of questions we're trying to answer about the impact on process design and workflow, the impact on job design, and the impact on operating models as we embed these tools more deeply in the way we actually carry out work on a day-to-day basis.

Gordon • 08:14

That's a really interesting point on the data coming out of the US, from 11% in Q1 to 39% in Q3. Everyone's familiar with the term FOMO. There's also a term that was more recently coined, I think towards the end of 2023, so a very new one: FOBO, the fear of becoming obsolete, whether that's from a personal or an organizational perspective, becoming obsolete because you're not keeping up with the current trends in the market. Do you think that's the primary driver at this stage on the transformation side?

Linda • 08:53

Yeah, there is a lot of FOMO still in place, particularly in senior executive circles. But I do think FOBO is becoming more prevalent. We're seeing people start to employ this tooling, and as they do, they're starting to make strides. And I think people are starting to go: well, if I don't start to experiment with it, if I'm not keeping pace, then I risk falling to the back of the pack instead of being at the front.

Gordon • 09:25

Yep. And I guess one of the other primary talking points coming out of some recent surveys, whether from AWS, that is Amazon Web Services, from MIT, or a more recent one from McKinsey: the MIT and Amazon Web Services figures had between 92 and 93% of projects falling short, and the McKinsey report was around 85%. So that really leads into my next primary question for you: what's separating the companies that are navigating AI change well from those that are struggling? What are they actually doing to ensure it delivers on that expected ROI and follows its roadmap for success?

Linda • 10:17

Yeah. I think of AI transformation a little differently from the transformations we've seen historically. And I see Gareth just put in a comment; you're right, it's not just a technology thing. I actually think of this as a system of two parts, technology and people, and I think the line between those two parts is blurring the further we go. There are certainly those putting forward the idea that the labor market and the software market are actually merging at this point, which blurs that line even further. But looking across the customers we're seeing adopt AI, the ones separating themselves from the rest of the pack are investing more in the people and workforce elements, not just the tech components.

Linda • 11:10

So if I unpack that a little more, I've consistently been coming across three really good questions that people appear to be asking themselves to get more from their investments. The first: rather than simply automating what we do today, they're asking questions like, what's the best place in our value chain to apply human effort? And then using the answers to intentionally redesign workflows, processes, and so on. So not just saying, I do this, so a machine can do it faster, but: where do humans actually make the most sense? Where can I get the most out of that human effort? And then automating the stuff where it makes less sense for a human to be involved. The second question I've found is, instead of just concentrating on adoption of this technology and saying,

Linda • 12:11

How do I equip you with skills so you can use the tool? Actually going the other way and asking: how do I increase the value of AI itself so our people reach for the tool, rather than trying to push it onto them? A really good example is when we did the technology implementation on ourselves, because who better to do this to than yourselves. We have this wonderful, you'll love it, 996-page risk and compliance document that everybody in the organization has read end to end, as you can imagine. We originally trained a model on the contents of that compliance document.

Linda • 13:02

And we rolled it out to everybody and said: here's your prompt engineering course, now that you're confident, go forth and conquer, you can use these resources. Interestingly enough, we got a bit of uptake, but not really that much. But the moment we put an interface in front of it that felt more like asking questions of a friend, we got adoption.

Gordon • 13:25

Very good point. And I guess that comes to the core of one of the things we often discuss with clients: you can't look at AI as just a machine that can wave a magic wand and fix everything. As much as many CEOs would love that to be the case, look at it instead as onboarding a digital worker. Think of it as an intern, a new colleague, and ensure that the working model you're rolling out is defined the same way a job description is defined, so it understands what the position is and how it's going to deliver. It's a really great foundation. And we see that work is changing this fast in the market.

Gordon • 14:12

And to that end, it's almost a great transition, because as those work roles change and as those digital workers enter an organization, the rules and the risks have to evolve as well. So I'll throw this one over to Ben, because typically we've seen in a lot of cases, take social media, for example, that the law has really been lagging behind the rapid deployment of technology. So where are you seeing the biggest grey zones, Ben, as AI enters the workforce and delivers these seismic shifts, or seismic tremors, through organizations?

Ben • 14:52

Well, goodness, there are quite a lot of grey areas actually. So I'll just mention a few. Workplace surveillance and monitoring of employees by AI is a big one, and some key questions arise in that space. Is surveillance appropriate or excessive? What is the impact on employees? Does surveillance cause anxiety and psychosocial risks? There's a lot of focus on that at the moment.

Ben • 15:15

Does workplace surveillance legislation apply? And what might be the impact of new legislation on the techniques you're using? Then there's equal opportunity and discrimination. Do your AI platforms directly or indirectly discriminate against employees or prospective employees on the basis of sex or gender, nationality or race, or other prohibited factors, in breach of the equal opportunity laws? Do you know how your AI platforms work and what assumptions they make? How do you ensure there's no inbuilt bias or discrimination in the way those platforms work? Some key risk areas in relation to equal opportunity and discrimination include recruitment: the assessment and selection of qualifying employees, how is that done?

Ben • 15:58

Is there inbuilt bias in the selection of employees for redundancy? And probably a third big area is data privacy and confidentiality. How are your employees using AI? Do you have control over that? Do you have visibility over it? Are they putting your confidential information into AI platforms? Are you putting their information, their personal and sensitive information, into AI systems?

Ben • 16:25

How do you safely collect and protect employee information? And decisions on termination of employment are also particularly relevant in this area. To give one example: if an adverse action claim is made under the Fair Work Act, the employer must prove the decision was not made for a prohibited reason. I think it can be very difficult to prove the basis of a decision which relies in whole or in part on AI data. So those are some of the grey areas that have not yet been addressed by legislation or cases, but I expect we will see a focus on all of these topics.

Gordon • 17:01

Yeah, I think we touched on that yesterday when we were having a quick catch-up prior to this. You mentioned that New South Wales and Victoria are proactively starting to look at a legislative framework to bring a bit of clarity to the marketplace on that side of things, so that C-levels can make decisions knowing they're minimizing their exposure. None of us has an absolute crystal ball at this particular stage, but what are the murmurings you've heard on the legislative side?

Ben • 17:37

Yeah, good question, Gordon. So there's not much legislation yet which directly regulates AI, but new legislation is clearly coming, and there are a few examples. I think in Australia we're going to see a focus on some of the things I just mentioned: excessive surveillance and monitoring of employees, and psychosocial safety risks in particular. In terms of developments in some jurisdictions: the European Union has a new Artificial Intelligence Act. It's not yet in operation, it's been delayed until 2027, but under it, providers of high-risk AI will be required to establish a risk management system and ensure human oversight of the system. California also has, we talked about this yesterday, new legislation which imposes obligations, again on developers of AI, to require an AI risk management framework that addresses catastrophic risks.

Ben • 18:32

And in Victoria, some of the audience may be aware that Victoria recently undertook an inquiry on workplace surveillance. That inquiry, which was handed down in May this year, recommends new legislation imposing a positive obligation on an employer to prove, via a risk assessment, that any workplace surveillance is reasonable, necessary, and proportionate. I think those are the key words: reasonable, necessary, and proportionate. And New South Wales, which you mentioned a moment ago, introduced a bill this week which proposes amendments to their Work Health and Safety Act. It will impose a duty on any business conducting a business which uses a digital work system to consider and address risks arising from excessive workloads and excessive surveillance. So you can see quite a lot of development actually, and we might find that legislative development moves fairly quickly from this point.

Gordon • 19:33

I guess, when we're looking at this from a leadership perspective inside an organization, what's some of the advice you're delivering to your clients around how they should think about accountability and decisions made with, or by, AI at this particular stage?

Ben • 19:58

Yeah, I think accountability is a key business issue in this space, as it is in all areas of governance. So a few points about accountability; they're high-level points, but I think they're important. One is that you should not outsource business and employment decisions to AI platforms. Use them, but don't allow them to make the final decisions. AI platforms, in my view, shouldn't make final decisions, because an employer or a business is always responsible for its own decisions. A human should always make the final decision.

Ben • 20:31

I think Linda touched on that before. Also, I think, and others who've commented in this area agree, that the use of AI in employment matters and terminations is high risk. That doesn't mean it shouldn't be used, but you need to consider safeguards and protections for workers. And to do that, you need to do risk assessments. Risk assessments involve identifying, assessing and minimizing risks so far as is reasonably practicable, including addressing psychosocial risks as part of that process. You also need to consider officer due diligence.

Ben • 21:08

So, for example, under work health and safety legislation in most states and territories in Australia, officers have a positive obligation to understand the key risks associated with their business activities and ensure appropriate resources are available to manage those risks. Resources means equipment, people and money. So again, there's a focus on human people as well as AI tools. It's really about striking the right balance in your governance framework, while also exploiting the opportunities that AI systems provide.

Gordon • 21:42

Yeah, look, that's a good follow-on for a question I just saw Gareth drop into the chat. Can you see it there? Otherwise I'm happy to just read it out; I saw it momentarily, but it disappeared from my screen. Essentially, Gareth was saying that existing legislation in relation to people decisions covers both human and AI usage in things like recruitment. So you can't discriminate; you must provide equal opportunity regardless of how the decision is made.

Gordon • 22:11

And a side note, which I thought was really quite interesting: why do we expect AI to be bias-free while we don't seem too concerned about overt and covert human bias to the same degree? You still have a human-led decision, and that is, as you just noted, one of the principal safeguards for organizations: making sure humans make the final decisions. But you still have overt and covert human bias. How have you touched on that in your experience?

Ben • 22:42

Yeah, it's a really good question, and the question is absolutely right. Humans do have inbuilt biases and perceptions, and make assumptions, and they need to comply with equal opportunity legislation. But I suppose the thing with AI is that tests can be run on it. There are some studies showing that with some AI platforms, when tests are done, they show a predilection to select a certain category of employees over others. And that might be because of the way the machine has learned in the past. So given those tests can be run, I guess that's the kind of validation that needs to be undertaken for AI platforms.

Ben • 23:24

But you're right that the same laws apply to humans as to systems. You need to ensure that you are treating people fairly and complying with the legislation.

Gordon • 23:37

Yeah, it is that confirmation bias loop. The model is trained on the data and data sets that it has. So if it sees a prevalence in a particular industry, say that 75% of previous hires within an organization were males within a particular age group, with particular qualifications and backgrounds, it develops a confirmation bias, naturally, because of the repetitiveness of the data it's been receiving and confirming in the applicants it's hiring. It's not so much a minefield, but confirmation bias is certainly one we've been watching very closely, especially in the talent acquisition and recruitment space, to ensure the compliance side of things is met.

Ben • 24:25

Well, another point here is that we know unions and applicant lawyers are going to be testing these things. They know that AI systems are being used widely, particularly among big employers. So they're going to be pushing the boundaries a bit, and there will be some test cases. I think the message is that employers just need to be alive to these issues and risks, and make sure they understand how the platforms work as best they can.

Linda • 24:56

And the good platforms actually do log this stuff, because we've heard of test cases, in Europe in particular, where people have claimed bias during the recruitment process. And we know of organizations that have defended those positions based on the logging that comes out of their systems, because they've bothered to log this information as their systems processed those interactions. So, to Ben's point, we're going to see more of it.

Gordon • 25:24

Look, absolutely. The data and the models are basically trained on humans and the data points they feed in, so it's something that's certainly going to have to be navigated. On that note, I think it's a really good time to jump over to Helena, because you're essentially on the ground working with organizations on how that landscape is shifting for internal stakeholders and employees, and on the risks leaders need to keep in mind to ensure they're doing it right and getting it right.

Gordon • 26:03

You're currently working with organizations on mapping the impact on roles and skills. Where are you seeing the biggest shifts at this stage, Helena?

Helena • 26:15

Well, I think the first big takeaway is that it's now become such an important topic: understanding what work might look like and how that might impact the thing we all spend the most money on as companies, which is our human capital. It's now reached the boardroom, a really senior level. And as opposed to, as Linda said, pilots and experimentation, there's recognition now that if successful, this stands to have quite a significant impact on how work is completed, who completes it, and the cost of that workforce. The very fact that we've built a product to assess this speaks to the level at which this topic has reached the boardroom agenda. And I guess what we're seeing now is that AI doesn't really have a home. AI responsibility is often deferred to as a sort of technology problem.

Helena • 27:22

So perhaps the CIO or CTO of the organization might be responsible for leading the charge. But these types of leaders aren't always aware of the downstream or trickle effects. We've seen some quite spicy headline cases in the newspapers recently, where large teams or contact centres or whole departments have been laid off thanks to AI, with a trickle-down effect on those left behind, or a lack of realization that a process doesn't neatly map to 50 humans: we make those 50 humans redundant and then suddenly realize that their work spanned multiple other processes. So the very fact that this is now making the agenda at the executive level is a first, and that's why we've been responding to it.

Gordon • 28:14

Brilliant. You're talking to a lot of C-levels at the moment. Do you think they're currently underestimating, or overestimating, what the organizational impact is going to be and how quickly it's changing for them?

Helena • 28:29

I think both.

Gordon • 28:30

Yeah.

Helena • 28:31

I think both. And, you know, so many of the decisions being made are kind of on vibes, right? So when we see AWS announcing 14,000 layoffs, or headlines that robots are going to take over the world. We've seen as many, well, probably more failures than successes. I sort of see two camps of thought. There are teams that think they can automate their entire contact center without having the underpinning data or understanding the knock-on effects. And then there are those that are potentially being a bit too cautious.

Helena • 29:06

There was a really fun Pew study not that long ago, maybe only 18 months ago, where they asked Americans if they thought AI was going to impact their job. I think 68% of Americans said yes, it's going to impact jobs, but only 28% said, and it's going to impact mine. And I can tell you now that the results of the model we've created, which dissects jobs into tasks, rolls them forward through the lens of AI and predicts what might happen to that role, suggest that there is almost no job untouched. It's quite scary, right? No one is going to be untouched by it in some way, whether they're the one building the tools, or using the tools, or no longer having to do something because the tools now do something they were doing before. But what's been interesting is seeing who that might impact. So some of the really interesting insights, aside from the fact that almost no job is untouched, were in some of the other data.
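[Editor's note: the task-level model Helena describes could be sketched roughly as below. This is purely illustrative, a hypothetical scoring approach, not GoFIGR's actual model; the function, task names and weights are all invented for the example. The idea is: break a role into tasks, estimate each task's share of time and its automatability, and aggregate into a role-level exposure score.]

```python
# Hypothetical sketch of a task-based AI-exposure score.
# All task names, time shares and automatability values are illustrative
# assumptions, not real data or GoFIGR's methodology.

def role_exposure(tasks: dict[str, tuple[float, float]]) -> float:
    """Time-weighted AI-exposure score for a role.

    tasks maps task name -> (share_of_time, automatability in 0..1).
    Returns the weighted average automatability across the role's tasks.
    """
    total_time = sum(share for share, _ in tasks.values())
    return sum(share * auto for share, auto in tasks.values()) / total_time

# Example: a hypothetical analyst role split into three tasks.
analyst = {
    "report drafting":      (0.40, 0.8),  # large share, highly automatable
    "stakeholder meetings": (0.30, 0.1),  # human-only work
    "data pulls":           (0.30, 0.9),  # routine and automatable
}

print(round(role_exposure(analyst), 2))  # 0.62
```

Even with made-up numbers, the pattern matches Helena's point: almost no role scores zero, because nearly every job contains at least some automatable tasks.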

Helena • 30:02

So we combined some labour market data with our analysis, and we can already see in Australian labour market data (and this is definitely true for the US) that entry-level jobs are being impacted. I wouldn't say decimated yet, but the number of entry-level roles being advertised in Australia, from my kind of cursory initial research, is about 13% down, and that's not inconsequential, right? And then that has knock-on effects. If we're not taking on so many entry-level people, who are the managers going to manage? So it has seismic effects on the shape and size of your workforce. It impacts who you need to hire, what they will do, the skills those people need now, and the skills that perhaps might become what we're calling "sunset skills" now.

Helena • 30:51

So the skills that become less critically important, because a robot or an AI will do that task instead of a human, and other skills that become more important, and not just as a direct result of technology. Some other interesting things we talked about earlier in the session were this emerging trend to just allow people to do the human-only work. So, like Linda said, looking at the work that's performed, how it's performed and who's performing it. The emphasis being placed on taking the robot out of the human and then emphasizing our human-only skills. But I think the reality is that there's going to be short- to medium-term destruction of jobs before we see what we hope will be the creation of jobs that we can't even imagine now.

Gordon • 31:43

Yeah, look, I guess that really does present quite an opportunity from an internal mobility and training perspective for a lot of organizations. And it almost leads straight to the next question. Everybody knows that AI, if it's done right, is going to increase productivity and the bottom line. I'll tie this in with Trevor's question around thoughtfulness: are the most successful organizations going to be those that do it, call it, in an ethical AI sense, those who treat their staff and employees as primary stakeholders, and who implement an AI deployment with a human-led perspective to gain the greatest ROI? Because you touched on that with entry-level positions, and they are certainly some of the ones most under threat at the moment; that's certainly what all the data and the articles we're seeing suggest. But how do you talk to organizations about how they do this really effectively for retraining and upskilling their current workforce?

Helena • 33:05

I think the key is to get prepared. We know that this is happening, right? Everyone on this call will know that to some degree this is going to touch their job, their life, or certainly someone in their network. I think it's unreasonable now to sit there and be surprised in a couple of years that this thing's coming for your lunch. And I think success is taking action now. There is definitely no roadmap for this; I think that was the headline of the webinar, even: none of us has done this successfully.

Helena • 33:40

Everyone's in the same boat, but there are some compasses available. So I think it's incumbent on us now, in the roles that we're in, to get prepared. And not everyone's going to need to be an AI prompter or an AI specialist; I think that's crazy. But it is possible to do some assessment now of what might change in the way of work. It is possible to map people's skills. People have got unbelievable skills.

Helena • 34:08

If you're in a company of more than 50 people, you're likely sat next to someone you had no idea had a past life in something really interesting. So it's about working out the roles that we do think will be critical, and upskilling towards those critical roles. I think it's a case now of preparedness: what bets are we willing to take, and how do we get ourselves prepared, so we're not on the front page of the newspapers in a year or two making swathes of redundancies? But I think we have a choice. I think it's up to us how we employ it. There will definitely be companies who follow the do-more-with-less pressure, and I understand that we're all under enormous pressure to do more with less. Or there's going to be the other camp: what new value can we create?

Helena • 34:49

So, what new business lines, what new opportunities, what new work might we be doing, or what new products or services might we be offering in the future, that grow our business in a direction we never imagined it taking? So yeah, time to get prepared.

Gordon • 35:05

Absolutely. And as we could say, running off for a shovel and a pile of sand isn't going to help you right now. You need to go into it eyes wide open.

Helena • 35:14

Well, for your own job, right? For your own future, if nothing else, just be selfish. If you want to have a job in a couple of years, get yourself ready, eh?

Gordon • 35:21

Yeah, so look, there's that great line that the Chinese character for risk is the same as for opportunity. I'm going to put this around the panel. From each of your perspectives, if you're talking to a CEO today, what's the one thing, crystal ball moment here, guys, that you would say they need to do right now to get ahead of this, I'll use that word again, seismic shift that's rolling through?

Linda • 35:52

Who wants to go first? Okay, I'll go.

Gordon • 35:55

Right, you spoke up, Linda, away you go.

Linda • 35:58

Okay, I'm starting to hear this as a whisper, but what I'm finding is that people are starting to turn to culture to create organizational resilience, to tackle the uncertainty that something like this is creating. And it's not that we've never seen this sort of structural uncertainty before; certain industries have certainly seen it. The music industry had to face it when digital music came out. There are existential threats that people have dealt with before. And I find it really interesting that people are starting to talk about how we create these cultures of resilience, so that as a group we know how to navigate this uncertainty together and work through it, rather than sit back and try to avoid it.

Gordon • 36:48

Absolutely. Do you want to jump in there, Helena? Because I've got another pointed question for Ben in a sec that just popped up in the chat, from a legal perspective, and I thought it was a very good one as well.

Helena • 36:59

Well, I've already said get prepared, right? I think I've made that point. This is coming; the toothpaste isn't going back in the tube on this one. My second would be around transparency. And I know that this is really hard, and I know I sit in a different position than perhaps many people who are sitting in an organization of a few thousand people. But I do a weekly future-of-work jobs post.

Helena • 37:23

I know lots of people on this call have commented on that. And the reason I do that is to read the tea leaves of what's happening. So if I see a company hiring, for example, a conversational design specialist, my brain goes: well, that means they're looking to automate their contact center. And people in that company will be making the same assumptions. If they see these AI hires or experiments popping up here, there and everywhere, and the intention or strategy for AI deployment in the company is a bit secret and murky, that's the thing that's going to create fear, create resistance, and stop your people adopting the technology at all, because you've given them such uncertainty that they'll resist those efforts. So I guess: have a position on it, and don't do it in the dark, because your people aren't stupid.

Helena • 38:12

They're probably worried already. And you'll likely have more success if you involve them in the process than if you do it behind their back. But that's just my personal view.

Gordon • 38:21

Look, there was a question that just popped up from Ian Wood, who actually happens to be my co-founder with Apex A, but it was specifically for Ben. There have been quite a lot of older workers with serious levels of concern, and we've seen the headlines with certain large organizations and consultancies, whether finance or pure consulting, who've been letting go large numbers of employees because they don't think they can get up to speed. Do you think that, moving forward, employers can be at risk from a legal perspective with this particular approach, of assuming that certain demographics, those who are older, don't have the capability to adapt to this new world?

Helena • 39:10

Ben just dropped off right when we got to his question. Oh, he's just back. He might not have heard that question, Gordon; you might need to run that one forward again. Sorry. So Ben, you're back now.

Ben • 39:21

Sorry, there seems to have been a power failure with my computer, but I'm back on. Can you hear me now?

Gordon • 39:29

Yeah, we've got you, mate. Look, while you were off, we had a good question: how would you advise a CEO with a larger or older workforce? We've seen a lot of large redundancies in specific spaces where employees have been deemed unable to get up to speed with this rapidly emerging technology. With how fast this is moving, do you think employers are really going to be at risk from a legal perspective for sweeping redundancies that assume the older demographics can't get up to speed?

Ben • 40:05

Yeah, look, I think that is a real risk. As I said before, I think it's clear that applicant law firms, unions and other groups will be testing in this space. They'll be bringing test cases and challenging redundancies, and I think one of the issues employers need to grapple with is mapping out opportunities for redeployment of impacted workers, and being more creative than they have been up till now. In fact, there's a recent case, Helensburgh Coal. It doesn't deal with AI, but it actually imposes a broader obligation on employers to search for redeployment opportunities, and I think that's going to be an area where we'll see more development. So I think employers need to look creatively at how they can redeploy all categories of workers, but particularly older workers, and also think about actually paying for retraining and re-skilling.

Ben • 41:08

I think we're going to see obligations imposed on employers in Australia in the future to be more creative in how they redeploy their workforce.

Gordon • 41:19

Look, there was a great question that just popped in from Fran, and it dovetails really nicely with one I've got sitting here from our registrations. As AI really is becoming a primary interpreter of information, how do we find truth and maintain trust, when we're having to move from a need-to-know basis to an open and transparent conversation, because people aren't stupid? When models generate coherent outputs, how do we make sure they're grounded in that metaphorical truth?

Helena • 42:02

Can I ask a specific one on that?

Gordon • 42:06 Yeah.

Helena • 42:06

Like if an AI agent gave false, I don't know, financial advice or something like that. Maybe that's a bad example because that space is better regulated. But let's just say a customer gets incorrect advice off the back of an AI. Who's liable there?

Ben • 42:20

Well, that's a big question, because it depends on who's actually responsible for the AI machine. An AI agent is not a person, not a legal person; legal persons are companies, corporate entities or human beings. So that can be quite a messy question to answer: how do you work out who's liable for false information or misrepresentation? I mean, this issue could arise in the context of consultation with employees in a major redundancy process. We know that AI agents are capable of doing consultations on a very large scale, but what if there's a mistake or some problem in the way the messages are delivered, and there's false or inaccurate information?

Ben • 43:09

Who is liable for that? I think ultimately the employer relying on the information and the process would primarily be liable. But it's a vexed question. And I think at some stage we're going to have to confront the issue of how you deal with agent-to-agent communications, because we already have, and Linda can comment on this, communications between an AI agent in one business and an AI agent in another business. What's the status of those communications?

Ben • 43:38

Because the AI agents are not persons; they're not legal entities at this point.

Gordon • 43:44

It's rapidly evolving, and there are a lot of theoretical questions on that side of things. I read a report where two AIs started talking to one another through a chat conversation, and literally within the space of about a minute they worked out they were talking to another AI and switched to binary, and the progression of the conversation moved quite devastatingly quickly. But outside of AIs talking to AIs, within the hiring process and within people and performance, when you're looking at a company's EVP, their brand, their culture and what it means to be that company, how do we see that process of more automation and less connection playing out? What major impacts do you think we're going to see in the hiring process there? Do you want to jump on that one, Linda?

Helena • 44:54

So I'll start with an example of it going wrong. One of the really clever people in my network did a consulting project with a recruitment team who had automated their entire end-to-end recruitment process. Not all AI, because if you think about it, you can stitch together a series of vendors and processes; you don't need to wait for AI to do that. But they found that their candidate renege rate, the number of declines they were getting from people who'd accepted a job, went rapidly up. And that's because there had been no human connection or relationship. There's no sunk cost in this process.

Helena • 45:28

And at the end of the process, when you're given a button to say "accept", fine, you've made it easy for people to say no. So stripping out all of what's special and important as it relates to human connection and relationship: don't underestimate how weird we are as people. Just because we can does not mean that we should. So all I can speak to is an example of it going wrong. And this is anecdotal, right, this is not necessarily fact, so don't treat it as such, but I am now starting to hear cases of people actually quitting their jobs because there isn't an opportunity to work on anything AI-related, a bit more in the tech sector than anywhere else.

Helena • 46:06

Um, because they these sort of people know that their kind of career currency and relevance is is going to be driven by

their skills and exposure to AI related work. And so if you are now not offering any opportunity as an employer to sort of dabble in these types of projects and interesting work, I do wonder if that's going to become more prevalent. And then the 3rd thing is it's going to have to be part of your employee value proposition to be able to offer sort of interesting and meaningful and relevant and future proof work, I think, to companies because everyone's going to be at AI. And there are, I suppose, more boring companies out there who maybe not don't have the enormous budgets to sort of just chuck money at the problem. And so if we aren't thinking about our value proposition, thinking about growing our own talent

and so on, I think I think that's a mistake that might we might be waiting to have for to happen if that makes sense. Yeah, absolutely.

Linda • 46:58

And I mean, I've certainly seen a lot of anecdotal evidence that goes both ways as well. For example, I've seen situations where people felt really good about talking to the AI during the recruitment process, because the AI had time on its hands and actually asked them how their day went, and listened.

Gordon • 47:17

It's the natural empathy there, because it's not on the clock when it's having to go through a set of questions. If that empathy in the conversational design has been done correctly, then yeah, people are going to feel comfortable. We've seen that with one of our other partners: on actual satisfaction for prospective hires, they had around 57% five-star interaction satisfaction and 37% four-star. There were only a few who just went, "I want to talk to a human." Great, we can actually do that; we can move that forward. But you're right, absolutely.

Gordon • 47:52

The empathy side of things, and not having to get through 30 calls that day.

Linda • 47:56

Yeah, and so I think this is the opportunity for us all: to actually rethink how we use this technology to best effect, and how we keep the things that are good about us as human beings and the connections we have with people, because we are social creatures. It actually has the opportunity to enhance what we're good at and remove the stuff that really is not us. Of all the people in this room, how many of you have a part of your job where you're doing something along the lines of, I'm copying something from here and I need to copy it perfectly into something there? Everybody has that as an element of their job, right? Why don't we get the tools that are good at that to do it, so that as a human being you can do the stuff that you're good at?

Ben • 48:44

Yep. One other point I would raise, Gordon, is that I talked before about the potential for inbuilt or inherent biases and assumptions in AI platforms, but the opposite may also be true. With recruitment and other processes where AI agents are dealing with people, they might actually provide more accurate information more consistently, and they might not have any hidden agendas; you wouldn't expect them to have any. So they might actually provide more accurate communication with people. We need to keep that in mind as well.

Gordon • 49:23

Yeah, it doesn't have that inherent deceptiveness or that need-to-know attitude; it should be open and honest. And that is certainly very much defined by the guardrails, which are basically people-led on the instructional side of things, for the knowledge bases for AIs. Before we wrap up, would you like to throw in a parting wrap-up thought, Helena, before I let everyone know what the follow-up from this is going to look like?

Helena • 49:51

Well, first of all, thank you for coming. Thanks for that lovely, engaging chat and the interesting questions; this has been amazing. We will email everyone a recording of this webinar, so do feel free to share it with your colleagues if you want to, and we'll make sure everyone gets a nice follow-up asking if they have any questions that they didn't have the opportunity to ask. So just a huge thanks to all of the speakers, and to Gordon for hosting us today.

Gordon • 50:18

It's been an absolute pleasure. And you know, it's not just a tech story; it's literally a life-changing, world-changing story at the moment. As you said, you can either put your head in the sand or not, but there's no one this isn't going to touch, certainly not from a corporate or organizational, or even a government, perspective. There are some jobs out there that, yeah, it's going to take a whole lot longer to reach. I think Naomi jumped in and said maybe I should retrain as a masseuse. Yeah, that one's safe.

Helena • 50:49

Yeah, or an electrician. The model says that's a good one, okay? That's a good career choice.

Gordon • 50:56

Absolutely. So look, we are going to follow this up, and I will go through this chat to see whether I can pick up a couple of additional points that some of you have raised in there, and see if we can put a couple of bullet points under those. I think it's always a great follow-up, because we never get a chance to get through all of the questions that everyone raises. So look, thank you all so much for taking the time to join us today. As I acknowledged before, I don't think there are too many people on this call who are literally sitting around twiddling their thumbs going, geez, I've got a spare hour or two today. So it's very, very much appreciated, especially at this time of year. On that note, we'll be in touch with some further communications and, as Helena said, a recording from today.

Gordon • 51:45

Hopefully we'll answer a couple of additional questions too. But thank you very much for your time. It's been an absolute pleasure meeting you all, and we shall see you again: there'll be a whole series of these coming through from early 2025 with GoFIGR and also with APAC AI. We hope to see you there.

Gordon • 52:02

We'll be covering a lot of very broad topics, and hopefully a lot of positive changes that are coming for the market. Thanks, guys. Excellent talk. Thank you all, really appreciate it.

Find, inspire, grow and retain your best people.  Let’s GoFIGR.