Gordon • 00:00
For those of you who don't know us, I'm Gordon from APAC AI. We're a consultancy of eight specialising in AI, agentic AI deployments, primarily in the people ecosystem. We can safely say that we have a wholly agentic AI stack of partners, one of which is GoFIGR with Helena, and through our agentic AI partners we're able to deliver the entire people ecosystem as an AI solution. Now, the interesting thing about today's topic, the organisation of 2030, is that we're really at an inflection point in human history, a point that's probably more impactful than the Industrial Revolution was for wholesale change in the way humanity interacts with technology. It's growing at an exponential rate, and we basically have two camps: those who are pessimistic about it, and those who are optimistic about the possibilities it affords humanity. Fear, uncertainty and pessimism are easier to buy into than optimism, but they also erode our ability to make sound decisions, or in the case of our own agency, to make decisions at all. By taking an optimistic approach, we've got the opportunity to shape how our organisations, our workforces and everything else transform for the better with these emerging technologies and AI platforms. Like the Industrial Revolution, there were those who chose to adapt, to educate their staff, and then to adopt, and they were able to thrive. I think it will be very similar in the year 2030, and certainly it already is with many early adopters who have seen some incredible opportunities and incredible growth due to AI. So when we talk about the organisation of 2030, we're not really talking about something that's a long way in the distance. It certainly sounded like a long way in the distance 15 years ago, but it's imminently around the corner, and it's the strategies and positions that organisations are taking today that are going to shape them over the next 12 to 24 months. Before we know it, those choices are wholly adopted and either thriving or working exceptionally well by the year 2030. So today I want to look at what's going to look really different in 2030, and what strategies and bets are currently being adopted, rolled out and planned to be ready for those wholesale changes. I'll throw this to our two panel leads today, Helena from GoFIGR and Linda, and I'll let you each introduce yourself very quickly. When you think about the organisation of 2030, what do you think about work will be meaningfully different to what it is today? Who'd like to go first?
Helena • 02:19
Oh, we're so thrilled. Well, thanks so much for joining. I'm one of the co-founders of a career tech and workforce intelligence platform called GoFIGR. Amongst other things, we support with skills visibility, internal mobility and career path planning, and we just launched a new capability. You'll see a little QR code above my head here: if you would like to understand AI's impact on your job, your skills, your future, give it a try. It's free and it takes two minutes. What do I think will look meaningfully different? I think stuff that we're already starting to see. Work is already starting to get unbundled and re-bundled differently. If I even think about this app, it was an idea and a concept a couple of months ago, and we were able to design it without a designer in two days. All of the underpinning tech was there, but the design wasn't, and I would have had to have a designer to do that. Between a couple of us and a really smart AI, we were able to whip this up in a couple of days. So my job has extended beyond what I originally set out to do, and something like a marketing manager, as we understand it today, might look vastly different by 2030. I think the way that humans and AI collaborate will look quite different as well. Right now it's all a bit accidental and experimental, we're in pilot mode, we're ChatGPT-ing things up. I think that's going to get more intentional. Maybe before I hand over to Linda: today is the worst this technology will ever be, and it's already pretty amazing. So I would be hedging my bets that this tech is going to get better and better, more and more capable, and that's going to be equally thrilling and threatening to people out there who want to remain gainfully employed.
Linda • 04:06
Okay, so Linda Chai, hi everybody. I'm a co-founder at Innovation Quotient. We're a boutique consultancy that was formed from, essentially, some refugees from large firms who ran tech-enabled transformation programs. Today, we're a technology and AI services provider, essentially for customers who are interested in how technology can actually accelerate their business strategy. So we do strategy work with companies, but we also do a lot of implementations, which means we have to put our money where our mouth is. At the risk of making some predictions about what the world will look like in 2030 and having somebody throw this recording back at me at some stage in the future to say how wrong I was about the world: yes, we shall see. But there are a couple of things I predict will actually happen. The first one is that I do think we're going to want AI assistants that are like AI versions of ourselves to help us with our work, rather than having to deal with lots of different AI tools across lots of different platforms. This is very much a technologist's view of the world, but every vendor will say they've got an AI something inside their platform at this point in time. I actually think we will change the way that we want to work: instead of dealing with lots of different flavours of AI, we're going to want the AI version of ourselves next to us, helping us with our job. I do think we'll definitely see a lot of automation happening, a lot of codifiable tasks being automated, and that will force us to figure out what it means to be uniquely human and what things should be uniquely done by humans. That's something I don't think we'll have an answer to; despite the fact that 2030 sounds like it's a long way away, I think we'll still be working that one out. But my favourite one is this: I actually think HR and IT will realise at some point that they eloped accidentally, and that we'll be trying to figure out how to raise this child that we had together, called the digital worker, or AI.
Gordon • 06:23
So what sort of positions are you seeing organisations take at the moment to prepare for 2030, even though they don't have all of the answers right now? I'll run that straight through into the next question.
Linda • 06:52
So I think the organisations that are being proactive about this are realising that there's an organisational muscle that needs to be developed, which is really about navigating uncertainty. There is a school of thought, and every vendor will come to you and say, my piece of software is going to solve your AI problems, just spend money on me and it'll all be good. But that forces us into a position where we're going to place bets on stuff, and we need to bet on the right technology and the right platform. In a world where every vendor is doing the same thing, that uncertainty creates a potential for paralysis. And I have seen organisations that have gone, actually, this is all too hard, let's lean out a little bit, see what other people do and then follow the rest of the crowd. I think the ones that are going to be more successful in this navigation recognise that, firstly, the decisions they need to make to take positions are not decisions that only IT can take, they're not decisions that only HR can take, and they're not positions that strategy or ops alone can take. We actually need to build the org structures that allow us to harness a more integrative form of thinking
Linda • 08:13
to take decisions. I have seen organisations start to take, in an almost back-to-the-future sort of way, a real options approach or a lean startup approach to investment decisions, and to creating the data sets they need to make good decisions about where to place bets. Instead of placing big bets on things, they're taking a more strategic approach: taking positions in certain technologies or certain options in front of them, so that the early warning signals come back in and inform how to take the next step, and the next step after that. They see it as a series of incremental steps to learn about the technology, trusting that things will work, but also developing the mentality that says, I will verify what I think is happening so that I can inform the next step. And that organisational muscle, I think, is actually the key: the meta way that you can think about how to navigate the future.
Gordon • 09:28
I think that's a common thread, whether you're consuming this through Substack, through blogs, through podcasts, or whether, like anyone else, you spend a fair bit of time on LinkedIn. There are a lot of very smart people out there with a lot of very sharp opinions. One of the things I'm constantly seeing reiterated, time and time again, is that regardless of how great AI may be for workforce or workplace transformation, if you don't actually have a good data set, if you don't have a good understanding of what the data is and where things need to be changed, which is all human driven, then ultimately, as with any system, it's bad data in, bad conclusions out. It's the same with the way AI is being adopted inside an organisation. You have to have a very clear understanding of where that workforce or work architecture is going to be changed before you can actually implement a whole new digital workforce, if you will.
Linda • 10:33
Yes, indeed. Indeed.
Gordon • 10:36
So, look, that naturally leads to a question for Helena. For the workforce data that you're actually seeing from GoFIGR's perspective, what early signals are you seeing from organisations that are currently building for 2030, that have been proactive about it rather than just reactive, going, oh, sugar, we'd better get onto this right now?
Helena • 11:00
There are sort of the vanity slash external signals. I pulled some Australian labour market data last year, which I need to rerun, actually, because I wanted to see what new skills were being asked for. And you can literally see the curve of when ChatGPT launched a couple of years ago, and then suddenly everyone's asking for AI skills. As a sidebar, the most commonly requested AI skill in the Australian labour market is "AI", which is a bit of a nonsense, right? It doesn't really help anyone. We're seeing a lot of senior level executives being hired in, and increasingly we're seeing strategy and transformation and AI pop up in people's job titles. And it's now making board reports.
There's a fun podcast I listen to where there's a sort of correlation between the number of times a CEO mentions the word AI and the stock market's reaction to it. So everyone's at it, right? That's the vanity, we're-doing-AI kind of metric. But I guess the ones that really are thinking about this more long-term are the ones smart enough to realise: we're hiring all these clever people, we're running all these AI pilots so that we can, insert your solution statement here. It might be to save money, drive efficiency, or build a new capability that you couldn't before, but they're smart enough to realise that this will impact people's jobs, and it will impact the people you need, the people you keep, the people you retain. They're starting to measure that kind of task-level impact so they don't get caught unawares in a couple of years and end up on the front page of a newspaper having made a swathe of redundancies because of big bad AI, because I think the Australian public is going to tolerate that less and less. So they're looking at the tasks that may be impacted and thinking about the overall human impact.
They're investing in things like upskilling and a skills infrastructure, so that they can see who they've got and start preparing that workforce for the future now. And HR and IT are genuinely collaborating. I won't mention names, but there's a company here on this call who've done a really smart job of aligning their people and IT functions. I don't believe that every single company will merge those functions, but increasingly, to your earlier point, Linda and I actually just wrote a blog on this, because we do see it as a kind of union, a marriage: no longer can these two sides of the business work in isolation. The digital worker is something we have to co-parent, we have to design it, and we have to think thoughtfully about it. So I think we're increasingly seeing the future-thinking companies really collaborating between their people and technology functions, and it's not just lip service. So those are some of the early signs
I think that we're seeing. Linda, is there anything to add on that based on the interesting conversations we've had over the last couple of weeks?
Linda • 14:17
Yeah, I do think there's just so much uncertainty around this for organisations that it is difficult to say, this is where it's going to go. But I don't think any one part of the organisation is really good yet at building these coalitions of the willing to co-parent this new child that we're going to put into the world.
Gordon • 14:39
So what sort of capabilities do you think leaders need to build today, in preparing for this, to avoid playing catch-up, so that they're proactively on the front foot? What are you seeing there?
Linda • 14:55
As I said, I think it's the ability to build a coalition of the willing that draws great thinking from all the different parts of the organisation. Because if you think about it, getting AI into an organisation is actually really hard. From the IT department's perspective, I can give you a tool, I can switch it on, I can put it in, we know how to do that, that's easy, but that's not the job done. After that, we need HR to essentially give this new person the culture that it needs to manifest, the behaviours that it needs to manifest, so that it actually lives the values that we want to project to the rest of the world. IT can't do that; it's not IT's set of skills. And then we need Ops and Strategy and those guys to actually institutionalise this creature, so that it has the working knowledge of the organisation to be able to be effective within it. No one department knows how to do all of that on their own; no one can do that on their own. And if anybody tries to do it on their own, I don't think they'll be entirely successful.
Linda • 16:11
But the ability to, I suppose, create those coalitions of people, so that as we put this technology into the organisation, it has that well-rounded set of skills, is something that I think we really need to get good at. There are parts of the organisation that probably have more skills in that space, so Transformation Officers, Strategy Officers and stuff like that, those guys have been doing this sort of work, but not to the degree that AI will require.
Gordon • 16:46
I mean, if you're looking at how you adopt, onboard and bring into the fold a whole group of digital workers that are valued, measurable and part of your business and your EVP, how does the assessment or auditing of that work? When you're actually measuring the efficacy of how well that conversational AI, or that task tool responding to emails, is working, is it going to be IT or is it going to be HR? Or, again, is it that coalition of the willing, that working group, who are going to put forward the assessment and say, well, it's meeting these criteria, but we need to pick it up here, here and here? Who's that primarily going to sit with? I think it actually needs to be a lot of people.
Helena • 17:46
I think we need to co-parent it. How's that? But that's how jobs are changing, Linda. We've talked about that, right? There will be tasks in many people's jobs that they may not even initiate anymore, but they have to supervise the outcomes. Even on the last webinar we ran on this topic, when we had the lawyer come and join us, the question was, who's ultimately responsible for the advice or output of an agent? Even that's a bit murky for most people. But it is going to change jobs, because even with your tech stack as it stands now, and I'll take Salesforce as an example, Salesforce is launching Agentforce, and increasingly the tech does the job that the human used to do. So it's already nibbling into people's jobs, and we're having to supervise the outcomes and quality check. I don't feel too bleak about it. I don't believe we'll end up in some digital-factory type of environment where we're just supervising AI outputs, but it's already changing people's jobs now. Maybe it's not explicit, maybe the job title hasn't changed, but you can already see examples where an AI has crept in, because your vendor, your tech stack, is full of AI, like it or not.
Linda • 19:00
I think I've seen it with early adopters. I remember reading an article about GitLab. A tech company, obviously, and tech people are the first to do this to themselves in many ways, so you see a lot of stories coming out from them at the beginning. GitLab essentially produces software, and across the software development cycle, what they've done is put AI across the whole thing. Rather than saying, this job needs to be split like this, they've gone, if this is all the stuff that we do, how do we put AI across all of it? What they've ended up with is AI being involved in things like the planning cycle, code generation, the development of code and putting it into production. But the human role has shifted to one of: I now communicate intent, what I want, and I check what comes back out to make sure it's in line with the intent I originally stated.
Linda • 19:56
And then I merge what has been created into, you know, the product that we ship at the end of the day.
Helena • 20:05
And that's… Claude Code can do that by itself now, can't it?
Linda • 20:11
Well, it can't communicate intent, not the way a human can communicate intent. And I suppose that's what they did when they looked at it: if this is all the stuff that I do as an organisation, I've got to tell this thing to go do these activities for me. That's a really different way of shaping the world. Instead of going, this job, how do I break it up and re-form it? It's looking at the entire value chain of what I do as an organisation.
Helena • 20:40
I'll give you a silly micro example of how that shows up in our company. It's really micro, right? We have a chat interface that supports careers, difficult conversations, one-to-ones, that kind of thing. It's one user interface made up of many agents: one agent's job might be to summarise the conversation, another is to go and look up that person, another is to commit the conversation to memory. So it's not one agent, but it looks like it is. We're already in the position where we need almost an org chart of what each of those jobs to be done is, because we could inadvertently change the objectives of one agent and it breaks three others, or we give two of them the same job and their instructions cross over. So even as a small business, we already need some kind of org chart telling us what jobs are being performed by this digital worker, so we can see and manage it, and not have someone in one team in one country build something that's at odds with another agent. It's as granular as that, and it's already hard.
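To make that "org chart for agents" idea concrete, here is a minimal sketch of what such a registry and an overlap check might look like in code. It is purely illustrative: the agent names, objectives and task lists are invented for the example, not GoFIGR's actual configuration.

```python
# Illustrative sketch only (hypothetical agent names and tasks): a tiny registry
# that records each agent's job and flags tasks claimed by more than one agent,
# i.e. the "crossed instructions" problem described above.
from dataclasses import dataclass, field


@dataclass
class AgentSpec:
    name: str
    objective: str
    tasks: set[str] = field(default_factory=set)


# A hypothetical "org chart" for one chat interface composed of several agents.
REGISTRY = [
    AgentSpec("summariser", "Summarise the conversation", {"summarise_conversation"}),
    AgentSpec("profile_lookup", "Look up the person being discussed", {"lookup_profile"}),
    AgentSpec("memory", "Commit the conversation to memory", {"store_memory"}),
]


def find_overlaps(registry: list[AgentSpec]) -> dict[str, list[str]]:
    """Return any task that more than one agent claims ownership of."""
    owners: dict[str, list[str]] = {}
    for agent in registry:
        for task in agent.tasks:
            owners.setdefault(task, []).append(agent.name)
    return {task: names for task, names in owners.items() if len(names) > 1}


if __name__ == "__main__":
    overlaps = find_overlaps(REGISTRY)
    if overlaps:
        print("Conflicting agent responsibilities:", overlaps)
    else:
        print("Each job has exactly one owner.")
```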
Linda • 21:52
Yeah, and we're seeing that with the work that we do. We work with integration platforms from a tech perspective, and that problem actually surfaced in the app world, where everybody was building apps, and all of a sudden we had apps on top of apps, this ecosystem of apps where nobody knew which ones were connected to which. One of the big challenges we're being asked to address is, how do I manage orchestration of all of these agents, because I'm using them in lots of different ways and I cannot afford to make a change here and have it inadvertently spiral out into a whole bunch of unseen consequences. So from a technology perspective, we're absolutely seeing the need for orchestration of these agents and the ability to see a map of where they're working in an organisation so we can manage them effectively.
Helena • 22:54
Linda, as it relates to workforce planning then, do you foresee that people in that role will need to manage these? Like, do these become the workforce, right? Do they become an extension of the workforce? So you're going to have to map it. It's so hard to get your head around how that might look, isn't it?
Linda • 23:06
Well, I think it is, because this is where my controversial point comes in, so please forgive me, don't crucify me for saying this. I think saying AI is just a technology tool is wrong. I do think it is a digital worker that we're starting to talk about, and this is where the co-parenting arrangement needs to come in. That digital worker actually becomes part of your broader workforce.
And I don't think that HR or IT have the skills to be responsible for that workforce entirely on their own.
Gordon • 23:48
It's got to include a basic human-centred design, UX side of things, to bring those together with IT and HR. Human-centred AI.
Linda • 24:03
Human-centred AI.
Gordon • 24:05
Yeah, but ultimately it's still about a human's job role or job description, or their ability to manage their workflow better. And we know that as long as there are humans in the loop within that decision-making matrix, you have less chance of a complete workflow going off in different directions and awry. But you have to look at things from a human-centred design perspective to work out what that UX is like, both for the humans who are interacting with a digital worker and, almost, for the digital worker knowing how to interact with the humans, from an EVP point of view: what's the best way to do that? What is it to be us as this organisation, to ensure there's consistency, so that the experience for workers, whether digital or human, isn't diverging too quickly? So, again, you talk about that co-parenting; I just think there's got to be a UX, a UI interface, that's very much considered within that working group. Would you agree with that?
Linda • 25:21
I do think all those participants need to be involved. Okay, I'm going to go off on a tangent for a little bit; I do have a question to your point, Helena. I think strategic workforce planning is something we need to do together, not just the domain of HR anymore. And I do realise I'm talking to a room of people in HR, who are probably going to say, only out of my cold, dead hands will you take this capability from me, and I'm okay with that. But if I look at our own organisation at the moment, and I think Helena and I have reflected on this before, there's an absolute privilege in being a smaller organisation, being able to design the workforce of the future from scratch, from a blank sheet, without having to transition an existing workforce. I think the pathway for transitioning an existing workforce is going to be a far more difficult job, and we still have to design new solutions and new ways to deal with that. So I'm really open to collaborating with anybody from HR who's willing to collaborate with somebody from IT about how to make that happen, because I think there's an increasing number of data sets coming to us, or data sets that we need to create, in order to make these good decisions. I think we'll have to invent some of them, but there are things we can pick up right now.
Gordon • 27:31
So, look, do you want to add anything on top of that, Helena, before I jump us over to a few of the questions we got before the webinar? Then we'll go to some chat questions and a wrap-up before we hit two o'clock. And as I said to you both earlier, we can run through until about 2.20 or 2.30, with an absolute hard stop at 2.30, if there are enough questions, which there always seem to be with this. So is there anything you'd like to add to that one, Helena?
Helena • 27:35
Only some things about questions, concerns or early warning signs that I see in the shape and size of organisations. There are sort of two camps emerging at the moment. There are the automators. So there are the Amazons: I think China has the most robots and AIs per capita, and Amazon has the second highest globally. There's a kind of race to automate, to rip out all of the cost, and as consumers we're okay with that because we like having cheap, convenient products. And then there will be the growers. There's a lovely IKEA case study: they launched Billy the Bot, and it put two to four thousand customer service workers at risk. They identified, through the skills those people had, that there was a new service line they could create, and they repurposed those people into interior designers, and it's a new profit centre. So there are the automators and the growers. I'm also starting to get a bit concerned about entry-level jobs. I know there's a lot of bluster out there about whether it's economic, whether it's people reducing headcount to deal with some of the over-hiring that happened, or because we're not performing as well as we should be. But there's been a definite decline I can see in entry-level and grad jobs, literally since ChatGPT launched. And that has a knock-on impact. It's okay now, because we can kind of swallow that impact. But in two or three years' time, when we don't have a talent pipeline, where are our leaders going to come from? Who are our managers going to manage? No one's giving that enough attention.
And then if AI is nibbling away at their entry-level tasks, how do entry-level people learn? That freaks me out a bit. And then there's the very human side of things. Let's say you're, I don't know, a 40s or 50s person. You've had a lovely career, you're earning good money, and your relevance here is, in some cases, a question. People are really feeling threatened by this, and that is a barrier to AI adoption as well, because if suddenly you're in fear for your job, you're not going to let your fiefdom be undermined or whatever. So there are some really interesting human, behavioural sides of things that we need to overcome, that will otherwise sabotage our efforts, I think. And I haven't seen a lot of people talking about this; I've just seen stuff online, like many of you do. So I do think there are some burgeoning future problems. There's a question I saw come in earlier about this one, so maybe we'll tackle it then.
Gordon • 30:14
Yeah, look, there's actually a comment from Fran there that one of the biggest pieces is around change management, because that is ultimately about organisational adoption. If the change management piece is in there, designing what success looks like, it ensures the adoption. And it means we get a lot of these early deployments out of the POC stage, because at the moment, if you listen to McKinsey, and they usually have a fairly good opinion, it's literally only 5% to 10% that are getting to full traction, full rollout; they're just not getting out of the POC stage. A lot of that has to come down to the change management aspect: ensuring there is a strategy for how this is going to be rolled out, how it's going to be adopted, how we're going to measure it, and then, most importantly, how we replicate it across the organisation once we're out of that POC stage. So change management, I think, is always going to be people-led, because that's where the biggest pushback is going to come from in an organisation. One last question before we jump into all the ones people submitted while registering for the webinar, and jeez, there are a lot. If we're looking at that 2030 horizon, what are the planning assumptions that you think are most likely to break between now and 2030 for the type of organisations you're looking at, north of 2,000 or 3,000 staff?
Helena • 31:50
Shall I start? What are the planning assumptions most likely to break between now and 2030? Without repeating myself, I think the assumption that the entry-level talent pipeline is going to look exactly the same in four years is one. I would hate to be a young person trying to find a job now; anyone who's got kids here, I feel very nervous for young people. I also think it's naive to assume that people are going to embrace this, despite however many promises we make about how it's going to free up their capacity and be wonderful and so on and so forth. I met a wonderful academic who helped me with some of the work we've been doing recently, and the top saboteurs of AI implementations are, one, that you get crappy results or outputs, probably because you haven't trained the person properly, and two, fear of irrelevance. I don't think we should underestimate the fear of irrelevance. I also don't think job titles are going to matter so much, and already, even with the stuff we see, and I'll point to my QR code again, the predicted AI impact between two very similar jobs, like recruiters and marketers and all that kind of stuff, is really, really different. It's impossible to say that on average customer service is going to be automated by AI, or that on average marketing is, because what happens in your company, with the skills that you have, with your ambitions and your intentions, means that no two companies and no two jobs are going to be impacted the same way. So I think it's a bit difficult to read some of these big sweeping headline statements about all developers and all recruiters and all of these jobs becoming irrelevant. I think it depends. I don't know, that was a really muddled answer.
Helena • 33:52
So for a more coherent one, I'll hand to Linda.
Linda • 33:55
Oh, thanks. Thanks a lot, no pressure. So I think the grad thing is very correct. Putting our money where our mouth is, we've actually been looking at ourselves and how we build a talent pool for ourselves. Because I do think there is a challenge: the way we used to develop grads, particularly in professional services, is an apprenticeship model, right? You get given the easy work. But what happens if the easy work is being done by a robot going forward? Who do we give the easy work to now?
Linda • 34:27
So we're actually looking at it ourselves and going, how do we create and grow a pathway for our talent as they come in to us? Because we absolutely need grads. I never want to be in a position again, and I've been there in professional services, where we didn't take a grad cohort one year, and we felt the pain of that missing talent pool all the way up the career ladder for a good 10 years. It actually lasts that long. So that's one of the things we're certainly doing.
Linda • 35:01
So I don't necessarily have all the answers right now, but I'm working on the problem for my own talent pool, and I'm happy to share what we learn as we go through this dynamic experiment. I think the other thing, though, and this is slightly controversial coming from a tech person, is that I think there's a secret to why some of the POCs aren't actually making it into production. From a tech perspective, a POC is pretty easy to put together. Getting it into production is really hard, because as organisations we carry a very large quantity of tech debt, things we need to work through to get them live. And unfortunately, technology, and AI in particular, is incredibly unforgiving of the jimmying that humans do. When you take the human out of the equation, all of that debt, all of those workarounds, surface as barriers to getting things into production.
Linda • 36:01
So I do think that one of the things we'll find by 2030 is a more realistic expectation of the pace at which AI will actually go through.
Gordon • 36:12
Or do we see some organisations wholesale throwing out a lot of that, I love that term, tech debt, the legacy systems and so on? Where you look at it and go, if we were to amortise out the actual cost, and the lost opportunity cost, which I think is also a big one, and it's going to take us two years to get over our legacy tech debt and integrate this, with all of the roadblocks we're having to constantly overcome or re-engineer, then actually let's just throw the whole lot out and look at what we can do with a wholly automated, agentic workflow that's then human orchestrated and driven.
Helena • 36:51
The minute a two-person startup comes in and wipes out an industry, the minute that happens, that's going to catch our attention. WhatsApp had 12 staff or something crazy before; didn't Instagram have 20 staff or something like that? The minute someone comes along with three staff, we're going to be a lot less precious about holding on to our legacy products and services and ways of working. It hasn't happened yet, but I don't think it's going to be too long before someone comes in and we see some examples of things being vastly shaken up.
Gordon • 37:58
Okay. Circling back, because it's a question that popped up repeatedly and you both just mentioned it: the grad programs, the junior side of things. Do you think it's actually about designing new apprenticeships? Because that is what it was, an apprenticeship, when you were entering professional services as a paralegal in a law firm, or similarly from a finance perspective. Do you think the model for those new apprenticeships is going to be oriented around orchestrating the work, rather than just doing the legwork and learning? Because it's a big knowledge gap to jump through to go, hang on, now you're orchestrating a whole bunch of agents and learning how to be a conductor yourself, rather than actually doing the legwork, because part of our learning process as humans is to go through and do those yards. You can't just expect to jump straight there. So how does that apprenticeship potentially look? I know that's one you're trying to solve right now, Linda.
Linda • 38:56
I have a hypothesis on this that we're working with, certainly with the development of our own staff, and I think it's the difference between things that you learn deeply and things that you learn at a very shallow level. There is the potential that if you start orchestrating things you don't need to know that much, but then how do you check that something is right when something goes wrong? How do you fix it? There are a lot of sci-fi movies about how the elders built this great machine that looked after everybody, and it was really great for a while, and then the machine started breaking down because it was getting old, but nobody knew how to fix it anymore. One of the challenges we're having to bridge ourselves is that, as senior staff, because we had to go old school and learn in the old ways before all the automation was present, we learned things very deeply.
Helena • 39:30
I think we'll have to preserve work for learning. I actually think we're going to have to not AI-ify everything, because some stuff has to be preserved to be learned.
Linda • 39:39
Exactly. And that's what we're trying to figure out. We're trying to separate out what the crucial things are that we need to give to people so that they learn deeply and can be effective as quickly as possible, without having to go through the hard yards of years of doing monkey work and stuff like that.
Helena • 39:58
And I think Tim's on the call from the Future Skills Organisation, and he and I have talked a lot about this. It's not just going to be grads we have to put through apprenticeships. I reckon adults are going to have to go through reskilling; I'm convinced of it. It'll be our sixth career, and I'll need to learn something pretty sharpish. I'm pretty certain that by 2030 my job will look quite different, and I'm reasonably curious, but if I'm not running GoFIGR anymore, I really think I'd need some help. I don't think it's just our young people we need to apprentice through new career pathways.
Gordon • 40:38
Well, both yourself and Linda come from that big consulting, professional services background. There's a question that came through: how do you see the world of billable hours and time and materials evolving between now and 2030? Why would clients want to continue with the current status quo, or would they prefer working with firms in a new way, where a whole bunch of things are automated and a whole lot faster? How do you think that's going to look?
Linda • 41:08
So, I have a hypothesis on this. I think the billable hours model has been under pressure for like 20 years; it's just accelerating and getting worse. We're certainly seeing more fixed-price work, more outcomes-based work, and customers wanting to do more of that. I will say that it also requires a certain degree of trust on the customer's part, and a certain maturity, to trust us so that we can work that way together. But I don't think things like the billable hours model will go away necessarily. It's a very uncertain world.
Linda • 41:48
It's still a very legitimate commercial model between parties. And I also believe, look, I'm betting my career on this, that there's still room for expertise in the world, and people will still pay for expertise. If I look at the professional services model more broadly, yes, there is a challenge to junior staff. For a long time I've had customers really challenge me on paying for junior staff: they'll happily pay for senior staff, but they won't always pay for junior staff, which is why the enterprise models work a lot better for us. But I also think people are moving away from the very large firms to more boutiques that are expertise-led in their nature, because that's what they have to be in order to make a difference in the world. And so, yes, I do think networks of experts will be increasingly consumed as well, because there's still room for somebody who has a viewpoint that sits across organisations, rather than a viewpoint that's tied to a single organisation.
Helena • 43:04
That's one of the biggest questions I get every time I meet a new client: what are you seeing elsewhere? Are we all going to be relieved when there's a blueprint to copy? If someone could just go first, please, so we could all just copy.
Gordon • 43:20
Well, look, this is actually a question I had specifically from one of our attendees. What are some of the practical L&D, learning and development, strategies, from GoFIGR's perspective, that allow leaders and employers to make AI adoption a success? We've talked about the change management side of things, which is a big one. What other ones are you seeing in the market at the moment with your clients, Helena?
Helena • 43:48
I think it's actually co-creation. The minute someone feels that something secret is going on behind their back, that's the minute you lose people. Even years and years ago, before AI was a thing, I worked for a big recruitment company, and twice I had to go through the process of putting forward a business case for robotic process automation. Same same, but different. The first time we did it behind our shared services' back, and it was really bloody hard, because you were making assumptions about workflows and stuff that was happening without really understanding it deeply. The second time we did it, we personified the robot. We called it Billy, I think.
And we were like, what crap would you like to give Billy that really drains your energy, that adds no value, that you find really difficult? And they were fine. This would have been eight years ago. We assumed that people would freak out and be really anti, and they weren't, because there were just heaps of swivel-chair tasks, downloading and uploading spreadsheets, nonsense that added no value and gave no energy. So allowing people to co-create and be involved in it, not doing it behind their backs, being upfront about what you know and what you don't know. Is it going to change my job? Yeah, maybe. Are we going to support you? Yes.
Those are some of the things I see. And actually letting some of these people do it themselves. One of our clients has built some Gems in Gemini, so that even the process of instructional design for learning content is now mapped to the way a learning expert would do it; it understands the way people learn, and it ingests all the content they've ever created to avoid creating new content from scratch, then delivers it in a really compelling way, and another AI spins up the images to go with it. So the ones I see having success are bringing people on the journey rather than doing AI to them.
Yes. So those are some of the things that I see. One company, actually, a legal company over here in Australia, put forward a prize, a bounty: if you could find something and AI-ify it, there was a sort of cash prize, and that was a big incentive. Yeah.
Gordon • 46:09
Quick question, because we did touch on this when we were having a pre-chat about a week ago, and it leads on from one of our attendee questions. Will we actually get to a point very soon in the future, maybe next week, where we get sick of AI interactions? Because it's getting closer and closer, especially over the phone, to where you don't know that you're actually talking to an AI; it takes a little bit of time to click. But we're actually craving those personal connections. I can't remember whether it was you who said it, Linda, or you, Helena, but one of you said maybe we'll get to a point very soon where, similar to a B Corp certification, we have some form of certification for companies: we are human-led. Organic. We're wholly organic. What are your thoughts on that?
Helena • 46:55
Yes, I think, is the short summary. But we're still going to be fine with AI taking over some things. I don't ever want to call my bank anymore, and if my Wi-Fi went down, I'd far sooner be able to self-serve my problem at 10 p.m., or whatever it is, rather than have to go through the process of explaining it to someone. So there are some things where it's just more convenient. I've even seen surveys where Gen Z and younger say they trust an AI more than they trust their boss, or they'd rather receive a clinical diagnosis from an AI than from a human. It boggles my mind that that might be the case, but I do kind of get it. On the other hand, I have a guy who helps with blog writing for me. About 90% of the content we put out in the world, I write, but I'm not an SEO expert, so we have a guy called Chris who supports with one or two SEO-optimised blogs a month.
And when ChatGPT came out, he was like, oh, holy hell, I'm doomed. And already he's winning business from customers because they don't want AI slop. You can spot it a mile off on LinkedIn, and Google's already penalising AI slop. So it's already come around full circle. I think there is a space for human expertise. And we also talked about this earlier, didn't we, Linda?
About how, if we're all using the same 8, 10, 12 foundational models to build our tech upon, because we're not all billionaires who can build large language models, we're going to be one of 12 flavours. So where do you differentiate? It's going to be your humans that actually differentiate, once the AI has taken over all the nonsense. So yeah, there's my view on it. My weird and wonderful view.
Linda • 48:36
So the best story I've got to contribute to this: I actually contributed to a piece of AI and healthcare research at one stage with one of the universities, and what was fascinating to me was that they did a survey of people on whether they wanted an AI interaction or a human interaction, and what they found was that it's definitely generational. If somebody of our generation is diagnosed with, say, terminal cancer, we want a real-life doctor to tell us, and we want the care and the empathy. But there is a younger generation that actually prefers a robot to tell them they've got cancer, because they want to be able to act out in front of the robot and they don't want to have to care about the emotions of the other party. I think it's really telling that we're all raised differently, with different expectations, different social mores, and it's difficult for us to say, yes, we all want a more authentic human interaction. Each buying segment, each customer segment, will want a different experience based on what their expectations are, and we just need to be sensitive to that. We've got the skills to do this; it's just a question of understanding more about what different parts of the market actually want and catering to that accordingly. But yeah, I was really floored when somebody said they would prefer a machine to tell them that they had terminal cancer.
Not my experience, but…
Gordon • 50:14
I'd rather not have to know what it's like to be told about terminal cancer, but I'm trying to put myself in those shoes: if you're dealing with an AI, you can just drop your bundle completely. As the kids would say, you can have a complete crash out. The AI is going to be infinitely patient with you. It's not going to feel offended; it's just going to go, well, this person's going through something. That's one of the things: infinite patience, and in certain cases it's not so much designed empathy, but it sits within conversational design to naturally empathise a little more than a very busy human who's going, hang on, I allowed 20 minutes to break it to this person that they've got cancer. Whereas the AI will sit there and let you rant and rave for 25 minutes, half an hour, an hour. It doesn't really matter to the AI; it can have half a million conversations simultaneously. So it's just there to break the news to you. Sure, I can understand both sides. I think I'd still prefer the human touch, but then that's a generational thing, right? Look, we're getting towards the end of our time, or at least the set time we've got for this, so I'll ask a wrap-up question. I'll let everyone know that I will be sharing and emailing out the link for the whole webinar, which will also have the whole chat, because, jeez, if you haven't been following the chat, there are some fantastic comments in there, and I've still got a few more questions I wanted to get through. But if I was closing with a single line, before we get on to some additional chat for, say, another 20 minutes: what gives you the most optimism right now, or the most concern, about organisations and the future of work? Who's going to go first, Linda? Helena?
Helena • 51:53
Helena, come on. Come on. I think I've already said my pessimistic stuff, so maybe I won't repeat that. Okay, I'm going to be an optimist then. I love being able to do things I couldn't do before. I know I'm not a designer, but I love being able to get an idea out of my head and have my team build it within a day or two now. I love the speed and the extra capacity and the ability to be a little bit superhuman.
I love being a bit more competent and capable than I used to be. I posted about this before Christmas, and there were other people saying the same: I love it, I can do all these things I never could, I can code, I've got a website, all that kind of stuff. But I worry a little bit about how dependent I am on it now, and I worry about whether my brain is going to addle a little because it's a bit of a crutch for me. So I'm optimistic about new opportunities for people, for companies, for individuals, for skills, for pet projects, for all those things we wish we could do and can't, and I worry that I'm going to get dumber because of it.
Gordon • 53:13
It's a critical thinking thing, right? There was a comment in the chat, I think from Annette, that we've got to have a greater focus on critical thinking in school, because so much of the actual legwork can be done for us. And I think that critical thinking, and that's where the professional expertise we have as the older generation comes in, is that we've gone through the hard yards, we've gone through the learnings. Having that critical thinking expertise to prompt or challenge with the right questions is what allows us to help shape things without falling too far behind. But when it comes to the future of work, as Helena just said, she's worried: am I going to get slower because I'm allowing AI to do a lot of that legwork?
Linda • 54:12
A good question. So, full disclosure, I'm actually the mother of homeschooling kids, so this was really curious to me. I think it was in the UK that they did brain scans of kids who used AI to support them in doing their schoolwork and kids who did it the old-fashioned way, and they did notice a dimming of the brains of the kids who used ChatGPT to support them with homework. Now, there are many reasons why that could be the case, but there is a question as to whether your brain is being turned on that much if you're using something like ChatGPT to support you in that work. I think, though, that I have faith in humanity, and in humanity's ability to be ingenious, because we have been before. And that's what gives me hope: this uniquely human spirit of overcoming the adversity that gets put in front of us. We'll find a way through and we will get there. So I'm probably more optimistic than most in that respect, but history tells us that we've done this before, and I don't think we'll get dumber. Helena, I don't think you'll get dumber. Okay, I'm encouraged.
Gordon • 55:35
So, look, we'll just jump into a few extra questions as we've just gone past that 2 o'clock.
Helena • 55:42
People should feel free to hang up if they need to, of course.
Gordon • 55:45
Feel free to hang up. Or feel free to raise your hand and jump in, or just say you've got a question and I can throw it to you, because I'd like to keep this a bit open. Fran, did you have a specific comment? Having had you on two other webinars, I know you've generally always got something to contribute.
Speaker 3 • 56:19
Sorry, I feel like I'm just going, oh, this is so interesting. No, I've actually just found out this morning that I've been confirmed as a speaker at the Australian Institute of Training and Development, to talk about future organisational structures and how you can use AI: how do you design organisational structures to incorporate AI and the humans? There's some work I've been doing with another person around how that might look, so I won't give away too much at the moment because it's still fledgling, but it's very much of this moment. I'm an organisational development expert by background, plus change management and human resources, so this is so timely. People I'm seeing out there just don't know what to do, and to Helena's point at the beginning about AI being the most in-demand skill: I met with someone yesterday who said a board asked for three AIs. I'd like three AIs, please. Just the three. So I think there's a huge amount of upskilling needed, but also, I agree with Linda, humans find a way. They find the path of least resistance, and there'll be a lot of interesting work for people who want to get on board. I honestly think it's the neurodivergent, the different thinkers: is it their time, when so much of the history of capitalism has been about convergent thinking and doing it a set way? Maybe that's the social shift as well as the economic and technological shift, and a lot of these platforms were designed by potentially neurodiverse people, so I think it opens up more engagement and inclusion there. I'm sure there are going to be some losers as well; there always are whenever the massive technological shifts come along. But we have had this before, and this is why, to Linda's point, it's not just the technology. It's seeing it as a worker as well, and absorbing it into society. There are so many facets of society that it's changing, and that's why I like looking at it from that bigger, even historical, perspective to get a grasp on it. I think a lot of things are going to have to be reinvented.
Speaker 3 • 58:19
It's just that we're trying to cling on to old structures that are probably not serving us and are not going to serve us by 2030. We have to design from first principles. How do we teach the kids to go from first principles again? I had a coffee with a friend this morning, and here's a little snippet, almost verbatim. She's in her mid-50s, and she's got two kids in their late teens and early 20s who are struggling to find work at the moment; one of their degrees is in content marketing, and I can't remember what the other one's doing. The daughter said to her, mum, what would you do about this thing? It was something like general information that your parent would know. And her mum said, oh, I'd do this. And the daughter goes, oh, good, that's what ChatGPT said as well. Using us older people: this is our role. And this is the other bit, about elders. My partner's from a Nepalese background, and the role of an elder in communities is quite different in different cultures, and I think white culture has lost that sense of eldership. So is that a resurgence we get, of wisdom and elders? Oh, she remembers pre-AI, she has the official truth, not the AI-slop regurgitated truth. And there's a role for people who keep their faculties about them. Helena, with you keeping your curiosity and not getting an addled brain.
Helena • 59:36
Wow. I chatted to a guy in the UK about this, in terms of what the future will look like and what jobs will be protected. He's coming along; we're doing a UK version of this later this evening, and he's got some really interesting thoughts on it. He thinks there's going to be a boom in the trades: physical, manual labour. Care and healthcare, that's another safe one, right? At least at the moment: we're not necessarily getting sicker, but we're living longer and we need to be taken care of. He also thought, though, that roles in the military would increase. Maybe that's just a sign of the times right now, because the world has gone a little bit insane, but he saw a rise in defence force jobs. And he had a different view to me about young people. He's like, if you were in a company and you could have two or three smart younger people who are tech native and fearless, why wouldn't you have two or three of them and get rid of a couple of your senior managers instead? It was quite interesting to hear quite an alternative view on that, yeah.
Gordon • 01:00:49
One of the comments earlier in the chat was that I wouldn't be too worried about some of the younger generation. They're going to be AI native, which means they're coming in knowing a whole lot more because they've grown up with it. It's similar to when my career was starting off: we'd just seen the birth of the internet three or four years earlier, maybe five, and we had to adapt to it. But by the time you're 30 years old, you've got 21-year-olds coming through who've grown up on email and the internet and whatnot, and you're going, wow, how did you do that? That's always been one of the things I've loved about having younger team members who are digitally native: hey, how did you do that? You get stuck in your ways, stuck in that particular way of doing things, and someone comes along and says, guys, here's a better way to do this, why don't you try it this way? And again it's, how did you do that? Show me. When a partner in bloody professional services asks a more junior team member for the millionth time how to PDF a document, that's what you should feel embarrassed about, not…