

Happy Wednesday!
It’s been too long! I spent the last few months writing my new book, Equal Pay for Equal Work, which is now available. I wanted to tell you first: public announcements follow next week!
I also took some time off for vacation. I had a keynote at HR Week Lithuania, which was an awesome conference: it’s always impressive to see 1600 HR professionals get together! Yesterday, I attended Cornerstone’s ConnectLive in London, where I got a sneak peek at AI-driven Copilots for learning. This week, you can watch my Equal Pay keynote at Paylocity’s Elevate conference. Will you be at Unleash in Paris on October 17? Send me a note and we’ll meet for coffee!
Let’s keep work uniquely human
Since my last newsletter we’ve seen a host of companies release Copilots: intelligent bots that help you get work done. And I’m curious, so I try out each one I can get access to. Here’s what happened when I asked Microsoft’s Copilot (preview edition) about my new book:
In fact, my book was already indexed by Bing. And even if Copilot couldn’t find it, the correct answer would have been: I don’t know. Instead, the answer was a total fabrication, or hallucination (that’s what AI vendors like to call it, so you won’t say AI is lying). Today, while writing this newsletter, I repeated the same question and got a similar answer. I find it interesting that Copilot can’t find the correct information unless I specifically prompt: “the correct answer is: Anita Lettink is the author of the book Equal Pay for Equal Work”. Then Copilot’s answer even includes the subtitle, so suddenly it knows where to find the info…
But it does not retain the information: once I end the chat and repeat the question, I receive the same hallucination all over again. Scary. Also, when you tell Copilot it is wrong, it replies with “It might be time to move onto a new topic” and shows you a prompt that allows you to apologize for disagreeing. You are also prompted to ask for suggestions to improve your search skills (!). In other words, Copilot is not wrong, you are. I wonder why it won’t let you point out obvious mistakes. Wouldn’t that make the future experience better? And now think about how this example would work in HR: the HR Copilot tells your payroll admin that they are wrong, and then won’t learn from its mistakes…
So far, the results have been underwhelming, but I have no doubt that there is more potential than we currently see. When ChatGPT first offered access to real-time information, using Bing, things went off the rails very quickly, and they had to take a step back. I am sure other vendors immediately halted their releases to ensure they would not have to backtrack: having to pull a feature is never a good sign of reliability, and it erodes confidence in your products. Today, one vendor after another is releasing an improved version using real-time info.
But the moral of the story is: HR Tech vendors are releasing Copilots, and soon you will be able to offer them in the workplace. These Copilots have been trained on generic information and can help employees get answers they would normally find in FAQs or policies. Is it quicker? Absolutely! I’ve seen some Copilots that delivered HR admin support, and they surely are time savers. Yesterday at Cornerstone’s ConnectLive, I saw a great demo where a Copilot helped a learning admin create a course: from selecting the audience to defining an attractive title to writing the description and gathering the underlying course elements. I haven’t seen any Copilots (yet) that focus on providing company-specific HR answers. First, it takes time to include corporate information in the knowledge base; second, you need a substantial body of correct data. Few companies have that. And maybe HR isn’t the first function you want to try this with, considering all the privacy issues that you must deal with.
When it comes to HR Copilots, I am leaving you with the following question: how many of the employee questions that you handle are generic? And how many are specific to an employee’s situation? I bet the majority falls into the second category, because you already offer self-service for the first. When it answers generic questions, a Copilot is simply a modern channel for employees. But if the questions your HR team receives are specific and circumstantial, a Copilot won’t be a quick fix. It might be more beneficial to focus on AI functionality that identifies and corrects mistakes when they happen, so you eliminate the reasons for these questions.
I don’t think we’ll see company-specific Copilots for HR any time soon. Maybe some large corporations are experimenting with it. But I am eager to be proven wrong, so ping me if you are getting ready to release your company-specific HR Copilot!
Replacing the median human
And speaking of humans, do you know what a median human is? Well, let me tell you: it’s you and me. It’s a term tech billionaires use when they talk about AGI, artificial general intelligence. The goal, according to Sam Altman: “For me, AGI…is the equivalent of a median human that you could hire as a co-worker.” You can read more about the concept in this article.
I always wonder if they realize that only 20% of the workforce is desk-based. The other 80% is deskless. Their work has already been changed by robots, but it’s also characterized by constant exceptions that need unique solutions: just think of a plumber working on your house and then moving to mine, where everything is laid out differently. Deskless workers are constantly solving small problems that require human creativity to get work done.
But not all deskless workers benefit evenly from using AI. In my May newsletter, I shared research about the mixed results of an experiment with customer service representatives, where AI did not improve the results of the most skilled workers. Harvard Business School has completed a similar exercise with consultants: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality, an analysis of the performance of consultants when using AI. The study provides two key insights:
In the first experiment, consultants with access to GPT-4 completed 12.2% more tasks and were 25.1% faster. Their work quality improved by 40%. Here, as in the earlier study, lower-performing consultants benefited the most from AI augmentation: they increased their performance by 43% (compared to a still respectable 17% for higher performers).
In the second experiment, consultants analyzed business data to offer strategic recommendations. AI reduced their performance: Consultants who used AI were 19% less likely to produce correct solutions compared to those not using AI.
We are nowhere near AGI, but the tech bros expect us to get close by the end of this decade. The question is: what exactly is intelligence? Is artificial intelligence the same as human intelligence? Can we assign capabilities like agency, comprehension, cognition, or reasoning to these computational models?
Humans have qualities that are irreplaceable: as my example at the start of this newsletter shows, critical thinking is crucial for identifying AI mistakes. And unfortunately, AI is not very gracious about accepting feedback when it’s wrong. In addition, cultural awareness is essential for identifying biased assumptions.
Prof. Iris van Rooij often writes about the differences between AI and AGI. If you are interested in a bit of fresh air on the topic, as well as the occasional myth-busting, follow her. You can find a short, insightful article on her blog that debunks the inevitability of AGI, based on recent research. The authors explain that the idea that human cognition is a form of computation is a useful conceptual tool for cognitive science. Modern AI, however, has taken the idea that human thinking could be replicated through computation as a signal that achieving human-like thinking in actual computer systems is within reach, and its proponents believe this will happen soon. The authors go on to explain that when the idea is used like this, it erodes our theoretical understanding of cognition rather than advancing and enhancing it.
Naomi S. Baron defends the power of human writing in the age of AI, and I couldn’t agree more! I am happy to tell you all about it in my new keynote about AI in HR: Let’s keep work uniquely human!
What else is new in the Future of Work?
Since it’s been a while since I last wrote this newsletter, here are some interesting topics from this summer:
Venture Capital in Silicon Valley is Broken
I owe you an update on the third quarter HR Tech investments, and I will share that shortly. In the meantime, enjoy this article from Jason Corsello: “Silicon Valley based startups founded over the last decade, are grossly overvalued, overfunded and, most surprisingly, underperform their peers across the world.” Jason backs that statement up with data. Location should not drive such a large difference: stellar functionality and user experience should. European companies outperform their Silicon Valley colleagues where it counts: in ARR and ARR growth. Jason makes a convincing argument: what do you think?
Generative AI exists because of the transformer
If you want to learn how generative AI works, this article is the best I have seen. It starts by explaining what a Large Language Model is, and it does so using the sentence “We go to work by train”. What could be more appropriate for HR? It includes some great visuals that show you what goes on under the hood. If you are looking for a primer on AI, start here. Once you’ve read it, you will immediately understand how these models produce reasonable text so quickly, and why they are often wrong. You will also learn why it’s computational, rather than an imitation of human thinking. Highly recommended.
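To make the “computational, not thinking” point concrete, here is a deliberately tiny sketch of my own (a simple bigram word counter in Python, nothing like a real transformer, and not taken from the article). It shows the core move: continue “We go to work by…” with whichever word has been statistically most likely in the training text, which is also why fluent output can still be factually wrong.

```python
# Toy next-word prediction: count which word follows each word in a tiny,
# made-up corpus, then "predict" by picking the most probable continuation.
# Real LLMs do this with billions of parameters and sub-word tokens, but the
# principle (probability, not understanding) is the same.
from collections import Counter, defaultdict

corpus = [
    "we go to work by train",
    "we go to work by bus",
    "we go to work by train",
    "we go home by car",
]

# For every word, count the words that directly follow it (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return candidate next words with their estimated probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

print(predict_next("by"))  # e.g. {'train': 0.5, 'bus': 0.25, 'car': 0.25}
```

The model will happily tell you we go to work by train even if you cycle: it only knows what was frequent in its data, not what is true.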
Should Remote Employees Receive Global Pay?
My corner of LinkedIn was abuzz with posts about the pros and cons of location-based pay all summer long. It started with a post from Nick Bloom, followed by this opinion piece in Time, “Where a Remote Employee Lives Shouldn't Affect Their Pay”. LinkedIn published a summary article, “Solving Remote Pay Structures”. People have lots of opinions on the topic, but I also read many misconceptions: enough to write an article that separates fact from fiction and explains why location matters when it comes to hiring people. One thing stands out: if you have remote employees (not independent contractors), you can’t just let them work from anywhere.
Podcasts
And if you’d rather listen to a podcast, here are two recommendations:
I enjoyed this WorkLab episode, The most plausible outcomes for AI and Work, where Amy Webb explains why Futurists are actually Strategists. She also describes various ways in which AI is changing the world of work, and listening will make you smarter.
Chad Sowash & Joel Cheesman invited Maria Colacurcio and me for a conversation about Pay Transparency on their Chad & Cheese podcast. We had a blast and I hope that comes across. We also cleared up some misunderstandings about the EU Directive!
That’s it!
Have a great day, Anita