I’ve been on the road and super busy over the last few weeks and haven’t written anything in a minute. So, hold on tight. I have some stuff floating around in my brain that needs to get out!
It’s conference season, and I’m hitting a bunch of them. So far, the one thing everyone wants to talk about is ChatGPT and Generative AI. You guys know I like to educate you on this stuff, so here’s the quick version: people use “GPT” and “Generative AI” interchangeably, but GPT is one specific generative AI, the large language model from OpenAI, which is heavily backed by Microsoft at this point. Google has Bard as its generative AI, and while the two are built to do similar things, Google is currently well behind Microsoft. We all expect them to catch up.
One of the biggest conversations around generative AI is ethics. Folks are concerned about bias in AI, about the elimination of jobs that humans currently do, and about the spread of false news and ideas that seem very real.
“Tim, AI has bias! I read an article in the New York Times! Didn’t you see the lawsuit against HireVue?” It’s something I hear in the HR community a lot. Most folks, who don’t really understand AI, love to believe AI is biased! It’s kind of funny when you actually explain the reality to them. Currently, no one is using Generative AI (ChatGPT) in their HR tech stack. Many are using “Conversational AI” in their stack, which is like old-school chatbots went to college and got smarter. Conversational AI is AI with guardrails. All the responses are built on purpose, so you know exactly what the bot might answer. This type of AI is incapable of being racist.
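To make the guardrails idea concrete, here’s a minimal sketch of how that kind of bot works under the hood. Every response is written by a human ahead of time, so the bot literally cannot say anything its builders didn’t approve. The intent names and canned answers below are hypothetical examples, not any vendor’s actual product.

```python
# Every possible answer is pre-written by a human. The bot only picks
# from this list; it never generates free-form text.
CANNED_RESPONSES = {
    "pto_balance": "You can check your PTO balance in the HR portal under Time Off.",
    "benefits_enrollment": "Open enrollment runs each fall; see the Benefits page for details.",
}

# Simple keyword matching maps a question to a known intent.
KEYWORDS = {
    "pto_balance": {"pto", "vacation", "time off"},
    "benefits_enrollment": {"benefits", "enrollment", "insurance"},
}

def answer(question: str) -> str:
    """Match the question to a known intent; otherwise hand off to a human."""
    text = question.lower()
    for intent, keys in KEYWORDS.items():
        if any(k in text for k in keys):
            return CANNED_RESPONSES[intent]
    # The guardrail: no free-form generation, so no unapproved answer
    # can ever escape. Unknown questions go to a person.
    return "I don't have an answer for that. Let me connect you with HR."

print(answer("How much vacation do I have left?"))
```

That fallback branch is the whole point: there is no path where the bot improvises, which is why this style of AI carries so little legal risk.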
So, where does the biased/racist talk come from?
Early machine learning models. Machine learning has been the big buzzword in HR tech over the last 5-7 years or so. Some of the first tech companies to build ML into their tech had some backfires. For the record, the HireVue thing was one of these issues: while testing whether facial recognition could surface attributes that might help a company select the best talent, it turned out the machine learning model had a really hard time deciphering darker faces compared to lighter faces. It was quickly found out, shut down, and never used again. But people still pull that one example from five years ago as the go-to example of AI being biased.
The reality is machine learning learns human preferences. So, when you say your AI is racist, all you’re saying is that you, yourself, are racist. It learned your behavior and mirrored it back to you! That’s the funny part! Think of AI as a baby. A baby that can learn at lightning-fast speed. But if you teach your baby bad things, it’s going to grow up and do bad things! Unless the folks who build the AI actually build in guardrails and audits to constantly check that the AI is learning and producing the “right” things. Which is currently the situation. In fact, to HireVue’s credit, from their early learning, they are leading the industry in building ethical AI policies and third-party audits to ensure their AI is as bias-free as possible.
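The audits mentioned above aren’t magic, either. One common check in hiring is the EEOC’s “four-fifths rule,” which compares selection rates across demographic groups. Here’s a minimal sketch of that kind of audit; the numbers are made up for illustration, and a real audit would be far more involved.

```python
# A sketch of an adverse-impact audit using the four-fifths rule:
# flag the process if any group's selection rate falls below 80% of
# the highest group's rate. All figures below are illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> tuple[bool, float]:
    """Return (passes, worst impact ratio) under the four-fifths rule."""
    highest = max(rates.values())
    worst_ratio = min(r / highest for r in rates.values())
    return worst_ratio >= 0.8, worst_ratio

rates = {
    "group_a": selection_rate(50, 100),  # 50% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}
passes, ratio = four_fifths_check(rates)
print(f"impact ratio = {ratio:.2f}, passes four-fifths rule: {passes}")
# 0.30 / 0.50 = 0.60, which fails the 0.80 threshold
```

You can run this check on an algorithm’s output every single day. Try running it on Jim’s gut feelings.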
Here’s the reality in 2023.
I’m way less concerned with my AI being biased than I am with Jim the hiring manager making the final selection on each hire! I can actually audit and control my AI’s bias. I cannot do that with Jim! Goddamn you, Jim!
I was actually on a panel recently with an AI professor from Stanford who said, regarding bias in AI, that in reality, every time you add a human into your process, you add bias. But when you add AI into your process, you reduce bias by comparison. That made my head turn! Because we love to think the opposite. For some reason, we have a lot of pundits in our industry trying to scare people away from AI in HR. I’m not saying anyone should just blindly go forward with AI in HR. Go into it with eyes wide open, but don’t go into it with fear of what AI was five years ago.
I’m fascinated by where and when we’ll see massive usage of generative AI in HR. It’s going to take some time because most HR leaders and legal teams aren’t really excited about using a tool where they have no idea what the response to a candidate or an employee might be! But I do think we’ll continue to see massive adoption of conversational AI within our tech stacks because there is much less legal risk and, as I mentioned, very little risk of bias.
Do we still have ethical issues in AI? Yes. Generative AI is very new, and there is so much we don’t know yet. The use cases are massive, and we’ll begin to see, almost immediately, tech companies testing this in certain parts of your processes to help automate tactical things. The one major ethical issue we’ll have is when we start asking models like GPT questions and getting answers without really knowing how those answers were gathered or who influenced them behind the scenes. Because if someone behind the scenes at OpenAI manipulated the AI to answer a question one way over another, we now have to question every answer and who’s pulling the strings behind the curtain.
It’s exciting to think of the possibilities, but we still have a ton to learn. More to come. I’ve got this AI bug now, and I think it’s going to dominate our space for a while!