From bias to balance and beyond: How to navigate risks in Generative AI

Artificial intelligence is fraught with bias and untruths, but there are ways to get more reliable responses.

This article is the second in a series exploring artificial intelligence (AI) in human resources. The first article looked at how to keep the “human” in human resources. We invite you to share your experiences with AI in HR for possible inclusion in future installments in the series.

AI regulation is evolving

Regulation, or the lack of it, is itself a risk factor in artificial intelligence. Because AI regulation is still emerging, the landscape is fragmented. The European Union is leading the way, and the directives that come from its governing bodies will likely serve as the basis for a global standard. Until then, check for any local regulations, including your own organization's policies. While our comments are relevant in any market, our perspective is US-centric.

Big risks: Faulty facts and bias

Generative AI makes stuff up. Never assume AI-generated information is accurate. Most generative AI tools cite references, so review all sources. Often, the cited source won't be the original source; follow the data trail to its origin. When that's not possible, evaluate the quality of the cited source. For example, was it The New York Times or an unknown blogger? In the HR world, we believe sources cited by the Society for Human Resource Management (SHRM) carry more weight than a product-sponsored survey.
