AI Bias Could Ruin Your Marketing and Outreach

AI often operates by looking at what has been done before, so the results you get may simply repeat past bad practices.

Artificial intelligence can be a good tool. However, it can also turn on you and bite when you aren't looking, and that includes in commercial real estate (CRE).

But step outside real estate for a moment to see what can happen. Some time back, Amazon.com decided it wanted a more inclusive technical staff, so it spent a couple of years (and probably significant money) building an AI resume analyzer. The hope was that the company could run resumes through the system and get a better, faster, and less biased view of whom to interview.

Unfortunately, the system, based on machine learning, fell apart. The software kept choosing resumes from men, not women. The reason: the system learned what to do from the company's previous hiring choices, and those choices had historically favored men. The software learned to do exactly what had been done in the past.

A new study from researchers at the University of Michigan explores the broader implications of this kind of bias in AI. Here is the abstract:

“Despite the impressive performance of current AI models reported across various tasks, performance reports often do not include evaluations of how these models perform on the specific groups that will be impacted by these technologies. Among the minority groups underrepresented in AI, data from low-income households are often overlooked in data collection and model evaluation. We evaluate the performance of a state-of-the-art vision-language model (CLIP) on a geo-diverse dataset containing household images associated with different income values (Dollar Street) and show that performance inequality exists among households of different income levels. Our results indicate that performance for the poorer groups is consistently lower than the wealthier groups across various topics and countries. We highlight insights that can help mitigate these issues and propose actionable steps for economic-level inclusive AI development.”

Translated: current AI software trains on existing data, typically from publicly available sources. But that data can carry inherent biases, because the human beings who create and select the content have biases of their own.

“We found that most of the images from higher income households always had higher CLIP scores compared to images from lower income households,” Joan Nwatu, a doctoral student in computer science and engineering at the University of Michigan and one of the study’s authors, told Futurity, a publication that reports on academic research. “If a software was using CLIP to screen images, it could exclude images from a lower-income or minority group instead of truly mislabeled images. It could sweep away all the diversity that a database curator worked hard to include.”
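To make that screening scenario concrete, here is a minimal sketch of how a CLIP-based image filter might work, using the openai/clip-vit-base-patch32 checkpoint from Hugging Face. The label text, file names, and score threshold below are illustrative assumptions, not details taken from the study; the point is only to show how a single cutoff score can quietly remove whole groups of images.

```python
# Minimal sketch of CLIP-based image screening: score each candidate image
# against a text label and keep only those above a threshold.
# Model choice, labels, file names, and the cutoff are illustrative assumptions.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, label: str) -> float:
    """Return the cosine similarity between an image and a text label."""
    image = Image.open(image_path)
    inputs = processor(text=[label], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # image_embeds and text_embeds come back L2-normalized, so the dot
    # product is the cosine similarity.
    return float((outputs.image_embeds @ outputs.text_embeds.T).item())

# Hypothetical image pool for a "modern kitchen" campaign.
candidates = ["kitchen_high_income.jpg", "kitchen_low_income.jpg"]
scores = {path: clip_score(path, "a photo of a kitchen") for path in candidates}

THRESHOLD = 0.25  # arbitrary cutoff, chosen only for illustration
kept = [path for path, score in scores.items() if score >= THRESHOLD]
# If CLIP systematically scores lower-income households' kitchens lower,
# a filter like this removes them from the marketing image pool without
# anyone ever writing a rule about income or demographics.
print(kept)
```

The quiet danger is the threshold: nothing in the code mentions income or any other group, yet if the model consistently scores one group's photos lower, the filter excludes that group anyway, which is exactly the failure mode the researchers describe.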

The study looked at bias by income, but many other types of bias are possible. If you rely entirely on what generative AI creates for a marketing campaign or ad, you could omit, or even offend, key demographics or target markets for the property you are promoting.