In June 2023, Zillow and other major U.S.-based portals were forced to remove a ChatGPT-powered plugin amid fears that its responses risked breaching fair housing legislation; specifically, that the GPT would share neighbourhood demographic data or other data points that could unfairly influence a buyer's decision making.
Almost 12 months on, Zillow has added a new language screening tool that identifies potentially discriminatory behaviour by artificial intelligence (which doesn't know any better).
Zillow's new "Fair Housing Classifier" establishes guardrails to promote responsible and unbiased behaviour in real estate conversations powered by large language model (LLM) technology.
The technology is open source, meaning its source code is freely available for other parties to integrate into and modify for their own platforms.
A news release circulated by Zillow this week says:
The Fair Housing Classifier acts as a protective measure, to encourage more equitable conversations with AI technology. It detects questions that could lead to discriminatory responses about legally protected groups in real estate experiences, such as search or chatbots. The classifier identifies instances of noncompliance in the input or the output, leaving the decision of how to intervene in the hands of system developers.
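The release doesn't spell out the classifier's interface, but the pattern it describes is easy to sketch. In the hypothetical Python below, `FairHousingClassifier`, its `is_compliant` method and the keyword heuristic are all illustrative placeholders standing in for the real open-source model; what matters is the flow the release outlines: screen the user's input, screen the model's output, and let the host system decide how to intervene.

```python
# Hypothetical integration sketch. The class name, method name and
# keyword heuristic below are placeholders, not Zillow's actual API.

PROTECTED_TOPICS = ("race", "religion", "national origin", "disability")


class FairHousingClassifier:
    """Stand-in for the real model: flags text that could steer a
    real estate conversation toward legally protected characteristics."""

    def is_compliant(self, text: str) -> bool:
        lowered = text.lower()
        return not any(topic in lowered for topic in PROTECTED_TOPICS)


def answer(user_prompt: str, llm_response: str) -> str:
    classifier = FairHousingClassifier()

    # Screen the input: a question can be non-compliant before the
    # LLM ever responds (e.g. asking about a neighbourhood's demographics).
    if not classifier.is_compliant(user_prompt):
        return "I can't help with that; try asking about the home itself."

    # Screen the output: a compliant question can still draw a
    # non-compliant answer out of the model.
    if not classifier.is_compliant(llm_response):
        return "I can't help with that; try asking about the home itself."

    # The classifier only flags; how to intervene (block, rephrase,
    # escalate) is left to the system developer, per the release.
    return llm_response
```

Notably, the classifier itself only detects; whether a flagged exchange is blocked, rephrased or escalated is a design decision the release explicitly leaves to the developers who integrate it.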
Also quoted in the release is Michael Akinwumi, Ph.D., chief responsible AI officer at the National Fair Housing Alliance, who said:
"Zillow's open-source approach sets an admirable precedent for responsible innovation. We encourage other organizations and coalition groups to actively participate, test, and enhance the model and share their findings with the public.
"In today's rapidly evolving AI landscape, promoting safe, secure and trustworthy AI practices in housing and lending is becoming increasingly important to protect consumers against algorithmic harms."