To pinpoint and address the root causes of AI bias, Arize is introducing Bias Tracing

Bias Tracing, a new tool for identifying the root cause of bias in machine learning pipelines, has been introduced by Arize, a maker of artificial intelligence (AI) observability tools. The technique can help teams prioritize, adjust, and resolve issues in either the data or the algorithm itself.

Observability and distributed tracing have long helped businesses improve application performance, troubleshoot bugs, and identify security concerns. Arize is part of a small group of businesses adapting these techniques to enhance AI monitoring.

Data logs for monitoring complex infrastructure are at an all-time high. Tracing reassembles these logs into a graph representing the data flow through a complex application. Similar methods are used to create a map of AI processing flows, including data sources, feature engineering, training, and deployment, which can then be used to identify and rectify the root cause of a problem.

"This type of analysis is extremely effective in industries such as healthcare or finance due to the real world implications in terms of health outcomes or lending decisions," said Aparna Dhinakaran, the cofounder and CEO of Arize.

Getting to the root of AI bias

Arize's AI observability platform already provides tools for monitoring AI performance and characterizing model drift. The new Bias Tracing capability can automatically reveal which model inputs and slices contribute most to the bias encountered in production and identify its root cause.

Dhinakaran said the Bias Tracing launch builds on Judea Pearl's groundbreaking research on causality, which is at the forefront of explainable AI. Pearl's causal AI work focuses on allowing machines to learn cause and effect rather than just statistical correlations. For example, a machine requires the ability to reason about whether a protected attribute is the cause of an unfavorable outcome.

Going deeper

Recall parity measures the model's sensitivity, that is, its ability to correctly identify true positives, for a specific group compared to another.

For example, a regional healthcare provider may want to ensure that its healthcare predictions hold up equally well for Latinx patients (the "sensitive" group) and Caucasian patients (the base group). If recall parity falls outside the 0.8 to 1.25 thresholds (known as the four-fifths rule), it may indicate that Latinx patients are not receiving the same level of required follow-up care as Caucasians, leading to different levels of future hospitalization and health outcomes.
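To make the metric concrete, here is a minimal sketch of a recall parity check with a four-fifths-rule test. The function name, the toy data, and the group labels are illustrative assumptions for this article, not part of Arize's product or API.

```python
# Hypothetical sketch: recall parity between a sensitive and a base group,
# flagged against the 0.8-1.25 band (the four-fifths rule). Not Arize's API.
import numpy as np
from sklearn.metrics import recall_score

def recall_parity(y_true, y_pred, group, sensitive, base):
    """Ratio of the sensitive group's recall to the base group's recall."""
    sens_mask = group == sensitive
    base_mask = group == base
    sens_recall = recall_score(y_true[sens_mask], y_pred[sens_mask])
    base_recall = recall_score(y_true[base_mask], y_pred[base_mask])
    return sens_recall / base_recall

# Toy example: the model misses one positive case in the sensitive group.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["latinx", "latinx", "latinx", "latinx",
                   "caucasian", "caucasian", "caucasian", "caucasian"])

parity = recall_parity(y_true, y_pred, group, "latinx", "caucasian")
if not 0.8 <= parity <= 1.25:
    print(f"Recall parity {parity:.2f} falls outside the 0.8-1.25 band")
```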

"It's especially important to distribute healthcare care in a representative manner," said Dhinakaran.

Arize helps the company identify that there is a problem in the first place, then drill a level deeper to see whether the disparate impact is most pronounced for certain groups, such as Latinx women aged 50 or older. ML teams can then quickly resolve the issue by adapting or retraining the model accordingly, as in the sketch below.
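The drill-down step can be approximated with a per-cohort version of the same metric. This sketch assumes a pandas DataFrame with "label", "prediction", and "group" columns plus demographic slice columns such as "sex" and "age_band"; all of these names are invented for illustration and are not Arize's schema.

```python
# Hypothetical sketch: rank cohorts by recall parity to find where the
# disparate impact is most pronounced. Column names are assumptions.
import pandas as pd

def slice_recall(df: pd.DataFrame) -> float:
    """Recall (true-positive rate) within one slice of the data."""
    positives = df[df["label"] == 1]
    return (positives["prediction"] == 1).mean() if len(positives) else float("nan")

def parity_by_slice(df: pd.DataFrame, slice_cols: list[str]) -> pd.Series:
    """Recall parity (sensitive / base) for each cohort, e.g. sex x age_band."""
    sens = df[df["group"] == "latinx"].groupby(slice_cols).apply(slice_recall)
    base = df[df["group"] == "caucasian"].groupby(slice_cols).apply(slice_recall)
    return (sens / base).sort_values()

# The lowest ratios point at the cohorts (say, women aged 50 or older) where
# retraining or reweighting the model should be targeted first:
# worst_cohorts = parity_by_slice(df, ["sex", "age_band"]).head()
```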

Arize Bias Tracing is currently designed to work with classification models and will expand to other applications over time.

Arize is not alone in this space: several other firms are working on AI observability, and still others are building tools for improving AI explainability.
