Data Explainability: The Counterpart to Model Explainability

Today, AI and ML are everywhere. 

Whether it’s everyone playing with ChatGPT (the fastest-adopted app in history) or a recent proposal to add a fourth color to traffic lights to make the transition to self-driving cars safer, AI has thoroughly saturated our lives. While AI may seem more accessible than ever, the complexity of AI models has increased exponentially. 

AI models fall into two main categories: black box and white box models. A black box model reaches a decision without explaining how it got there, while a white box model delivers a result along with the rules that produced it. 

As we continue to move towards a world of deep learning methods, most practitioners are gravitating towards black box models. 

The issue with that approach? Black box models (like those built for computer vision) cannot be directly interpreted. This is often referred to as the black box problem. While retraining black box models can give users a jumpstart, interpreting the model and understanding its results becomes harder as models increase in complexity.

One tactic to address the black box conundrum is to craft a bespoke, inherently explainable model. 

But, this is not the direction the world is moving. 

Where Model Explainability Ends, Data Explainability Begins

Explainability is critical because it improves model transparency, accuracy, and fairness, and it can also improve confidence in the AI. While model explainability is the conventional approach, a new type is now needed as well: data explainability.

Model explainability means understanding the algorithm in order to understand the end result. For example, if a model used in an oncology unit is designed to test whether a growth is cancerous, a health care provider should understand the variables that produce the end result. While this sounds great in theory, model explainability doesn’t quite address the black box problem. 

As models become ever more complex, most practitioners are unable to pinpoint the transformations and interpret the calculations in a model’s inner layers. They rely largely on what they can control (the training datasets) and what they can observe (the results and prediction metrics).

Let’s use the example of a data scientist building a model to detect photos of coffee mugs among thousands of photographs, but the model also begins to detect images of drinking glasses and beer mugs. While drinking glasses and beer mugs may bear some resemblance to coffee mugs, there are distinct differences, such as typical materials, color, opacity, and structural proportions.

For the model to detect coffee mugs with higher reliability, the data scientist must have the answers to questions like:

  • What images did the model pick up instead of coffee mugs? 
  • Did the model fail because I didn’t provide it with enough or the right examples of coffee mugs?
  • Is that model even good enough for what I was trying to accomplish?
  • Do I need to challenge my view of the model?
  • What can I conclusively determine is causing the model to fail? 
  • Should I generate new assumptions about the model?
  • Did I just choose the wrong model for the job to begin with?

As you can see, delivering this kind of insight and understanding through model explainability alone, every single time there’s an issue, is highly unlikely.

Data explainability means understanding the data used to train a model, and the data fed into it, in order to understand how the model’s end result is reached. As ML algorithms become ever more complex yet more widely used across professions and industries, data explainability will serve as the key to quickly unlocking and solving common problems, like our coffee mug example.
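
To make this concrete, here is a minimal sketch, in Python, of how the first two questions above might be approached from the data side. The label lists, class names, and counts below are hypothetical, invented purely for illustration; in practice they would come from your labeled test set, your model’s predictions, and your training data.

```python
from collections import Counter

# Hypothetical labels for illustration only -- in practice these come from
# your labeled test set and your model's predictions on it.
true_labels = ["coffee_mug", "coffee_mug", "drinking_glass", "beer_mug",
               "coffee_mug", "drinking_glass", "beer_mug", "coffee_mug"]
predicted   = ["coffee_mug", "coffee_mug", "coffee_mug", "coffee_mug",
               "drinking_glass", "coffee_mug", "beer_mug", "coffee_mug"]

# 1. What images did the model pick up *instead of* coffee mugs?
false_positives = Counter(
    t for t, p in zip(true_labels, predicted)
    if p == "coffee_mug" and t != "coffee_mug"
)
print("Classes mistaken for coffee mugs:", false_positives)

# 2. Did I provide enough (and the right) examples of coffee mugs?
# Hypothetical training-set labels; count how each class is represented.
training_labels = ["coffee_mug"] * 120 + ["drinking_glass"] * 900 + ["beer_mug"] * 850
print("Training-set class counts:", Counter(training_labels))
```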

Increasing Fairness and Transparency in ML with Data Explainability

Fairness within ML models is a hot topic, and applying data explainability makes it even more pressing.

Why the buzz? Bias in AI can create prejudiced results against one group, and one of the most well-documented forms is racial bias. Let’s look at an example. 

Say a large, well-known consumer platform is hiring for a new marketing director position. To deal with the mass of resumes received daily, the HR department deploys an AI/ML model to streamline the application and recruiting process by selecting key characteristics of qualified applicants. 

To perform this task and discern and bucket each resume, the model makes sense of key dominant characteristics. Unfortunately, this also means the model could implicitly pick up on racial biases among the candidates as well. How exactly would this happen? If the applicant pool includes a smaller percentage of one race, the machine will conclude that the organization prefers members of a different race, the one dominant in the dataset.
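
As a rough sketch of how data explainability could surface such an imbalance (the group labels, records, and screening decisions below are hypothetical and invented for illustration), one could compare each group’s representation in the data with the rate at which the model selects that group:

```python
from collections import Counter, defaultdict

# Hypothetical applicant records -- group labels and the model's screening
# decision are invented for illustration only.
applicants = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": False},
    {"group": "B", "selected": False}, {"group": "B", "selected": True},
]

# How is each group represented in the data the model learned from?
representation = Counter(a["group"] for a in applicants)

# What fraction of each group does the model select?
selected = defaultdict(int)
for a in applicants:
    selected[a["group"]] += a["selected"]
selection_rate = {g: selected[g] / representation[g] for g in representation}

print("Group representation:", representation)
print("Selection rate per group:", selection_rate)
```

A large gap between a group’s representation and its selection rate, as in this hiring example, is exactly the kind of signal that needs to be caught before deployment.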

If a model fails, even unintentionally, the company must address the failure. Essentially, whoever deployed the model must be able to defend its use.

In the hiring and racial bias case, the defender would have to explain to an angry public and/or applicant pool which datasets were used to train the model, the model’s initially successful results based on that training, its failure to pick up on a corner case, and how that led to an unintentional data imbalance that eventually created a racially biased filtering process.

For most people, this kind of nitty-gritty detail about AI, imbalanced datasets, model training, and eventual failure via data oversight is not going to be received well or even understood. But what will be understood and stick around from this story? Company XYZ practices racial bias in hiring. 

The moral of this all-too-common example is that unintended mistakes from a very smart model do happen, can negatively impact humans, and can have dire consequences. 

Where Data Explainability Takes Us

Rather than translating results via an understanding of a complex machine learning model, data explainability uses the data itself to explain predictions and failures.

Data explainability is then a combination of seeing the test data and understanding what a model will pick up from that data. This includes understanding underrepresented data samples and overrepresented samples (like in the hiring example), and making a model’s detections transparent, in order to accurately understand predictions and mispredictions.
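
One way to make that comparison concrete, sketched here with invented class counts, error rates, and arbitrary thresholds, is to line up each class’s share of the training data against its error rate and flag the classes that are both underrepresented and error-prone:

```python
# Hypothetical per-class training counts and test-set error rates,
# invented purely to illustrate the comparison.
training_counts = {"coffee_mug": 120, "drinking_glass": 900, "beer_mug": 850}
error_rates     = {"coffee_mug": 0.31, "drinking_glass": 0.05, "beer_mug": 0.06}

total = sum(training_counts.values())
for cls, count in training_counts.items():
    share = count / total
    flag = ""
    if share < 0.15 and error_rates[cls] > 0.2:   # thresholds are arbitrary
        flag = "  <- underrepresented and error-prone"
    print(f"{cls:15s} share={share:.2f} error={error_rates[cls]:.2f}{flag}")
```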

This comprehension of data explainability will not only improve model accuracy and fairness, but it will also help model development move faster.

As we continue to rely on and incorporate complex AI and ML programs into our daily lives, solving the black box problem becomes critical, particularly for failures and mispredictions. 

While model explainability will always have its place, it requires another layer. We need data explainability, because understanding what a model is seeing and reading will never be fully covered by classical model explainability.
