
IBM Launches New Artificial Intelligence Technology to Detect Bias


Artificial intelligence is redefining the way we work, travel, learn, communicate, shop, and maintain privacy. But it is still an innovation that requires a lot of refinement. For all the good it does for businesses and individuals, AI has a pitfall that is inherently human: bias. In response, IBM launched a new artificial intelligence technology, a “Trust and Transparency” service built around Watson, which aims to provide better insight into AI decision-making.

How AI Systems Acquire Biases

AI systems don’t have biases per se; they learn and process information from training data supplied by humans. And people have their biases, as much as they try to be fair and unprejudiced. Humans tend to harbor biases as a way to protect themselves, their values, and their interests. More than 180 different types of bias have been identified, some that we are aware of and some embedded in our unconscious minds. These biases, however innocuous they may seem, bleed into how people design the algorithms and assemble the data that AI systems use to make decisions. Ultimately, these systems become inaccurate and discriminatory. In some cases, they can threaten the livelihood, opportunities, safety, and freedom of millions of people.

Has a solution been found to the problems caused by bias in AI algorithms?

Bias in Different Applications

Probably the most recognizable example of bias in AI is facial recognition in photos and videos. When we upload photos to Facebook, it knows whom to tag based on the faces it recognizes. On a much larger scale, facial recognition is crucial for security and safety. When the training data used to detect faces is insufficient or skewed toward a certain racial group, the AI system becomes unreliable.

In the realm of criminal justice, AI helps judges make decisions on pre-trial conditions as well as on sentencing. If the algorithm systematically flags people of color as having a higher risk of reoffending and white defendants as having a lower risk, it puts the whole system in jeopardy. Judges relying on such a system could hand down sentences that are unjustly harsh or unduly lenient.

In the corporate world and in school admissions, AI programs assist professionals by recommending people to interview. If the training data for the program consists mostly of successful white males, it will most likely recommend only white males for consideration. This eliminates opportunities for candidates of other genders and racial backgrounds, no matter how accomplished they are.

Banks use AI to guide loan approvals based on parameters like age, country of origin, and account maturity. If the training dataset covers only a certain age or income bracket, everyone outside it automatically and effectively becomes ineligible, as the sketch below illustrates.
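To make that mechanism concrete, here is a minimal, hedged sketch in Python, using entirely synthetic data and hypothetical features rather than any real bank’s model. It shows how a classifier trained on a lending history that never approved applicants under 25 simply reproduces that exclusion, whatever the applicant’s income.

```python
# A minimal, illustrative sketch (not IBM's code or data): synthetic loan history
# in which applicants under 25 were always denied, regardless of income. A model
# fitted to that history reproduces the exclusion automatically.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

n = 5000
age = rng.uniform(18, 70, n)
income = rng.normal(60_000, 15_000, n)

# Historical approvals: income-driven for applicants 25 and over, blanket denial under 25.
approved = ((income > 50_000) & (age >= 25)).astype(int)

X = np.column_stack([age, income])
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, approved)

# Two applicants with identical, healthy incomes; only age differs.
applicants = np.array([[22, 80_000], [35, 80_000]])
probs = model.predict_proba(applicants)[:, 1]
print(f"approval probability, age 22: {probs[0]:.2f}")  # ~0.00
print(f"approval probability, age 35: {probs[1]:.2f}")  # ~1.00
# The model has simply memorized the historical cutoff: the younger applicant is
# rejected despite an identical income, because the training data never contained
# an approved applicant under 25.
```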

AI is employed to serve countless people across different industries, and it has to function accurately, without any hint of human prejudice. Its decisions affect the lives of so many people that feeding these systems unreliable or outdated training data would compromise everything built on them.

How IBM is uncovering bias in artificial intelligence (quote from Beth Smith)

IBM’s New Artificial Intelligence Technology

IBM announced its new artificial intelligence technology to combat biases embedded in AI systems. The cloud service aims to open the black box of AI models and reveal biased tendencies in their decision-making. The new Trust and Transparency capabilities are built on the IBM Cloud and work with popular AI and machine learning frameworks such as Apache Spark MLlib, AWS SageMaker, Microsoft Azure ML, Google TensorFlow, and Watson itself.

The cloud service exposes the decision-making process, detects bias in AI models, and polices unfair outcomes in real time. It shows which factors contributed to a weighted decision, how confident the model is in its recommendation, and the reasons behind that confidence. It can also recommend data to add to the model’s training set to widen it and, over time, eliminate traces of bias.
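IBM has not published the internals of the service, but the kind of check it describes can be illustrated with standard fairness metrics. The sketch below computes the disparate impact ratio and the statistical parity difference over a batch of decisions; the field names and figures are hypothetical, not IBM’s proprietary method.

```python
# A hedged sketch of the kind of check a bias-monitoring service can run on a
# batch of live decisions. "group" and "approved" are hypothetical field names;
# disparate impact and statistical parity difference are standard fairness measures.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0,   1,   0 ],
})

rates = decisions.groupby("group")["approved"].mean()
privileged, unprivileged = "A", "B"

disparate_impact = rates[unprivileged] / rates[privileged]
statistical_parity_diff = rates[unprivileged] - rates[privileged]

print(f"approval rate {privileged}: {rates[privileged]:.2f}")
print(f"approval rate {unprivileged}: {rates[unprivileged]:.2f}")
print(f"disparate impact ratio:        {disparate_impact:.2f}")   # below 0.8 is a common red flag
print(f"statistical parity difference: {statistical_parity_diff:.2f}")
```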

The service also keeps track of a model’s accuracy, performance, and impartiality over time. Decisions and the processes behind them therefore remain traceable and can be corrected so the model behaves more fairly in the future.
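As a rough illustration of what such ongoing tracking could look like, the sketch below audits each batch of scored records for accuracy and for the gap in approval rates between groups, keeping every report so past decisions stay traceable. The record schema and the alert threshold are assumptions made for the example, not IBM’s implementation.

```python
# A hedged sketch of ongoing monitoring, assuming decisions arrive in scored
# batches and true outcomes become known later. All field names and thresholds
# here are illustrative.
from dataclasses import dataclass

@dataclass
class BatchReport:
    batch_id: int
    accuracy: float
    approval_gap: float  # privileged approval rate minus unprivileged approval rate

def audit_batch(batch_id, records):
    """records: list of dicts with 'group', 'predicted', 'actual' keys (hypothetical schema)."""
    correct = sum(r["predicted"] == r["actual"] for r in records)
    by_group = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r["predicted"])
    rate = {g: sum(v) / len(v) for g, v in by_group.items()}
    return BatchReport(
        batch_id=batch_id,
        accuracy=correct / len(records),
        approval_gap=rate.get("privileged", 0.0) - rate.get("unprivileged", 0.0),
    )

history = []  # keeping every report makes decisions traceable after the fact
report = audit_batch(1, [
    {"group": "privileged",   "predicted": 1, "actual": 1},
    {"group": "privileged",   "predicted": 1, "actual": 0},
    {"group": "unprivileged", "predicted": 0, "actual": 1},
    {"group": "unprivileged", "predicted": 0, "actual": 0},
])
history.append(report)
if report.approval_gap > 0.2:  # illustrative alert threshold
    print(f"batch {report.batch_id}: fairness gap {report.approval_gap:.2f} exceeds threshold")
print(f"batch {report.batch_id}: accuracy {report.accuracy:.2f}")
```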

IBM Research will also release an AI bias detection and mitigation toolkit to the open-source community. It consists of tools and educational material intended to train people and encourage collaboration against bias in artificial intelligence.
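As one example of the kind of technique open-source fairness toolkits typically include, the sketch below implements reweighing, a standard pre-processing mitigation that weights each training example so that group membership and outcome become statistically independent. It is offered purely as an illustration, not as the contents of IBM’s release.

```python
# A hedged sketch of "reweighing", a standard pre-processing bias mitigation.
# Each training example receives a weight equal to the expected probability of its
# (group, label) combination under independence divided by its observed probability.
from collections import Counter

# Hypothetical training records: (group, label) pairs.
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
n = len(data)

group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

def weight(group, label):
    expected = (group_counts[group] / n) * (label_counts[label] / n)
    observed = pair_counts[(group, label)] / n
    return expected / observed

for (g, y) in data:
    print(f"group={g} label={y} weight={weight(g, y):.2f}")
# Underrepresented (group, outcome) combinations get weights above 1, so a model
# trained with these sample weights no longer inherits the skew in the raw data.
```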

The New Frontier of Bias in Artificial Intelligence

No matter how ethical businesses believe they are, their AI systems may have already adopted flawed human perspectives. Those flawed perspectives can harden into offensive biases that ultimately impede a company’s growth. IBM’s new cloud service aims to be a cornerstone of trustworthy AI decision-making and automation. It gives businesses a significant boost of confidence in using automation and AI services in their everyday operations. The new artificial intelligence technology upholds the value of transparency in decision-making, making these systems more reliable than ever, perhaps even more reliable than human insight. Such AI may be instrumental in achieving diversity and inclusion in many corporate settings, creating healthier working environments and more reliable output.

IBM’s service embeds ethics deep into an organization’s systems, supporting strategies that benefit its future plans. Making fair decisions through this service helps harness the full power of AI and machine learning, and it fosters a more just, lawful, and principled global ecosystem.
