
Algorithmic bias in AI and machine learning: problems and solutions

Key points

1) Goldman Sachs, the investment bank that runs the Apple Card with Apple, has recently faced allegations that the card's credit-scoring algorithm discriminates against women.

2) Several senior tech executives, including Apple co-founder Steve Wozniak, have complained that their spouses received lower credit limits from the Apple Card.
Both companies have denied the claims.

3) One thing is clear: algorithms and AI will increasingly be used to make financial and credit decisions, and the legal and regulatory environment is not evolving as rapidly as the technology.

According to Goldman Sachs, which administers the card for Apple, algorithmic bias is an important issue, but the Apple Card is not an example of it.
“Goldman Sachs has not and will never make decisions based on factors like race, gender, sexual orientation, or any other legally prohibited factors when determining creditworthiness,” a Goldman spokesperson said. “We welcome a broader discussion of this topic with regulators and policymakers,” they added.

Apple co-founder Steve Wozniak was not the only tech figure to raise concerns about algorithmic bias and the Apple Card. Tech entrepreneur David Heinemeier Hansson took to Twitter, saying that his wife, despite having a higher credit score, received a credit limit from the Apple Card 20 times lower than his.

Apple and Goldman Sachs claim that using algorithms for decisions like credit scoring has generally proven less biased than human judgment. For example, the state of California recently approved a rule supporting the expanded use of hiring algorithms to counter human bias in the recruiting process.

The question raised by the Apple Card episode is this: does using AI and algorithms really remove human bias from decision-making? In fact, it is far from established that an AI built on code written by humans, and trained on data supplied by humans, will not reflect the existing biases of the human world.

Dealing with bias in AI and machine learning

Below is a closer look at why algorithms are at the core of our AI-powered future, and why we should care:

1) How is AI used in key areas of life?
2) Can AI be biased?
3) Are the people who program AI biased, or can they be?
4) Are algorithms completely private information?
5) Is there any government oversight of AI?
6) Do algorithms need to be audited, or are they simply assumed to be fine?

1) How is AI used in key areas of life?

As Wozniak and his wife experienced, AI systems are becoming more mainstream in areas that everyday people rely on.
Artificial intelligence is used not only in credit and hiring but in mortgages, insurance, and child welfare as well.

In 2016, Allegheny County introduced a risk-modeling tool called the “Allegheny Family Screening Tool,” which is used to improve child welfare call-screening decisions when allegations of child maltreatment are reported to the county's Department of Human Services.

The Allegheny system collects data on each person in a referral and integrates it into an “overall family score,” which is then used to estimate the probability of a future adverse event.

Other places, like Los Angeles, have used similar technology to improve child welfare services. It is an example of how AI systems can influence people's lives in largely positive ways, which makes it all the more important to understand how those systems can be biased.
Allegheny did receive some backlash, though one conclusion was that the tool produced “less bad bias” than human screeners.

2) Can AI be biased?

Most AI is built through machine learning: teaching a computer by feeding it thousands of examples so it learns to recognize patterns in the data on its own.
For example, you might give a system thousands of photos of cats to teach it what a cat is. From there, the system can look at a new photo and decide whether it shows a cat, based on the data it was trained on.

So what happens if the data you feed into a system is 75% golden retrievers and 25% Dalmatians?
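To make that concrete, here is a minimal sketch in Python of how a skewed training set pulls a classifier toward the majority class. The features, numbers, and use of scikit-learn are assumptions made purely for illustration, not a description of any real system.

```python
# A purely illustrative sketch: a 75%/25% imbalanced training set
# (golden retrievers vs. Dalmatians) biases the classifier's output.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two invented features per dog, e.g. "coat lightness" and "spot density".
golden = rng.normal(loc=[0.8, 0.1], scale=0.3, size=(750, 2))     # label 0
dalmatian = rng.normal(loc=[0.6, 0.9], scale=0.3, size=(250, 2))  # label 1

X = np.vstack([golden, dalmatian])
y = np.array([0] * 750 + [1] * 250)

model = LogisticRegression().fit(X, y)

# An ambiguous dog halfway between the two groups tends to be scored as
# "golden retriever" simply because that class dominates the training data.
ambiguous = np.array([[0.7, 0.5]])
print(model.predict_proba(ambiguous))
```

The point of the sketch is not the particular model, but that the system's answers mirror whatever mix of examples it was given.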

Dr. Sarah Myers West, a postdoctoral researcher at the AI Now Institute, says these systems are built to mirror the data they are given, and that data can itself be built on bias.

“These AI systems are trained on data that reflect our broader social order,” West said. “As a result, artificial intelligence tends to reflect and amplify past forms of discrimination and inequality,” she added.

A real-world example of how AI systems can be biased: while recruitment run by human managers can certainly be biased, debate continues over whether algorithmic hiring technology genuinely removes that bias.
The concern is that an AI system can absorb the biases embedded in the data it learns from, for example, the CVs of top-performing candidates at leading firms.

3) Are the people who program AI biased, or can they be?

The AI Now Institute, a research institute examining the social implications of artificial intelligence, recently published a study on the lack of diversity among the people building AI systems. It found that only 15% of the AI research staff at Facebook are women, and only 4% of Facebook's workforce is black.
The April 2019 study found that Google's workforce is even less diverse, with only 10% of its AI research staff being women and merely 2.5% of its workers black.

Joy Buolamwini, a computer scientist at MIT, described what she encountered during a research project that projected digital masks onto a mirror: the generic facial recognition software she was working with would not detect her face unless she wore a white mask.

Buolamwini explained that the system could not detect the face of a black woman because the data set it had been trained on was overwhelmingly lighter-skinned.

“It is quite simply not a solved problem,” says West. “It's a very real problem that keeps surfacing in AI systems on a weekly, practically daily basis.”

4) Are algorithms completely private information?

AI algorithms are, almost everywhere, proprietary to the company that developed them. Dr. Sarah Myers West notes that this makes it very hard for researchers to understand where bias lies in an algorithm.
And even if we could identify the algorithms involved, that would not mean we fully understand them, said Dipayan Ghosh, a Shorenstein Fellow at Harvard University.

“At times, it is quite difficult to draw conclusions from source code alone,” Ghosh said. “Even Apple can't easily pin down its own proprietary algorithm, because it may involve many different sources of data and many different pieces of code analyzing that data in multiple siloed parts of the company,” he added, explaining why removing algorithmic bias is so difficult.

5) Is there any government oversight of AI?

Right now, there is little government oversight of artificial intelligence systems. “When AI systems are being used in areas of high political, economic, and social importance, we need a say in how they are affecting our lives,” says Dr. Sarah Myers West of the AI Now Institute. “Currently, we don't really have the right channels for the kind of transparency we would need for accountability,” she added.
The good news is that at least one US presidential candidate wants to change that: Senator Cory Booker sponsored a bill earlier this year called the Algorithmic Accountability Act.
The bill would require companies to review flawed algorithms that might create discriminatory outcomes for Americans.

The Algorithmic Accountability Act would direct the Federal Trade Commission to issue rules requiring impact assessments of highly sensitive automated decision systems. That requirement would apply to both new and existing systems under the FTC's jurisdiction.

The description of the bill on Cory Booker's website explicitly cites algorithmic failures at Amazon and Facebook over the past few years.

Booker is not the first politician to call for better regulation of AI systems. In 2016, the Obama administration also called for the development of algorithmic auditing and external testing of big data systems.

6) Do algorithms need to be audited, or are they simply assumed to be fine?

In the absence of much government oversight, third-party auditing of algorithms is growing: an outside party comes in and evaluates how the algorithm is built without disclosing trade secrets, which is a major reason algorithms are kept private in the first place.
“This is happening quite frequently, but not all of the time,” Ghosh said. “It happens when companies feel they don't want to be called out for having had no audits whatsoever,” he added.

Ghosh, who co-directs the Digital Platforms and Democracy Project, added that government action is possible, as seen in the FTC's multiple inquiries into Facebook and Google.
“The point is that if a company is found to harmfully discriminate through its AI system, a regulatory agency should come in and say, ‘We're going to take you to court, or you'll be facing X, Y, and Z. Which one do you want?’”

Disadvantages of bias in AI systems

Bias hurts everyone. The mistrust and distorted results produced by biased artificial intelligence significantly reduce people's ability to participate in society and the economy.

Addressing bias is everyone's responsibility. Business and organizational leaders need to make sure the AI systems they use improve on human decision-making. They should consider it their obligation to encourage further research and to adopt practices that reduce bias in AI systems.

Imperatives for reducing bias in AI systems

The growing body of academic research on AI bias highlights several imperatives for action.

1) We should take advantage of ways that AI can improve on traditional human decision-making

Machine learning systems disregard variables that do not accurately predict outcomes in the data available to them. Humans, by contrast, may misreport, or not even realize, the factors that led them to, say, hire or reject a particular job candidate.

It can also be easier to probe algorithms for bias than to probe human decision-makers: however inscrutable deep learning models may be, the human brain is the ultimate “black box.”
Using AI to improve decision-making is also expected to benefit traditionally disadvantaged groups, in what researchers Jon Kleinberg, Sendhil Mullainathan, and others call the “disparate benefits from improved prediction.”

2) We should accelerate progress in addressing bias in AI systems

One of the most complex tasks in addressing AI bias is also the most obvious: understanding and measuring “fairness.” Researchers have recently introduced technical definitions of fairness, such as requiring that models have equal false positive and false negative rates across groups, or equal predictive values across groups. This raises a significant challenge, however: different fairness definitions usually cannot be satisfied simultaneously.
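As a concrete illustration, the following Python sketch checks two of these definitions, equality of false positive rates and equality of positive predictive values, across two groups. The labels and predictions are made up for the example.

```python
# Illustrative check of two group-fairness definitions with made-up data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return fp / (fp + tn)

def positive_predictive_value(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fp)

# Hypothetical ground truth and model decisions for two groups, A and B.
y_true_a = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_pred_a = np.array([0, 1, 1, 1, 0, 1, 0, 0])
y_true_b = np.array([0, 1, 0, 1, 0, 0, 1, 1])
y_pred_b = np.array([1, 1, 0, 1, 1, 0, 1, 0])

print("FPR gap:", abs(false_positive_rate(y_true_a, y_pred_a)
                      - false_positive_rate(y_true_b, y_pred_b)))
print("PPV gap:", abs(positive_predictive_value(y_true_a, y_pred_a)
                      - positive_predictive_value(y_true_b, y_pred_b)))
```

In practice, closing one of these gaps often widens the other, which is exactly the tension between fairness definitions described above.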

3) Counterfactual fairness and path-specific approaches

Researchers have also made significant progress on a wide range of techniques that AI systems can adopt to promote fairness. These include pre-processing the training data, post-processing the system's outputs, or incorporating fairness definitions into the training process itself.
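As a hedged illustration of the first of these ideas, the sketch below reweights training examples so that each group contributes equally during model fitting. The synthetic data, the 80/20 group split, and the use of scikit-learn's sample_weight are assumptions made for the example, not a reference implementation of any particular method.

```python
# A minimal pre-processing sketch: reweight examples by inverse group frequency.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = (rng.random(n) < 0.8).astype(int)     # ~80% in group 1, ~20% in group 0
X = rng.normal(size=(n, 3))                   # three arbitrary features
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Weight each example inversely to its group's frequency so neither group dominates.
group_counts = np.bincount(group)
weights = 1.0 / group_counts[group]
weights *= len(weights) / weights.sum()       # rescale so the mean weight is 1

model = LogisticRegression().fit(X, y, sample_weight=weights)
```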

One promising technique is “counterfactual fairness,” which requires that a model's decisions be the same in a counterfactual world in which sensitive attributes, such as gender, race, or sexual orientation, were changed.
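The rough sketch below conveys the spirit of the idea by comparing a model's scores before and after a binary sensitive attribute is flipped. A genuine counterfactual-fairness analysis would also model how that attribute causally influences the other features; this simplified check ignores those downstream effects and is only an illustration.

```python
# A crude illustrative check inspired by counterfactual fairness.
import numpy as np

def counterfactual_gap(model, X, sensitive_col):
    """Mean absolute change in predicted probability when the sensitive
    attribute in column `sensitive_col` is flipped (assumes a 0/1 encoding)."""
    X_flipped = X.copy()
    X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
    p_original = model.predict_proba(X)[:, 1]
    p_flipped = model.predict_proba(X_flipped)[:, 1]
    return np.abs(p_original - p_flipped).mean()   # 0.0 means decisions are unchanged
```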

DeepMind's Silvia Chiappa has developed a path-specific approach to counterfactual fairness. It can handle complicated cases where some of the paths by which sensitive attributes affect outcomes are considered fair, while others are considered unfair. For example, the model can be applied to university admissions to ensure that admission to a specific department is unaffected by a candidate's gender, while still allowing the university's overall admission rate to vary by gender if, say, female applicants tend to apply to more competitive departments.

All these advances will help, but challenges remain that go beyond technical solutions, such as deciding when a system is fair enough to be deployed, and in which circumstances fully automated decision-making should be allowed at all.

These questions call for multidisciplinary perspectives, including those of social scientists, ethicists, and other humanities scholars.

Nathan Enzo
A professional writer since 2014 with a Bachelor of Arts in Journalism and Mass Communication, Nathan Enzo ran the creative writing department for a major news channel until 2018. He then worked as a senior content writer with LiveNewsof.com, with his work appearing in national newspapers, magazines, and online. He specializes in media studies and social communications.
