
Unmasking Bias in Artificial Intelligence: Challenges and Solutions

by admin

The recent advancement of generative AI has been accompanied by a boom in enterprise applications across industries, including finance, healthcare and transportation. The development of this technology is also expected to spur other emerging technologies such as cybersecurity defenses, quantum computing advancements and breakthrough wireless communication techniques. However, this explosion of next-generation technologies comes with its own set of challenges.

For example, the adoption of AI may enable more sophisticated cyberattacks, create memory and storage bottlenecks as compute demands grow, and raise ethical concerns about the biases exhibited by AI models. The good news is that NTT Research has proposed a way to overcome bias in deep neural networks (DNNs), a type of artificial intelligence.

This research is a significant breakthrough, given that non-biased AI models could support hiring, the criminal justice system and healthcare without being influenced by characteristics such as race or gender. In the future, discrimination could potentially be reduced or even eliminated by such automated systems, improving industry-wide DE&I business initiatives. AI models that produce unbiased results will also improve productivity and reduce the time it takes to complete these tasks. Even so, a few businesses have already been forced to halt their AI-driven programs because of the technology’s biased outputs.

For example, Amazon discontinued a hiring algorithm when it discovered that the algorithm favored applicants who used words like “executed” or “captured” more frequently, words that were more prevalent in men’s resumes. Another glaring example comes from Joy Buolamwini, one of the most influential people in AI in 2023 according to TIME, who, in collaboration with Timnit Gebru at MIT, revealed that facial analysis technologies demonstrated higher error rates when assessing minorities, particularly minority women, potentially due to inadequately representative training data.

DNNs have recently become pervasive in science, engineering and business, and even in popular applications, but they sometimes rely on spurious attributes that may convey bias. According to an MIT study, over the past few years scientists have developed deep neural networks capable of analyzing vast quantities of inputs, including sounds and images. These networks can identify shared characteristics, enabling them to classify target words or objects. Such models currently stand at the forefront of the field as the primary models for replicating biological sensory systems.

NTT Research Senior Scientist Hidenori Tanaka, an Associate at the Harvard University Center for Brain Science, and three other scientists proposed overcoming the limitations of naive fine-tuning, the status-quo method of reducing a DNN’s errors or “loss,” with a new algorithm that reduces a model’s reliance on bias-prone attributes.

They studied neural networks’ loss landscapes through the lens of mode connectivity: the observation that minimizers of neural networks obtained by training on a dataset are connected via simple paths of low loss. Specifically, they asked the following question: are minimizers that rely on different mechanisms for making their predictions connected via simple paths of low loss?
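For readers unfamiliar with mode connectivity, the standard way to probe it is to interpolate linearly between two trained sets of weights and evaluate the loss along the way. The minimal sketch below assumes two already-trained copies of the same architecture (`model_a`, `model_b`), a `loss_fn` such as cross-entropy and a `loader` over evaluation data; these names are illustrative assumptions, and the snippet shows the general technique rather than the authors’ exact analysis.

```python
import copy
import torch

@torch.no_grad()
def loss_along_linear_path(model_a, model_b, loss_fn, loader, steps=11, device="cpu"):
    """Evaluate the loss at interpolated weights theta(t) = (1 - t) * theta_a + t * theta_b.

    A flat profile suggests the two minimizers sit in the same low-loss valley;
    a pronounced bump between the endpoints is the barrier the article describes.
    """
    state_a = model_a.state_dict()
    state_b = model_b.state_dict()
    probe = copy.deepcopy(model_a).to(device)
    probe.eval()

    losses = []
    for t in torch.linspace(0.0, 1.0, steps):
        mixed = {k: (1 - t) * state_a[k].float() + t * state_b[k].float()
                 for k in state_a}
        probe.load_state_dict(mixed)
        total, count = 0.0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            total += loss_fn(probe(x), y).item() * y.size(0)
            count += y.size(0)
        losses.append(total / count)
    return losses  # barrier height ~ max(losses) - max(losses[0], losses[-1])
```

If the returned losses stay low from end to end, the two minimizers are linearly connected; a spike in the middle indicates a barrier between their valleys.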

They discovered that naive fine-tuning cannot fundamentally alter a model’s decision-making mechanism, because doing so requires moving to a different valley on the loss landscape. Instead, the model must be driven over the barriers separating the “sinks” or “valleys” of low loss. The authors call this corrective algorithm Connectivity-Based Fine-Tuning (CBFT).

Prior to this development, a DNN that classifies images such as fish (an illustration used in this study) would use both the object shape and the background as input attributes for prediction. Its loss-minimizing paths would therefore operate in mechanistically dissimilar modes: one relying on the legitimate attribute of shape, and the other on the spurious attribute of background color. As such, these modes would lack linear connectivity, or a simple path of low loss.
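Reliance on a spurious attribute like background color can be made visible with a simple diagnostic: evaluate the trained classifier on a test set whose backgrounds have been shuffled so they no longer correlate with the object class. The sketch below assumes hypothetical `original_loader` and `background_swapped_loader` datasets; neither the loaders nor this particular diagnostic come from the study itself.

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Fraction of examples the classifier labels correctly."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

# Hypothetical loaders: both carry the same labels, but the second pairs each
# object with a randomly swapped background, so background no longer predicts class.
# acc_original = accuracy(model, original_loader)
# acc_swapped  = accuracy(model, background_swapped_loader)
# A large drop from acc_original to acc_swapped suggests the model is leaning
# on background color rather than object shape.
```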

The research team examined this mechanistic lens on mode connectivity by considering two sets of parameters that minimize loss using backgrounds and object shapes, respectively, as the input attributes for prediction. They then asked themselves: are such mechanistically dissimilar minimizers connected via paths of low loss in the landscape? Does the dissimilarity of these mechanisms affect the simplicity of their connectivity paths? Can we exploit this connectivity to switch between minimizers that use our desired mechanisms?

In other words, deep neural networks, depending on what they’ve picked up during training on a particular dataset, can behave very differently when tested on another dataset. The team’s proposal boiled down to the concept of shared similarities. It builds upon the earlier idea of mode connectivity, but with a twist: it considers how similar the underlying mechanisms are. Their research led to the following eye-opening discoveries:

  • minimizers that have different mechanisms can be connected in a rather complex, non-linear way
  • when two minimizers are linearly connected, it’s closely tied to how similar their models are in terms of mechanisms
  • simple fine-tuning might not be enough to get rid of unwanted features picked up during earlier training (a practical test of this is sketched after the list)
  • if you find regions that are linearly disconnected in the landscape, you can make efficient changes to a model’s inner workings.
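The third finding can be checked empirically: fine-tune the biased model briefly on data where the background no longer predicts the class, then re-run the background-swap diagnostic from the earlier sketch. Everything in the snippet below is an illustrative assumption (the `debiased_loader`, the optimizer settings, the reuse of the `accuracy` helper), and it implements ordinary naive fine-tuning, not the authors’ CBFT procedure.

```python
import torch

def naive_finetune(model, loader, epochs=3, lr=1e-4, device="cpu"):
    """Continue ordinary gradient descent from the current weights, the
    status-quo fix that, per the findings above, may leave the spurious
    mechanism intact."""
    model.to(device)
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Hypothetical check, reusing the accuracy helper from the earlier sketch:
# naive_finetune(model, debiased_loader)
# If accuracy(model, background_swapped_loader) barely improves, the spurious
# background mechanism survived fine-tuning; that is, the model stayed in the
# same linearly connected region of the loss landscape instead of crossing a barrier.
```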

While this research is a major step toward harnessing the full potential of AI, addressing the ethical concerns around AI may still be an uphill battle. Technologists and researchers are working to combat other ethical weaknesses in AI and large language models, such as privacy, autonomy and liability.

AI can be used to collect and process vast amounts of personal data. The unauthorized or unethical use of this data can compromise individuals’ privacy, leading to concerns about surveillance, data breaches and identity theft. AI can also pose a threat when it comes to liability for its autonomous applications, such as self-driving cars. Establishing legal frameworks and ethical standards for accountability and liability will be essential in the coming years.

In conclusion, the rapid growth of generative AI technology holds promise for various industries, from finance and healthcare to transportation. Despite these promising developments, the ethical concerns surrounding AI remain substantial. As we navigate this transformative era of AI, it is vital for technologists, researchers and policymakers to work together to establish legal frameworks and ethical standards that will ensure the responsible and beneficial use of AI technology in the years to come. Scientists at NTT Research and the University of Michigan are one step ahead of the game with their proposal for an algorithm that could potentially eliminate biases in AI.
