There are many benefits to artificial intelligence. AI is used across a variety of sectors, including healthcare, criminal justice, manufacturing, lending, education, and hiring. It can also be used within organizations. “AI can be used to help organizations determine when equipment – such as transportation components – might require maintenance, which can lead to significant cost savings,” Reva Schwartz, Research Scientist at the National Institute of Standards and Technology (NIST), tells BlogHer. “Additionally, AI can be used to assist humans in making a variety of other decisions, such as determining what type of content to present (or not present) to consumers, or whether to hire a person based on some specified requirement.”
Because of this, AI has become widely relied upon, as it can make sense of information more quickly and consistently than humans. But at what cost? The truth is, there is a darker side to AI. “The ability to automatically categorize, sort, recommend, or make decisions about people’s lives comes with many risks,” says Reva. “Any time methods or processes from the real world are automated there is a potential to create or exacerbate significant gaps in performance. These gaps can create the opportunity for risks, such as the biases that are all around us, to be mechanized and inadvertently lead to discrimination or other types of false decisions.”
She adds, “AI systems do not always function as well as people think they do, and different contexts create different challenges for these systems. These functional challenges and the potential for false decisions can lead to negative impacts or harm. From wrongful arrests to privacy concerns to hate speech – these negative impacts of AI, and the speed and scale at which they can happen, have led to great concern on behalf of the public.”
To help combat this, NIST is making strides with its AI Risk Management Framework, which seeks to equip organizations that build and deploy AI with specific guidance for tackling risks such as bias. “The actions within the Framework are intended to foster an organizational risk culture focused on minimizing negative impacts from AI on individuals, communities, and society,” says Reva.
But beyond what NIST is doing, other organizations are also doing their part. These include the Algorithmic Justice League, which aims to create equitable and accountable AI, and the Data & Trust Alliance, which “brings together leading businesses and institutions across multiple industries to learn, develop, and adopt responsible data and AI practices.” And while we can never entirely get rid of bias, as Reva notes, the fact that organizations are recognizing it is a step in the right direction.
One critical aspect of determining how to tackle racial and gender bias in AI is dissecting why it happens. And essentially, it all comes down to human nature. “Biases are endemic in society,” says Reva. “Datasets and models used in AI systems can reflect these societal and historical biases, which can lead to significant stereotypes and systematic discrimination in the resulting system decisions, and violation of societal values and norms. The human implicit biases present in all of us can inadvertently play a role in the development and use of AI systems – even when the humans involved want to do the right thing.”
Consider an example of AI gender bias: if an AI system has learned from its training data that physicians are most likely to be men, then when a hiring official searches for physicians, the system’s recommendations will be inaccurately skewed away from women applicants.
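To make this concrete, here is a minimal, hypothetical Python sketch. The dataset, the frequency “model,” and the score function are all invented for illustration (not drawn from any real hiring system); the point is simply how historical skew in training data becomes skew in a model’s recommendations.

```python
# A toy illustration of dataset bias -- not any real hiring system.
from collections import Counter

# Hypothetical "historical hires" data: candidates were equally qualified,
# but past physician hires were mostly men. This is the societal bias
# that gets baked into the training data.
past_hires = [("physician", "man")] * 90 + [("physician", "woman")] * 10

# A naive frequency "model": it estimates how hire-worthy a candidate is
# purely from how often similar people were hired in the past.
counts = Counter(past_hires)
total = sum(counts.values())

def score(role: str, gender: str) -> float:
    """Fraction of past hires that match this (role, gender) pair."""
    return counts[(role, gender)] / total

# Two candidates with identical qualifications, differing only in gender:
print(score("physician", "man"))    # 0.9
print(score("physician", "woman"))  # 0.1
# The model ranks the man far higher solely because of the skewed history.
```

Even though the two candidates are identical apart from gender, the model faithfully reproduces the 90/10 split in its historical data. That is the mechanism Reva describes, scaled down to a few lines.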
So, would hiring more women and minorities in tech solve the AI gender and racial bias issue? Yes and no, according to Reva. “AI organizations need to ensure their practitioner teams are composed of staff from a diversity of genders, races, experiences, expertise, abilities, and backgrounds to help improve the identification of risks that can produce negative impacts,” says Reva.
“These teams can help broaden the perspective of AI system designers and engineers, and contribute to more open sharing of ideas and assumptions about the purpose and function of the technology, which can create opportunities for identifying existing and emergent risks,” she adds.
However, relying on individual, diverse staff members to remedy institutional diversity problems or fix structural fissures is ineffective and inappropriate, and it is also not part of their job description. What’s needed is a multi-layered approach that looks at AI systems as a whole.
“It is important for the tech community to recognize that AI systems are more than their mathematical constructs and are not built in a vacuum or walled off from societal realities,” says Reva. “To meet those realities, and solve gender and racial bias, a multi-layered approach is required that takes our values and behavior into account and designs with impact in mind.”
Has artificial intelligence impacted your ability to secure job opportunities or your ability to rent or purchase a home?
Contact the Law Offices of Renee Lazar at 978-844-4095 to schedule a FREE one hour, no obligation consultation to discuss your situation.