
Is AI Affecting Civil Rights?

Civil rights march on Washington, D.C. Film negative by photographer Warren K. Leffler, 1963. From the U.S. News & World Report Collection, Library of Congress Prints & Photographs Division (item 2003654393). The photograph shows a procession of African Americans carrying signs for equal rights, integrated schools, decent housing, and an end to bias.

As humans rely increasingly on technology and machines for daily activities and decision-making, there is growing concern about conflict between machines and people. Artificial intelligence has shown potential to help in many areas, but it can also affect different groups of people adversely. Left unchecked, AI can create inequality and even deny people their rights. Used well, it can strengthen those rights, promote shared prosperity, and help create the future we all want for ourselves.

Over the past few years, a wave of products incorporating AI has emerged. The technology is meant to streamline activities and enhance decision-making, and most of these products aim to minimize human involvement in certain tasks, which is one way to reduce bias. But while the human role in AI-based systems has shrunk significantly, that does not guarantee equity. Unlike civil and human rights, which center on equity, AI-based systems prioritize user preference. That difference matters, because AI systems can be biased against groups based on color, gender, race, age, or sexuality. This can create civil rights problems in areas such as AI-based advertising, where minorities, people living with disabilities, and the LGBTQ community can be screened out. These are precisely the areas that civil rights seek to address and protect.

While civil rights groups oppose targeting individuals based on their beliefs, sexuality, or race, AI systems may make decisions on exactly those personal characteristics. One example is retail giant Amazon, which Reuters reported had built an AI system to evaluate resumes and identify the most qualified candidates for employment opportunities. The system turned out to be heavily biased against women: according to Reuters, it penalized resumes containing the word “women’s” and downgraded graduates of all-women’s colleges.

Amazon’s example makes clear that AI systems can significantly affect civil rights. In advertising, for instance, AI can deny some groups access to information while giving others an advantage. One such case, brought by the ACLU and others, challenged Facebook’s use of AI-based ad targeting by gender, age, and other demographics. The same social media giant was also sued recently for discriminating against female and older users by hiding certain ads and access to financial services.

Job replacement is another way AI can infringe on civil rights. Although automation makes processes more efficient by taking over tedious tasks, an estimated 47 percent or more of jobs are at risk of being taken over by machines within the next decade. Where AI was once thought to replace humans only in dangerous tasks, it now appears likely to occupy roles that were once filled exclusively by people, threatening unemployment and poverty. AI will undeniably bring positive effects, but it is also expected to cause mass job losses and widen income disparities.

While AI clearly could lead to various civil rights harms, we all share responsibility for ensuring the technology is used fairly. Fairness, equity, and inclusivity must be top priorities when developing these systems. On a positive note, technology should provide greater visibility and access to information for everyone, which starts with ensuring that algorithms are trained on fair data.
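Checking whether a system treats groups evenly starts with measuring disparity. Below is a minimal Python sketch of one common fairness check, the demographic parity gap: the difference in positive-outcome rates between groups. The group names and decision data are purely hypothetical illustrations, not figures from any case discussed above.

```python
# Minimal sketch of a demographic-parity audit on hypothetical data.
# A decision of 1 means a positive outcome (e.g., advanced in screening).

def selection_rate(decisions):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest gap in selection rate between any two groups (0 = even)."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical screening outcomes for two groups of applicants.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3 of 8 selected
}

gap = demographic_parity_gap(decisions)
print(f"Selection-rate gap: {gap:.3f}")  # a large gap flags potential bias
```

A nonzero gap does not by itself prove discrimination, but auditing metrics like this against protected attributes is one concrete way developers can act on the fairness principles described above.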

Scott Koegler

Scott Koegler is Executive Editor for PMG360. He is a technology writer and editor with 20+ years of experience delivering high-value content to readers and publishers.

