If not carefully examined, the delegation of most human tasks to artificially intelligent systems poses dangers we may not anticipate. We are already entrusting complex decision processes to evolving AI systems: diagnosing diseases, admitting students to colleges, screening job candidates, and managing client interactions through bots. Automakers are increasingly focused on autonomous vehicles, and autonomous weapons and even autonomous planes are on the rise. While these developments are worth embracing, we must also consider their downsides and how they can harm humans. The main concerns associated with AI systems take different forms: accuracy, privacy, discrimination, transparency, trust, and security.
As we adopt AI, it would be a big mistake to assume that machines will make equitable and fair decisions. Because algorithms are made by humans, bias can be built in, and AI can be just as biased as we are. Researchers have already discovered biases against women and people of color in some AI algorithms. According to Olga Russakovsky, a computer science professor at Princeton, AI bias goes beyond gender and race, extending to socioeconomic inequality and to biases based on education, work, and social mobility. AI can also circulate false data and biased opinions that poison public debate. Because it depends on big data and machine learning models, AI can end up recycling historical biases.
Security is another major AI risk. Despite the increased adoption of AI, security risks have not been addressed adequately, nor have they been covered in AI regulation debates as they should be. This makes security a likely source of trouble in the future. Any AI system, be it a robot, a networked system, or a program, carries security risks that must be understood and addressed in time. These risks stem from the design and development stage. As AI becomes autonomous in its decision-making, humans are likely to lose control, and design flaws are likely to cause problems.
In addition to security, there is concern about privacy and how AI will adversely affect it. AI-based facial recognition systems have been the major talking point in this regard. In China, for example, such systems are used to monitor people in public spaces, buildings, and schools. Authoritarian regimes can use this data to crack down on dissidents and make arrests. For Western democracies, it may roll back hard-won progress on human rights and freedoms.
AI has also fueled the rise of fake social media “personalities,” making it difficult to distinguish real people from fake ones, and it has accelerated the spread of “deepfakes”: fabricated audio and video that manipulate voices and images to resemble those of real people.
With AI adoption moving at an unstoppable pace, it is time we got smarter and addressed these troubling concerns before they become too deeply embedded in our systems.