Conducting Spear-Phishing Attacks
Spear-phishing is a widely used method of conducting cyber attacks, and the ability to automate these attacks makes them exponentially more dangerous. IBM demonstrated this with DeepLocker, an AI-powered proof-of-concept tool for highly targeted and evasive attacks. The company deliberately hid malware in a video conferencing application and had DeepLocker unlock the payload only when facial recognition identified a specific individual.
IBM designed DeepLocker to demonstrate how AI and malware can be combined to enable new types of spear-phishing attacks. Its success indicates that, in the wrong hands, this technology could cause havoc in many unexpected and detrimental ways.
Identifying System Vulnerabilities
The United States Department of Defense has purchased technology from a private company that has self-healing capabilities and uses machine learning techniques to discover system vulnerabilities. While this particular piece of software is under government control, similar tools are almost certainly being developed by cyber criminals intent on finding exploitable vulnerabilities in the systems that society depends on every day.
Software vulnerabilities have always been a prime target for hackers. By delegating the labor- and time-intensive task of identifying weak points to AI-powered programs, hackers can spend more time sharpening the weapons with which to exploit the vulnerabilities their artificially intelligent tools uncover.
Controlling Autonomous Vehicles
The potential for AI and ML techniques to be used to hack autonomous vehicles is a concern that must be addressed before these machines are widely adopted. Fooling a vehicle's vision and collision-avoidance systems could have catastrophic results if traffic signs are ignored or the rules of the road disregarded.
Consumer drones can be weaponized and controlled by an application that uses ML to continually improve their ability to evade capture. Such techniques could enable new types of terrorist attacks that are much harder to detect and prevent.
Impersonating Individuals
The ability to use machine learning to impersonate individuals in various ways opens new avenues through which cybercriminals can exploit their victims. AI-generated fake voice recordings and videos are making it increasingly hard to distinguish an impersonation from reality.
Using impersonation techniques, an attacker may trick victims into revealing personal details that they would otherwise never make public. This can lead to compromised systems or accounts and can be extremely detrimental in many ways.
These are just a few of the ways that AI and ML technologies may be used for malicious purposes. Society must remain aware of the double-edged potential of these disciplines to deliver great benefits or cause significant harm.
Last modified on Monday, 01 April 2019