Advancements in artificial intelligence (AI) have transformed various sectors, including automotive, manufacturing, and defense. AI applications in self-driving cars and drones have multiplied as the underlying technologies have matured. From navigation to sensing potential crashes, AI plays a crucial role in smooth and safe operation. However, researchers are apprehensive that these same advancements could enable massive attacks: hackers may exploit the technologies to turn autonomous cars and drones into weapons of mass destruction.
According to a report titled “The Malicious Use of Artificial Intelligence,” security researchers from Oxford, Yale, Cambridge, and other institutions warned of potential security threats arising from the ill-intentioned use of AI. The report outlined that self-driving cars could be made to cause accidents by tricking them into misinterpreting signals: a red traffic signal, for instance, could be manipulated so that a vehicle's perception system reads it as green. Likewise, drones built for surveillance could be repurposed as weapons to carry out coordinated attacks.
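To make the signal-manipulation threat concrete, the sketch below shows how a tiny, targeted perturbation can flip the output of a simple classifier. The linear model, random features, and red/green labels here are hypothetical illustrations in the style of fast-gradient-sign attacks, not anything described in the report.

```python
import numpy as np

# Toy FGSM-style adversarial perturbation against a linear classifier.
# The weights and the "signal" feature vector are hypothetical.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a toy linear classifier
x = rng.normal(size=8)   # feature vector for a "red signal" image

def predict(x):
    # Positive score -> class "red", negative score -> class "green"
    return np.sign(w @ x)

# Fast-gradient-sign step: push every feature in the direction that
# most efficiently moves the score toward the opposite class.
s = w @ x
epsilon = 2.0 * abs(s) / np.abs(w).sum()  # just enough to flip the sign
x_adv = x - np.sign(s) * epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the perturbation flips the label
```

Against deep perception networks the same idea applies with the gradient of the loss taking the place of the fixed weight vector, which is why such misclassification attacks are difficult to rule out by inspection alone.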
The report stated: “Carrying out attacks through intelligent machines is cost-effective as certain tasks can be automated and targets can be attacked effectively. Hackers have been using ‘spear phishing’, in which personalized messages are sent to each target to get sensitive personal information and details.” Moreover, AI could be used to spread hate speech, incite violence, and circulate rumors. The faces of influential leaders could be used to create fake videos and spread misinformation. Similarly, so-called “deepfake” videos, in which the faces of celebrities are superimposed on adult film stars, could be produced in large volumes. Beyond threatening lives through autonomous cars and drones, malicious use of AI could damage the reputations of celebrities and other respected public figures. Alongside such foreseeable abuses as fake videos, engineered road accidents, and coordinated attacks, AI could also be misused in ways not yet anticipated.
The report stopped short of offering specific ways to prevent the malicious use of AI. Instead, the researchers recommended closer collaboration between researchers and policymakers, along with active participation from stakeholders, to prevent misuse. Deploying tighter security measures while developing AI systems could also mitigate the risk of such systems being misused.
Global leaders, tech entrepreneurs, and renowned scientists have expressed concerns over the development of AI. In 2014, Elon Musk, CEO of Tesla and SpaceX, warned that AI is “potentially more dangerous than nukes.” The world-renowned physicist Stephen Hawking cautioned that AI could become the “worst event in the history of our civilization” if its development is not controlled. However, Bill Gates, founder of Microsoft, and Mark Zuckerberg, CEO of Facebook, have backed AI, calling it a blessing to mankind.
Though AI technology is still in its infancy, companies have been pouring billions of dollars into developing AI systems. As the application areas of AI diversify, the trend of innovation has propagated across industry segments. According to Progressive Markets, the global artificial intelligence market is projected to grow at a CAGR of 46.5% through 2025. Advances in language processing, image processing, and machine learning are expected to bring tremendous improvements in AI systems. However, developers need to give security the utmost priority to prevent potential malicious use.
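For readers who want to sanity-check what a compound annual growth rate (CAGR) implies, the arithmetic is straightforward. The base value and time span below are placeholders for illustration, not figures from the Progressive Markets report.

```python
# Compound annual growth: value after n years at a constant CAGR r.
# Base value and horizon here are hypothetical, chosen only to show
# how quickly a 46.5% CAGR compounds.
def project_value(base, cagr, years):
    """Project a market value forward at a constant CAGR."""
    return base * (1 + cagr) ** years

# A market worth 1.0 (in arbitrary units) growing at 46.5% per year
for year in range(4):
    print(year, round(project_value(1.0, 0.465, year), 3))
```

At that rate a market roughly triples about every three years, which is why even an early-stage technology market can reach a large absolute size by 2025.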