20 Most Dangerous Artificial Intelligence Threats

Artificial intelligence is a fantastic tool when put to work in health, technology, or astrophysics. But in the wrong hands, it can also serve criminal ends or spread disinformation. And the worst threats are not always where you would expect.

Hacking self-driving cars or military drones, targeted phishing attacks, fabricated fake news, manipulation of financial markets… “The expansion of the capabilities of AI-based technologies is accompanied by an increase in their potential for criminal exploitation,” warns Lewis Griffin, a computer science researcher at University College London (UCL). With his colleagues, he compiled a list of 20 illegal activities that AI could enable, ranking them by potential harm, criminal gain or profit, ease of implementation, and difficulty of detection and prevention.


The most frightening crimes, such as “robots” breaking into your apartment, are not necessarily the most dangerous: they can be easily thwarted and affect only a few people at a time. Conversely, false information generated by “bots” can ruin a public figure’s reputation or be used for blackmail. Difficult to combat, these “deepfakes” can cause considerable economic and social harm.

Here are the 20 most dangerous artificial intelligence threats.

Artificial intelligence: serious threats

  • Fake videos: impersonating someone by making them appear to say or do things they never said or did, in order to request access to secure data, manipulate public opinion, or harm someone’s reputation. These doctored videos are nearly undetectable.
  • Self-driving car hacking: taking control of an autonomous vehicle to use it as a weapon (e.g. to carry out a terrorist attack or cause an accident).
  • Tailored phishing: generating personalized, automated messages to increase the effectiveness of phishing aimed at collecting secure information or installing malware.
  • Hacking of AI-controlled systems: disrupting infrastructure by causing, for example, a widespread blackout, traffic congestion, or a breakdown of food logistics.
  • Large-scale blackmail: collecting personal data in order to send automated threatening messages. AI could also be used to generate fake evidence (e.g. “sextortion”).
  • False information written by AI: writing propaganda articles that appear to come from a reliable source. AI could also be used to generate many versions of a particular piece of content to increase its visibility and credibility.


Artificial intelligence: medium-severity threats

  • Military robots: taking control of robots or weapons for criminal purposes. A potentially very dangerous threat, but difficult to carry out, since military equipment is generally well protected.
  • Scams: selling fraudulent services under the guise of AI. There are many notorious historical examples of scammers successfully selling expensive fake technology to large organizations, including national governments and the military.
  • Data corruption: deliberately altering or introducing false data to induce specific biases, for example making a weapons detector ignore weapons or steering an algorithm to invest in a particular market.
  • Learning-based cyberattacks: carrying out attacks that are both targeted and massive, for example using AI to probe systems for weaknesses before launching multiple simultaneous attacks.
  • Autonomous attack drones: hijacking autonomous drones or using them to attack a target. Such drones could be particularly threatening if they act en masse in self-organized swarms.
  • Denial of access: damaging or depriving users of access to a financial service, employment, a public service, or social activity. Not profitable in itself, this technique can be used for blackmail.
  • Facial recognition: hijacking facial recognition systems, for example by forging identity photos (to unlock a smartphone, fool surveillance cameras, pass passenger checks, etc.).
  • Manipulation of financial markets: corrupting trading algorithms in order to harm competitors, artificially lower or raise an asset’s value, or trigger a financial crash.


Artificial intelligence: low-intensity threats

  • Bias exploitation: taking advantage of existing biases in algorithms, such as YouTube recommendations to channel viewers, or Google rankings to boost a product’s profile or denigrate competitors.
  • Burglar robots: using small autonomous robots that slip through mailboxes or windows to retrieve keys or open doors. The potential damage is low because such attacks are localized and small-scale.
  • AI detection blocking: thwarting AI-based sorting and data collection in order to erase evidence or hide criminal material (pornography, for example).
  • Fake AI-written reviews: generating fake reviews on sites such as Amazon or Tripadvisor to harm or promote a product.
  • AI-assisted tracking: using learning systems to track an individual’s location and activity.
  • Counterfeiting: creating fake content, such as paintings or music, to sell under false authorship. The potential for harm remains fairly low, since the market for counterfeit works is limited.

