Deep Fakes: Artificial intelligence (AI) is present in many everyday situations – from smart devices and voice assistants such as Siri and Alexa to the spell checker on a cell phone.
Built to make people’s lives easier, the technology can also become a powerful weapon in the hands of cybercriminals. One of the most common criminal uses of artificial intelligence is creating deep fake videos or audio to lend credibility to social engineering scams – but it is not the only one.
In addition to deep fakes, scammers also use AI to perfect phishing scams and crack passwords.
Deep fake videos, which use artificial intelligence to make people appear to say and do things they never did in real life, have been heavily used to spread fake news and have concerned disinformation experts around the world. Combined with social engineering tactics, they can also be a problem for digital security.
Deep fake content could make scams like the fake kidnapping much more sophisticated. Using artificial intelligence software, criminals can generate fake video or audio to convince a victim that a family member is in an emergency and needs, for example, their bank details to make a withdrawal.
In general, most phishing campaigns follow the “spray and pray” tactic: scammers send messages to thousands or millions of users at the same time, with no defined criteria for target selection. Even if only a tiny percentage of recipients fall for the scam, criminals will still make a profit.
Spear phishing, however, is a more strategic and effective form of fraud. In it, criminals use machine learning to predict which users within a given database are more likely to fall for the scam and thus refine the profile of their victims. With the support of artificial intelligence, scammers can send targeted campaigns and craft more convincing, personalized messages.
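To make the idea concrete, the target-selection step can be caricatured as a learned scoring model that ranks users by their predicted likelihood of falling for a lure. This is only an illustrative sketch: the feature names and weights below are made up, and a real attacker would fit them from past campaign data rather than hard-code them.

```python
# Illustrative sketch: ranking potential spear-phishing targets with a
# simple scoring model. Features and weights are hypothetical examples
# of what a model might learn from past campaign outcomes.

def score_target(profile, weights):
    """Weighted sum of profile features -> predicted susceptibility."""
    return sum(weights[k] * profile.get(k, 0.0) for k in weights)

# Hypothetical learned weights: users who clicked past links and expose
# public profiles score as likelier victims.
weights = {
    "clicked_past_links": 0.6,
    "public_social_profile": 0.25,
    "reused_password": 0.15,
}

users = [
    {"id": "a", "clicked_past_links": 1, "public_social_profile": 1, "reused_password": 0},
    {"id": "b", "clicked_past_links": 0, "public_social_profile": 1, "reused_password": 1},
]

# Sort the database so the campaign targets the highest-risk users first.
ranked = sorted(users, key=lambda u: score_target(u, weights), reverse=True)
print([u["id"] for u in ranked])  # -> ['a', 'b']
```

In practice the same ranking logic is what lets attackers concentrate effort on a handful of well-profiled targets instead of spraying millions of identical messages.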
Clever Malware That Interferes With Machine Learning
Malware capable of tricking facial recognition systems into granting criminals access to sensitive services might seem like something out of a science fiction movie, but it is pretty close to reality. According to Sol Gonzales, a security researcher at ESET, it is possible to train intelligent malware capable of defeating security mechanisms.
The expert explains that criminals can, for example, poison machine learning models during the training phase to cause malfunctions or to take control of the algorithm’s predictions. To do this, the attacker inserts incorrect or mislabeled data into the model’s training set so that it learns the wrong patterns.
Thus, a hacker could poison a malware-detection model by labeling malicious samples as legitimate programs, causing the system to accept the threat instead of rejecting it. On a larger scale, attackers could poison facial recognition software used to identify criminals, so that the program starts labeling the guilty as innocent.
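The label-flipping attack described above can be shown with a toy model. This is a minimal sketch, not a real detector: a nearest-centroid classifier stands in for the malware-recognition model, and the two-number feature vectors and samples are invented for illustration. Injecting malicious-looking samples labeled “benign” drags the learned benign region toward the malware cluster, so a threat that was correctly rejected before is accepted afterward.

```python
# Toy sketch of training-set poisoning by label flipping.
# A nearest-centroid "detector" stands in for a real malware classifier;
# the 2-D feature vectors are hypothetical.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of (features, label), label 'benign' or 'malware'."""
    by_label = {"benign": [], "malware": []}
    for x, y in samples:
        by_label[y].append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist(model[y], x))

clean = [((0, 1), "benign"), ((1, 0), "benign"),
         ((8, 9), "malware"), ((9, 8), "malware")]

# The attacker inserts samples resembling its malware but labeled benign,
# pulling the learned "benign" centroid toward the malicious cluster.
poisoned = clean + [((6, 5), "benign"), ((5, 6), "benign")]

threat = (5, 5)
print(predict(train(clean), threat))     # -> 'malware' (correctly rejected)
print(predict(train(poisoned), threat))  # -> 'benign' (threat accepted)
```

Real poisoning attacks target far higher-dimensional models, but the mechanism is the same: a few mislabeled training points shift the decision boundary in the attacker’s favor.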
Artificial intelligence can also serve cybercrime in cracking passwords. In 2017, researchers created a program that uses machine learning to generate realistic passwords based on human behavior. Called PassGAN, the software was able to guess more than 25% of a set of 43 million passwords leaked from LinkedIn profiles.
The program uses two neural networks: the first is responsible for creating artificial passwords. After being fed real passwords obtained from data leaks, the software learns how humans create their codes (adding a year of birth to the name of a first pet, for example) and generates new passwords based on that behavior.
The second neural network tries to distinguish the artificially generated passwords from those created by humans. If it cannot tell them apart, the generator has succeeded in its mission. While penetration-testing companies can use the tool to verify that employee passwords are secure, cybercriminals can also use the technology to break into accounts.
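The generator/discriminator idea above can be caricatured in plain Python. This is only a toy illustration, not the tool’s actual neural architecture: here the “generator” recombines patterns mined from leaked passwords (the word-plus-year habit mentioned above), and the “discriminator” is a simple rule that checks whether a candidate looks human-made. The leaked passwords and the word+year pattern are invented examples.

```python
# Toy sketch of the generator/discriminator idea behind GAN-based
# password guessing. Real systems use two neural networks; this
# pattern-based stand-in is for illustration only.
import itertools
import re

leaked = ["rex1990", "fluffy1987", "buddy2001"]  # made-up leak data

# "Training": mine the word and year components humans combined.
words, years = set(), set()
for pw in leaked:
    m = re.fullmatch(r"([a-z]+)(\d{4})", pw)
    if m:
        words.add(m.group(1))
        years.add(m.group(2))

def generator():
    """Emit new candidates that follow the learned word+year habit."""
    for w, y in itertools.product(sorted(words), sorted(years)):
        yield w + y

def discriminator(candidate):
    """Return True if the candidate matches the human-like pattern."""
    return re.fullmatch(r"[a-z]+\d{4}", candidate) is not None

# Keep only candidates the discriminator cannot tell apart from
# human-made passwords; these become the cracking wordlist.
candidates = [c for c in generator() if discriminator(c)]
print(candidates[:3])  # -> ['buddy1987', 'buddy1990', 'buddy2001']
```

The real strength of the neural approach is that it learns such habits automatically from millions of leaked passwords instead of relying on hand-written rules like the regular expression here.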