How hackers use technology in various phishing and social engineering strategies

When we talk about Artificial Intelligence (AI), the advances being made are increasingly impressive.

By adopting it, companies can add value to their services, automate complex tasks, and steadily improve their interactions with customers.

However, it should be kept in mind that AI is a tool, and like any tool, the results of applying it depend on how it is used.

Malicious use of AI gives rise to new threats, some relatively harmless, others outright criminal, such as systems that impersonate humans to bypass fraud-detection mechanisms, or chatbots that prompt users to enter sensitive information.

Threats from AI-driven chatbots are among the Information Security Industry Predictions that the WatchGuard research team developed based on security analyses and threat trends observed during 2018.

“Cybercriminals continue to modify the threat landscape as they update their tactics and intensify their attacks on businesses, governments and even the Internet infrastructure,” said Corey Nachreiner, Chief Technology Officer of WatchGuard Technologies.

“In this scenario, SMEs continue to be the target of cybercriminals, so they should begin to review their current security measures and make their network security a high priority, seeking to implement solutions through managed services companies,” the executive added.

Black hat hackers carry out these attacks through malicious chat windows planted on legitimate sites.

“The objective is to direct victims to access the malicious link and thus download files containing malware or share private information, such as passwords, emails, credit card numbers or bank access codes,” Nachreiner explains.

Through virtual assistants or chatbots, hackers find new attack vectors.

A hacked chatbot could divert victims to malicious links instead of legitimate ones. Attackers could also exploit web application vulnerabilities in legitimate sites to inject a malicious chatbot of their own.

“For example, a hacker could force the appearance of a fake chatbot while the victim is on a banking website, asking if he needs help finding something. The chatbot could then recommend that the victim click on malicious links to fake banking resources instead of linking to the real ones. Those links could allow the attacker to do anything from malware installation to virtual hijacking of the bank’s site connection,” explains Nachreiner.
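One defensive countermeasure against this kind of link diversion is to validate any URL a chatbot suggests against an allowlist of known-good domains before following it. The sketch below is a minimal illustration of that idea; the domain names and function name are hypothetical, not part of any real banking site:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the site's genuine chatbot may link to.
TRUSTED_DOMAINS = {"example-bank.com", "support.example-bank.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the URL uses HTTPS and points at an allowlisted domain."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    # Accept exact matches or subdomains of an allowlisted domain.
    # Note: a plain endswith("example-bank.com") check would wrongly accept
    # look-alike hosts such as "evilexample-bank.com", hence the "." prefix.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted_link("https://example-bank.com/login"))          # True
print(is_trusted_link("https://example-bank.com.evil.io/login"))  # False (look-alike host)
print(is_trusted_link("http://example-bank.com/login"))           # False (not HTTPS)
```

A check like this could run either client-side, before rendering chatbot-provided links, or server-side, when the chatbot's responses are generated.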

To help detect malicious chatbots, Nachreiner advises always ensuring that chat communication is encrypted, and regulating how data from those chat sessions is handled and stored.
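The encryption part of that advice can be checked programmatically: attempt a TLS handshake with the chat endpoint while certificate and hostname verification are enabled, and treat any failure as a red flag. The sketch below uses Python's standard `ssl` module; the function name is illustrative, and which host to check depends on your deployment:

```python
import socket
import ssl

def chat_endpoint_uses_valid_tls(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Attempt a TLS handshake with certificate and hostname verification enabled.

    Returns True only if the endpoint presents a certificate that chains to a
    trusted CA and matches the hostname; False on any handshake or socket error.
    """
    context = ssl.create_default_context()  # verifies against the system CA store
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True  # handshake succeeded; cert chain and hostname check out
    except (ssl.SSLError, OSError):
        return False
```

A check like this only confirms that the transport is encrypted and the certificate is valid; it says nothing about what the chatbot does with the data afterwards, which is why session-data handling policies matter as well.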

In summary, it is vital that those responsible for access, information systems and security in organizations of all sizes not only implement the appropriate security measures, but also train employees to recognize hacker tactics, and make that training a regular activity so everyone stays alert to any suspicious behavior.