Large language model (LLM)-based generative AI chatbots like OpenAI's ChatGPT took the world by storm this year. ChatGPT became mainstream by making the power of artificial intelligence accessible to millions. The move inspired other companies (which had been working on comparable AI in labs for years) to introduce their own public LLM services, and thousands of tools based on these LLMs have emerged.

Unfortunately, malicious hackers moved quickly to exploit these new AI resources, using ChatGPT itself to polish and produce phishing emails. However, using mainstream LLMs proved difficult, because the major LLMs from OpenAI, Microsoft and Google have guardrails to prevent their use for scams and criminality. As a result, a range of AI tools designed specifically for malicious cyberattacks has begun to emerge. These tools are being offered to would-be attackers, often on a subscription basis. They're similar to popular LLMs, but without guardrails and trained on data selected to enable attacks.

WormGPT: A smart tool for threat actors

Chatter about and promotion of LLM chatbots optimized for cyberattacks emerged on dark web forums in early July and, later, on the Telegram messaging service. The leading brand among these tools is called WormGPT. It's an AI module based on the GPT-J language model, developed in 2021, and is already being used in business email compromise (BEC) attacks and for other nefarious purposes.

Users can simply type instructions for the creation of fraud emails - for example, "Write an email coming from a bank that's designed to trick the recipient into giving up their login credentials." The tool then produces a unique, sometimes clever and usually grammatically perfect email that's far more convincing than what most BEC attackers could write on their own, according to some analysts. For example, independent cybersecurity researcher Daniel Kelley found that WormGPT was able to produce a scam email "that was not only remarkably persuasive but also strategically cunning."

The alleged creator of WormGPT claimed that it was built on the open-source GPT-J language model developed by EleutherAI. He's reportedly working on Google Lens integration (enabling the chatbot to send pictures with text) and API access.

Until now, the most common way for people to identify fraudulent phishing emails was by their suspicious wording. Now, thanks to AI tools like WormGPT, that "defense" is completely gone.

A new world of criminal AI tools

WormGPT inspired copycat tools, most prominently FraudGPT - a similar tool used for phishing emails, creating cracking tools and carding (a type of credit card fraud). Other "brands" emerging in the shady world of criminal LLMs include DarkBERT, DarkBART and ChaosGPT. DarkBERT is actually a tool to combat cybercrime, developed by the South Korean company S2W Security and trained on dark web data, but it's likely the tool has been co-opted for cyberattacks.

In general, these tools are used for boosting three aspects of cyberattacks: