A new adversarial AI tool, FraudGPT, has surfaced on the dark web to aid cybercriminals.
FraudGPT joins WormGPT in a recent string of ChatGPT-style adversarial AI tools. The generative artificial intelligence (AI) tool is intended exclusively for offensive purposes and cybercrime.
As reported by The Hacker News, the author, who goes by the alias CanadianKingpin, promotes FraudGPT as a “bot without limitations, rules, [and] boundaries” and claims the generative AI tool is capable of:
- Finding security vulnerabilities
- Writing malicious code and creating hacking tools
- Crafting phishing web pages and emails
- Finding cardable websites
- Locating information useful in attacks
Such AI tools, trained on malicious datasets for nefarious purposes, lack the ethical guardrails enforced by legitimate AI researchers, further underscoring the potential dangers of generative AI.