Emerging AI-driven cyber threats: How criminals attract victims online

The rise of generative AI has driven ground-breaking innovation in practically every field, including, sadly, cybercrime. Tools such as ChatGPT, FraudGPT, and WormGPT have lowered the barrier to launching sophisticated social engineering attacks, allowing even “low-skilled” criminals to convincingly simulate trust, authority, and professionalism.

BlockDefenders has been actively monitoring how threat actors evolve. One of the clearest trends of 2025 is that attackers are using AI not only to breach systems but also to manipulate people.

This article discusses how artificial intelligence (AI) is being used to entice and deceive people online, particularly in the cryptocurrency and blockchain space.

1. AI-generated phishing at scale

Phishing scams have evolved significantly in recent years: AI has transformed crude spam into well-written, personalised messages.

Cybercriminals are using large language models like WormGPT and FraudGPT to create phishing emails that sound identical to genuine messages from platforms like Binance, MetaMask, and OpenSea. These emails may alert customers to “security issues,” spoof login attempts, or request that they validate a transaction by clicking a malicious link.

Even more concerning are the multilingual capabilities of these AI systems. Scammers can now launch phishing campaigns worldwide, targeting victims in their own language, with cultural subtleties that make detection even more difficult.

Recently, many users reported receiving fake MetaMask security notifications in near-perfect German and Japanese, advising them to “re-verify” their wallets. The emails were so convincing that victims typed their seed phrases before realising their mistake!

AI also enables dynamic content generation, which means that messages can be tailored to different user segments, devices, or transaction histories, making each message individually relevant.

PRO TIP: No legitimate platform will request your seed phrase or private key via email or direct message. When in doubt, always go to the platform’s official website and avoid clicking on links in questionable communications.
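
Habits aside, a quick technical check can also help. The minimal sketch below, assuming you have saved the suspicious message as a raw .eml file (the file name is hypothetical), inspects the Authentication-Results header (RFC 8601) that receiving mail servers attach, checking whether SPF and DKIM passed and whether the DKIM signing domain matches the visible From address. Header formats vary by provider, so treat this as a heuristic, not a verdict:

```python
# Minimal sketch: sanity-check a suspicious email's authentication headers.
# Assumes the message was saved as a raw .eml file from your mail client.
import email
import email.utils
from email import policy

def check_email_auth(eml_path: str) -> None:
    with open(eml_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)

    # Domain the user actually sees in the From: header
    _, from_addr = email.utils.parseaddr(str(msg.get("From", "")))
    from_domain = from_addr.rsplit("@", 1)[-1].lower() if "@" in from_addr else ""

    # Authentication-Results is added by the receiving server (RFC 8601);
    # formats vary, so this is a heuristic string check, not a full parser.
    auth = " ".join(msg.get_all("Authentication-Results") or []).lower()

    spf_pass = "spf=pass" in auth
    dkim_pass = "dkim=pass" in auth
    # A passing DKIM signature for a *different* domain is a classic spoofing tell.
    dkim_matches_from = bool(from_domain) and f"header.d={from_domain}" in auth

    print(f"From domain:       {from_domain or 'unknown'}")
    print(f"SPF passed:        {spf_pass}")
    print(f"DKIM passed:       {dkim_pass}")
    print(f"DKIM matches From: {dkim_matches_from}")
    if not (spf_pass and dkim_pass and dkim_matches_from):
        print("WARNING: treat this message as untrusted.")

check_email_auth("suspicious_message.eml")  # hypothetical file name
```

A failing or mismatched result does not prove fraud, but a phishing email spoofing a brand’s domain will usually trip at least one of these checks.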

2. Deepfake technology in social engineering

Deepfake technology, a blend of “deep learning” and “fake”, refers to AI-generated videos or voice recordings that closely resemble real people. What once required Hollywood-level equipment is now accessible to anyone with a consumer-grade desktop.

Scammers employ deepfakes to impersonate well-known cryptocurrency influencers, entrepreneurs, and project leaders. Imagine receiving a personalised video from a trusted figure, such as the CEO of a DeFi project, inviting you to an “exclusive presale”, complete with authentic facial expressions, realistic voice tones, and a professional background!

These videos are then pushed via:

  • Telegram groups
  • Crypto-related Discord channels
  • Social media ads (especially Facebook and YouTube)
  • Even hacked verified Twitter/X accounts

One recent example involved fake YouTube live streams showing an AI-generated Elon Musk promoting a “limited-time Bitcoin giveaway”, which lured thousands of viewers before being taken down.

The danger lies in the illusion of authenticity. Video and voice are powerful carriers of trust, and when weaponised, they can convince even sceptical investors to act quickly.

WATCH OUT: If a “project leader” or influencer offers exclusive early access, always double-check the announcement on the official website or verified channels. Deepfakes are getting harder to spot, but genuine teams will never pressure you into acting impulsively.
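
One concrete habit defeats most giveaway deepfakes: never send funds to an address that appears only in a video or stream. The sketch below, with a hypothetical official URL and a placeholder address, simply checks whether the promoted address is also published on the project’s own domain:

```python
# Minimal sketch: before sending funds to an address promoted in a video or
# live stream, confirm the same address is published on the project's
# official domain. The URL and address below are hypothetical placeholders.
import requests

OFFICIAL_PAGE = "https://example-defi-project.org/official-addresses"  # hypothetical
PROMOTED_ADDRESS = "0x1234abcd..."  # address shown in the suspicious video

resp = requests.get(OFFICIAL_PAGE, timeout=10)
resp.raise_for_status()

if PROMOTED_ADDRESS.lower() in resp.text.lower():
    print("Address is listed on the official page (still verify the domain!).")
else:
    print("Address NOT found on the official page - assume it's a scam.")
```

If the address appears nowhere outside the video, the “giveaway” is almost certainly a scam.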

3. Automated spear phishing using AI

Spear phishing is among the most harmful types of online deception. Unlike generic phishing emails, spear phishing targets specific individuals using tailored information. With large language models and automated data-scraping tools, scammers can now extract publicly available data from platforms such as LinkedIn, Discord, Telegram, and even specialised Facebook groups.

AI uses personal data to generate tailored messages that feel genuine and familiar. For example:

“Hey Lisa, we noticed you’re active in the FB Crypto Community and we’re inviting a few experienced users to test a new DeFi beta platform with 38% APY staking rewards. Want in?”

The attacker didn’t just guess! Lisa actually posted in that group last week, and the message sounds like it’s coming from someone within the same space.

The illusion of familiarity is what makes spear phishing so successful. Victims are more likely to trust the sender, click links, and disclose personal information when a message appears to come from someone with similar interests or insider knowledge.
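
To see why this scales so easily, consider how little “personalisation” is actually involved. The toy sketch below uses fabricated profile data and a single template, with no scraping or AI required, yet it produces messages indistinguishable from the one Lisa received, by the thousand:

```python
# Toy illustration of why personalised spear phishing scales: one template,
# a few public profile fields, thousands of "personal" messages.
# The profile data here is fabricated for the example.
scraped_profiles = [
    {"name": "Lisa", "group": "FB Crypto Community", "hook": "38% APY staking"},
    {"name": "Omar", "group": "DeFi Builders Discord", "hook": "early validator slots"},
]

TEMPLATE = (
    "Hey {name}, we noticed you're active in the {group} and we're inviting "
    "a few experienced users to test a new DeFi beta platform with {hook}. Want in?"
)

for profile in scraped_profiles:
    print(TEMPLATE.format(**profile))
```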

Modern spear phishing campaigns powered by AI can:

  • Adapt tone and style based on the victim’s demographic.
  • Mention recent interactions or discussions.
  • Bypass common spam filters due to natural-sounding language.

WHY IT WORKS: When we see personalised content, our brain assumes trust and relevance. AI attackers exploit that instinct ruthlessly.

4. AI-crafted fake crypto projects

Generative AI has eliminated the need for a team of developers, designers, and marketers to create a fake cryptocurrency project. A single person with the right prompts can set up a complete token economy in hours.

Scammers are now using AI tools like ChatGPT, Midjourney, and low-code platforms to:

  • Write whitepapers full of believable jargon and fake roadmaps
  • Design sleek websites with AI-generated branding and token metrics
  • Create full FAQs, blog posts, and investor updates
  • Build chatbots that mimic support agents or community managers, often trained on real DeFi conversations

From the outside, the project appears legitimate, sometimes more polished than authentic ones. The UI is clean, the team bios sound impressive, and the Telegram group is full of “activity” (bots). However, there is no actual code, no audit, and no intention to build. Within days or weeks, the creators vanish in a classic rug pull, leaving victims holding worthless tokens.

One recent example involved a DeFi project that raised over $200,000 in less than 48 hours through a presale page. The whitepaper was entirely AI-generated. The smart contract had a hidden backdoor, and within days, the liquidity pool was drained.
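
One fast due-diligence step is checking whether the token’s contract source code has even been published. The sketch below queries Etherscan’s public getsourcecode endpoint (a free API key is required; the address shown is a placeholder, not a real token). An unverified contract is not proof of fraud on its own, but combined with the red flags listed below, it should end the conversation:

```python
# Minimal due-diligence sketch: is the token's contract source verified?
# Uses Etherscan's public "getsourcecode" endpoint; requires a free API key.
# The address below is a placeholder, not a real token.
import requests

ETHERSCAN_API = "https://api.etherscan.io/api"
API_KEY = "YourEtherscanApiKey"  # hypothetical; get a free key at etherscan.io
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

resp = requests.get(
    ETHERSCAN_API,
    params={
        "module": "contract",
        "action": "getsourcecode",
        "address": TOKEN_ADDRESS,
        "apikey": API_KEY,
    },
    timeout=10,
)
resp.raise_for_status()
result = resp.json()["result"][0]

if result.get("SourceCode"):
    print(f"Verified contract: {result.get('ContractName', 'unknown')}")
else:
    print("Source code NOT verified - a major red flag for a 'live' project.")
```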

PRO TIP: Watch out for:

  • No visible or verifiable team
  • AI-generated faces in team bios (images seem too perfect)
  • Whitepapers full of buzzwords but no technical detail
  • Instant tokenomics with no lockups or vesting
  • “Flash launches” with countdown hype and little documentation

5. Fake Customer Support Bots

Stuck with a transaction? Waiting for a withdrawal? Dealing with a network fee issue? This is exactly when scammers strike.

Scammers are now using AI-powered customer support bots, particularly on platforms like Telegram, Discord, and Twitter/X, to impersonate the support teams of well-known exchanges and wallets. These bots:

  • Reply to user complaints or questions in real time
  • Use official-looking names and profile images
  • Include fake ticket numbers, logos, and custom greetings
  • Provide links to malicious “verification portals” or cloned wallet interfaces

The scam works like this: you post a support question in a Telegram group or tag an exchange on social media. Within seconds, you receive a DM that sounds extremely helpful:

“Hi! I’m Peter from [Crypto Exchange] Support. We saw your post about the failed transaction. Please verify your wallet using our secure portal so we can resolve this immediately.”

Everything looks legit… until you realize you’ve just handed over access to your wallet.
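
Before entering credentials anywhere a “support agent” sends you, it is worth checking the link’s hostname programmatically rather than by eye. The minimal sketch below, assuming a small allowlist you maintain yourself from bookmarks, flags hostnames that are neither an official domain nor a subdomain of one, plus punycode hostnames used in homoglyph attacks. The scam URLs in the example are fabricated:

```python
# Minimal sketch: check a "support" link against an allowlist of official
# domains and flag common lookalike tricks before you click.
from urllib.parse import urlparse

# Hypothetical allowlist - maintain your own from bookmarks, never from DMs.
OFFICIAL_DOMAINS = {"binance.com", "metamask.io", "opensea.io"}

def check_support_link(url: str) -> None:
    host = (urlparse(url).hostname or "").lower()
    exact = host in OFFICIAL_DOMAINS
    subdomain = any(host.endswith("." + d) for d in OFFICIAL_DOMAINS)
    punycode = "xn--" in host  # encoded Unicode: classic homoglyph disguise

    if punycode:
        print(f"{url}: punycode hostname ({host}) - likely a homoglyph scam")
    elif exact or subdomain:
        print(f"{url}: hostname matches an official domain")
    else:
        print(f"{url}: {host} is NOT an official domain - do not log in here")

check_support_link("https://metamask.io/download")
check_support_link("https://metamask-verify-portal.app/secure")  # fabricated scam URL
check_support_link("https://xn--metamsk-hw6c.io/login")          # fabricated homoglyph
```

Note that lookalikes such as metamask.io.evil.com are also flagged, because the registered domain is evil.com, not metamask.io.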

These scams spike during:

  • High traffic times (network congestion)
  • Scheduled upgrades or migrations
  • Breaking news or exchange outages

PRO TIP: Real support teams will NEVER DM you first.

  • Official communication happens via email or in-app notifications
  • You’ll be redirected to verified domains only
  • You will never be asked for your seed phrase or private key
  • No real support agent will “verify” your wallet via link or QR code

6. AI-Enhanced Social Media Influence Campaigns

Not every cryptocurrency “influencer” you see online is a real person. In reality, many profiles on X (formerly Twitter), TikTok, and Instagram are now entirely AI-generated:

  • Their profile pictures? Created with tools like ThisPersonDoesNotExist
  • Their posts and captions? Written by language models trained on crypto jargon
  • Their followers and likes? Bot farms programmed to simulate real engagement

These fake personas are used to build false credibility and push:

  • Fake airdrops with “limited-time claim” links
  • Presales for tokens that don’t exist
  • Staking programs offering impossibly high APY

Once enough fake engagement accumulates (likes, retweets, comments, and even fictitious “testimonials”), real people begin to believe the account is legitimate. To make matters worse, these fake influencers often follow you first, engage with your posts, or DM you to establish trust. Others appear in the comment sections of well-known cryptocurrency accounts, blending further into the community.

This is AI-powered social engineering at scale, using deception, automation, and psychology to draw in victims.

WHY IT WORKS: Humans are wired to trust popularity. When a post has thousands of likes and a verified checkmark, our guard drops, even if it’s all fake.
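
If you want a more systematic gut-check than eyeballing like counts, a few coarse signals can be scripted. The toy sketch below uses fabricated profile numbers and illustrative thresholds (they are not calibrated, and determined bot operators will evade them), but it shows the kind of mismatch, huge engagement on a brand-new account, that should trigger scepticism:

```python
# Toy heuristic sketch: a few signals that an "influencer" account may be
# synthetic. Thresholds are illustrative, not calibrated; the profile data
# here is fabricated and would normally come from a platform API.
from dataclasses import dataclass

@dataclass
class Profile:
    account_age_days: int
    followers: int
    following: int
    avg_likes_per_post: float

def bot_likelihood_flags(p: Profile) -> list[str]:
    flags = []
    if p.account_age_days < 90:
        flags.append("very new account")
    if p.following > 0 and p.followers / p.following > 50 and p.account_age_days < 180:
        flags.append("suspiciously fast follower growth")
    if p.followers > 0 and p.avg_likes_per_post / p.followers > 0.5:
        flags.append("engagement rate far above organic norms")
    return flags

suspect = Profile(account_age_days=30, followers=80_000, following=120,
                  avg_likes_per_post=55_000)
print(bot_likelihood_flags(suspect))  # all three flags fire for this profile
```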

PRO TIP: Never rely on social media for investment advice.

  • Verify airdrops on the official project website or CoinMarketCap listings
  • Cross-check influencer credentials (LinkedIn, GitHub, YouTube history)
  • Be sceptical of anonymous accounts hyping a presale or giveaway
  • Do not trust links in bios, comments, or DMs; go to the source directly

7. Exploiting human bias with AI messaging

One of the most dangerous capabilities of generative AI is not what it creates, but how effectively it manipulates.

Rather than simply producing content, AI systems are increasingly trained to exploit human psychology through subtle linguistic cues. These models can generate millions of message variations, test them on different audiences, and iterate on the best-performing versions.

Scammers use this to target common cognitive biases that lead to impulsive decision-making. Here are a few of the most exploited:

  • FOMO (Fear of Missing Out): Creates urgency and fear of being excluded from a potential profit.
  • Urgency & Time Pressure: “Presale closes in 15 minutes—act now.”
  • Authority Bias: “As endorsed by Vitalik Buterin.” (Fake quote or AI-generated image)
  • Social Proof: “Join 12,000+ investors already staking with us!”

These messages often appear in:

  • Airdrop announcements
  • Presale DMs or emails
  • Social media ads
  • Telegram group pins

Even cautious users can fall for them, because these messages are engineered to bypass logic and trigger emotion.
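
Because these lures rely on a small, stable set of pressure tactics, even a simple keyword tripwire catches many of them. The sketch below scores a message against the four bias categories above; the phrase lists are illustrative and easy to evade, so treat a high score as a reason to slow down, not a low score as a clean bill of health:

```python
# Minimal sketch of a "pressure score" for incoming crypto messages: counts
# manipulation cues (urgency, FOMO, authority, social proof, unrealistic
# yield). Keyword lists are illustrative - this is a tripwire, not a classifier.
import re

CUES = {
    "urgency": ["act now", "closes in", "last chance", "limited time", "only today"],
    "fomo": ["don't miss", "exclusive", "early access", "whitelist spots"],
    "authority": ["endorsed by", "partnered with", "as seen on"],
    "social proof": ["join 10,000", "investors already", "everyone is"],
}

def pressure_score(message: str) -> tuple[int, list[str]]:
    text = message.lower()
    hits = [f"{bias}: '{kw}'" for bias, kws in CUES.items() for kw in kws if kw in text]
    if re.search(r"\d{2,}\s*%\s*(apy|returns?|profit)", text):
        hits.append("yield: unrealistic return claim")
    return len(hits), hits

msg = "Exclusive presale closes in 15 minutes - act now for 38% APY returns!"
score, reasons = pressure_score(msg)
print(score, reasons)  # four manipulation cues in a single sentence
```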

BEHAVIORAL TIP: If a message makes you feel rushed, thrilled, or worried, simply stop! Close the tab, take a deep breath, and return to it later with a critical mindset. Scammers succeed by getting you to act emotionally rather than rationally.

8. AI-driven romance & trust-based scams

Not every fraud is fast and aggressive. Some of the most dangerous scams are slow, personal, and emotionally manipulative, relying on trust rather than panic.

With the rise of AI-generated content, scammers are increasingly building entire fake personas designed to target dating apps, social platforms, and even professional networks like LinkedIn. These profiles often include:

  • AI-generated selfies (produced via tools like Midjourney or FaceFusion)
  • Carefully scripted bios designed to appeal to target demographics
  • Pre-written conversation trees and emotional responses powered by language models

The scam unfolds over time. It starts with friendly conversation, slowly escalates into emotional intimacy, and eventually transitions to crypto-related asks:

  • “My crypto wallet is frozen and I need a small loan.”
  • “I want to teach you how to earn passive income through staking.”
  • “Can you help me move some funds to avoid sanctions in my country?”

Some victims are groomed for weeks or months before the first financial request is made. By then, they believe they are helping a real person, possibly someone they are in love with.

WHY IT WORKS: These scams exploit loneliness, empathy, and the human desire for connection. AI makes them scalable, believable, and persistent.

And it’s not just dating apps. These scams now appear on:

  • Instagram DMs
  • Telegram chats
  • LinkedIn connection requests
  • Facebook groups
  • Gaming platforms

PRO TIP: If a new online relationship involves crypto talk, investment tips, or “urgent help” with funds, it’s very likely a scam, no matter how real it feels.

Conclusion: AI is the new face of online deception

As this post has demonstrated, generative AI has become a powerful tool in the hands of criminals, not for breaking code, but for breaking trust. Attackers use a variety of methods to lure, mislead, and deceive victims across the digital landscape, including AI-crafted phishing messages, deepfake endorsements, fake customer support bots, and artificial influencers.

What makes this threat particularly dangerous is its scale and believability. A scam message that once would have been easy to spot is now polished, personalised, and backed by models trained to exploit human psychology. The crypto community, already familiar with high-risk environments, now faces a new layer of deception that’s harder to detect and more convincing than ever.

If you’re unsure whether something is a scam or need clarification, you can contact VALEGA Chain Analytics using the forms on the official website. Our team will review the situation and confirm whether it is a legitimate case or a potential fraud. If you’ve already fallen victim to a cryptocurrency scam, please visit the Scam Victim? page, where you will find instructions on what to do next and how we may be able to help.