Customer service scams: Safeguarding yourself in the era of AI fraud

Artificial intelligence has quickly become a trusted companion in daily life. Whether it's asking a chatbot for help with a recipe at home, using a voice assistant to manage a busy schedule, or using a tool to aggregate and analyze data, AI is everywhere in 2025. However, that reliance shouldn't be total.

As this technology has grown in prevalence, so have AI-related scams. Take the recent case of a cruise hotline scam covered by the Washington Post. A traveler searching for customer service for his upcoming cruise came across a support number surfaced by AI that he believed was legitimate. After calling it, he found himself facing a fraudulent $768 charge on his account.

Cases like this are becoming all too common, and they illustrate a sobering reality: Even tech-savvy users can be vulnerable when a scammer wields a technology as powerful as AI. Lifeguard gathered data from leading sources, including the Federal Trade Commission and Wells Fargo, to help you stay protected in the rising era of AI fraud.

The rise of AI-driven customer service scams

Scammers exploiting human trust is nothing new, but artificial intelligence has handed them a turbocharged new set of tools. Traditional phishing emails and spoofed websites, still staples of the scamming trade, have now been joined by schemes in which AI itself assists the fraudsters.

The Federal Trade Commission reported that consumers lost more than $12.5 billion to fraud in 2024, an increase over the previous year that demonstrates how prevalent this issue already is. As if that weren't bad enough, the Identity Theft Resource Center noted that AI is one of three prominent trends expected in the coming years, as the technology makes it easier for thieves to coerce unsuspecting people into giving away identity credentials.

Customer service fraud has always been an appealing market for fraudsters, likely because it targets consumers who are already in some kind of distress. When emotions come into play, people are far more likely to make risky decisions. Whether someone is trying to cancel an itinerary, fix a billing issue, or change a flight, feeling flustered or angry can push them to resolve the issue too quickly and without thinking critically.

With AI now integrated into search results and customer assistance, scammers have a dangerous new opening. Fraudsters don't need to trick search engines directly; they can simply plant fake data on random sites that AI systems then treat as authoritative. The barrier between scammer and victim is poised to become even thinner as a result, with AI acting as both the middleman and the problem all in one.

How AI tools get manipulated to spread fraud

At the end of the day, AI is a tool like any other. While the vast majority of people don't use it for nefarious purposes, retooling it for scams isn't especially difficult. There are four main ways scammers are manipulating AI tools to assist with their fraud.

  1. Seeding fake numbers to appear on search engines: One method scammers use to leverage AI is publishing fake customer support numbers. These are often posted to obscure blogs, forums, or even some legitimate websites. The numbers then sit there, untouched, until an AI system scrapes them and serves them up in response to a user's query.
  2. Exploiting AI summarization tools: Unlike search engines, which return a range of links in response to a search, AI often gives a single, direct answer in a summary. Since users no longer feel the need to click through various sources, there is less fact-checking and blind trust increases. This can lead to more users acting on planted information.
  3. Setting up chatbots with a scam agenda: In certain situations, a scammer may create their own AI-powered chat interface that masquerades as an official support tool. This requires a certain level of technical expertise, though, and is a less common tactic as a result.
  4. Creating voice impersonations: Deepfake audio adds another dimension to the problem, as AI-generated voice calls have become quite convincing. An AI bot imitating a customer service rep, or even a loved one, can make phone-based scams harder than ever to detect.

By blending real data with fabricated information, scammers can exploit AI — a technology that you trust. Unlike other cybersecurity attacks, the scam doesn’t work because you were careless, but rather because a powerful technology was manipulated to feed you malicious information.

AI will do exactly what it's told, meaning that if a scammer convinces it that it's completing its task, it will comply. Nor has it necessarily been trained to recognize that data on a website could be fraudulent, especially when that data has been there for a long time and has recently received traffic. This technology is still in its infancy.

Red flags and common scam tactics

Educating yourself on the common signs of an AI-assisted scam can help you protect yourself and stop the scam before it takes hold. Luckily, the red flags are, for the most part, the same ones found in older cybersecurity threats. Some of the most common red flags to keep an eye out for include:

  1. A person pushing you to act immediately, whether that means paying right then and there, confirming something quickly, or being told you will lose access to something instantly.
  2. A request for unusual payment methods, such as gift cards, wire transfers, or cryptocurrency.
  3. Logos or brand colors that look slightly off, as scammers often mimic real branding in an attempt to earn your trust.
  4. Inconsistent contact information, such as a number surfaced by AI that doesn't match the one on the company's website.
  5. A chatbot that seems overly eager to accept your payment information, especially before your issue has actually been resolved.

The danger of AI tools and the summaries they produce is that they remove a second layer of security: the very research you do when vetting a source.

How to protect yourself: Verification best practices

Despite all of the publicity around AI scams, there is good news. Just a few proactive habits can help keep you better protected from scams and reduce your risk of falling victim to an AI-powered cyberattack. Consider some of the following general tips:

  • Always verify information through official sources, such as going to a company’s official website.
  • Only use the contact information listed on the company's official website, as this is information you can verify and use to ensure you are reaching real representatives.
  • Be skeptical of any payment requests that are sent your way, as customer service agents very rarely ask for payment data upfront.
  • If a conversation was initiated away from a company’s verified channels, propose picking it back up through a company-approved medium.
  • If you are unsure whether a call you received actually came from customer service, hang up and call the customer service number listed on the company's website.
  • Leverage built-in security tools through financial institutions.
  • Report any scams you encounter to the FTC, and notify the company being impersonated so it can pursue the issue.

The key to keeping yourself protected from a cyberattack is a mindset of “verification first.” When AI is involved, a practical rule of thumb is to verify, then trust.

What’s next? Platform and regulatory responses

The surge in AI-related scams has forced both technology platforms and regulators to respond. Just last year, the FTC announced a crackdown on operations that use AI hype or sell AI technology that can be used in deceptive or unfair ways. Enforcement is still evolving, though, and new proposals meant to hold platforms accountable when their AI products propagate fraud are likely on the way.

From a company standpoint, search engines have pledged to improve how their AI systems vet results in an effort to reduce the number of scams that surface. Google is even using AI itself to combat AI-fueled scams in search, claiming that its efforts have cut certain scam results by 80%.

Beyond federal regulations and company oversight, consumer advocacy is also poised to help address the issue. Organizations like the Identity Theft Resource Center are calling for more consumer education around AI fraud risks, arguing that awareness campaigns need to evolve just as quickly as the technology itself. Knowing how to protect yourself from an AI-driven attack, for instance, requires up-to-date information.

Systemic safeguards and individual vigilance will both be necessary in the fight against AI-powered cyberattacks. While platforms can do their best to filter out malicious data, you, the consumer, remain your own first line of defense. Maintain a healthy dose of skepticism to keep your data private and secure.

Outsmarting the smart scammers

AI is an amazing tool that's given us convenience, speed, and brand-new ways to connect with the information around us. However, it has also handed scammers a new strategy for defrauding everyday people. The $768 cruise scam may sound like a one-off, but it reflects a larger problem that will only continue to grow. As fraudsters keep exploiting AI's confident tone and our own trust in technology, it's important to stay vigilant.

The best defense isn't to avoid AI, but to use it wisely. Always cross-check, verify, and pause before acting on sensitive information. Whether you are booking a cruise, troubleshooting a bank issue, or resolving a last-minute travel emergency, there's no reason to rush. Taking a moment instead of making a hasty decision can save you both time and money by keeping you out of a scam.

AI is a smart tool, but scammers are cunning in the way they manipulate it. Keep yourself ahead of the problem by remembering that nothing is better than taking an extra few minutes to verify a source.

This story was produced by Lifeguard and reviewed and distributed by Stacker.

 
