The concerns and warnings started last year. They probably didn't come soon enough.

New forms of artificial intelligence, collectively known as generative AI, have made it relatively easy for fraudsters to imitate almost anyone or anything. That includes many aspects of commercial real estate.

The Federal Trade Commission flagged the problem last year. "Thanks to AI tools that create 'synthetic media' or otherwise generate content, a growing percentage of what we're looking at is not authentic, and it's getting more difficult to tell the difference," the agency wrote. "And just as these AI tools are becoming more advanced, they're also becoming easier to access and use. Some of these tools may have beneficial uses, but scammers can also use them to cause widespread harm."

The FTC says there is already evidence that criminals and con artists have put the technology to use. "They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks," the agency wrote. "They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that's very much a non-exhaustive list."

All well and good in theory. As for what it means in practice, the National Association of Realtors has compiled a list of possible problems, including the following:

  • Creation of false documents.
  • Use of deepfake media to lend an air of reality to a non-existent property, or to one in remarkably better shape than it actually is.
  • Fake reviews, media appearances, testimonials, and the like.
  • Impersonation of experts in online videos and testimonials, even reshaping existing footage so people appear to say and do things they never did.
  • Entanglement of CRE professionals in online meetings with imaginary people who pose as parties to a transaction.
  • "One of the most significant risks of AI in real estate has to do with wire fraud," they wrote. "The use of deepfakes has made wire fraud even more challenging to detect. Case in point: In 2019, a cybercriminal used AI voice technology to trick the CEO of a U.K.–based energy firm to transfer $243,000 to a secret account(link is external).

Some steps you can take: make careful use of deepfake detection software (the tools are still in their early days and may return false results), independently verify the identity and contact information of everyone in a transaction, watermark all documents, and keep following developments.
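For the watermarking step, the sketch below shows one way a visible stamp could be applied to every page of an outgoing PDF. It assumes the open-source pypdf library and a pre-made single-page overlay file; the file names are hypothetical examples, not tools endorsed by this article or by NAR.

    # A minimal sketch, assuming the pypdf library is installed and a
    # pre-made single-page overlay (watermark.pdf) exists. All file
    # names here are hypothetical.
    from pypdf import PdfReader, PdfWriter

    def watermark_document(src_path, overlay_path, out_path):
        # Stamp every page of the source PDF with the overlay page.
        stamp = PdfReader(overlay_path).pages[0]
        writer = PdfWriter()
        for page in PdfReader(src_path).pages:
            page.merge_page(stamp)  # draw the overlay on top of the page
            writer.add_page(page)
        with open(out_path, "wb") as f:
            writer.write(f)

    watermark_document("lease_agreement.pdf", "watermark.pdf",
                       "lease_agreement_marked.pdf")

A visible stamp of this kind will not stop a determined forger, but it gives recipients a quick way to tell an authentic copy from a document assembled elsewhere.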
