ChatGPT Shouldn’t Be Your First Choice for CRE Documents

As impressive as it is, the program can’t reliably give you what you need in agreements.

Lawyers: who needs them? In commercial real estate, everyone. Who likes paying them? The obvious answer: are you kidding?

Tony Alfonso, a partner in Reed Smith’s real estate group, understands the sentiment. Lawyers also have a habit of shooting down ideas that, legally, won’t work, even though businesspeople may consider those thoughts blinding flashes of brilliance. “The age-old question in the real estate-legal issue is everybody wants to figure out how to get rid of lawyers or get around them,” Alfonso tells GlobeSt.com with a laugh, though he is also serious.

Given those two factors, many people, in and out of CRE, look for workarounds, such as downloading contracts and other legal documents from the Internet. That’s a dangerous choice because a single template can’t possibly cover the many variations in how state laws treat a subject.

If blank contracts are dangerous to download, Alfonso, who also describes himself as a co-founder of a legal technology startup called Seedjura, says that ChatGPT, the artificial intelligence program that has received a lot of press for the text it can generate, is an even more dangerous substitute.

“I think psychologically, using it seems like a direct answer,” Alfonso says. “But it’s giving you less choice than Google. It seems that it makes sense even though you have no idea where this information is coming from.”

ChatGPT has received criticism about the quality of information in the generated answers to prompts, such as making simple arithmetic errors or providing a citation for a source that is completely fabricated. “ChatGPT sometimes writes plausible sounding but incorrect or nonsensical answers,” writes OpenAI, the company behind the system. “Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

In other words, not only may you fail to recognize whether a response is correct, but there may be no clear path to verifying it.

“It’s like a beta version,” Alfonso says. “I don’t think they’re trying to have people rely on it. They want to show that it works, and that it converses in a way that’s more advanced than a chatbot.” Eventually the technology will improve, and newer systems will enter the scene.

Which brings the conversation back to the subject of spending money on actual attorneys.

“Most of what you pay a lawyer for is to stand behind the document,” adds Alfonso.