Another Problem With Generative AI: Criminal Hacking

A variety of big security issues are nowhere close to being solved.

There have been reasons to be wary of using generative AI, such as ChatGPT or the offerings from Google or Microsoft, in commercial real estate. Not that it’s automatically off-limits for reasonable and prudent professionals in the industry, but it comes with sneaky challenges.

For example, it can be dangerous to use in creating CRE legal documents, and it can stumble into the so-called hallucination problem, as the Associated Press reported, in which the software at times makes things up because it doesn’t think; it just looks for connections among words without any concept of what they mean together. As Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, told AP, the problem might not be fixable. “It’s inherent in the mismatch between the technology and the proposed use cases,” she said.

Now there’s another area of concern: cybersecurity. People have found ways to break into almost any type of software that connects to or draws on the Internet, and AI chatbots are no exception. Recently, at DefCon, the annual hacker conference, a lot of attention was focused on AI and security issues, as Fortune reported.

Findings won’t be public until next February, but 2,200 competitors were all trying to find problems in the eight chatbots with the largest market share.

“It’s tempting to pretend we can sprinkle some magic security dust on these systems after they are built, patch them into submission, or bolt special security apparatus on the side,” the story quoted cybersecurity expert Gary McGraw, a co-founder of the Berryville Institute of Machine Learning, as saying.

But the overall answer was that giving in to that temptation would be a mistake. Other experts said that the current state of AI security is like computer security in the 1990s: young, undeveloped, and prone to easy exploits.

“Tom Bonner of the AI security firm HiddenLayer, a speaker at this year’s DefCon, tricked a Google system into labeling a piece of malware harmless merely by inserting a line that said, ‘this is safe to use,’” the story said.

“With rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development,” said a 2021 report from the National Security Commission on Artificial Intelligence.

There is no quick fix, and things will only become more complicated over time. And because much of this AI is centralized rather than run as a closed system within the confines of a single company, there is a good chance that problems will pop up and propagate across many businesses.