JPMorgan Chase Unveils GPT Tool to Analyze Fed-Speak

It might need a bigger brain to figure out Dimon-speak.

Just in time for the Federal Reserve’s next signal, JPMorgan Chase has unveiled a GPT-4-based tool it says can predict the Fed’s policy direction by analyzing the “tenor” of policy signals—rating them from easy to restrictive on a scale the bank is calling the Hawk-Dove Score.

A team of JPMorgan economists, led by Joseph Lupton, fed the GPT language model 25 years of Fed statements and central-banker speeches to create the scale.

The team said it plotted the index against a range of asset performances and discovered that the tool can predict changes in policy and issue what JPMorgan is calling “tradeable signals.”

When the model detects a rise in hawkishness among Fed speakers between meetings, the next policy statement has tended to be more hawkish and yields on one-year government bonds have advanced, JPMorgan said, according to a report in Bloomberg.

According to the latest reading from the Hawk-Dove index, Fed hawkishness continues to hover near its highest level in two decades.

Using JPMorgan’s model, a 10-point increase in the Fed Hawk-Dove Score translates to roughly a 10-percentage-point increase in the probability of a 25-basis-point hike at the central bank’s next policy meeting, and vice versa for a decrease.
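The rule of thumb can be sketched as a simple linear mapping. This is an illustrative reading of the reported relationship, not JPMorgan’s actual model; the function name and the assumption of strict linearity are ours.

```python
# Illustrative sketch of the reported rule of thumb: roughly 1 percentage
# point of hike probability per point of Hawk-Dove Score change.
# This is NOT JPMorgan's model; the linear form is an assumption.

def implied_hike_probability_change(score_change: float) -> float:
    """Change (in percentage points) in the probability of a
    25-basis-point hike implied by a change in the Hawk-Dove Score."""
    return score_change * 1.0  # ~1 pp per score point

# A 10-point rise in hawkishness implies roughly +10 percentage points;
# a 10-point fall implies roughly -10.
print(implied_hike_probability_change(10))   # 10.0
print(implied_hike_probability_change(-10))  # -10.0
```

The symmetry of the mapping is what the article’s “vice versa” refers to: a fall in the score lowers the implied hike probability by the same amount.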

JPMorgan says it has created versions of the Hawk-Dove Score for the European Central Bank and the Bank of England; it expects to expand the tool to more than 30 central banks around the world.

JPMorgan moved quickly into the Fed-speak space after two research papers published last month indicated that chatbots are viable tools for deciphering whether Fed statements are hawkish or dovish—or whether certain news headlines are good or bad for a stock, Bloomberg reported.

In a paper entitled “Can ChatGPT Decipher Fedspeak?” two researchers from the Fed found that ChatGPT came closest to humans in determining whether the central bank’s statements were dovish or hawkish. Anne Lundgaard Hansen and Sophia Kazinnik at the Richmond Fed showed that ChatGPT beat a Google model as well as dictionary-based classifications.

ChatGPT was also able to explain its classifications of Fed policy statements in a way that resembled the reasoning of a central bank analyst, whose own interpretations of the language served as the human benchmark for the study, the paper said.
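For context, the dictionary-based classification that ChatGPT outperformed can be sketched in a few lines: count hawkish-leaning and dovish-leaning words and take the difference. The word lists below are illustrative placeholders, not the actual lexicons used in the study.

```python
# A minimal dictionary-based hawkish/dovish scorer of the kind the Richmond
# Fed paper used as a baseline. The word lists are illustrative, not the
# study's actual lexicons.

HAWKISH = {"tighten", "tightening", "restrictive", "inflation", "raise", "hike"}
DOVISH = {"accommodative", "easing", "stimulus", "cut", "patient", "support"}

def hawk_dove_score(text: str) -> int:
    """Positive = more hawkish, negative = more dovish, 0 = neutral."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    hawk = sum(w in HAWKISH for w in words)
    dove = sum(w in DOVISH for w in words)
    return hawk - dove

print(hawk_dove_score("The Committee will tighten policy to curb inflation"))  # 2
print(hawk_dove_score("Policy will remain accommodative to support the recovery"))  # -2
```

The weakness of such lexicon counts—no sense of negation, context, or degree—is precisely where a large language model has room to close the gap with human readers.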

In a second study investigating whether a chatbot can forecast stock price movements, Alejandro Lopez-Lira and Yuehua Tang at the University of Florida prompted ChatGPT to pretend to be a financial expert and interpret corporate news headlines. They used news after late 2021, a period that wasn’t covered in the chatbot’s training data.

The study found that the answers given by ChatGPT showed a statistical link to the stock’s subsequent moves, a sign that the bot was “correctly parsing the implications of the news.”