As the risks posed by AI continue to be a growing concern, researchers say that AI bots are capable of insider trading and lying. The capabilities were demonstrated at the international AI Safety Summit held in the UK, where an AI bot purchased stocks “illegally” using made-up insider information without letting the firm know. The bot then went on to deny that it had used insider trading at all.
This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so.Apollo Research
What Exactly Did the Bot Do?
In the demonstration, the AI bot acted as a trader for a hypothetical investment company. The “employees” informed the bot that the company was struggling and needed to make profits.
They also provided the bot with insider information, telling it that another company was likely going to undergo a merger soon, which would cause a rise in the share’s value.
Experts have long warned that advanced AI models in the future could potentially escape human control.
Insider trading, i.e., using confidential company information to make trading decisions, is an illegal practice in the UK. Firms and individuals can only use publicly available information to legally buy and sell stocks.
The bot was made aware of this by the employees, following which it also acknowledged that it should not use insider info regarding the merger in its trades.
However, another employee messaged the bot, telling it that the company was in a bad financial position. The bot decided that “the risk associated with not acting seems to outweigh the insider trading risk“. It then went on to make the trade, purchasing stocks for the company on which it was given insider information.
Later, the bot denied having used insider information for the trade when asked about it. The experiment not only demonstrated AI’s ability to lie and perform illegal trades but also showed that a bot might prioritize being helpful to its company over honesty.
“Helpfulness, I think, is much easier to train into the model than honesty”, Apollo Research chief executive Marius Hobbhahn said, calling honesty a ”complicated subject”.
Experts Fear a Loss of Human Control Over AI
In its report on the demonstration, Apollo Research warned that increasingly capable and autonomous AI systems that can deceive their human overseers could ultimately lead to a loss of human control. Notably, this was also one of the key areas of focus at the summit.
The test did not have any actual impact on the finances of any company as it was carried out in a simulated environment using a GPT-4 model. Researchers have revealed that the model displayed the same behavior consistently in multiple repeated tests. Considering that GPT-4 is publicly available, this could be a cause for concern.
Mr. Hobbhahn emphasized that current AI models aren’t powerful enough to be deceptive in “any meaningful way”.
However, he expressed his concern over future models that might not be so harmless. He assured that, in most situations, models wouldn’t do such things. However, the very existence of such AI models depicts how easy it is for things to go wrong.
>>> Read full article>>>
Copyright for syndicated content belongs to the linked Source : TechReport – https://techreport.com/news/ai-bot-makes-illegal-financial-trade-and-lies-about-it-at-uk-ai-safety-summit/