PC Gamer

AI chatbots trained to jailbreak other chatbots, as the AI war slowly but surely begins

While AI ethics remains the hot-button issue of the moment, and companies and world governments continue to wrangle with the moral implications of a technology we often struggle to define, let alone control, here comes some slightly disheartening news: AI chatbots are already being trained to jailbreak other chatbots, and they seem remarkably good at it.

Researchers from Nanyang Technological University in Singapore have managed to compromise several popular chatbots (via Tom's Hardware), including ChatGPT, Google Bard, and Microsoft Bing Chat, all by using another LLM (large language model). Once compromised, the jailbroken bots can then be used to "reply under a persona of being devoid of moral restraints." Crikey.

