PC Gamer

A Chatbot from the 1960s has thoroughly beaten OpenAI’s GPT-3.5 in a Turing test, because people thought it was just ‘too bad’ to be an actual AI

A small research group recently examined the performance of 25 AI "people," built on two large language models created by OpenAI, in an online Turing test. None of the AI bots ultimately passed the test, but all of the GPT-3.5 ones did so badly that a chatbot from the mid-1960s was nearly twice as successful at passing itself off as a human—although that's mostly because people thought the older chatbot was too bad at pretending to be human to be AI.

News of the work was reported by Ars Technica and it’s a fascinating story. The Turing test itself was first devised by famed mathematician and computer scientist Alan Turing in the 1950s. The original version of the test involves having a real person, called an evaluator, talk to two other participants via a text-based discussion. The evaluator knows that one of the respondents is a computer but doesn’t know which one.
