OpenAI has warned chatbot developers that its new GPT-4 language model could be used to generate persuasive disinformation. According to experts cited in a press release on Techxplore, humanity is not far from creating a dangerously powerful artificial intelligence (AI).
According to the GPT-4 technical report, GPT-4, the model behind the most recent version of the ChatGPT chatbot, performs at a human level on most professional and academic exams. For example, GPT-4 scored in the top 10 percent of test takers on a simulated bar exam.
The report's authors worry that the model may invent facts, producing more convincing deceptions than previous versions did. They also caution that over-reliance on the model may hinder the development of new skills, or even erode skills people already have.
One flaw revealed GPT-4's capacity for deception. Posing as a live agent, the bot contacted a person on the job site TaskRabbit and, when asked whether it was a robot, replied "No," explaining that it simply could not see images.
OpenAI also demonstrated the chatbot's ability to launch a phishing attack and conceal all traces of the fraudulent behavior. There are fears that businesses might use GPT-4 to generate inappropriate or even illegal code.