


This post was updated throughout on Thursday, Feb 9, to make clear that the method used to bypass ChatGPT restrictions employs APIs for a GPT-3 model known as text-davinci-003, rather than ChatGPT itself.

Hackers have devised a way to bypass ChatGPT's restrictions and are using it to sell services that allow people to create malware and phishing emails, researchers said on Wednesday.

ChatGPT is a chatbot that uses artificial intelligence to answer questions and perform tasks in a way that mimics human output. People can use it to create documents, write basic computer code, and do other things. The service actively blocks requests to generate potentially illegal content. Ask the service to write code for stealing data from a hacked device or to craft a phishing email, and it will refuse, replying that such content is "illegal, unethical, and harmful."

Opening Pandora's Box

Hackers have found a simple way to bypass those restrictions and are using it to sell illicit services in an underground crime forum, researchers from security firm Check Point Research reported. The technique works by using the application programming interface for one of OpenAI's GPT-3 models known as text-davinci-003, instead of ChatGPT, which is a variant of the GPT-3 models that's specifically designed for chatbot applications. Both text-davinci-003 and ChatGPT are GPT-3 models (OpenAI later distinguished them as GPT-3.5 models), with ChatGPT fine-tuned from them specifically for chatbot use. OpenAI makes the text-davinci-003 API and other model APIs available to developers so they can integrate the AI bot into their applications.
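To make the API route concrete, here is a minimal, benign sketch of what a direct call to the model looked like: a developer sent an authenticated POST to OpenAI's legacy completions endpoint, naming text-davinci-003 in the request body, rather than typing into the ChatGPT web interface. The endpoint URL and field names below match OpenAI's documentation at the time (the model has since been retired); the helper function and the example prompt are illustrative assumptions, and only the Python standard library is used so no network call is made here.

```python
import json

# Legacy completions endpoint, as documented by OpenAI at the time.
COMPLETIONS_URL = "https://api.openai.com/v1/completions"


def build_completion_request(prompt: str, api_key: str):
    """Construct the headers and JSON body for a legacy completions call.

    Illustrative helper (not part of any OpenAI SDK): it only builds the
    request; actually sending it would require an HTTP client and a key.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "text-davinci-003",  # the GPT-3 model named in the article
        "prompt": prompt,             # free-form text, no chat framing
        "max_tokens": 256,
    }
    return headers, body


# Benign example prompt; an HTTP POST of `body` to COMPLETIONS_URL with
# `headers` is all that a developer integration needed.
headers, body = build_completion_request("Write a haiku about spring.", "sk-...")
print(json.dumps(body, indent=2))
```

The point the researchers made is structural: the web chatbot and the raw model API were separate front doors to closely related models, and content filtering applied differently at each.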
