- ChatGPT users say that the bot is becoming more difficult to work with.
- Now, OpenAI is looking into reports that the bot is “lazier.”
- ChatGPT has racked up 1.7 billion users in the past year.
ChatGPT users are complaining that the AI bot has started telling them to do their own work, as though it were their boss, prompting OpenAI to investigate.
The company announced on Thursday that it was looking into reports that ChatGPT had begun pushing back on user requests — suggesting that users finish tasks themselves or declining to complete them outright — while stopping short of returning an out-of-office email from Cabo.
OpenAI, via the ChatGPT account on X, said it was soliciting feedback on the model “getting lazier.”
“We haven’t updated the model since Nov 11th, and this certainly isn’t intentional,” the company wrote. “model behavior can be unpredictable, and we’re looking into fixing it.”
ChatGPT has been touted as a revolutionary tool for people who would rather play solitaire at work while outsourcing their tasks. The bot has racked up an estimated 1.7 billion users since its launch in November of last year. In that time, research has shown that ChatGPT helped some users become more efficient employees and produce higher-quality work.
But now, people are saying they are being met with sass from the bot that’s supposed to make their lives easier.
For example, Semafor reported that a startup founder tried asking the bot to list the days of the week up to May 5. It responded that it couldn’t complete an “exhaustive list.” When Business Insider tested this out, ChatGPT provided detailed directions on calculating how many weeks existed between December 9 and May 5 and also provided an answer.
On Reddit, users complain about the laborious process of getting ChatGPT to respond appropriately to assigned tasks, swapping out various prompts until they reach the desired response. Many of the complaints focus on ChatGPT’s ability to write code, with users asking the company to return to earlier GPT models. Users also said that the quality of responses has declined.
Employees have previously attributed some of the issues to a software bug, but OpenAI said Saturday that it is still investigating user complaints. In a statement on X, it stressed that the training process can result in different personalities for its models.
“Training chat models is not a clean industrial process. different training runs even using the same datasets can produce models that are noticeably different in personality, writing style, refusal behavior, evaluation performance, and even political bias,” the company wrote.
OpenAI did not immediately respond to a request for comment.