© GOOD Worldwide Inc. All Rights Reserved.

People are calling ChatGPT 'dumber' and 'lazier' after being dissatisfied with recent updates

The arrival of ChatGPT seemed like a relief. However, its recent versions have left users dissatisfied, causing a stir across the internet.

Representative Cover Image Source: Pexels | Matheus Bertelli; Twitter | @shakoistslog

The advent of ChatGPT stirred mixed emotions among users. For some, it was a way to get work done easily and quickly, while for others, it seemed to pose a threat. Now, as the technology has become widely used and its capabilities better understood, users on Twitter are voicing their discontent. Recent updates to the program have introduced malfunctions or simply fallen short of expectations. Matt Wensing posted a tweet describing the program's shortcomings and its increasingly passive responses, and many users soon reported similar issues.

Representative Image Source: Pexels | Matheus Bertelli

In his post, he wrote, “GPT has definitely gotten more resistant to doing tedious work. Essentially giving you part of the answer and then telling you to do the rest.” He compared the situation to running a database search and getting back only 10 rows. He also shared a screenshot of the ChatGPT results, which made clear why users were put off. One query read, “Can you give me the list of weeks from now till May 5th, 2024.” The reply: “I can provide you with the number of weeks but can't provide an exhaustive list of each individual week.” While many users still get their work done in a jiffy, others claim the program has become “lazy” and “useless” at handling their requests.


In another post, @krishnanrohit wrote that the program has become “lazy” and “incompetent” and that it is frustrating. Sharing a few half-hearted results, he wrote, “Convert this file? Too long. Write a table? Here are the first three lines. Read this link. Sorry can't. Read this py file? Oops, not allowed.” He also joked in a thread about the program's common “Error analyzing” response, saying, “‘Error analyzing’ is GPT language for ‘afk (away from keyboard), be back in a couple of hours.’” The prevailing theory is that recent updates have introduced issues that many users find maddening.


The common complaints are that the program simply refuses to provide results or displays an error, thereby dodging the input query altogether. @shakoistslog said, “Dawg won't listen to me anymore. I beg it, write the code in full and don’t leave comments for me to fill in. It won’t listen.” @yacineMTB said, “I've been coding for the past week and a half now because of the huge drop in instruction adherence on GPT4.” Many users are demanding that the original version of GPT-4 be restored for better performance. @iroasmas shared another screenshot of the results he received after entering a query and said, “Damn, new GPT-4 s*cks.”


In his screenshot, the user had asked for the meaning and etymology of a Hebrew word. The program curtly replied, “It is not my job to educate you,” and delivered the same response when he asked for a hint or a reference. OpenAI, which develops and updates the program, has been looking into user complaints. @willdepue, a researcher at the company, has been collecting examples to resolve. In a reply, he said, “If you have any examples of where you’re seeing this, would love to look into it for you. Chances are it’s some weird regression that we can patch in the next model version.” Hopefully, there is a silver lining to look forward to in the new model, so it can ease users' work the way it initially did.
