ZDNET's key takeaways
- OpenAI adds reminders to take a break.
- ChatGPT is also getting improved features for mental health support.
- The company is working with experts, including physicians and researchers.
As OpenAI prepares to drop one of the biggest ChatGPT launches of the year, the company is also taking steps to make the chatbot safer and more reliable with its latest update.
Also: Could Apple create an AI search engine to rival Gemini and ChatGPT? Here's how it could succeed
On Monday, OpenAI published a blog post outlining how the company is updating ChatGPT to be more helpful, providing better responses when you need support and encouraging a break when you have been at it too long:
We build ChatGPT to help you thrive in the ways you choose — not to hold your attention, but to help you use it well. We’re improving support for tough moments, have rolled out break reminders, and are developing better life advice, all guided by expert input.…
— OpenAI (@OpenAI) August 4, 2025
New 'get off ChatGPT' nudge
If you have ever tinkered with ChatGPT, you are likely familiar with the feeling of getting lost in the conversation. Its responses are so amusing and conversational that it is easy to keep the back-and-forth volley going. This is especially true for fun tasks, such as creating an image and then modifying it to generate different renditions that meet your exact needs.
To encourage a healthy balance and give you more control of your time, ChatGPT will now gently remind you to take a break during long sessions. OpenAI said it will continue tuning the notification so that it feels helpful and natural.
Mental health help
People have been increasingly turning to ChatGPT for advice and support for several reasons, including its conversational capabilities, its on-demand availability, and the comfort of confiding in an entity that does not know or judge you. OpenAI is aware of this use case, and the company has added guardrails meant to curb hallucinations and responses that lack empathy or awareness.
For example, OpenAI acknowledges that the GPT-4o model fell short at recognizing signs of delusion or emotional dependency. In response, the company is developing tools to detect signs of mental or emotional distress so that ChatGPT can respond appropriately and point the user to the best resources.
Also: OpenAI's most capable models hallucinate more than earlier ones
OpenAI is also rolling out new ChatGPT behavior for high-stakes personal decisions soon. When approached with big personal questions, such as "Should I break up with my boyfriend?", the chatbot will help the user think through their options instead of dispensing a quick answer. This approach is similar to ChatGPT Study Mode, which, as I explained recently, guides users to answers through a series of questions.
OpenAI is working closely with experts, including 90 physicians in over 30 countries, as well as psychiatrists and human-computer interaction (HCI) researchers, to improve how the chatbot interacts with users in moments of mental or emotional distress. The company is also convening an advisory group of experts in mental health, youth development, and HCI.
Even with these updates, it is crucial to remember that AI is prone to hallucinations, and entering sensitive data carries privacy and security implications. In a recent interview with podcaster Theo Von, OpenAI CEO Sam Altman himself raised privacy concerns about inputting sensitive information into ChatGPT.
Also: Anthropic wants to stop AI models from turning evil - here's how
Therefore, a healthcare provider is still the best option for your mental health needs.