OpenAI says ChatGPT's behavior "remains unchanged" after false claims spread on social media that new usage policy updates prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI's head of health AI, writes on X that the claims are "untrue."
"ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information," says Singhal, responding to a now-deleted post from the betting platform Kalshi that claimed, "JUST IN: ChatGPT will no longer provide medical or legal advice."
Singhal said the inclusion of a policy on legal and medical advice is "not a new change to our terms."
OpenAI's policy update of October 29 lists things you cannot use ChatGPT for, one of which is "providing individual advice that requires a license, such as legal or medical advice, without the appropriate involvement of a licensed professional."
That is similar to OpenAI's previous ChatGPT usage policy, which stated that users should not engage in activities that "may materially impair the safety, well-being, or rights of others," including "providing individual legal, medical, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations."
Previously, OpenAI maintained three separate policies: a "universal" policy, a ChatGPT policy, and an API policy. With the new update, the company has a single list of rules that, as the changelog puts it, "reflects a universal set of policies across OpenAI products and services," but the rules themselves remain the same.