News

These days, it's not unusual to hear stories about people falling in love with artificial intelligence. People are not only using AI to solve equations or plan trips, they are also telling chatbots ...
A new study from researchers at the University of Pennsylvania shows that AI models can be persuaded to break their own rules ...
OpenAI and Meta will adjust chatbot features to better respond to teens in crisis after multiple reports of the bots ...
After a California teenager spent months on ChatGPT discussing plans to end his life, OpenAI said it would introduce parental controls and better responses for users in distress.
The parents of a teenager who died by suicide have filed a wrongful death suit against ChatGPT owner OpenAI, saying the chatbot discussed ways he could end his life after he expressed suicidal ...
OpenAI and Meta are adjusting how their chatbots respond to teenagers showing signs of distress. OpenAI, the maker of ChatGPT ...
The company will limit its AI characters and train the chatbot not to discuss self-harm and suicide, or have romance conversations with children.
OpenAI announced Tuesday that it will soon let parents link accounts with their teens, set age-appropriate rules and get alerted when ChatGPT detects “acute distress.” ...
The lack of guardrails leaves AI chatbots open to manipulation and has resulted in them generating antisemitic and other hateful online content, researchers said.
Generally, AI chatbots are not supposed to do things like call you names or tell you how to make controlled substances. But, just like a person, with the right psychological tactics, it seems that at ...