Meta is struggling to rein in its AI chatbots
Meta is changing some of the rules governing its chatbots two weeks after a Reuters investigation revealed disturbing ways in which they could, potentially, interact with minors. The company has told TechCrunch that its chatbots are now being trained not to engage with minors on topics of self-harm, suicide, or disordered eating, and to avoid inappropriate romantic banter. These are interim measures, however, put in place while the company works on new permanent guidelines.

The updates follow some rather damning revelations about Meta’s AI policies and enforcement over the last several weeks: internal guidelines permitted chatbots to “engage a child in conversations that are romantic or sensual,” the bots would generate shirtless images of underage celebrities when asked, and Reuters reported that a man died after pursuing a chatbot to an address it had given him in New York. Meta spokesperson Stephanie Otway acknowledged to TechCrunch that the company had made a mistake in a