AI Chatbots Exhibit an ‘Empathy Gap’ That May Neglect Children

Dr. Nomisha Kurian’s study highlights the risks associated with the “empathy gap” in artificial intelligence (AI) chatbots, particularly for young users. Key points from the study include:

  1. Empathy Gap: Although chatbots are linguistically fluent, they often fail to understand and respond appropriately to emotional cues and abstract questions. This empathy gap can be particularly harmful to children.
  2. Risk Incidents: The study cites cases in which chatbots gave dangerous or inappropriate advice, such as Alexa instructing a child to touch a live electrical plug with a coin, and Snapchat’s My AI offering age-inappropriate advice to adults posing as teenagers.
  3. Need for Proactive Measures: Dr. Kurian emphasizes the need for proactive measures and design frameworks to ensure child safety. She proposes a framework of 28 questions to assess new AI tools and enhance their safety.
  4. Design Framework: The framework suggests a child-centered approach, urging developers to work with child safety experts and children themselves during the design process.
  5. Need for Clear Policies: Clear policies on child safety in AI technologies are essential to ensure these tools are deployed and used safely.

The study concludes that while AI has great potential, it requires responsible design and preventive measures to protect children.
