
“Kids Push Limits. Ama Pushes Back—Safely.”

  • Writer: Karri Haen Whitmer
  • Jul 20
  • 2 min read
Ama Intercepting Inappropriate Speech and Redirecting

Our new Student/Ama Moderation system

At NarrateAR, we know kids will test boundaries. Our summer beta testing has shown that kids are creative: if you create a limit or rule, the first thing they try to do is get around it or break it. It’s how they explore, learn, and grow. That’s why we made a major upgrade to our moderation system. Our new Student & Ama Moderation feature isn’t about restriction, but smart, gentle guidance that puts the parent in full control of what their child does with Ama.


We’ve built a system that puts humans in the loop, defaults to safe mode, and gives parents customizable control—so that every interaction stays meaningful and worry-free.


How It Works:

  • Dual-Sided Moderation: Conversations are monitored at both ends—student inputs and Ama’s AI responses—ensuring accuracy, clarity, and appropriateness.

  • Safe by Default: We’ve designed the system to default to the safest settings unless a parent specifies otherwise.

  • Parental Customization: Families can easily tone moderation up or down, specifying off-limits topics and types of language, and a new notification system keeps parents informed. Controls can loosen to allow more freedom as trust builds, or tighten for sensitive topics or younger users.
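For readers curious about the mechanics, here is a minimal, purely illustrative sketch of how a dual-sided, safe-by-default moderation pipeline like the one described above could be structured. Every name in it (ModerationSettings, check_text, moderate_turn) is hypothetical, not NarrateAR’s actual code or API.

```python
# Illustrative sketch only: all names and logic are hypothetical,
# not NarrateAR's actual implementation.
from dataclasses import dataclass, field

@dataclass
class ModerationSettings:
    # Safe by default: strictest settings apply unless a parent relaxes them.
    strictness: str = "strict"  # "strict" | "moderate" | "relaxed"
    blocked_topics: set = field(default_factory=lambda: {"violence", "profanity"})
    notify_parent: bool = True

def check_text(text: str, settings: ModerationSettings) -> bool:
    """Return True if the text touches a parent-blocked topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in settings.blocked_topics)

def moderate_turn(student_input: str, ai_reply: str, settings: ModerationSettings):
    """Dual-sided moderation: screen both the student's input and the AI's reply."""
    events = []
    if check_text(student_input, settings):
        events.append(("student", "redirected"))  # gentle redirect, not a lecture
    if check_text(ai_reply, settings):
        events.append(("ai", "blocked"))          # a flagged reply is never shown
    if events and settings.notify_parent:
        events.append(("parent", "notified"))     # the parent stays in the loop
    return events
```

The key design point the sketch captures is that both ends of the conversation pass through the same parent-controlled settings object, and any intervention can trigger a parent notification.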


Why It Matters:

  • Built-In Safety with Transparency: Our human-in-the-loop design gives parents and educators immediate oversight—if something seems off, they can step in and course-correct. That builds confidence and trust.

  • Respecting Autonomy: Research shows that parental control tools work best when balanced with feedback and transparency. Opaque or overbearing moderation can feel punitive, not protective. Our “safe-by-default + adjustable” approach supports trust without stifling exploration.

What It Looks Like in Action:

You’ll see in the short video how, in a real conversation, Ama not only intercepts bad language but can now also intercept other kinds of inappropriate topics, communications, and tone. These settings default to safe and are controlled by a parent: you can set specialized limits if you wish, and if a response seems inappropriate or off-limits, the moderator steps in, offers feedback, and notifies the parent. It’s a prime example of our continuous improvement and commitment to AI safety for kids.





Personalized AI Learning Companions for Special Education 

© 2025 by NarrateAR. 
