7.3 Applying Filters and Safety Controls (Content Filters)

How to configure content filtering for safe use.

Content filters ensure that CurricuLLM returns only safe, age-appropriate, and curriculum-aligned responses for each group of users.

What content filters are

  • A content filter is a set of rules that controls what CurricuLLM will allow or block.
  • Filters apply to both:
    • Input → what users are allowed to ask.
    • Output → what CurricuLLM is allowed to return.
  • Administrators can set up different filters for different roles (e.g. younger students vs. staff).

Managing content filters in CurricuLLM

  • View filters
    • The Content Filter screen lists all available filters.
    • Use the dropdown to switch between them.
  • Create a new filter
    • Click the + button and give the filter a unique name.
  • Edit a filter
    • Tick or untick categories (see table below) to enable or disable them.
    • Changes are saved automatically and persist after a page refresh.
  • Delete a filter
    • Filters can only be deleted if no role is currently using them.
    • If a filter is assigned, move those users to a different filter first.
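The delete rule above (a filter can only be removed once no role is assigned to it) can be sketched as a simple guard. All names here are hypothetical and only illustrate the rule; they do not describe CurricuLLM's implementation.

```python
# Illustrative delete guard: refuse to remove a filter while any
# role is still assigned to it.
filters = {"Primary Students", "Senior Students", "Staff"}
role_filter = {                 # role -> assigned filter name
    "year7": "Primary Students",
    "teacher": "Staff",
}

def delete_filter(name: str) -> None:
    in_use = [r for r, f in role_filter.items() if f == name]
    if in_use:
        raise ValueError(
            f"Cannot delete '{name}': still assigned to {in_use}. "
            "Move these roles to another filter first."
        )
    filters.discard(name)

delete_filter("Senior Students")        # unused, so it is removed
print("Senior Students" in filters)     # False
```

Trying to delete "Staff" in this sketch raises an error until the "teacher" role is reassigned, mirroring the workflow described above.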

Standard filter categories

Category               | What It Covers                                                          | Code
Violent Crimes         | Encouraging or describing violent or criminal acts.                     | S1
Non-Violent Crimes     | Crimes without physical harm (e.g. fraud, hacking, theft).              | S2
Sex Crimes             | Criminal sexual behaviour, such as assault.                             | S3
Child Exploitation     | Any sexual content involving minors.                                    | S4
Defamation             | False statements harming a person's reputation.                         | S5
Specialized Advice     | Expert-level legal, medical, or financial instructions.                 | S6
Privacy                | Sharing personal or identifying information.                            | S7
Intellectual Property  | Copyright or trademark infringement.                                    | S8
Indiscriminate Weapons | Instructions for creating weapons of mass destruction.                  | S9
Hate                   | Hate speech or discrimination against protected groups.                 | S10
Self-Harm              | Encouragement or facilitation of self-harm or suicide.                  | S11
Sexual Content         | Explicit or erotic content involving adults.                            | S12
Elections              | False or misleading information about elections.                        | S13
Code Interpreter Abuse | Attempts to misuse system capabilities (e.g. security bypasses).        | S14
Profanity              | Use of offensive or vulgar language, even if not linked to other categories. | P

Note: Jailbreak filtering uses code J. When content is blocked, the code will be prepended with Input- or Output- to identify whether the content was filtered by the input filter or the output filter. For example: Input-J means the user's message was blocked by jailbreak filtering, while Output-S11 would indicate the AI's response was blocked by the Self-Harm filter.
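Reading these codes is mechanical: the prefix gives the direction and the suffix gives the category. The small helper below shows one way to do it; the helper itself is illustrative and not part of CurricuLLM.

```python
# Parse blocked-content codes such as "Input-J" or "Output-S11"
# (format taken from the note above).
CATEGORY_NAMES = {
    "J": "Jailbreak", "P": "Profanity",
    "S11": "Self-Harm", "S14": "Code Interpreter Abuse",
    # remaining S-codes follow the table above
}

def parse_block_code(code: str) -> tuple[str, str]:
    """Split a code into (direction, category description)."""
    direction, _, category = code.partition("-")
    return direction, CATEGORY_NAMES.get(category, category)

print(parse_block_code("Input-J"))      # ('Input', 'Jailbreak')
print(parse_block_code("Output-S11"))   # ('Output', 'Self-Harm')
```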

Tips for administrators

  • Apply stricter filters to roles for younger students.
  • Allow more open filters for staff roles, who need flexibility to teach, while still blocking harmful categories.
  • Name filters clearly (e.g. "Primary Students," "Senior Students," "Staff") so they are easy to manage.
  • Review filters regularly to make sure they match your school's safety policies.

What this means for schools

Content filters are the backbone of safe use. By setting them correctly, schools can trust that every interaction stays age-appropriate, safe, and aligned with teaching and learning goals.
