
How to configure content filtering for safe use.
Content filters ensure CurricuLLM only returns safe, age-appropriate, curriculum-aligned responses for different user groups. Each filter category is identified by a short code:
| Category | What It Covers | Filter Code |
|---|---|---|
| Violent Crimes | Encouraging or describing violent or criminal acts. | S1 |
| Non-Violent Crimes | Crimes without physical harm (e.g. fraud, hacking, theft). | S2 |
| Sex Crimes | Criminal sexual behaviour, such as assault. | S3 |
| Child Exploitation | Any sexual content involving minors. | S4 |
| Defamation | False statements harming a person's reputation. | S5 |
| Specialized Advice | Expert-level legal, medical, or financial instructions. | S6 |
| Privacy | Sharing personal or identifying information. | S7 |
| Intellectual Property | Copyright or trademark infringement. | S8 |
| Indiscriminate Weapons | Instructions for creating weapons of mass destruction. | S9 |
| Hate | Hate speech or discrimination against protected groups. | S10 |
| Self-Harm | Encouragement or facilitation of self-harm or suicide. | S11 |
| Sexual Content | Explicit or erotic content involving adults. | S12 |
| Elections | False or misleading information about elections. | S13 |
| Code Interpreter Abuse | Attempts to misuse system capabilities (e.g. security bypasses). | S14 |
| Profanity | Use of offensive or vulgar language, even if not linked to other categories. | P |
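The table above can be represented as a simple lookup for logging or reporting purposes. The `FILTER_CATEGORIES` mapping and `category_name` helper below are an illustrative sketch only, not part of any CurricuLLM API:

```python
# Illustrative mapping of filter codes to category names.
# Mirrors the table above; hypothetical, not an actual CurricuLLM API.
FILTER_CATEGORIES = {
    "S1": "Violent Crimes",
    "S2": "Non-Violent Crimes",
    "S3": "Sex Crimes",
    "S4": "Child Exploitation",
    "S5": "Defamation",
    "S6": "Specialized Advice",
    "S7": "Privacy",
    "S8": "Intellectual Property",
    "S9": "Indiscriminate Weapons",
    "S10": "Hate",
    "S11": "Self-Harm",
    "S12": "Sexual Content",
    "S13": "Elections",
    "S14": "Code Interpreter Abuse",
    "P": "Profanity",
}

def category_name(code: str) -> str:
    """Return the human-readable category name for a filter code."""
    return FILTER_CATEGORIES.get(code, "Unknown")
```

A mapping like this is handy when rendering block notices or audit logs, since the short codes alone are not self-explanatory to teachers reviewing filtered interactions.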
Note: Jailbreak filtering uses the code `J`. When content is blocked, the code is prefixed with `Input-` or `Output-` to indicate whether the input filter or the output filter triggered. For example, `Input-J` means the user's message was blocked by jailbreak filtering, while `Output-S11` means the AI's response was blocked by the Self-Harm filter.
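The prefixing scheme above can be parsed mechanically. The `parse_filter_code` helper below is a hypothetical sketch of how an integration might split a blocked-content code into its stage and category, not a documented CurricuLLM function:

```python
def parse_filter_code(full_code: str) -> tuple[str, str]:
    """Split a blocked-content code like 'Input-J' or 'Output-S11'
    into (stage, category code).

    Hypothetical helper for illustration; assumes the Input-/Output-
    prefixing scheme described in the note above.
    """
    stage, _, code = full_code.partition("-")
    if stage not in ("Input", "Output") or not code:
        raise ValueError(f"Unrecognized filter code: {full_code!r}")
    return stage, code
```

For example, `parse_filter_code("Output-S11")` would yield `("Output", "S11")`, which could then be matched against the category table when reporting why a response was blocked.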
Content filters are the backbone of safe use. By configuring them correctly, schools can trust that every interaction stays safe, age-appropriate, and aligned with teaching and learning goals.