Synthetic Content Moderation Policy
URL: legal.fansit.com/ai-content-moderation
Effective Date: February 20, 2025
Last Updated: February 20, 2025
This Synthetic Content Moderation Policy defines how AI-generated content is moderated on the Fansit platform, establishing standards for automated detection, human review, prohibited content categories, and transparency reporting.
1. Automated Detection
1.1. All content uploaded to the Platform is scanned by AI detection tools for synthetic media indicators.
1.2. Content flagged above a defined confidence threshold (as calibrated by our Trust & Safety team) is queued for human review before publication.
1.3. Content uploaded to AI Creator accounts is subject to enhanced scanning focused on likeness matching, minor resemblance detection, and consent verification.
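The routing logic in Section 1 can be sketched in code. This is a minimal illustration, not Fansit's actual implementation: the threshold value, field names, and queue labels are all assumptions introduced for the example.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold; per Section 1.2 the real value is
# calibrated by the Trust & Safety team.
REVIEW_THRESHOLD = 0.7

@dataclass
class ScanResult:
    content_id: str
    synthetic_confidence: float   # 0.0-1.0 score from the detection model
    is_ai_creator_account: bool

def route_upload(result: ScanResult) -> str:
    """Decide where an upload goes after automated scanning."""
    if result.is_ai_creator_account:
        # Section 1.3: AI Creator uploads get enhanced scanning
        # (likeness matching, minor resemblance, consent verification).
        return "enhanced_scan_queue"
    if result.synthetic_confidence >= REVIEW_THRESHOLD:
        # Section 1.2: high-confidence flags wait for human review.
        return "human_review_queue"
    return "publish"
```

For example, an upload scoring 0.9 from a standard account would be held for human review, while a 0.1 score would publish directly.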
2. The Reasonable Person Test
2.1. All flagged AI content is reviewed by a panel of at least three (3) human moderators.
2.2. The Reasonable Person Test asks: "Would a reasonable person, viewing this content without additional context, conclude that it [depicts a minor / is a non-consensual deepfake / violates platform policies]?"
2.3. The panel assesses the following criteria:
- Compliance with AI disclosure requirements: Is the content properly labeled?
- Minor resemblance check: Could the depicted individual reasonably be perceived as under 18?
- Likeness consent verification: If a real person is depicted, is consent on file?
- General content standards compliance: Does the content violate any other policy?
2.4. A majority of the review panel (e.g., two of three moderators) determines the outcome. In cases involving potential CSAM, approval of the content requires a unanimous decision.
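The decision rule in Section 2.4 can be expressed as a short function. This is an illustrative sketch; the function and label names are assumptions, not platform code.

```python
from typing import List

def panel_outcome(votes: List[bool], potential_csam: bool) -> str:
    """Apply Section 2.4: votes are approve (True) / reject (False),
    one per moderator, with a panel of at least three (Section 2.1)."""
    if len(votes) < 3:
        raise ValueError("policy requires a panel of at least three moderators")
    approvals = sum(votes)
    if potential_csam:
        # Potential CSAM: approval only on a unanimous vote.
        return "approved" if approvals == len(votes) else "removed"
    # All other cases: a simple majority decides.
    return "approved" if approvals * 2 > len(votes) else "removed"
```

Note how the same 2-1 vote approves ordinary flagged content but removes content in a potential-CSAM case, where unanimity is required.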
3. AI-Specific Prohibited Content
3.1. The following AI-generated content is strictly prohibited:
| Category | Action / Definition |
|----------|---------------------|
| AI-generated CSAM or CSAM-adjacent content | Zero tolerance. Immediate removal, permanent ban, and referral to law enforcement (NCMEC, IWF, and local authorities) |
| Non-consensual deepfakes | Immediate removal and permanent ban |
| Deceptive content | AI content designed to deceive Fans into believing it is authentic human content without disclosure |
| Creator impersonation | AI content that mimics a specific real Creator on Fansit without consent |
4. Response Times
| Case Type | Response Time |
|-----------|---------------|
| Urgent cases (CSAM, impersonation, serious legal risk) | Response within hours |
| Standard AI content flags | Response within 24 hours |
| Appeals of AI content enforcement | Reviewed within 7 calendar days |
5. Transparency Reporting
5.1. Fansit publishes quarterly AI moderation statistics in the Safety & Transparency Report, including:
- Total AI content uploads during the period
- Total AI content flagged by automated systems
- Total AI content removed after human review
- False positive rate (content flagged but approved after review)
- Appeal outcomes for AI content enforcement actions
- Deepfake detection statistics
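The false positive rate reported above is defined in Section 5.1 as content flagged by automated systems but approved after human review. A minimal sketch of that ratio (function name is illustrative):

```python
def false_positive_rate(flagged: int, approved_after_review: int) -> float:
    """Share of automated flags that human review overturned.
    flagged: total items flagged by automated systems in the period.
    approved_after_review: flagged items approved after human review."""
    if flagged == 0:
        return 0.0  # no flags in the period, so no false positives
    return approved_after_review / flagged
```

For instance, if 1,000 items were flagged and 150 were approved after review, the reported false positive rate would be 0.15 (15%).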
Need Help?
Contact support@fansit.com or visit our Help Center.