Content Moderation & Protection Policy

URL: legal.fansit.com/content-moderation

Effective Date: February 20, 2025

Last Updated: February 20, 2025


This Content Moderation & Protection Policy outlines Fansit's procedures for moderating user-generated content, verifying user identity, and protecting platform content. Its purpose is to ensure the safety, legality, and security of the Platform for all Users.

1. Definitions

  • "Creator" means a Creative User authorized to upload and monetize Content under the Creator Terms of Service.
  • "KYC" (Know Your Customer) means the process of verifying a user's identity and age through an approved third-party provider.
  • "Moderation" means the review of User Content for compliance with Fansit's policies, whether through automated tools, human review, or a combination of the two.

2. Age Confirmation on Access

2.1. All Users must confirm their age as 18 or older upon first access to the Platform.

2.2. If a User indicates that they are under 18, they are immediately redirected away from the Platform and denied further access.

2.3. If Fansit discovers that a User has misrepresented their age and is under 18, their Account will be immediately removed and their email address blacklisted.

2.4. Users in jurisdictions with mandatory age verification laws must complete additional verification as described in the Age Verification Policy.

3. Automated Content Scanning

3.1. All uploaded User Content is scanned both pre-publication and post-publication using industry-standard AI moderation tools.

3.2. Automated scanning flags content for the following:

  • Nudity and explicit material (for SFW compliance on public-facing content)
  • Child sexual abuse material (CSAM)
  • AI-generated or synthetic content
  • Violence, gore, and graphic content
  • Potential identity or impersonation issues

3.3. Content flagged by automated systems is queued for human moderator review. Flagged content may be automatically blurred or held from publication pending review, depending on severity.

4. Human Moderator Review

4.1. Flagged content is reviewed by trained human moderators who may:

  • Approve the content for publication
  • Remove or disable the content
  • Contact the Creator for clarification or additional documentation
  • Escalate the matter to senior staff, legal counsel, or law enforcement

4.2. For AI-generated content, a minimum of three (3) moderators must review the content (the "Reasonable Person's Test"). Further details are in the Synthetic Content Moderation Policy.

4.3. All moderators undergo regular training in legal compliance, safety protocols, and platform policies.

5. Creator KYC Verification

5.1. Any User applying to become a Creator must undergo full KYC verification through Fansit's approved verification provider.

5.2. KYC verification includes:

  • Submission of a valid government-issued identification document
  • Facial recognition matching (liveness check)
  • Confirmation that the User is at least 18 years old
  • A truthful declaration of whether the User intends to upload explicit content
  • For AI Creators: declaration that the account will feature AI-generated content

5.3. Fansit retains ID documentation securely via its approved verification provider and links it to internal moderation systems for ongoing compliance.

6. Content Protection

6.1. Fansit implements multiple features to detect and deter unauthorized use, reproduction, or scraping of Creator Content, including:

  • Visible and invisible watermarking
  • DRM encryption for video content
  • Disabled right-click and save-as functionality
  • Session analytics and account locking for suspicious behavior
  • Anti-scraping technology

6.2. Any attempt to access, scrape, download, or copy Content in violation of the Acceptable Use Policy will result in immediate investigation and possible Account termination.

6.3. Fansit partners with content protection and anti-piracy vendors to monitor the internet for unauthorized distribution of User Content.

7. Law Enforcement Cooperation

7.1. Fansit cooperates with law enforcement in all jurisdictions in response to reports of:

  • Child sexual abuse or exploitation
  • Human trafficking
  • Non-consensual intimate content
  • Threats of violence or imminent harm
  • Other criminal matters involving uploaded Content or communications on the Platform

7.2. Fansit may share user information in accordance with the Privacy Policy and applicable legal obligations.

8. User Reporting

8.1. Users may report any Content they believe violates the Acceptable Use Policy using the in-Platform report function.

8.2. All reports are reviewed by Fansit moderators under the procedures outlined in the Complaints Policy.

9. Appeals

9.1. Users whose Content or Accounts have been restricted, suspended, or permanently removed may request a formal appeal.

9.2. Appeals must be submitted within seven (7) calendar days of the enforcement action via support@fansit.com or through the Help Center.


Need Help?

Contact support@fansit.com or visit our Help Center.