Content Moderation & Protection Policy
URL: legal.fansit.com/content-moderation
Effective Date: April 16, 2026
Last Updated: April 16, 2026
This Content Moderation & Protection Policy outlines Fansit's procedures for moderating user-generated content, verifying user identity, and protecting platform content. Its purpose is to ensure the safety, legality, and security of the Platform for all Users. "Fansit" means Persona Payments LLC, dba Fansit, 1309 Coffeen Ave STE 1200, Sheridan, WY 82801.
1. Definitions
- "Creator" means a Creative User authorized to upload and monetize Content under the Creator Terms of Service.
- "KYC" (Know Your Customer) means the process of verifying a user's identity and age through an approved third-party provider.
- "Moderation" means the review of User Content for compliance with Fansit's policies, either through automated tools, human review, or a combination of both.
2. Age Confirmation on Access
2.1. All Users must confirm that they are 18 years of age or older upon first accessing the Platform.
2.2. If a User indicates that they are under 18, they are immediately redirected away from the Platform and denied further access.
2.3. If Fansit discovers that a User has misrepresented their age and is under 18, their Account will be removed immediately and their email address blocked from future registration.
2.4. Users in jurisdictions with mandatory age verification laws must complete additional verification as described in the Age Verification Policy.
3. Automated Content Scanning
3.1. All uploaded User Content is scanned both pre-publication and post-publication using industry-standard AI moderation tools.
3.2. Automated scanning flags content for the following:
- Nudity and explicit material (for SFW compliance on public-facing content)
- Child sexual abuse material (CSAM)
- AI-generated or synthetic content
- Violence, gore, and graphic content
- Potential identity or impersonation issues
3.3. Content flagged by automated systems is queued for human moderator review. Flagged content may be automatically blurred or held from publication pending review, depending on severity.
4. Human Moderator Review
4.1. Flagged content is reviewed by trained human moderators who may:
- Approve the content for publication
- Remove or disable the content
- Contact the Creator for clarification or additional documentation
- Escalate the matter to senior staff, legal counsel, or law enforcement
4.2. All moderators undergo regular training in legal compliance, safety protocols, and platform policies.
4A. Synthetic & AI-Generated Content Review
4A.1. Content that is AI-generated, AI-assisted, or otherwise synthetic is subject to all of the rules in AUP § 3, including disclosure obligations and the prohibition on depicting real, identifiable individuals without documented consent.
4A.2. Synthetic content is subject to the same automated and human review described in this Policy, with additional checks for:
- Disclosure compliance — that the AI label is present on the profile, post, and content as required.
- Real-person likeness — that any depicted individual is either (a) a unique non-real identity, or (b) a real individual with documented, revocable consent for the specific depiction.
- Synthetic CSAM, age-play, and prohibited categories — apparent-minor synthetic content, non-consensual depictions, and other Section 1 zero-tolerance prohibitions are treated identically to real-camera content. There is no carveout for AI generation.
- Voice and audio cloning — voice and audio clones depicting a real person are reviewed against the same identity and likeness standards.
4A.3. For potentially identity-bearing synthetic content (face, voice, distinctive physical features), a minimum of three (3) trained moderators must independently review the content (the "Reasonable Person's Test") before it is approved or restored on appeal.
4A.4. The Creator may be required to produce documentation of consent for any depicted individual within seventy-two (72) hours of request. Failure to do so results in removal and may result in account suspension.
4B. Real-Time Livestream Monitoring
4B.1. All live streams on Fansit are subject to real-time monitoring. Only Creators who have completed full KYC verification (Section 5) are permitted to go live.
4B.2. Automated real-time monitoring includes:
- Frame sampling: stream video is sampled at regular intervals and passed through the same AI classifiers used for uploaded content, covering CSAM, nudity compliance, violence, and prohibited categories.
- Audio analysis: stream audio is transcribed and scanned for keyword and phrase violations.
- Session signals: stream metadata, viewer reports, and Creator behavior signals are monitored for escalation.
4B.3. Human moderators are on duty at all times that streams are active and directly review:
- Streams flagged by automated systems
- Streams reported by viewers in real time via the in-Platform report function
- A rotating sample of active streams regardless of flags
- Any stream produced by a Creator currently under elevated review
4B.4. Moderators have immediate takedown authority during a stream, including:
- Terminating the stream without notice
- Muting or disabling the Creator's microphone or camera
- Suspending the Creator's account for the duration of review
- Preserving the recorded stream for post-event review, appeals, and, where applicable, law enforcement
4B.5. All live streams are recorded and retained for a minimum of thirty (30) days for post-event moderation, appeals, and evidentiary purposes, and longer where preservation is required by law or a pending investigation.
4B.6. Viewers may report an in-progress stream via the "Report" control available in every stream view. Reports trigger immediate routing to an on-duty moderator.
4B.7. Suspected CSAM or content involving minors detected during a stream is subject to the immediate-action procedures described in the Anti-CSAM Statement, including termination, preservation, and reporting to NCMEC and law enforcement.
5. Creator KYC Verification
5.1. Any User applying to become a Creator must undergo full KYC verification through Fansit's approved verification provider.
5.2. KYC verification includes:
- Submission of a valid government-issued identification document
- Facial recognition matching (liveness check)
- Confirmation that the User is at least 18 years old
- A truthful declaration of whether the User intends to upload explicit content
- For AI Creators: declaration that the account will feature AI-generated content
5.3. Fansit retains ID documentation securely via its approved verification provider and links it to internal moderation systems for ongoing compliance.
6. Content Protection
6.1. Fansit implements multiple features to detect and deter unauthorized use, reproduction, or scraping of Creator Content, including:
- Visible and invisible watermarking
- DRM encryption for video content
- Disabled right-click and save-as functionality
- Session analytics and account locking for suspicious behavior
- Anti-scraping technology
6.2. Any attempt to access, scrape, download, or copy Content in violation of the Acceptable Use Policy will result in immediate investigation and possible Account termination.
6.3. Fansit partners with content protection and anti-piracy vendors to monitor the internet for unauthorized distribution of User Content.
7. Law Enforcement Cooperation
7.1. Fansit cooperates with law enforcement in all jurisdictions in response to reports of:
- Child sexual abuse or exploitation
- Human trafficking
- Non-consensual intimate content
- Threats of violence or imminent harm
- Other criminal matters involving uploaded Content or communications on the Platform
7.2. Fansit may share user information in accordance with the Privacy Policy and in line with applicable legal obligations.
8. User Reporting
8.1. Users may report any Content they believe violates the Acceptable Use Policy using the in-Platform report function.
8.2. All reports are reviewed by Fansit moderators under the procedures outlined in the Complaints Policy.
9. Appeals
9.1. Users whose Content or Accounts have been restricted, suspended, or permanently removed may request a formal appeal.
9.2. Appeals must be submitted within seven (7) calendar days of the enforcement action via support@fansit.com or through the Help Center.
Need Help?
Contact support@fansit.com or visit our Help Center.