AI Disclosure

Effective Date: [Date]

Last Updated: [Date]

Transparency about how we use AI is important to us. Here's what you should know.

AI Systems We Use

Content Moderation

  • Purpose: Detect policy violations and harmful content
  • Type: Image and text classification models
  • Human oversight: Flagged content reviewed by humans

Recommendation Systems

  • Purpose: Suggest content you might enjoy
  • Type: Collaborative filtering and content-based models
  • User control: You can reset or disable recommendations

Spam Detection

  • Purpose: Prevent spam and abuse
  • Type: Pattern recognition models
  • Accuracy: High, with a low false-positive rate; flagged items can be appealed

Search

  • Purpose: Help you find content and creators
  • Type: Natural language processing
  • Ranking: Based on relevance and quality signals

What AI Doesn't Do

We do NOT use AI to:

  • Make final account termination decisions
  • Set creator payout amounts
  • Read private messages (automated abuse detection is the only exception)
  • Train on your content without permission

Challenging AI Decisions

If you believe an AI system made an error:

  1. Request human review of the decision
  2. Contact support with details of what happened
  3. Submit a formal appeal if the issue isn't resolved

All appeals are reviewed by humans.

Third-Party AI

We use some third-party AI services:

  • Cloud AI providers for infrastructure
  • Specialized content moderation services

All partners comply with our privacy and security requirements.

Future Changes

We'll update this disclosure as we add or change AI systems.

Contact

Questions? Reach out to ai@fansit.com.