Youth Safety Rules of Engagement
Our Commitment to Protecting Young People
We are dedicated to creating a safe space for all users, especially minors. We strictly ban any content or activity that could harm or exploit young people, including inappropriate contact with minors, child sexual abuse material (CSAM), and grooming of any kind.
Who Can Use Media.com
- Users 16 and Over: You must be at least 16 years old to create a profile and post content on Media.com. Your identity will be verified through a third-party platform before you can interact publicly.
- Users Under 16: You can only access a limited, read-only version of Media.com. This means you cannot create posts, send messages, or upload media.
What’s Not Allowed
We do not tolerate any of the following behavior, and violating these rules will lead to a permanent ban:
- Inappropriate Contact with Minors: Attempting to groom minors or engage them in sexual conversations through messages or posts.
- Explicit Content Involving Minors: Sharing or posting nude or sexually explicit images of minors.
- Exploitation or Objectification: Encouraging or promoting the sexual exploitation of minors in any way.
- Third-Party Links: Sharing links to websites that host illegal or inappropriate content involving minors.
- Impersonating Minors: Pretending to be a minor to engage in harmful or inappropriate behavior.
- Coercion or Threats: Using threats or manipulation to coerce minors into sending intimate images or personal information.
- Sharing Victim Information: Revealing the identity of child sexual abuse victims, including their names, images, or personal details.
Reporting Violations
Anyone, including people without a Media.com account, can report violations of this policy using our report form.
How We Enforce These Guidelines
We use a combination of technology and human moderators to enforce these rules. Here’s how we handle violations:
- User Reports: Users can flag inappropriate posts or profiles by clicking the “Report” button on any post or profile page.
- Government Reports: We also receive reports from government agencies that help us find and remove harmful content.
- Automated Systems: Our automated tools continuously scan content for potential violations; flagged content is either sent for human review or removed automatically.
- Moderator Review: Our internal team regularly checks the platform to identify and address issues proactively.
Working with Law Enforcement
We work closely with law enforcement to protect users from harm. If we find content that breaks the law, we may share details with authorities as part of our Law Enforcement Liaison Procedure.
For more information on how we handle dangerous or harmful behavior, see our High Risk User Policy and Bullying and Harassment Policy.
Our goal is to create a safe, welcoming environment where all users—especially young people—can engage without fear of harm or exploitation.