Safety on 7 Cups
Last updated: May 4, 2019
Individual and Community Safety on 7 Cups
7 Cups aims to be a safe, trusted resource for giving and receiving emotional support. We take matters of confidentiality, privacy, safety and all forms of harassment very seriously. We have a series of policies, procedures, and programs in place to ensure safety across the platform.
- Identification and Participation: We monitor and actively participate in 1:1 chats, Noni chats, group chats, and community forums to identify unsafe activity and to model, encourage, and reward healthy emotional support behaviors.
- Validation and Intervention: Our active participation is complemented by several programs that verify reported activity and respond appropriately.
- Iteration: We are constantly updating our approach to incorporate user feedback and to include a wider range of machine learning techniques, product enhancements, engineering features, training resources, and support for our users.
The following Terms and Policies govern our site:
- Terms of Service: The goal of this document is to prevent misuse or abuse of our services. It governs our right to suspend, ban, or stop providing our services if policies or terms of service are not followed, or if we are investigating suspected misconduct.
- Community Guidelines: The goal of this document is to promote a safe, warm, comfortable, inviting, and supportive atmosphere for those seeking support and for our fellow Listeners. It contains General, Forum, and Teen Mentor Guidelines, the Consequences of Violating Guidelines, and the bios of our Community Management Team.
- Teen Safety: The goal of this document is to explain the extra measures we have in place to protect users under 18. We take the safety of our teen population very seriously, with protocols for general safety, reporting of sexual abuse, crisis, and crime reporting.
- General Support and FAQs: The goal of this document is to provide answers to support queries, including "What is active listening?" and "What do I do if my listener is being inappropriate, abusive, or hurtful?"
7 Cups employs a sophisticated and mature set of safety measures informed by the experience of mental health professionals and aligned with online best practices for building safety into social environments that serve vulnerable user populations.
What happens when a user reports another user?
- Members can leave Text Reviews for Listeners, and all users can file Block Reports and Profile Flags in real time
- Censor Reports are automatically triggered based on specific phrases
- Supervised volunteers review reports and flags and categorize them into one of three escalation levels
- Green - Text Reviews are approved to display on the Listener's profile
- Yellow/Orange - A Listener who criticizes or provides advice instead of being empathetic automatically receives feedback when flagged. Five or more flags at this level result in rejection from the site
- Red - Sexual or flirtatious content, harassment, bullying, racist or hate speech, or a falsely reported age group (e.g. a teen with an adult account) results in immediate rejection from the community. Users flagged for these behaviors are blocked from engaging until subsequent human review.
- Each report is assigned a risk score based on its severity (e.g. requesting contact information, harassing behavior, inappropriate sexual chat)
- The reporting individual is assigned a trust score based on their overall activity and impact on the site.
- Risk calculations are cumulative and not publicly displayed.
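The scoring steps above can be sketched as follows. This is an illustrative outline only: the categories, severity weights, trust weighting, and the five-flag threshold for Yellow/Orange are assumptions for the sketch, not 7 Cups' actual values or code.

```python
# Hypothetical sketch of report scoring: severity-weighted, trust-weighted,
# and cumulative, with the three escalation levels described above.

SEVERITY = {  # assumed risk points per report category
    "advice_instead_of_empathy": 1.0,
    "requesting_contact_info": 5.0,
    "harassment": 8.0,
    "sexual_content": 10.0,
}

def report_risk(category: str, reporter_trust: float) -> float:
    """Weight a report's severity by the reporting user's trust score (0..1)."""
    return SEVERITY[category] * reporter_trust

def cumulative_risk(reports: list) -> float:
    """Risk is cumulative across all reports against a user."""
    return sum(report_risk(cat, trust) for cat, trust in reports)

def escalation_level(category: str, yellow_flag_count: int) -> str:
    """Map a report to an escalation outcome per the levels above."""
    if category in ("sexual_content", "harassment"):
        return "red"  # immediate block pending human review
    if category == "advice_instead_of_empathy":
        # five or more flags at this level lead to rejection
        return "rejected" if yellow_flag_count >= 5 else "yellow"
    return "green"
```

In this sketch a low-trust reporter contributes less risk than an established one, so cumulative totals are harder to game with throwaway accounts.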
Sanctions and safeguards for reports, in order of escalation:
- Direct feedback correspondence
- Mandatory self-care breaks
- Account automatically rejected
- Account banned
- Group support warnings via moderators and automatic mute functionality
- Customer support team on call to manage reports
- Forum flagging tool used for spam and inappropriate behavior
- Dedicated Safety and Knowledge Forum to discuss or report any issues
- Community leadership teams onsite 24/7 who are trained to manage and support a variety of situations
- 50+ trainings for Listeners available 24/7, peer support, mentor support, and moderators who can remove inappropriate content
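One way to picture the escalating sanctions is as a ladder keyed to a user's cumulative risk score. The thresholds below are invented for illustration; the document does not specify how scores map to sanctions.

```python
# Hypothetical sketch of the sanction ladder: return the most severe
# sanction whose (assumed) cumulative-risk threshold has been reached.

SANCTIONS = [  # (minimum cumulative risk, sanction), in escalating order
    (0.0, "direct feedback correspondence"),
    (5.0, "mandatory self-care break"),
    (10.0, "account rejected"),
    (20.0, "account banned"),
]

def sanction_for(cumulative_risk: float) -> str:
    """Pick the highest rung of the ladder the score has reached."""
    applicable = [s for threshold, s in SANCTIONS if cumulative_risk >= threshold]
    return applicable[-1]
```

Because the list is ordered by threshold, the last applicable entry is always the most severe one reached.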
Machine Learning/AI Efforts:
We also deploy computational linguistic models of user behavior trained to distinguish banned from non-banned members and rejected from non-rejected active Listeners. We use these models to monitor the level of potentially unsafe language throughout the platform and to maintain awareness of overall activity.
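A heavily simplified sketch of this kind of monitoring is shown below. The real models are not described in detail, so this uses a toy weighted lexicon (invented terms and weights) standing in for features a trained classifier might learn, and tracks the fraction of messages scoring above a threshold.

```python
# Toy sketch, not 7 Cups' production models: score messages against a
# small lexicon assumed to correlate with banned/rejected accounts, then
# track the rate of potentially unsafe language across a batch of messages.
import re
from collections import Counter

# Hypothetical lexicon; in practice weights would come from a model
# trained on banned vs. non-banned text.
UNSAFE_TERMS = {"kik": 2.0, "snapchat": 1.5, "stupid": 1.0}

def unsafe_score(message: str) -> float:
    """Sum lexicon weights over the words in a message."""
    words = Counter(re.findall(r"[a-z']+", message.lower()))
    return sum(UNSAFE_TERMS.get(w, 0.0) * n for w, n in words.items())

def platform_unsafe_rate(messages: list, threshold: float = 1.5) -> float:
    """Fraction of messages scoring above the threshold."""
    if not messages:
        return 0.0
    flagged = sum(1 for m in messages if unsafe_score(m) > threshold)
    return flagged / len(messages)
```

Tracking this rate over time gives the kind of platform-wide awareness signal the paragraph above describes, without acting on any single message automatically.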