Abstract
Anonymized, informal environments, such as social media, provide opportunities for individuals to naturally exchange information and to receive or provide social support around stigmatized conditions, such as mental health challenges. The sensitive nature of the content shared on these platforms requires automated moderation, which is often based on keyword detection. However, what is considered support versus risk in these contexts can be contested and highly situated. To investigate how we might better define supportive versus harmful content, we examined 49,006 YouTube comments on videos about college students’ mental health, using statistical tests and qualitative content analysis conducted with a clinical psychologist. We studied (1) the association between community ‘Likes’ and both self-disclosure and related linguistic features and (2) when comments exhibiting features associated with community ‘Likes’ reflect perceived support versus harm in the context of mental health. We discuss the situatedness of how community-generated comments containing self-disclosure can be either supportive or potentially harmful from a clinical perspective. This work highlights the need for a paradigmatic shift in the rules and assumptions underlying automated moderation of social media environments that support mental health.
| Original language | English |
|---|---|
| Article number | CSCW253 |
| Journal | Proceedings of the ACM on Human-Computer Interaction |
| Volume | 9 |
| Issue number | 7 |
| DOIs | |
| State | Published - 16 Oct 2025 |
Keywords
- CSCW
- college students
- emerging adults
- engagement
- harmful
- mental health
- online communities
- self disclosure
- social media
- supportive
- young adults