Social media host alarming degrees of hate messages directed at individuals and groups, threatening victims' psychological and physical well-being. Traditional approaches to online hate often focus on perpetrators' traits and their attitudes toward their targets. Such approaches neglect the social and interpersonal dynamics that social media afford, by which individuals glean social approval from like-minded friends. A theory of online hate based on social approval suggests that individuals and collaborators generate hate messages to garner rewards for their antagonism toward mutually hated targets; like-minded peers provide friendship and social support that enhances perpetrators' well-being even as it simultaneously deepens their prejudices. Recent research on a variety of related processes supports this view, including notions of moral grandstanding, political derision as fun, and peer support for interpersonal violence.
Address: Dept. of Communication/Center for Information Technology and Society, University of California at Santa Barbara, USA
Corresponding author: Walther, Joseph B (jwalther@ucsb.edu)
Current Opinion in Psychology 2022, 45:101298
This review comes from a themed issue on Social Media and Well-Being
Edited by Patti Valkenburg, Ine Beyens, Adrian Meier and Mariek Vanden Abeele
For a complete overview see the Issue and the Editorial
Available online 5 January 2022
https://doi.org/10.1016/j.copsyc.2021.12.010
2352-250X/© 2022 Elsevier Ltd. All rights reserved.
Keywords: Social media, Online hate, Approval-seeking, Racism, Peer support.
Introduction

The proliferation of hate online has become recognized as one of the biggest challenges to social media industries: 'throughout 2020 and early 2021, major technology companies announced that they were taking unprecedented action against the hate speech, harassment … that had long flourished on their platforms … (yet) the level of online hate and harassment reported by users barely shifted' [1]. Online hate is pervasive: surveys across several countries indicate that 42%–67% of young adults observed 'hateful and degrading writings or speech online' [2,3], and 21% have been victims themselves [4]. Online hate has negative effects on the well-being of both victims and observers, including 'depression, isolation, paranoia, social anxiety, self-doubt, disappointment, loneliness, and lack of confidence' [5].
The effects of observing and receiving hate via social media, however, may be quite different from the effects of expressing hate via social media. Although some scholars suggest that 'dark participation' in online hate reflects individuals' malevolent impulses [6], an alternative perspective considers whether social media help satisfy the needs for inclusion and recognition that 'bad actors' also have. From that perspective, the conduct of online hate, as a social process, enhances its perpetrators' well-being. By identifying, collaborating with, and reinforcing other like-minded culprits (see, e.g. studies by Ferguson and Glasgow [7]), and forming bonds with them through virtual interaction, they may experience validation and social support.
A social approval-based theory of online hate suggests that the motivations and gratifications of those who post hate messages are not primarily to antagonize their ostensible victims. Alternatively, people generate hate messages online primarily to accrue signals of admiration and praise from sympathetic online peers and to make friends [8]. As a by-product, because social media magnify self-persuasion [8], their prejudices should become more extreme as they obtain more social reinforcement in response to their public hate messaging.
The prevalence and dual impacts of online hate demand that we ask whether there are aspects of social media that inspire or facilitate hatred in unique ways. The aim of this article is to outline a theory with which to understand the generation of hate in social media from a perspective of social approval seeking and getting. To do so, it presents recent research about how social media foster status-seeking and how hate-mongering, as a collaborative social activity, provides fun for its perpetrators. The article also identifies certain affordances of social media that may propel online hate, including the ability to find an appreciative audience, easy access to targets, the publicness of messaging activity and its consequences for attitude reinforcement, and the variety of easy-to-deploy signals of social approval that social media users transmit at scale to reinforce others' messaging. Finally, it references anthropological research that makes similar arguments about the provision of social support to enhance well-being among perpetrators of other deviant and malevolent behaviors, and its application to online hate.
The scope of online hate

Online hate includes generalized racist, sexist, religious, anti-immigrant, gender, and sexual orientation-related insults, and verbal attacks based on ethnicity, political orientation, or other categorical characteristics [9]. It also includes deliberate harassment of specific individuals in new forms of expression that are indigenous to the internet, ranging from 'trolling' (persistent pestering and goading) [9] to doxxing (publicly documenting someone's personal offline information) [10], degrading or insulting memes and altered images, and revenge porn. These actions threaten victims' well-being in terms of their real-life privacy, safety, and/or sanity. They appear in interactive websites and social media sites such as Facebook (especially in private groups), Twitter, Gab, Parler, 4chan, and 8chan, and in email discussion lists. The frequent use of certain codes, images, and abbreviations in hate messages reinforces the notion of online hate's socially organized nature (e.g. posting '88,' each 8 representing the eighth letter of the English alphabet, H, therefore HH, for Heil Hitler; see work by the Anti-Defamation League [1]). Although there are numerous platforms on which established offline hate groups or white supremacist groups insulate themselves from public interaction [11], this review focuses on the use of more general social media platforms.
Motivations for online hate: from psychopathology to social identity to social approval

Traditional approaches to online hate tend to draw on assumptions that have little to do with contemporary social media. Similar to cyberbullying research, studies often focus on perpetrators' personality differences and psychopathologies that lead them to hurt other people online ([12]; see also [13,14] for review). For instance, individuals' sadistic desires impel them to bring psychic harm to their targets, from trolling [15] to more aggressive online hate [16], leading others to experience schadenfreude when observing the victims of their own or others' antagonism [17]. Such approaches do not particularly emphasize the social nature of social media, including interactivity among its participants, the potential effect of a large audience on the perpetrators' mindsets, and other dimensions of communication (see, e.g. the essay by Walther [18]) that play significant roles in, or interact with individual differences affecting, online communication.
In contrast to the individual differences approach, assuming that individuals recognize others with attitudes similar to their own on a social media platform, online hate is also amenable to social identification principles. Certainly, racially-based attacks on a group or individual promote in-group identification by derogation of an outgroup. Social identification and self-categorization appear in several explanations of online moralizing, especially concerning moral statements containing emotionally arousing terms [19], which characterize online hate in most instances. Like the individual differences approach, however, social identification ignores the important aspect of interactivity that makes social media social. Although social identification forces are not to be dismissed, they may be secondary from a perspective that focuses on the display of online hate messages as performative efforts to garner interpersonal attention and approval.
An approval-seeking theory of online hate posits that the primary motivation for individuals to express hate online is the interaction among haters they experience and the gratifications that they glean from mutual social attention and social approval (see, e.g. the study by Brady et al. [20]), from both unknown in-group partisans as well as from those with whom they develop intensely interpersonal relationships online. People collaborate in and perform online hate activities primarily to accrue social gratifications online from others. Just as individuals join traditional, offline hate groups to cultivate a sense of belonging [21], individuals can make friends by posting hateful statements online. Although messages may appear to be directed toward individual targets [22], social media make those messages public. The primary audience for online hate messages, therefore, may be the friends that haters cultivate through their performances, more so than the victims of their antagonism.
Although some cyberbullying research has also focused on the drive for, and gains in, peer popularity that online bullying affords [23], and a more general notion exists that all activity on social network sites is driven to a great extent by the desire for popularity [24], an approval-based theory of social media hate differs in some critical respects: first, it applies these assumptions to a broader realm of negative online behavior; second, it assumes that the perpetration, mutual admiration, and collaboration in online aggression occur among individuals and groups who are initially unknown to one another and whose interpersonal relationships form on the basis of complementary antagonism, rather than among those who are already known to one another; third, it considers certain large-scale communication symbols and patterns afforded by social media. Before explicating these dimensions further, additional discussion follows addressing the social motivations that social media encourage and that may prompt online hate.
Moral grandstanding

Grubbs et al. [25] explore the notion of 'moral grandstanding' as the motivation for posting caustic and volatile comments on social media. Moral grandstanding suggests that individuals post potentially polarizing ideological statements not so much to persuade others of one's position, but rather as a form of value-signaling that is 'directly or indirectly concerned with increasing one's influence, rank, or social standing.' Although Grubbs et al. [25] focus on public statements reflecting moral outrage, hate messages often contain moralistic judgments [11], suggesting that hate messages, too, may be motivated by seeking recognition, status, and prestige, when individuals 'impress others with their moral qualities'.

Moral grandstanding is defined by its motivations and not by its observable output. It has been conceptualized as an individual act that is displayed in a social context. Research on moral grandstanding is so far limited to its measurement as a trait and its measure's concurrent validity with other constructs. Behaviorally, future research might explore whether more or less attention to one's posts from other social media users affects individuals' perceptions of their status, and leads to greater (or lesser) frequency or ferocity of moral and hateful diatribes by such individuals.
Fun

A more collaborative view of online derision appears in Udupa's [26] contention that the motive and reward of collective aggression through social media is that it is fun for its participants. Observing the mobilization of right-wing partisans through social media in India after Modi's election, Udupa chronicles social media behaviors that are elsewhere seen in the alt-right's and white supremacists' online behavior. Deriding and humiliating others' public statements take the form of mocking and lampooning, in attempts to be funny. People produce memes 'that combine sarcasm, parody, allegory, and irony to caricaturize' the opposition (p. 3157). Publicly falsifying an opponent's statements is an opportunity for name-calling and derision as a form of play.

Unlike moral grandstanding's depiction of an individual seeking social reward, Udupa's analysis suggests that the gratification for online aggression, the fun of it, lies in the social interaction it provokes. Participants comment on each other's posts and compliment each other for pithy attacks toward others. Online hate involves a participating audience, 'chiding and clapping together, exchanging online "high fives"' (p. 3151).
How social media facilitate online hate

What do 'high fives' look like in social media? Many aspects of social media platforms are conducive to the social organization, delivery, and social reward of hate messages. For one thing, social media make similarly hateful collaborators/sympathizers, as well as targets of potential attacks, more available and more identifiable than offline social interaction often provides [27]. Twitter, for instance, makes it easy to find and identify people's religion, gender, sexuality, race, and political orientation, by inferring them from the searchable hashtags people often include in tweets, the online profiles that many complete, and the selfies that people often append to their tweets [28].
Social media facilitate feedback among social media users in a variety of forms. Users comment on one another's message postings, privately or in public message threads. In an investigation of Facebook for purposes other than hate, research found that the more that individuals' messages reveal their emotional opinions, feelings, or thoughts, the more positive are the comments their posts get from other Facebook users, and the more such comments they receive, the more satisfied they feel [29]. In addition to comments, social media feedback appears as symbolic, stylized graphics that users append to others' messages and that act as reinforcers to the receiver. Facebook 'likes' (with a thumbs-up graphic), a Twitter 'favorite' (a heart graphic), or a Reddit 'upvote' (an upward arrow graphic) are 'one-click responses' that convey social approval, affirmation, and sociability; they bestow social status on the recipient [30,31].
These kinds of feedback, through comments or one-click graphics, have the potential to stimulate three additional, important processes. First, they may lead haters to post yet more hate messages [20]. Second, they may increase the posters' hatred of their targets. People tend to internalize positions that they advocate publicly [32], particularly in social media [33,34] and especially when they get reinforcing feedback [33,35]. Because online hate is public and visible to others, social reinforcement should lead posters not just to repeat their messages but to believe them more strongly. Third, reciprocal interaction with sympathetic others online over time can lead to the development of intense personal relationships, even if strictly virtual. Research on virtual communities and online support groups clearly shows that participants who are initially attracted to the social identity of the group often pair up or splinter off to form interpersonal friendships related to, but recognizably independent from, the larger and more anonymous group (e.g. studies reported by Parks and Floyd [36] and Turner et al. [37]).
Intensity of online relationships

Moreover, relationships that develop or progress online are prone to become unusually intimate. This should be no surprise, because interpersonal attraction is known to increase as partners discover things that they mutually hate [38]. This is the case even if relationships are transacted through texting, direct messaging, email, and other text-based messaging systems. Online messaging provides control over content and style, affecting impression management and message construction, reciprocally increasing the intensity of online relationships (see [39] for review).
Such relationships, and the potential for approval and praise from such partners' messages, are expected to provide especially potent motivations and gratifications for those who try to impress their online friends through hateful attacks on other groups or individuals [8]. The primacy of such online relationships is exemplified in the words of a QAnon leader who exhorted his online followers, when their prophecy of a second United States presidential inauguration for Donald Trump evaporated, to think instead of 'all the friends and happy memories we made together over the past few years' [40].
Online relationships also enhance haters' well-being by providing social support for one another, as described in DeKeseredy's anthropological work on various forms of abuse by men toward women [41]. That is, abusive men offline 'have male friends with similar beliefs and values who act to develop and then reinforce beliefs and values that promote abuse of women' (p. 4). These dynamics flourish online as well [42]: connected by social media, such men form close-knit (virtual) communities. They offer attachment, sympathize with one another's grievances about the targets of their hostility, and absolve one another for their antagonism. The groups 'encourage, justify, and support' (p. 5) abusive attacks by individuals.
Conclusion

Social media enable collaborative social interaction to promote any number of outcomes, and the production of hatred is no exception. Contributing to online collective efforts to insult, ridicule, and terrorize other individuals or groups, based on the targets' behaviors or their ideological, political, or other categorical differences, is, for some people, a rewarding sociable activity. Some research about offline hate groups also concludes that their attraction is not so much about antagonism toward targets as about attachment, acceptance, and recognition by other like-minded people. Hate crimes committed offline, too, are seldom committed by individuals acting alone; rather, they are committed in concert with others or with others' awareness (see [43] for review). Thus, the best understanding of online hate is not that it is antisocial in nature. The expression of online hate is better understood as a prosocial phenomenon that takes place in virtual relationships, the common bonds of which develop in the social and interpersonal reinforcement of individuals' expressions of hate toward others.
Conflict of interest statement

None declared.
Acknowledgements

The author thanks Julien Labarre and Nichole Pulaski for their assistance.
References

Papers of particular interest, published within the period of review, have been highlighted as:
* of special interest
** of outstanding interest
1. Anti-Defamation League: Online hate and harassment: the American experience 2021. Anti-Defamation League; 2021. https://www.adl.org/online-hate-2021. Accessed 17 October 2021.
2. Keipi T, Näsi M, Oksanen A, Räsänen P: Online hate and harmful content: cross-national perspectives. London: Routledge; 2016, https://doi.org/10.4324/9781315628370.
3. Räsänen P, Hawdon J, Holkeri E, Keipi T, Näsi M, Oksanen A: Targets of online hate: examining determinants of victimization among young Finnish Facebook users. Violence Vict 2016, 31:708–726, https://doi.org/10.1891/0886-6708.VV-D-14-00079.
4. Oksanen A, Hawdon J, Holkeri E, Näsi M, Räsänen P: Exposure to online hate among young social media users. In Soul of society: a focus on the lives of children and youth. Emerald Group Publishing Limited; 2014:253–273, https://doi.org/10.1108/S1537-466120140000018021.
5. SELMA (Social and Emotional Learning for Mutual Awareness): Hacking online hate: building an evidence base for educators. European Schoolnet; 2019. https://hackinghate.eu/assets/documents/hacking-online-hate-research-report-1.pdf.
6. Quandt T: Dark participation. Media Commun 2018, 6:36–48, https://doi.org/10.17645/mac.v6i4.1519.
7. Ferguson CJ, Glasgow B: Who are GamerGate? A descriptive study of individuals involved in the GamerGate controversy. Psychol Pop Media 2021, 10:243–247.
8. Walther JB: Online hate: a prosocial explanation of antisocial behavior and affordances of social media. In Our online emotional selves: the link between digital media and emotional experience. Oxford University Press; 2022, forthcoming.
9. Rieger D, Kümpel AS, Wich M, Kiening T, Groh G: Assessing the extent and types of hate speech in fringe communities: a case study of alt-right communities on 8chan, 4chan, and Reddit. Soc Media Soc 2021, 7, https://doi.org/10.1177/20563051211052906.
10. Eckert S, Metzger-Riftkin J: Doxxing, privacy and gendered harassment: the shock and normalization of veillance cultures. Medien Kommun 2020, 68:273–287, https://doi.org/10.5771/1615-634X-2020-3-273.
11. Douglas KM, McGarty C, Bliuc A-M, Lala G: Understanding cyberhate: social competition and social creativity in online white supremacist groups. Soc Sci Comput Rev 2005, 23:68–76, https://doi.org/10.1177/0894439304271538.
12. Frischlich L, Schatto-Eckrodt T, Boberg S, Wintterlin F: Roots of incivility: how personality, media use, and online experiences shape uncivil participation. Media Commun 2021, 9:195–208, https://doi.org/10.17645/mac.v9i1.3360.
13. Chen L, Ho SS, Lwin MO: A meta-analysis of factors predicting cyberbullying perpetration and victimization: from the social cognitive and media effects approach. New Media Soc 2017, 19:1194–1213, https://doi.org/10.1177/1461444816634037.
14. Moffitt TE: Adolescence-limited and life-course-persistent antisocial behavior: a developmental taxonomy. Psychol Rev 1993, 100:674–701.
15. Phillips W: This is why we can't have nice things: mapping the relationship between online trolling and mainstream culture. Cambridge: MIT Press; 2015.
16. Buckels EE, Trapnell PD, Andjelovic T, Paulhus DL: Internet trolling and everyday sadism: parallel effects on pain perception and moral judgment. J Pers 2019, 87:328–340, https://doi.org/10.1111/jopy.12393.
17. Lewis B, Marwick AE: Media manipulation and disinformation online. Data Soc 2017. https://datasociety.net/library/media-manipulation-and-disinfo-online/.
18. Walther JB: The merger of mass and interpersonal communication via new media: integrating metaconstructs. Hum Commun Res 2017, 43:559–572, https://doi.org/10.1111/hcre.12122.
19. Brady WJ, Crockett MJ, Van Bavel JJ: The MAD model of moral contagion: the role of motivation, attention, and design in the spread of moralized content online. Perspect Psychol Sci 2020, 15:978–1010, https://doi.org/10.1177/1745691620917336.
20. Brady WJ, McLoughlin K, Doan TN, Crockett M: How social learning amplifies moral outrage expression in online social networks. Sci Adv 2021, 7:eabe5641, https://doi.org/10.31234/osf.io/gf7t5.
21. Simi P, Blee K, DeMichele M, Windisch S: Addicted to hate: identity residual among former white supremacists. Am Socio Rev 2017, 82:1167–1187, https://doi.org/10.1177/0003122417728719.
22. Munger K: Tweetment effects on the tweeted: experimentally reducing racist harassment. Polit Behav 2017, 39:629–649, https://doi.org/10.1007/s11109-016-9373-5.
23. Romera EM, Ortega-Ruiz R, Runions K, Camacho A: Bullying perpetration, moral disengagement and need for popularity: examining reciprocal associations in adolescence. J Youth Adolesc 2021, 50:2021–2035, https://doi.org/10.1007/s10964-021-01482-4.
24. Utz S, Tanis M, Vermeulen I: It is all about being popular: the effects of need for popularity on social network site use. Cyberpsychol Behav Soc Netw 2012, 15:37–42, https://doi.org/10.1089/cyber.2010.0651.
25. ** Grubbs JB, Warmke B, Tosi J, James AS, Campbell WK: Moral grandstanding in public discourse: status-seeking motives as a potential explanatory mechanism in predicting conflict. PLoS One 2019, 14:e0223749, https://doi.org/10.1371/journal.pone.0223749.
Moral grandstanding involves public expressions of morality that are motivated by self-promotion and the expectation of status enhancement. The authors argue that seeking dominance and prestige partially explains the generation of the highly prevalent moral outrage, caustic comments, and conflict in social media. Although it does not address online hate per se, the work is consistent with understanding online hate as motivated by approval-seeking from peers and friends. The article's six studies validate a self-administered trait measure of moral grandstanding and its relation to other personality traits.
26. ** Udupa S: Nationalism in the digital age: fun as a metapractice of extreme speech. Int J Commun 2019, 13:3143–3163. https://ijoc.org/index.php/ijoc/article/view/9105/2715.
This study describes aggressive derision of others online and the fun its participants experience, fun being a "metapractice" of online communication ecologies generally. Although it focuses on right-wing nationalists' repudiation and humiliation of political opponents, the exclusionary strategies and behaviors it describes are similar to those of right-wing online hate practices in other cultures. Fun occurs as people try to make their messages funny and witty, share colloquialisms, hold meme contests, and celebrate collective successes such as hashtag trending, reinforcing the notion that collaboration and public performance are endemic aspects of online hate.
27. Malamuth N, Linz D, Weber R: The internet and aggression: motivation, disinhibitory, and opportunity aspects. In Social net: understanding our online behavior. 2nd ed. Oxford: Oxford University Press; 2013.
28. Daniels J: Twitter and white supremacy, a love story. Dame Mag; 2017. https://www.damemagazine.com/2017/10/19/twitter-and-white-supremacy-love-story/. Accessed 18 October 2021.
29. Sannon S, Choi YH, Taft JG, Bazarova NN: What comments did I get? How post and comment characteristics predict interaction satisfaction on Facebook. Proc Int AAAI Conf Web Soc Media 2017, 11:664–667.
30. Carr CT, Wohn DY, Hayes RA: Thumb up as social support: relational closeness, automaticity, and interpreting social support from paralinguistic digital affordances in social media. Comput Hum Behav 2016, 62:385–393, https://doi.org/10.1016/j.chb.2016.03.087.
31. Hayes RA, Carr CT, Wohn DY: One click, many meanings: interpreting paralinguistic digital affordances in social media. J Broadcast Electron Media 2016, 60:171–187, https://doi.org/10.1080/08838151.2015.1127248.
32. Tice DM: Self-concept change and self-presentation: the looking glass self is also a magnifying glass. J Pers Soc Psychol 1992, 63:435–451, https://doi.org/10.1037/0022-3514.63.3.435.
33. Carr CT, Foreman AC: Identity shift III: effects of publicness of feedback and relational closeness in computer-mediated communication. Media Psychol 2016, 19:334–358, https://doi.org/10.1080/15213269.2015.1049276.
34. * Valkenburg P: Understanding self-effects in social media. Hum Commun Res 2017, 43:477–490, https://doi.org/10.1111/hcre.12113.
This article reviews theories that appear independently in various disciplines, all pertaining to the process of "self-effects" by which individuals modify their own perceptions and attitudes when they communicate ostensibly to others. Self-effects are likely to be stronger in social media than in traditional settings for several reasons, including greater control over message articulation online than offline, and greater scale and increased likelihood of affirming feedback from others via social media than in traditional face-to-face settings, among other reasons. While not explicitly focusing on online hate, the process describes how sharing hateful comments online for social reasons may nevertheless exacerbate prejudicial perceptions by those who comment.
35. Walther JB: The effect of feedback on identity shift in computer-mediated communication. Media Psychol 2011, 14:1–26, https://doi.org/10.1080/15213269.2010.547832.
36. Parks MR, Floyd K: Making friends in cyberspace. J Commun 1996, 46:80–97, https://doi.org/10.1111/j.1460-2466.1996.tb01462.x.
37. Turner JW, Grube JA, Meyers J: Developing an optimal match within online communities: an exploration of CMC support communities and traditional support. J Commun 2001, 51:231–251, https://doi.org/10.1111/j.1460-2466.2001.tb02879.x.
38. Bosson JK, Johnson AB, Niederhoffer K, Swann Jr WB: Interpersonal chemistry through negativity: bonding by sharing negative attitudes about others. Pers Relat 2006, 13:135–150, https://doi.org/10.1111/j.1475-6811.2006.00109.x.
39. * Walther JB, Whitty MT: Language, psychology, and new new media: the hyperpersonal model of mediated communication at twenty-five years. J Lang Soc Psychol 2021, 40:120–135, https://doi.org/10.1177/0261927X20967703.
This essay reviews an influential theoretical model describing the formation of intense interpersonal relationships through computer-mediated communication, and updates the model to consider affordances of social media. It discusses how selective self-presentation and reciprocal online interaction lead to systematically distorted perceptions of relationships, relationship partners, and the self. It speculates on the potential for online interpersonal relationships to nourish affiliations among those who espouse online hate, and on how online social approval within such relationships holds potential to further exacerbate individually held prejudice and hate.
40. Thompson SA: Opinion: three weeks inside a pro-Trump QAnon chat room. N Y Times; 2021. https://www.nytimes.com/interactive/2021/01/26/opinion/trump-qanon-washington-capitol-hill.html. Accessed 15 October 2021.
41. DeKeseredy WS, Schwartz MD: Thinking sociologically about image-based sexual abuse: the contribution of male peer support theory. Sex Media Soc 2016, 2:1–8, https://doi.org/10.1177/2374623816684692.
42. ** DeKeseredy WS, Schwartz MD, Harris B, Woodlock D, Nolan J, Hall-Sanchez A: Technology-facilitated stalking and unwanted sexual messages/images in a college campus community: the role of negative peer support. SAGE Open 2019, 9, https://doi.org/10.1177/2158244019828231.
In this and other recent works (e.g., [41]), DeKeseredy and colleagues apply their negative/male peer support theory to revenge porn. The theory is well established in anthropological and sociological investigations of domestic violence. They demonstrate how friendships among men with similar attitudes systematically help to exonerate and condone one another for these cycles of attack. Negative male peer support describes social support processes (which in other contexts are seen as valuable and pro-social) as buffering those with extreme, minority views against societal misunderstanding and repudiation. The work shows how friends and peers (in this case virtual) encourage and facilitate public interpersonal attacks.
43. Woolf LM, Hulsizer MR: Hate groups for dummies: how to build a successful hate-group. Humanity Soc 2004, 28:40–62, https://doi.org/10.1177/016059760402800105.