Hate Speech in the Political Discourse on Social Media: Disparities Across Parties, Gender, and Ethnicity

Social media has become an indispensable channel for political communication. However, the political discourse is increasingly characterized by hate speech, which affects not only the reputation of individual politicians but also the functioning of society at large. In this work, we empirically analyze how the amount of hate speech in replies to posts from politicians on Twitter depends on personal characteristics, such as their party affiliation, gender, and ethnicity. For this purpose, we use Twitter's Historical API to collect every tweet posted by members of the 117th U.S. Congress for an observation period of more than six months. Additionally, we gather replies for each tweet and use machine learning to predict the amount of hate speech they embed. Subsequently, we implement hierarchical regression models to analyze whether politicians with certain characteristics receive more hate speech. We find that tweets are especially likely to receive hate speech in replies if they are authored by (i) persons of color from the Democratic party, (ii) white Republicans, and (iii) women. Furthermore, our analysis reveals that more negative sentiment (in the source tweet) is associated with more hate speech (in replies). However, the association varies across parties: negative sentiment attracts more hate speech for Democrats (vs. Republicans). Altogether, our empirical findings imply significant differences in how politicians are treated on social media depending on their party affiliation, gender, and ethnicity.

CCS Concepts: • Human-centered computing → Social media; • Human-centered computing → Empirical studies in collaborative and social computing; • Applied computing → Ethnography;


Keywords: Social media, political discourse, hate speech, sentiment analysis, disparities, computational social science, explanatory modeling

ACM Reference Format:
Kirill Solovev and Nicolas Pröllochs. 2022. Hate Speech in the Political Discourse on Social Media: Disparities Across Parties, Gender, and Ethnicity. In Proceedings of the ACM Web Conference 2022 (WWW '22), April 25–29, 2022, Virtual Event, Lyon, France. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3485447.3512261

1 INTRODUCTION

Social media has become an indispensable communication channel for politicians in the U.S. and around the globe. Compared to traditional media, it provides a number of fundamental benefits for politicians: (i) social media provides a tool to spread messages to the public at scale, thereby increasing people's awareness of their (political) agenda [24, 29, 45]. (ii) Social media encourages the dialogue between politicians and users, allowing for direct feedback from constituents and discussions of political ideas [17]. (iii) Due to its interactive nature, social media can be used as a tool for political mobilization [30, 33]. These benefits are further reinforced by the openness of social media, as politicians are no longer restricted by geography, scope, or content and can reach significantly wider audiences [22].

However, the shift from traditional channels towards social media does not necessarily improve the quality of the political discourse. Instead, social media is known to foster echo chambers and "us versus them" rhetoric [35]. These factors correlate with cyber-bullying, harassment, and, in particular, hate speech [18]. Broadly speaking, hate speech refers to abusive or threatening speech (or writing) that expresses prejudice against a particular group, often on the basis of ethnicity or sexual orientation [48]. Hate speech often originates from semi-anonymous trolls [27, 35], and is particularly frequent in discussions that cause a strong emotional response, such as political topics [57]. The adoption of social media by politicians is thus a double-edged sword posing risks both to themselves and society as a whole [28]. At the individual level, hate speech can threaten reputations and may even lead to long-run mental health problems [56]. At the societal level, it fosters political polarization [31], which can have severe consequences. Examples include erosion of intergroup relations and increased opportunities for the spread of ideologically branded misinformation [20, 42, 50].

Research Goal: In this study, we empirically analyze how the user base on Twitter responds to posts from members of the U.S. Congress. We are interested in understanding whether differences in the prevalence of hate speech can be explained by personal characteristics of politicians, such as their party affiliation, gender, and ethnicity. More precisely, we address the following research questions:

  • (RQ1) Are members of the U.S. Congress more likely to receive hate speech in the replies to their tweets depending on their party affiliation, gender, and ethnicity?
  • (RQ2) Does hate speech in the replies to tweets depend on the sentiment of the source tweet? Does the strength of the association differ depending on party affiliation, gender, and ethnicity?

Data & Methods: To address our research questions, we employ the Twitter Historical API to collect all tweets from members of the 117th U.S. Congress between the first session on January 3, 2021 and the end of July 2021. In addition, we collect replies to each source tweet. We then apply machine learning to determine the share of replies to each tweet that embed hate speech. Subsequently, we implement a multilevel binomial regression model with random effects to estimate whether Twitter users are more likely to reply with hate speech depending on the party affiliation, gender, and ethnicity of the politician that has posted the tweet.

Contributions: To the best of our knowledge, this study is the first to empirically model how hate speech in replies to tweets from politicians depends on their personal characteristics (party affiliation, gender, ethnicity). All else being equal, we find that tweets are more likely to receive hate speech in replies if they are authored by (i) persons of color from the Democratic party, (ii) white Republicans, and (iii) women. As an additional contribution, our analysis reveals that more negative sentiment (in the source tweet) is associated with more hate speech (in replies). However, the association varies across parties: negative sentiment attracts more hate speech for Democrats (vs. Republicans). Altogether, our findings fuel new insights into ongoing discussions on political polarization on social media and highlight disparities in how politicians are treated depending on their party affiliation, gender, and ethnicity.

2 BACKGROUND

Political communication on Twitter: The use of social media by U.S. politicians has experienced a rapid surge. At the start of 2009, only 69 individual members of Congress had a Twitter account [23]. Today, every member of the U.S. Congress has a professional Twitter account, often with a second personal account being active at the same time. Existing studies suggest that there are three main reasons why politicians prefer social media [28]. First, social media allows for unidirectional delivery of information to the public. Compared to classical media, there is less moderation and real-time scrutiny, allowing politicians to freely express themselves [3]. Second, social media enables dialogue between politicians and the public. Politicians can use social media as a tool to connect with constituents to discuss political issues and receive feedback [17]. Engaged users may further spread the message with likes and/or reshares. Third, social media can be seen as a tool for political mobilization. Specifically, it allows politicians to rally for projects, events, and movements [55], though it does not guarantee success [34].

Hate speech: Although there is no comprehensive definition [9], hate speech is typically considered to refer to abusive or threatening speech (or writing) that expresses prejudice against a particular group, often on the basis of ethnicity or sexual orientation [48]. While research on hate speech has received increasing attention lately [e.g., 2, 11, 12, 16, 36, 37, 40, 46, 58, 60], studies that analyze hate speech in the context of political communication are scant. The few existing works typically focus on qualitative insights or analysis of summary statistics. For instance, previous works have studied hate speech towards female Japanese politicians [21], far-right party discourse in Spain [8], hateful propaganda towards politicians in Macedonia [8], hate speech against Members of Parliament in the U.K. [1], and hate against German politicians [14]. We are aware of only one paper analyzing hate speech and incivility in the context of tweets from members of the U.S. Congress [54]. However, this study again focuses on summary statistics. In particular, it does not model the effects of personal characteristics of politicians (e.g., ethnicity) on the likelihood of receiving hate speech.

Disparities across parties, gender, and ethnicity: Existing research suggests that party leanings in the U.S. correlate with different speech patterns: Democrats tend to use more swear words and higher sentiment, while Republicans prefer to communicate more negative sentiment and group identity [51]. Besides party differences, a vast strand of studies has shown that there are discrepancies in communication behavior across genders. For instance, women are more likely to mask negative emotions [13], and are guided by a greater focus on care in moral dilemmas [38]. This is directly applicable to the domain of social media, where women are more likely to report messages targeting racial minorities and women [15]. Gender differences are further reinforced by widespread stereotypes regarding the role of women in society [41], who are perceived as less persuasive and are often outright dismissed when displaying aggressive and forceful behavior online [59]. Furthermore, survey studies suggest that women more often tend to be a target of cyber-bullying and hateful attacks [7], particularly if they present an openly activist opinion, such as feminism [25]. Ethnicities and racial stereotypes play a similar role in offline and online discourse and differ greatly across countries [52]. For instance, for the U.S., existing studies suggest frequent hate speech against African Americans [32].

Research gap: Existing research on hate speech in the political discourse focuses either on qualitative insights or on summary statistics. We are not aware of previous works empirically modeling the effect of personal characteristics on the likelihood of a politician receiving hate speech. This presents our contribution.

3 DATASET

Members of the U.S. Congress: We analyze tweets from all 541 members of the 117th U.S. Congress that convened on January 3, 2021. Data on the members of Congress was gathered from the official webpage of the U.S. Congress [39], which provides links to personal and campaign web pages. By following these links, we collected the following information about each politician: (i) party affiliation, (ii) branch of Congress in which the politician serves, (iii) time served in Congress, (iv) gender, and (v) ethnicity. Fig. 1 provides an overview of the composition of the 117th U.S. Congress. Most voting seats are held by members of the two major political parties with 269 Democrats (D) and 263 Republicans (R), while two seats are occupied by independent senators. Women (W) hold 27% of all Congress seats, accounting for 39% of all Democrats and 15% of all Republicans, respectively. Notably, the 117th U.S. Congress is the most ethnically diverse so far, with 39% of Democrats and 8% of Republicans identifying as people of color (PoC).

For the sake of simplicity and interpretability, we focus our subsequent empirical analysis on tweets from Republican and Democratic members and exclude tweets from the two independent senators.

Collection of tweets: Twitter handles (user names) of every politician in the U.S. Congress are provided by the University of California San Diego library [49]. We employed the Twitter Historical API to download the complete timelines of every politician between January 3, 2021 and the end of July 2021. Here we collected the entire tweet history of each person, excluding retweets and replies, resulting in a total number of 199,294 tweets. The average number of tweets per politician is 368.38. We additionally queried Twitter's Historical API to gather the replies to every source tweet in our data set. To ensure feasibility, we restricted the data collection to up to 250 replies for each original tweet, starting with the earliest reply. The crawling process resulted in a total number of 8,362,555 replies.
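The exact endpoints used for the historical collection are not spelled out in the paper; purely as an illustration, the following minimal sketch shows how such a collection could be approached with Twitter's v2 full-archive search (assuming academic-track access). The bearer token, handle variable, and helper name are placeholders, not part of the original study.

```python
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"   # v2 full-archive search
HEADERS = {"Authorization": "Bearer <YOUR_BEARER_TOKEN>"}     # placeholder credentials

def fetch_original_tweets(handle, start, end):
    """Collect one member's original tweets (no retweets, no replies)."""
    params = {
        "query": f"from:{handle} -is:retweet -is:reply",
        "start_time": start,                 # e.g., "2021-01-03T00:00:00Z"
        "end_time": end,                     # e.g., "2021-07-31T23:59:59Z"
        "max_results": 100,
        "tweet.fields": "id,text,created_at,conversation_id",
    }
    tweets = []
    while True:
        resp = requests.get(SEARCH_URL, headers=HEADERS, params=params)
        resp.raise_for_status()
        payload = resp.json()
        tweets.extend(payload.get("data", []))
        next_token = payload.get("meta", {}).get("next_token")
        if next_token is None:
            return tweets
        params["next_token"] = next_token
```

Replies could then be gathered analogously by querying conversation_id:<tweet id> together with the is:reply operator and keeping only the first 250 replies per source tweet, as described above.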

Figure 1: Venn diagram visualizing the composition of the 117th U.S. Congress.

4 METHODS

4.1 Hate Speech Detection

In this work, we use machine learning to detect hate speech in replies to tweets. Compared to dictionary-based methods that merely count hate-related words [4], this approach is generally considered to be more accurate [5]. Nonetheless, as part of our robustness checks, we validate our results with the frequently employed Hatebase lexicon [26], finding confirmatory results.

We implement machine learning for hate speech detection as follows: we employ the annotated Twitter dataset from [12], containing 25,000 tweets labeled as hateful or not hateful. Each tweet was annotated by at least three users who were explicitly instructed to think about the context of the message and not only the words contained within [12]. We use the annotated tweets to implement a deep neural network classifier that predicts whether or not a tweet is hateful.¹ The hate speech classifier is then used to predict a binary label of whether or not a tweet is hateful (= 1 if true; otherwise = 0) for each reply tweet in our dataset. For each source tweet, we calculate the share of replies that are hateful. The resulting variable ranges from 0 to 1, with 0 indicating the lack of hate speech in replies, and 1 indicating that every reply is hateful.
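Beyond the use of USE embeddings (see Footnote 1), the classifier architecture is not specified; the following is a minimal sketch of one plausible setup, assuming the annotated tweets are available as lists texts and labels and the collected replies as a DataFrame replies with columns source_id and text (all names are illustrative).

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Universal Sentence Encoder as a fixed text representation (see Footnote 1)
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def embed(texts):
    return use(texts).numpy()   # 512-dimensional sentence embeddings

# Simple feed-forward classifier on top of the USE embeddings
clf = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(reply is hateful)
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# texts, labels: annotated tweets from [12] (1 = hateful, 0 = not hateful)
clf.fit(embed(texts), np.array(labels), epochs=10, batch_size=64, validation_split=0.1)

# Label every collected reply and aggregate to the share of hateful replies per source tweet
replies["hateful"] = (clf.predict(embed(replies["text"].tolist())).ravel() > 0.5).astype(int)
hate_share = replies.groupby("source_id")["hateful"].mean()   # ranges from 0 to 1
```

In practice, embedding and prediction over the ~8.4 million replies would be done in batches.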

4.2 Explanatory Regression Model

We implement a multilevel binomial regression to estimate the effects of party, gender, and ethnicity on the likelihood of a tweet receiving hate speech.

Formally, we model the number of hate speech replies, HReplies, as a binomial variable with probability parameter θ. The number of trials is given by the total number of replies a tweet receives (Replies). The key explanatory variables are the politicians' party affiliation (Party; = 1 if Republican, otherwise 0), gender (Gender; = 1 if male, otherwise 0), and ethnicity (Ethnicity; = 1 if person of color, otherwise 0). Furthermore, for each source tweet, we calculate a sentiment score (SourceSentiment) using SentiStrength [53]. We also control for the age of the members of Congress (Age), the number of years served (YearsInOffice), whether media was attached to the tweet (AttachedMedia; = 1 if true, otherwise 0), and the chamber of Congress in which the politician serves (Chamber; = 1 if Senate, otherwise 0). Based on these variables, we specify the following regression model:

\begin{align} \operatorname{logit}(\theta) = & \, \beta_0 + \beta_{1} \mathit{Party} + \beta_{2} \mathit{Gender} + \beta_{3} \mathit{Ethnicity} \\ & + \beta_{4} \mathit{SourceSentiment} + \beta_{5} \mathit{YearsInOffice} + \beta_{6} \mathit{Age} \nonumber \\ & + \beta_{7} \mathit{AttachedMedia} + \beta_{8} \mathit{Chamber} \nonumber \\ & + u_\text{user} + \varepsilon \text{,} \nonumber \end{align}

(1)

\begin{align} \mathit{HReplies} \sim Binomial[\mathit{Replies}, \theta], & \end{align}

(2)

with intercept β0, error term ε, and user-specific random effects $u_\text{user}$. Note that the latter is important as it allows us to control for heterogeneity in users' social influence (e.g., some accounts have many followers and reach different audiences) [43, 44].

We estimate Eq. 1 and Eq. 2 using maximum likelihood estimation (MLE) and generalized linear models. To facilitate the interpretability of our findings, we z-standardize all variables, so that we can compare the effects of regression coefficients on the dependent variable measured in standard deviations. Our regression analyses are implemented in R 4.0.5 using the lme4 package [6].
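The actual model is estimated in R with lme4, including the user-specific random effects. Purely as a simplified sketch of the model structure, the following Python snippet z-standardizes the predictors and fits a plain binomial GLM on the per-tweet counts, omitting the random effects; the DataFrame df and its column names are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

# df: one row per source tweet, with the counts HReplies and Replies
# plus the predictors from Eq. 1 (column names are illustrative)
predictors = ["Party", "Gender", "Ethnicity", "SourceSentiment",
              "YearsInOffice", "Age", "AttachedMedia", "Chamber"]

# z-standardize the predictors so that coefficients are comparable
X = df[predictors].apply(lambda col: (col - col.mean()) / col.std())
X = sm.add_constant(X)

# Binomial outcome: (hateful replies, non-hateful replies) per tweet
y = np.column_stack([df["HReplies"], df["Replies"] - df["HReplies"]])

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```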

5 EMPIRICAL ANALYSIS

5.1 Summary Statistics

We start our analysis by evaluating summary statistics. The average share of hateful replies per tweet in our dataset amounts to 1.99%. We perform both t-tests and Kolmogorov-Smirnov (KS) tests to evaluate whether there are statistically significant differences across parties, genders, and ethnicities. Our findings are as follows: (i) tweets from Democrats (vs. Republicans) receive, on average, a 3.67% higher share of hate replies. (ii) Tweets from women (vs. men) politicians receive a 7.71% higher share of hate replies. (iii) Tweets from persons of color (vs. whites) receive a 37.75% higher share of hate replies. For each of these comparisons, two-sided t-tests confirm that the differences in means are statistically significant (p < 0.01). In Fig. 2, we visualize the complementary cumulative distribution functions (CCDFs) for the ratio of hate speech in replies. We again find that Democrats, women, and persons of color receive more hate speech. KS tests confirm that all differences in distributions are statistically significant (p < 0.01).
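For reference, a minimal sketch of how such group comparisons can be run with SciPy, assuming the per-tweet hate shares have already been split into two arrays by the grouping variable (variable names are illustrative):

```python
from scipy import stats

# share_dem, share_rep: per-tweet shares of hateful replies, split by the author's party
t_stat, t_p = stats.ttest_ind(share_dem, share_rep)     # two-sided t-test on the means
ks_stat, ks_p = stats.ks_2samp(share_dem, share_rep)    # KS test on the full distributions
print(f"t-test: p = {t_p:.4f}; KS test: p = {ks_p:.4f}")
```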

Figure 2: CCDFs for the ratio of hate speech in replies, separated by (a) party, (b) gender, and (c) ethnicity.

5.2 Regression Analysis

We estimate a multilevel binomial regression to understand the effects of party affiliation, gender, and ethnicity on the likelihood of a tweet receiving hate speech (see model w/o interactions in Fig. 3). In contrast to summary statistics, this allows us to estimate effect sizes after controlling for confounding effects. The largest effect size is estimated for Ethnicity with a coefficient of 0.346 (p < 0.01), which implies that the odds of receiving hate speech for persons of color are e^0.346 ≈ 1.41 times the odds for whites. We further find pronounced party and gender effects. Compared to Democrats, the odds for tweets from Republicans to receive hate speech are 22.02% higher (β = 0.199, p < 0.01). The odds for men to receive hate speech are 8.33% lower (β = −0.087, p < 0.05) than for women. We also find that a more negative sentiment in the source tweet is associated with more hate speech in replies. A one standard deviation increase in SourceSentiment is associated with a 25.99% (β = −0.301, p < 0.01) decrease in the odds of receiving hate speech. We find no statistically significant effects from a politician's age, time in office, chamber, and media attachments.
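The reported percentages follow directly from exponentiating the (z-standardized) logit coefficients; a quick check:

```python
from math import exp

print(exp(0.346))               # ≈ 1.41: odds multiplier for persons of color vs. whites
print((exp(0.199) - 1) * 100)   # ≈ 22.02%: higher odds for Republicans vs. Democrats
print((1 - exp(-0.087)) * 100)  # ≈ 8.33%: lower odds for men vs. women
print((1 - exp(-0.301)) * 100)  # ≈ 25.99%: decrease per one-SD increase in SourceSentiment
```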

We add interaction terms to test whether users react differently to gender, ethnicity, and sentiment depending on the party affiliation (see model w/ interactions in Fig. 3). Here we find a statistically significant interaction term between Party and Ethnicity (β = −0.287, p < 0.01). This implies that persons of color from the Democratic party have higher odds of receiving hate speech than persons of color from the Republican party. Furthermore, the strength of the association between sentiment in the source tweet and hate speech varies across parties (β = 0.235, p < 0.01). Specifically, negative sentiment attracts more hate speech for Democrats. The interaction between party affiliation and gender is not significant at common statistical significance thresholds.

Altogether, our analysis implies that three groups of politicians are particularly likely to receive hate speech in response to their tweets: (i) persons of color from the Democratic party, (ii) white Republicans, and (iii) women.

Figure 3: Coefficient estimates for the binomial regression w/o (coral) and w/ (teal) interaction terms for party. The horizontal bars represent 95% confidence intervals. User-specific random effects are included.

5.3 Robustness Checks

We conducted additional checks to validate the robustness of our analysis: (1) We repeated our analysis with a lexicon-based approach for hate speech detection, specifically the Hatebase dictionary [26]. (2) We calculated variance inflation factors for all independent variables in our regression model and found that all remain below the critical threshold of four. (3) We repeated our analysis with alternative estimators (e.g., beta regression), controlled for outliers, tested for quadratic effects, and added multiple interaction terms for each explanatory variable. In all cases, our results are robust and consistently support our findings.
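As an illustration of check (2), variance inflation factors can be computed as follows (reusing the assumed df and column names from the regression sketch above):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Design matrix with the independent variables from Eq. 1
X = sm.add_constant(df[["Party", "Gender", "Ethnicity", "SourceSentiment",
                        "YearsInOffice", "Age", "AttachedMedia", "Chamber"]])
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vifs.drop("const"))   # values below ~4 indicate no problematic multicollinearity
```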

6 DISCUSSION

Summary of findings: This work empirically models how the amount of hate speech in replies to tweets from politicians depends on their personal characteristics (party affiliation, gender, ethnicity). All else being equal, we find that tweets are especially likely to receive hate speech replies if they are authored by (i) persons of color from the Democratic party, (ii) white Republicans, and (iii) women. Furthermore, our analysis reveals that more negative sentiment (in the source tweet) is associated with more hate speech (in replies). However, the association varies across parties: negative sentiment attracts more hate speech for Democrats (vs. Republicans). Altogether, our empirical findings imply statistically significant differences in how politicians are treated on social media depending on their party affiliation, gender, and ethnicity.

Implications: Our findings are relevant both for politicians and from a societal perspective. Politicians should be aware that social media is a double-edged sword as it comes with the risk of receiving vast numbers of hate comments. This is concerning as hate speech can destroy reputations and may even lead to long-run mental health consequences [56]. Given that hate speech can affect people's decision to participate in politics [47], this may also impede diversity in the composition of political institutions. Furthermore, hate speech goes hand in hand with increased polarization, hyper-partisanship, and less common ground between opposing political sides [19], thereby threatening the functioning of democracy itself.

REFERENCES

  • Pushkal Agarwal, Oliver Hawkins, Margarita Amaxopoulou, Noel Dempsey, Nishanth Sastry, and Edward Wood. 2021. Hate speech in political discourse: A case study of UK MPs on Twitter. In ACM HT.
  • Sohail Akhtar, Valerio Basile, and Viviana Patti. 2020. Modeling annotator perspective and polarized opinions to improve hate speech detection. In HCOMP.
  • Hunt Allcott and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives 31, 2 (2017), 211–236.
  • Ahlam Alrehili. 2019. Automated hate speech detection on social media: A brief survey. In AICCSA.
  • Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In WWW Companion.
  • Douglas Bates, Deepayan Sarkar, Maintainer Douglas Bates, and L Matrix. 2021. lme4. https://cran.r-project.org/web/packages/lme4/index.html version 1.1.27.
  • Linda Beckman, Curt Hagquist, and Lisa Hellström. 2013. Discrepant gender patterns for cyberbullying and traditional bullying–An analysis of Swedish adolescent data. Computers in Human Behavior 29, 5 (2013), 1896–1903.
  • Anat Ben-David and Ariadna Matamoros Fernández. 2016. Hate speech and covert discrimination on social media: Monitoring the Facebook pages of extreme-right political parties in Spain. International Journal of Communication 10 (2016), 27.
  • Susan Benesch. 2014. Defining and diminishing hate speech. State of the World's Minorities and Indigenous Peoples 2014 (2014), 18–25.
  • Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Céspedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv 1803.11175 (2018).
  • Shivang Chopra, Ramit Sawhney, Puneet Mathur, and Rajiv Ratn Shah. 2020. Hindi-English hate speech detection: Author profiling, debiasing, and practical perspectives. In AAAI.
  • Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the trouble of offensive linguistic communication. In ICWSM.
  • Teresa L. Davis. 1995. Gender differences in masking negative emotions: Ability or motivation? Developmental Psychology 31, 4 (1995), 660–667. https://doi.org/10.1037/0012-1649.31.4.660
  • Tom de Smedt and Sylvia Jaki. 2018. The Polly corpus: Online political debate in Germany. In Computer-Mediated Communication.
  • Daniel M. Downs and Gloria Cowan. 2012. Predicting the importance of freedom of speech and the perceived harm of hate speech. Journal of Applied Social Psychology 42, 6 (2012), 1353–1375.
  • Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang, and Elizabeth Belding. 2018. Hate lingo: A target-based linguistic analysis of hate speech in social media. In ICWSM.
  • Gunn Sara Enli and Eli Skogerbø. 2013. Personalized campaigns in party-centred politics: Twitter and Facebook as arenas for political communication. Information, Communication & Society 16, 5 (2013), 757–774.
  • Karmen Erjavec and Melita Poler Kovačič. 2012. "You don't understand, this is a new war!" Analysis of hate speech in news web sites' comments. Mass Communication and Society 15, 6 (2012), 899–920.
  • Eli J Finkel, Christopher A Bail, Mina Cikara, Peter H Ditto, Shanto Iyengar, Samara Klar, Lilliana Mason, Mary C McGrath, Brendan Nyhan, David G Rand, et al. 2020. Political sectarianism in America. Science 370, 6516 (2020), 533–536.
  • Deen Freelon, Alice Marwick, and Daniel Kreiss. 2020. False equivalencies: Online activism from left to right. Science 369, 6508 (2020), 1197–1201.
  • Tamara Fuchs and Fabian Schäfer. 2019. Normalizing misogyny: hate speech and verbal abuse of female politicians on Japanese Twitter. In Japan Forum.
  • Jason Gainous and Kevin M. Wagner. 2013. Tweeting to power: The social media revolution in American politics. Oxford University Press.
  • Jennifer Golbeck, Justin M. Grimes, and Anthony Rogers. 2010. Twitter use by the U.S. Congress. Journal of the American Society for Information Science and Technology 61, 8 (2010), 1612–1621.
  • Todd Graham, Marcel Broersma, Karin Hazelhoff, and Guido Van'T Haar. 2013. Between broadcasting political messages and interacting with voters: The use of Twitter during the 2010 UK general election campaign. Information, Communication & Society 16, 5 (2013), 692–716.
  • Claire Hardaker and Mark McGlashan. 2016. "Real men don't hate women": Twitter rape threats and group identity. Journal of Pragmatics 91 (2016), 80–93.
  • Hatebase. 2021. A collaborative, regionalized repository of multilingual hate speech. https://hatebase.org/
  • Kenneth E Himma and Herman T Tavani. 2008. The handbook of information and computer ethics. John Wiley & Sons.
  • Sounman Hong, Haneul Choi, and Taek Kyu Kim. 2019. Why do politicians tweet? Extremists, underdogs, and opposing parties as political tweeters. Policy & Internet 11, 3 (2019), 305–323.
  • Sounman Hong and Daniel Nadler. 2012. Which candidates do the public discuss online in an election campaign? The use of social media by 2012 presidential candidates and its impact on candidate salience. Government Information Quarterly 29, 4 (2012), 455–461.
  • Nigel A. Jackson and Darren G. Lilleker. 2009. Building an architecture of participation? Political parties and Web 2.0 in Britain. Journal of Information Technology & Politics 6, 3-4 (2009), 232–250.
  • James A. Piazza. 2020. Politician hate speech and domestic terrorism. International Interactions 46, 3 (2020), 431–453. https://doi.org/10.1080/03050629.2020.1739033
  • Irene Kwok and Yuzhou Wang. 2013. Locate the hate: Detecting tweets against blacks. In AAAI.
  • Anders Olof Larsson. 2015. Pandering, protesting, engaging. Norwegian party leaders on Facebook during the 2013 'Short campaign'. Information, Communication & Society 18, 4 (2015), 459–473.
  • Helen Margetts, Peter John, Scott Hale, and Taha Yasseri. 2015. Political turbulence: How social media shape collective action. Princeton University Press.
  • Mainack Mondal, Leandro Araújo Silva, and Fabrício Benevenuto. 2017. A measurement study of hate speech in social media. In ACM HT.
  • Zewdie Mossie. 2020. Social media dark side content detection using transfer learning emphasis on hate and conflict. In WWW Companion.
  • Seema Nagar, Sameer Gupta, C. S. Bahushruth, Ferdous Ahmed Barbhuiya, and Kuntal Dey. 2021. Empirical assessment and characterization of homophily in classes of hate speeches. In AAAI Workshop on Affective Content Analysis.
  • Nhung T. Nguyen, M. Tom Basuray, William P. Smith, Donald Kopka, and Donald McCulloh. 2008. Moral issues and gender differences in ethical judgment using Reidenbach and Robin's (1990) multidimensional ethics scale: Implications in teaching of business ethics. Journal of Business Ethics 77, 4 (2008), 417–430.
  • Library of Congress. 2021. Members of the U.S. Congress. https://www.congress.gov/members
  • Alexandra Olteanu, Carlos Castillo, Jeremy Boy, and Kush Varshney. 2018. The effect of extremist violence on hateful speech online. In ICWSM.
  • Deborah A. Prentice and Erica Carranza. 2003. Sustaining cultural beliefs in the face of their violation: The case of gender stereotypes. In The Psychological Foundations of Culture. Psychology Press, 268–289.
  • Nicolas Pröllochs. 2022. Community-based fact-checking on Twitter's Birdwatch platform. In ICWSM.
  • Nicolas Pröllochs, Dominik Bär, and Stefan Feuerriegel. 2021. Emotions explain differences in the diffusion of true vs. false social media rumors. Scientific Reports 11, 22721 (2021).
  • Nicolas Pröllochs, Dominik Bär, and Stefan Feuerriegel. 2021. Emotions in online rumor diffusion. EPJ Data Science 10, 1 (2021), 51.
  • Karen Ross, Susan Fountaine, and Margie Comrie. 2015. Facing up to Facebook: politicians, publics and the social media(ted) turn in New Zealand. Media, Culture & Society 37, 2 (2015), 251–269.
  • Punyajoy Saha, Binny Mathew, Kiran Garimella, and Animesh Mukherjee. 2021. "Short is the road that leads from fear to hate": Fear speech in Indian WhatsApp groups. In WWW.
  • Jennifer Scott. 2019. Women MPs say abuse forcing them from politics. https://www.bbc.com/news/election-2019-50246969
  • Andrew Sellars. 2016. Defining hate speech. Berkman Klein Center Research Publication 2016-20 (2016), 16–48.
  • Kelly L. Smith. 2021. LibGuides: Congressional Twitter accounts. https://ucsd.libguides.com/congress_twitter
  • Kirill Solovev and Nicolas Pröllochs. 2022. Moral emotions shape the virality of COVID-19 misinformation on social media. In WWW.
  • Karolina Sylwester and Matthew Purver. 2015. Twitter language use reflects psychological differences between Democrats and Republicans. PLOS ONE 10, 9 (2015), e0137422.
  • Beverly Daniel Tatum. 2017. Why are all the Black kids sitting together in the cafeteria? And other conversations about race. Hachette UK.
  • Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment strength detection in short informal text. Journal of the American Society for Information Science and Technology 61, 12 (2010), 2544–2558.
  • Yannis Theocharis, Pablo Barberá, Zoltán Fazekas, and Sebastian Adrian Popa. 2020. The dynamics of political incivility on Twitter. Sage Open 10, 2 (2020), 1–15.
  • Yannis Theocharis, Will Lowe, Jan W. van Deth, and Gema García-Albacete. 2015. Using Twitter to mobilize protest action: Online mobilization patterns and action repertoires in the Occupy Wall Street, Indignados, and Aganaktismenoi movements. Information, Communication & Society 18, 2 (2015), 202–220.
  • Bertie Vidgen, Emily Burden, and Helen Margetts. 2021. Understanding online hate: VSP regulation and the broader context. Ofcom.
  • Angelia Wagner. 2020. Tolerating the trolls? Gendered perceptions of online harassment of politicians in Canada. Feminist Media Studies (2020), 1–16.
  • Maximilian Wich, Melissa Breitinger, Wienke Strathern, Marlena Naimarevic, Georg Groh, and Jürgen Pfeffer. 2021. Are your friends also haters? Identification of hater networks on social media: Data Paper. In WWW Companion.
  • Julia Winkler, Annabell Halfmann, and Rainer Freudenthaler. 2017. Backlash effects in online discussions: Effects of gender and counter-stereotypical communication on persuasiveness and likeability. Annual International Communication Association Conference.
  • Savvas Zannettou, Barry Bradlyn, Emiliano de Cristofaro, Haewoon Kwak, Michael Sirivianos, Gianluca Stringhini, and Jeremy Blackburn. 2018. What is Gab: A bastion of free speech or an alt-right echo chamber. In WWW Companion.

FOOTNOTE

¹We use the Universal Sentence Encoder (USE) [10] as text representation. The machine learning classifier yields a weighted out-of-sample F1 score of 0.89, which is similar to previous works [12] and can be seen as reasonably accurate in the context of our study. The model is implemented in Python 3.8.5 using TensorFlow 2.6.0.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

WWW '22, Apr 25–29, 2022, Virtual Event, Lyon, France

© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9096-5/22/04.
DOI: https://doi.org/10.1145/3485447.3512261
