CIND 2024 Workshop, April 25-26

Political and Information Networks

The Center for Information Networks and Democracy will bring together a stellar group of interdisciplinary scholars to discuss research on information networks and their political impact.


 

Confirmed Invited Speakers

 

Schedule


Thursday, April 25

Friday, April 26

 

Talks

"Trustworthy Online Governance: Some Preliminary Concepts & Evidence", by Aaron Shaw

Online environments hold tantalizing potential to create resilient and accountable collective governance arrangements that sustain vibrant information ecosystems. However, trustworthy online institutions remain stubbornly out of reach in the face of numerous threats, such as autocratic system defaults, attacks on content integrity, and asymmetric labor relations. The rise of generative AI and large language models has further disrupted the revenue streams and stability of many platforms, provoking new layers of crisis and uncertainty. These ubiquitous challenges underscore the need for more robust approaches to understanding and designing effective online governance institutions. This workshop submission pursues an analytic model as well as an initial empirical investigation of trustworthy online governance. Part one focuses on the analytic model: I adapt existing concepts from institutional theory and elaborate testable propositions for trustworthy online platform governance. Part two presents an empirical assessment of one of the model's basic propositions: the relationship between effective platform governance institutions and the trust and compliance behaviors of gig workers on gig work platforms. Overall, the conceptual and empirical results indicate that diverse, effective, and accountable institutions can support the (re)establishment of trustworthy online platform governance arrangements.

"The Persistence of Misinformation Ties", by Ceren Budak

Many studies explore how people "come into" misinformation exposure. However, much less is known about how people "come out of" such exposure. Do people organically sever ties to misinformation spreaders? And what predicts their doing so? Over six months, we tracked the frequency and predictors of one million followers unfollowing health misinformation spreaders on Twitter. I will discuss this project to shed light on how people come out of misinformation exposure. First, our unfollowing frequency analysis shows that misinformation ties are persistent; in fact, they are more persistent than ties to accounts that do not spread misinformation. That users rarely unfollow misinformation spreaders suggests a need for external nudges and underscores the importance of preventing exposure from arising in the first place. Next, I will turn to our predictive model, which identifies factors that increase the tendency to unfollow, and discuss its implications for future interventions that aim to sever ties to misinformation spreaders and reduce access to misinformation. [Paper]
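
To make the persistence comparison concrete, here is a minimal, purely illustrative Python sketch (toy data and hypothetical column names, not the study's actual pipeline) of computing six-month unfollow rates for ties to misinformation spreaders versus other accounts:

import pandas as pd

# Toy data: one row per follower-followee tie; all values are hypothetical.
ties = pd.DataFrame({
    "follower_id":     [1, 2, 3, 4, 5, 6],
    "spreads_misinfo": [True, True, True, False, False, False],
    "unfollowed_6mo":  [False, False, True, True, True, False],
})

# A tie is "persistent" if it was NOT severed within the window, so a lower
# unfollow rate for spreads_misinfo=True means misinformation ties persist more.
rates = ties.groupby("spreads_misinfo")["unfollowed_6mo"].mean()
print(rates)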

"Learning from randomized interventions in social media", by Dean Eckles

How should we reason about the effects of interventions in social media? What effect sizes should be expected from such changes to algorithms and content? And, given the fundamentally social nature of these services, what conclusions can we draw from individual-level experiments? I will comment on the published results from prominent, recent experiments on Facebook and Instagram conducted during the 2020 US elections. I will also draw on two new field experiments on Facebook (N > 33 million) and Twitter (N > 75,000), each randomizing exposure to advertising featuring content-general messages reminding people to think about accuracy. [Paper]
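
As a rough illustration of the kind of estimate such randomized experiments support, here is a minimal Python sketch (simulated data and a hypothetical binary outcome, not the studies' code) of a difference-in-means treatment effect with a conventional standard error:

import numpy as np

rng = np.random.default_rng(0)
# Simulated binary outcomes (e.g., whether a user shared low-accuracy content);
# the outcome and effect size here are made up for illustration.
treated = rng.binomial(1, 0.50, size=10_000)
control = rng.binomial(1, 0.52, size=10_000)

# Average treatment effect as a difference in means, with a 95% CI.
ate = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size
             + control.var(ddof=1) / control.size)
print(f"ATE = {ate:.4f}, 95% CI = [{ate - 1.96*se:.4f}, {ate + 1.96*se:.4f}]")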

"Rumors Have Rules", by Emma Spiro

Rumors are prominent features of our networked information landscape, particularly in crisis settings where uncertainty and anxiety are high and information can be highly dynamic and ambiguous. These phenomena are clearly evident in numerous recent cases, including national elections and the pandemic. In this talk, I review work at the Center for an Informed Public that explores how and why people share rumors in online settings. This work is grounded in social science traditions of collective sensemaking, which have examined the production and spread of rumors as natural manifestations of social behavior with productive informational, social, and psychological aspects. Our work aims to build a better understanding of today’s information systems and their impact, with potential implications for crisis response and public trust. [Paper]

"Liberals Engage With More Diverse Policy Topics and Toxic Content Than Conservatives on Social Media", by Herbert Chang

The rise of social media provides citizens with direct access to information shared by political elites. Yet, more than ever before, citizens play a critical role in diffusing elite-generated content. What kinds of content spread on social media? Do conservative and liberal citizens differ in the elite content with which they engage? These questions relate to long-standing academic and popular debates about whether political behavior is symmetric or asymmetric with respect to political ideology. We analyze more than 13 million users’ retweets of messages by U.S. Members of Congress on Twitter from 2009 to 2019, leveraging estimates of users’ political ideology constructed from over 3.5 billion prior retweets. We find limited evidence for ideological symmetry, under which the strength of ideology would predict diffusion choices similarly on the left and the right. In contrast, we find robust support for ideological asymmetry. Consistent with theories of ideological asymmetry, liberals retweeted a 19.4% more diverse set of policy topics than conservatives did, and they engaged 56% more with toxic content from in-group elites. Given the tendency for people to follow like-minded others on social media, these diffusion patterns imply that liberals are exposed to more politically diverse and more toxic elite-generated content on social media, while conservatives receive more politically homogeneous and less toxic content. The demand and supply dynamics of political information suggest the existence of polarized information bubbles, such that liberals and conservatives reside in distinct information ecosystems. [Paper]
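
One simple way to operationalize the "diversity of policy topics" a group engages with is the Shannon entropy of its retweet distribution over topics; the following Python sketch (made-up counts, and not necessarily the paper's own measure) illustrates the idea:

import numpy as np

def topic_entropy(counts):
    # Shannon entropy (bits) of a distribution of retweets over topics;
    # higher entropy means engagement is spread over more topics.
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical retweet counts per policy topic for each group.
liberal_counts      = [120, 90, 80, 75, 60, 40]
conservative_counts = [300, 40, 30, 20, 10, 5]

print("liberal topic diversity:     ", topic_entropy(liberal_counts))
print("conservative topic diversity:", topic_entropy(conservative_counts))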

"Emergence and collapse of reciprocity in semiautomatic driving coordination experiments with humans", by Hirokazu Shirado

Forms of both simple and complex machine intelligence are increasingly acting within human groups in order to affect collective outcomes. Considering the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment where online participants (N = 300 in 150 dyads) drive robotic vehicles remotely in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people to make mutual concessions. On the other hand, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. The negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to relate to whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments. [Paper]
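
To give a flavor of how reciprocity can be quantified in a repeated dyadic game, here is a toy Python sketch (not the experiment's code; the measure and data are illustrative assumptions) computing the rate at which one driver gives way in a round after their partner gave way in the previous round:

# Per-round yielding decisions for a toy dyad (True = gave way that round).
yields_a = [True, False, True, True, False, True]
yields_b = [False, True, True, False, True, True]

# A's reciprocity: how often A gives way right after B gave way.
opportunities = [t for t in range(1, len(yields_a)) if yields_b[t - 1]]
reciprocated = sum(1 for t in opportunities if yields_a[t])
print("A's reciprocity rate:", reciprocated / len(opportunities))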

"The role of networks in evaluating and correcting political information" by Katya Ognyanova

New social and technological factors are shaping the way we evaluate facts. Political polarization and declining trust in institutions have contributed to concerns about our vulnerability to digital misinformation. Suggested solutions to this problem include regulatory measures, technological innovations, and literacy initiatives. Examining the challenge through a network lens indicates that a successful intervention could leverage mechanisms of social influence. The trust we place in messages, information sources, and institutions is not a simple, individual-level choice. Trust emerges in a social context and is influenced by interpersonal and community factors. Perceived social costs and benefits guide our decision whether to spread, ignore, or debunk a piece of information. This talk explores the social aspects of content evaluation and dissemination. It offers an overview of the role social networks play in our exposure to, engagement with, spreading of, and belief in misinformation. Social influence is discussed as a pathway to debunking false stories. The talk outlines a case study that demonstrates the potential of network approaches to correcting misperceptions. [Paper]

"Bridging the Digital Divide: Data Access and Integration of Venezuelan Migrants in Colombia" by Nejla Asimovic

The crisis in Venezuela has forced nearly two million people to seek refuge in Colombia, creating significant challenges for both the displaced individuals and the Colombian government. A notable hurdle is limited internet access, which impedes the acquisition of crucial information about government programs, economic opportunities, and social networks. In collaboration with Innovations for Poverty Action Colombia and the National Planning Department of Colombia, our study assesses the impact of enhanced data access on the lives of Venezuelan migrants in Colombia. Specifically, we measure how improved data access influences their awareness of assistance programs, trust in the government, success in the job market, and overall well-being. To achieve this, we provide mobile data credits to a selected sample of Venezuelan migrants in Colombia who currently lack internet access. Within this sample, a subgroup receives WhatsApp messages directly from a moderator trained by Colombian public officials, with the delivery method varying: some participants receive messages within WhatsApp groups that foster networking among themselves, while others receive messages directly from the moderator. These messages offer information about available social programs and actively encourage enrollment on an online portal. By analyzing the impact of this intervention through attitudinal and behavioral data, we aim to gain insights that can inform policies to strengthen connections between migrants and host countries. Furthermore, we seek to leverage the widespread use of WhatsApp as a means to activate networks and enhance public service delivery. [Paper]
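
For illustration only, a balanced random assignment into the delivery arms the abstract describes might look like the following Python sketch (arm names and participant IDs are hypothetical, not the study's protocol):

import random

random.seed(42)  # fixed seed so the toy assignment is reproducible
arms = ["data_only", "data_plus_group_messages", "data_plus_direct_messages"]
participants = [f"P{i:03d}" for i in range(12)]

# Shuffle, then deal participants round-robin so the arms stay balanced.
random.shuffle(participants)
assignment = {pid: arms[i % len(arms)] for i, pid in enumerate(participants)}
for pid, arm in sorted(assignment.items()):
    print(pid, "->", arm)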

Logistics

When?

The workshop will take place on April 25-26, 2024.

Where?

Room 500 at the Annenberg School for Communication (please use the Walnut Street entrance, where you will be directed to the room).

Travel and Accommodation

We have reserved rooms at The Sofitel Rittenhouse, which is a short ride from Annenberg. If you need assistance with your travel arrangements, please contact Luisa Jacobsen.
