SAFELab

About SAFELab

The SAFELab is a transdisciplinary research lab drawing on qualitative and computational methods and leveraging reflexive social work values. We examine well-being, health equity, and social justice with youth of color and marginalized communities.

Photo Credit (top image): KE ATLAS / Unsplash

SAFELab is dedicated to using innovative methods to promote joy and healing in both online and offline spaces. We aim to build equitable partnerships with community members and community-based organizations that are engaged in building cultures of belonging, health, and safety.

We strive to connect the dots between professional knowledge, community engagement, and technology. We listen to and elevate lived realities in order to co-design and reimagine how AI and social media platforms can work for all.

Our Aims

  • Improve well-being outcomes for youth of color
  • Be a resource for all working in violence prevention and intervention
  • Train future social work scholars interested in urban research
  • Contribute new knowledge regarding the phenomenon of community violence and social media behaviors
  • Identify best practices to improve community-level work

Ethics

We acknowledge that identifiable social media data from marginalized populations can be used to criminalize and incarcerate communities of color. We also acknowledge that the data we collect and the methods we use have the potential to cause harm if used improperly or if they fall into the wrong hands. As such, the SAFELab has developed a set of ethical guidelines describing our research process, collaborations with data science, and dissemination efforts. The nature of our work requires proactive, iterative, and ever-changing ethical considerations in order to prevent any potential harm to the communities with which we work and from which our data originates. The following ethical guidelines are a first step in confronting the challenges that arise from our work on social media, including the use of artificial intelligence as a tool for violence prevention in marginalized communities.

Transparency

  • Prioritizing a list of community needs around violence prevention, updating this list as new needs become apparent

  • Describing our data collection and analysis process and the ways the data is used and applied

  • Convening an advisory board of experts in the field, violence prevention workers, and community members (including formerly gang-involved youth) that meets monthly

  • Seeking community validation and evaluation of our decisions around data analysis and labeling

Data Collection

  • Institutional Review Board (IRB) approval for all of our research studies. However, because the IRB considers public social media data exempt, we must find other mechanisms of accountability

  • The social media data we work with comes from hard-to-reach populations, which makes consent not only hard to obtain but also unreasonable to expect. We are working to find other ways to protect the young people who are involved in our studies:

    • Community consent

    • Family member consent

Data Analysis

  • Password-protected annotation system

  • No one outside of our research team has access to the data

  • We are considering having all of our data annotators sign a Memorandum of Understanding (MOU) affirming that they will not share the data with anyone

  • Weekly conversations on the ethics of our work. We iteratively revisit our practices and create space for anyone on the team to raise ethical issues, and we address concerns brought to us by people outside of our research team, including organizational partners and community members

Sharing Data

  • We currently do not share any of our datasets with law enforcement agencies or with anyone using punitive and criminalizing methodologies. We will continue to review these practices with our community partners

Research Presentations and Publishable Work

Text Social Media Data

  • No longer using usernames

  • Altering the text of the social media post to render it unsearchable

  • Proactively removing from our dataset social media posts and users that have gone private or been removed or suspended from a platform (a minimal sketch of these practices follows this list)
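
A minimal sketch of these text-data practices, assuming posts are held as simple in-memory records. Post, paraphrase, and prepare_for_publication are hypothetical names invented for this illustration, not SAFELab's actual tooling; the paraphrasing step stands in for the manual rewording the guidelines describe.

```python
# A minimal sketch (not SAFELab's actual pipeline) of the three text-data
# practices above: no usernames, altered text, and proactive removal of
# posts whose authors have gone private or been removed/suspended.

from dataclasses import dataclass


@dataclass
class Post:
    username: str    # dropped entirely before anything is published
    text: str
    is_public: bool  # False once the author's account has gone private
    is_active: bool  # False if the account was removed or suspended


def paraphrase(text: str) -> str:
    """Hypothetical placeholder: in practice a person rewords the post so
    that searching for the altered text cannot surface the original account."""
    return f"[reworded paraphrase of a {len(text.split())}-word post]"


def prepare_for_publication(posts: list[Post]) -> list[str]:
    publishable = []
    for post in posts:
        # Proactively drop posts whose authors went private or were
        # removed/suspended from the platform.
        if not (post.is_public and post.is_active):
            continue
        # Publish only altered text; the username is never carried forward.
        publishable.append(paraphrase(post.text))
    return publishable
```

The design point is that a username never reaches the output: only altered, unsearchable text is carried into publishable work.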

Image Social Media Data

  • No usernames

  • No pictures of faces

  • Images from our dataset in publishable work:

    • We do not publish any images from our dataset

    • We use similar Creative Commons images as examples instead

  • Providing reviewers a password-protected URL with anonymized examples of social media posts, including posts with images, from our dataset

Founded and directed by Desmond Upton Patton, Ph.D., the lab began at Columbia University, affiliated with the School of Social Work, Data Science Institute, and Digital Storytelling Lab. In 2022, SAFELab moved to the University of Pennsylvania, where it is jointly hosted by the Annenberg School for Communication and the School of Social Policy and Practice.


Get in Touch!


Join Us

The SAFELab welcomes people from all backgrounds to join us in our research. Time commitments typically range from 8 to 12 hours per week.


Speaking Engagements & Interview Requests

SAFELab members are available to provide interviews or speak at conferences and universities about our research and projects.


Contact Us

3901 Walnut St.
Philadelphia, PA 19104
(215) 746-6290
safelabupenn@gmail.com
@SAFELab