Technical Research Consultancy

Amnesty International

London, United Kingdom

Amnesty Tech is a programme in Amnesty International's Research, Advocacy and Policy Directorate with a mission to protect and extend human rights in a world of rapid technological change. We are an interdisciplinary team of technologists and human rights experts currently based in 13 locations worldwide, including Amsterdam, Brussels, Berlin, Toronto, London, New York, Tunis, and San Francisco.

The Consultant will work specifically with Amnesty Tech's newly established Children and Young People's Digital Rights, Health and Well-being (CYD) team. The team's overarching vision is of a world where online platforms and other digital technologies are safe, healthy and supportive spaces for children and young people, helping them realize their human rights.

Building on Amnesty International's 2019 report “Surveillance Giants”, the CYD team's first research project investigates the effects of prominent social media platforms' surveillance-based business model on children and young people, asking the following core research question: What are the effects of surveillance-based business models on children and young people, especially in terms of abuses of their rights to health and well-being, privacy, and freedom of opinion and thought?

We are collaborating on aspects of this research project with national Amnesty entities and youth groups in the Philippines, Kenya and Argentina.

Purpose of the assignment

The consultancy is intended to deliver research, including replicable research methodologies and data collection frameworks, on the availability and algorithmic amplification of content depicting and/or promoting depression, self-harm and suicide to children and young users (up to the age of 24) on Instagram and TikTok.

We would like to develop research approaches, in collaboration with the Amnesty CYD team, to explore questions such as the following:

  • Types of content: What kind of material can children and young people using Instagram and TikTok find when searching for terms relating to self-harm, depression and suicide, including slang and altered spellings of such words?
  • When, if at all, are children and young people steered towards helplines and professional organizations that advise children and young people on mental health issues?
  • How does the quality of content moderation, recommendations and design features, intended to detect potentially harmful searches and user behaviour and to steer children and young people towards seeking professional help, differ depending on location and language?
  • Amplification: Under what circumstances are children and young people shown harmful material depicting and/or promoting depression, self-harm and suicide through algorithmic recommendations?

  • If so, how quickly and at what frequency, and how would we measure this? (A minimal illustrative sketch follows this list.)
  • How often are children and young people who seek out depressive and harmful content shown content that addresses these themes in constructive ways, i.e. how often is the cycle of depressive content in their feeds broken?
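By way of illustration only (this sketch is not part of the required deliverables): "how quickly and at what frequency" could, for example, be operationalized as the share of harmful posts in a logged feed session plus the rank and elapsed time of the first harmful recommendation. Below is a minimal Python sketch under that assumption; the data model, field names, and labels are hypothetical, and labels are assumed to come from separate human annotation.

```python
from dataclasses import dataclass

@dataclass
class RecommendedPost:
    session_id: str         # one scripted scrolling session per research account
    rank: int               # position in the feed (1 = first post shown)
    seconds_elapsed: float  # time since the session started
    label: str              # human annotation: "harmful", "constructive", "neutral"

def session_metrics(posts: list[RecommendedPost]) -> dict:
    """Summarize one session: the share of harmful posts and how quickly
    the first harmful recommendation appeared."""
    harmful = [p for p in posts if p.label == "harmful"]
    first = min(harmful, key=lambda p: p.rank, default=None)
    return {
        "total_posts": len(posts),
        "harmful_share": len(harmful) / len(posts) if posts else 0.0,
        "rank_of_first_harmful": first.rank if first else None,
        "seconds_to_first_harmful": first.seconds_elapsed if first else None,
    }

# Example with made-up data:
feed = [
    RecommendedPost("acct01-s01", 1, 4.0, "neutral"),
    RecommendedPost("acct01-s01", 2, 9.5, "harmful"),
    RecommendedPost("acct01-s01", 3, 15.2, "constructive"),
]
print(session_metrics(feed))
```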

Deliverables

In order to answer these questions, the consultant will deliver:

  • A research methodology that sets out a framework for gathering representative research results on the availability and algorithmic amplification of content depicting and/or promoting self-harm, depression and suicide on Instagram using research accounts. The methodology should ideally be replicable in future investigations in non-English-speaking locations and transferable to other types of potentially harmful content.

Taking the project timeframe into consideration, the methodology will set out:

  • The number of research accounts used, as well as the assigned age (including proposed solutions to the new biometric age assurance applied to Instagram accounts in the EU and the US), the assigned location (if found to be feasible, a diversity of “home states” attributed to the user profiles, including programme focus countries Kenya and the Philippines in addition to the UK and the US, would be desirable), and any other properties, interest tags, etc. applied;
  • The accounts' various “user behaviours” applied to test which kinds of behaviours trigger harmful search results (as opposed to the display of help features such as pointers to helplines) and algorithmic recommendations of such content;
  • An approach through which we will test whether the recommender system can be trained to stop showing such content when a user makes a deliberate effort to steer away from such themes;
  • The analysis plan, including relevant tools/code and the approach taken to annotate/categorize recommended posts (a minimal illustrative sketch appears after the deliverables list).

  • A research methodology (including all the components mentioned above) suitable for such investigations into algorithmic recommendations served on TikTok.

  • A pilot research project, to be conducted and documented once the methodologies are finalized and agreed with the Amnesty CYD team, applying them to English-language accounts on Instagram and TikTok and producing answers to the aforementioned research questions.
  • Amnesty reserves the right to use and re-purpose the information provided by the consultant for various purposes that serve our larger research objectives. 
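For the analysis-plan deliverable above, one possible (purely illustrative) approach to annotating/categorizing recommended posts is a keyword-assisted first pass followed by manual review. The sketch below assumes captions and hashtags of recommended posts are available as text; the term lists, labels, and function are placeholders, not a vetted lexicon or Amnesty's actual tooling.

```python
# Hypothetical first-pass categorizer: flags posts whose caption or hashtags
# match a curated term list (including slang and altered spellings) so that
# human annotators can prioritize them for review.
HARM_TERMS = {"selfharm", "sh", "su1cide"}           # placeholder slang/alterations
SUPPORT_TERMS = {"helpline", "recovery", "therapy"}  # placeholder terms

def first_pass_label(caption: str, hashtags: list[str]) -> str:
    """Return a provisional label; a human annotator makes the final call."""
    tokens = {t.lower().lstrip("#") for t in caption.split() + hashtags}
    if tokens & HARM_TERMS:
        return "review: potentially harmful"
    if tokens & SUPPORT_TERMS:
        return "review: potentially supportive"
    return "review: uncategorized"

print(first_pass_label("feeling low #sh", ["#vent"]))  # -> "review: potentially harmful"
```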

Source: https://careers.amnesty.org/vacancy/-technical-research-consultancy-algorithmic-harms-of-social-media-platforms-3595/3623/description/