
On 25 June 2025 in Athens, at the ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025), researchers from the European Commission’s Joint Research Centre (JRC) presented their analysis of how to investigate algorithm-driven risks in online platforms and search engines. The work offers a novel taxonomy of study designs for examining these risks in a scientifically rigorous and practically useful way, supporting the risk management framework of the EU Digital Services Act (DSA).
A framework inspired by the DSA
Algorithms embedded in digital platforms profoundly shape online experiences, from content recommendations and targeted ads to content moderation. While central to digital services, these systems can contribute to societal risks, including harms to minors, risks to users’ mental well-being, radicalisation, discrimination, and risks to civic discourse and electoral processes. To address these concerns, the DSA imposes risk management and transparency obligations, particularly for Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs).
The paper proposes a structured approach to algorithm auditing, showing how existing methodologies can help assess the risks posed by algorithms on large online platforms and search engines. It identifies four distinct categories of audit study design, each aligned with different needs of the DSA risk management framework:
- Risk-uncovering studies: Often the first step in an auditing process, these studies are how a potential algorithmic harm is first brought to public attention. They typically begin with anecdotal observations by users in real-world contexts. From there, the investigation evolves, sometimes through community collaboration, sometimes through journalistic work, into a broader understanding of how an algorithmic system might cause harm.
- Reverse engineering studies: These studies aim to understand the inner workings of algorithms, often without direct access. By systematically varying inputs and observing outputs, these “black-box” or “grey-box” studies reveal the parameters driving algorithmic decisions, such as location-based search result biases or ad targeting discrepancies (a minimal sketch of this probing approach follows the list below). This understanding is often pivotal when investigating algorithm-driven risks.
- Interface design studies: Drawing from human-computer interaction research, these studies assess how interface features — like reactions, comments, or layout — influence user behaviour and feed into algorithmic risks, such as whether design features on social media may amplify polarising discourse or stimulate addictive behaviours.
- Risk-measuring studies: These provide statistically robust, quantitative assessments of the impact of algorithmic systems on specific risks. Researchers employ experimental or observational methods to evaluate, for example, whether certain users are disproportionately exposed to harmful content or to algorithmic rabbit holes that could pose risks to users’ mental well-being (a worked statistical example also follows the list).
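To make the reverse-engineering category concrete, here is a minimal, hypothetical sketch of a black-box probe: it queries the same search endpoint while varying a single input (the declared location) and measures how much the ranked results differ. The endpoint URL, parameter names, and response schema are illustrative assumptions, not a real platform API.

```python
import requests

def top_results(query, location, n=10):
    """Fetch the top-n results for `query` as seen from `location`.
    NOTE: the endpoint and parameters are hypothetical placeholders;
    a real audit would target a specific platform's public interface."""
    resp = requests.get(
        "https://search.example.com/api",  # hypothetical endpoint
        params={"q": query, "loc": location, "num": n},
        timeout=10,
    )
    resp.raise_for_status()
    return [item["url"] for item in resp.json()["results"][:n]]

def jaccard_overlap(a, b):
    """Similarity of two result lists, ignoring order (1.0 = identical sets)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

# Vary one input (location) while holding the query fixed, and compare outputs.
query = "housing benefits"
baseline = top_results(query, location="Brussels")
for loc in ["Athens", "Warsaw", "Dublin"]:
    variant = top_results(query, location=loc)
    print(f"{loc}: overlap with Brussels = {jaccard_overlap(baseline, variant):.2f}")
```

Repeating such probes over many queries and simulated user profiles is what lets auditors infer which parameters drive an algorithm’s decisions without access to the underlying model.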
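For the risk-measuring category, a worked example of the kind of statistical check involved: a two-sample proportion test comparing how often two user groups encounter harmful recommendations. The exposure counts below are invented purely for illustration.

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two exposure rates.
    x = sessions with at least one harmful recommendation,
    n = total sessions observed for that group."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical audit data: minors' test accounts vs. adult control accounts.
z, p = two_proportion_ztest(x1=180, n1=1000, x2=120, n2=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the exposure rates differ
```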
New tools and opportunities for auditors
Beyond categorisation, the authors address the practical challenges of conducting algorithm audits, especially the strengths and limitations of various data collection methods. Crucially, the DSA introduces new instruments to enhance oversight, such as:
- Article 40, allowing vetted researchers to request access to platform data to conduct research on systemic risks.
- The Transparency Database, providing statements of reasons in the context of content moderation decisions.
- Ad repositories, with details on paid advertisement campaigns.
These tools are expected to support rigorous, compliance-oriented research and investigations, including those conducted by the algorithmic auditing community.
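As a sketch of how the Transparency Database might feed into such research, the snippet below tallies statements of reasons by platform and violation category from a local data export. The file name and the `platform_name`/`category` field names are assumptions about the export schema; consult the database’s own documentation for the actual format.

```python
import csv
from collections import Counter

# Hypothetical local export of statements of reasons; field names are assumptions.
counts = Counter()
with open("statements_of_reasons.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[(row["platform_name"], row["category"])] += 1

# Moderation decisions per (platform, violation category), most frequent first.
for (platform, category), n in counts.most_common(10):
    print(f"{platform:20s} {category:45s} {n}")
```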
Looking ahead
While the review offers a solid foundation, it also highlights existing gaps, particularly the limited visibility into internal audits performed by platform providers. Looking forward, as the DSA supports further research and scrutiny of online services, the landscape of algorithm auditing is expected to evolve rapidly.
Details
- Publication date: 30 June 2025
- Author: Joint Research Centre