European Centre for Algorithmic Transparency
  • News article
  • 27 June 2024
  • Joint Research Centre

Open vacancy for AI Specialist

We are looking for an AI specialist interested in supporting EU AI policies, with a focus on auditing and evaluation (e.g., benchmarking, adversarial testing, red-teaming, training compute) and trustworthy AI (fairness, transparency, human oversight, and social and environmental impact).

The successful candidate will contribute to the unit’s research activities on analysing the risks of AI and relevant mitigation measures, and on designing methodologies for AI evaluation, fairness, transparency, human oversight and accountability. This research draws on several methodologies, including system-centric and user-centric approaches, with a particular focus on data-driven methods.

Concrete areas of work may include:

  • Developing methodologies to assess the capabilities and systemic risks of general-purpose AI models
  • Defining methodologies for the auditing and evaluation of data, algorithms, models, systems and applications to assess issues linked to fairness, accountability, transparency, and risks to individuals, groups and society. This includes work on metrics, measurements and evaluation practices.
  • Evaluating the broader societal and ethical impact of AI, with a focus on recommender systems and generative AI
  • Conducting science mapping and system- and user-centric evaluation studies
  • Advancing methodologies for ensuring trustworthy AI, including requirements for transparency, fairness, accountability, data governance and human oversight
  • Providing scientific contributions to EU AI policies

Essential experience and skills:

  • A PhD, or a university degree plus at least 2 years of professional experience, in artificial intelligence
  • Research expertise in topics linked to AI evaluation and safety, with a focus on trustworthy AI (transparency, fairness, human oversight, societal impact)
  • Practical experience in AI evaluation and development (e.g., machine learning and deep learning models) in areas such as natural language processing, computer vision, audio processing or recommender systems.
  • Very good (C1 level) knowledge of English

Any of the following additional experience and skills would be desirable:

  • Experience in performing AI evaluation activities (e.g., benchmarking, adversarial testing, red-teaming, tasks, capabilities, metrics, generality, multilingual evaluation), or analysing their risks and impact on society and individuals (e.g. safety, impact on fundamental rights, fairness, explainability, trust). 
  • Experience in evaluating and adapting generative AI models (e.g., large language models, text-to-image generative AI, etc.), including techniques such as N-shot learning, retrieval-augmented generation (RAG), reinforcement learning from human feedback (RLHF), parameter-efficient fine-tuning, quantization techniques, etc.
  • Knowledge of the architecture and lifecycle of large-scale AI algorithms, such as those used by online platforms and search engines. 
  • Software engineering and development skills; ability to develop, maintain and deploy production-grade data science and AI libraries, tools and applications in Python (e.g., TensorFlow, Keras, PyTorch, scikit-learn).
  • Experience in AI policies, notably the AI Act and the Digital Services Act.

Place of employment: Seville (ES)

Application deadline: 25 July 2024

Vacancy code: 2024-SVQ-T3-FGIV-025542

Read the full vacancy notice and apply here
