Trustworthy AI

Lecturer: Francisco (Paco) Herrera, University of Granada, Spain

Trustworthy Artificial Intelligence (AI) rests on three main pillars that should be met throughout the system's entire life cycle: AI should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. These pillars are operationalized through seven technical requirements. Our multidisciplinary vision of trustworthy AI also covers the regulation debate, serving as an entry point to a field that is crucial to our society's present and future progress.

Sessions


Trustworthy Autonomous Vehicles

Lecturer: David Fernández Llorca, European Centre for Algorithmic Transparency, Spain

The implementation and relevance of the assessment list established by the independent High-Level Expert Group on Artificial Intelligence (AI HLEG), a tool for translating the seven requirements that AI systems should meet in order to be trustworthy (as defined in the Ethics Guidelines), will be discussed in detail and contextualized for the field of autonomous vehicles (AVs). The general behavior of an AV depends on a set of multiple, complex, interrelated Artificial Intelligence (AI) based systems, each dealing with problems of a different nature. Following the final version of the EU regulation for AI (the AI Act), most of these AI systems can be considered safety relevant (i.e., safety components or products), and some of them are subject to third-party conformity assessments, so they will be required to comply with the obligations for high-risk AI systems. One of the main challenges will be dealing with risks to fundamental rights in a well-established sector that does not inherently have this kind of perspective. The adoption of AVs involves addressing significant technical, political, and societal challenges. However, AVs could bring substantial benefits, improving safety, mobility, and the environment. Therefore, although challenging, it is necessary to deepen the application of the trustworthy AI assessment criteria to AVs.

Trustworthy AI for Industry

Lecturers: Jose Antonio Martin Hernandez and Jose Javier Valle Alonso, Repsol, Spain

The application of AI in industrial risk environments requires an intensive security assessment, where control over data in data-driven models is critical. Attributes such as technical robustness, security, transparency, and human oversight are essential for these AI systems. The diversity of industrial processes requires a general-purpose AI approach that can adapt to different scenarios while maintaining these attributes.

Reliability in AI models is a multifaceted concept that encompasses various aspects to ensure that AI systems perform consistently and predictably in industrial environments. Here are some key characteristics that AI models should possess in terms of reliability:

  1. Robustness: AI models must be robust to a wide range of inputs and conditions, including those that are infrequent or unexpected. This includes the ability to handle noisy, incomplete, or ambiguous data without significant performance degradation.

  2. Resilience: Models should be designed to recover quickly from perturbations and continue to operate effectively in the face of errors or changes in the environment.

  3. Reproducibility: The results produced by AI models should be consistent and reproducible across runs and settings, which is critical for trust and validation purposes.

  4. Transparency: Reliable AI models should be transparent in their operations, allowing users to understand how decisions are made. This is especially important in high-risk environments where accountability is critical.

  5. Explainability: Closely related to transparency, explainability refers to the ability of AI models to provide understandable explanations for their decisions, which is essential for gaining user trust and facilitating human oversight.

  6. Security: AI models must be safe and not cause harm to humans or the environment. This includes having mechanisms to prevent, detect, and mitigate harmful outcomes.

  7. Compliance: AI systems must comply with relevant laws and regulations.

  8. Performance monitoring: Continuous monitoring of the performance of AI models is necessary to ensure that they are functioning as intended and to detect any deviations that may indicate reliability issues (a minimal sketch of such a check follows this list).

  9. Maintenance: AI models require regular maintenance and updates to remain reliable over time, including retraining with new data to adapt to changing conditions.

  10. Human-in-the-loop: Incorporating human oversight can increase the reliability of AI models by providing an additional layer of review and intervention when necessary.
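As a quick, hedged illustration of item 8 above, the sketch below shows one way a rolling performance check could be wired around a deployed regression model. The class name, thresholds, and the simulated data stream are illustrative assumptions, not part of any specific industrial system.

    from collections import deque

    import numpy as np


    class PerformanceMonitor:
        """Minimal sketch of a rolling performance check for a deployed model."""

        def __init__(self, baseline_mae: float, window: int = 500, tolerance: float = 1.5):
            self.baseline_mae = baseline_mae    # error measured on held-out validation data
            self.tolerance = tolerance          # alert if rolling error exceeds baseline * tolerance
            self.errors = deque(maxlen=window)  # rolling window of absolute errors

        def update(self, y_true: float, y_pred: float) -> bool:
            """Record one observation; return True if an alert should be raised."""
            self.errors.append(abs(y_true - y_pred))
            if len(self.errors) < self.errors.maxlen:
                return False                    # not enough evidence yet
            return float(np.mean(self.errors)) > self.tolerance * self.baseline_mae


    # Simulated stream in which the process drifts halfway through (illustration only).
    rng = np.random.default_rng(0)
    monitor = PerformanceMonitor(baseline_mae=0.8)
    for t in range(2000):
        y_true = rng.normal(loc=0.0 if t < 1000 else 2.0)  # process drifts after t = 1000
        y_pred = 0.0                                       # stale model keeps predicting 0
        if monitor.update(y_true, y_pred):
            print(f"t={t}: rolling error exceeds tolerance; trigger review or retraining.")
            break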

Federated Learning and Data Privacy 

Lecturers: Joao Gama and Paula Raissa (University of Porto, Portugal), Praneeth Vepakomma (MIT & Mohammed Bin Zayed University of Artificial Intelligence, United Arab Emirates)

Federated Learning (FL) is an innovative approach to machine learning that allows models to be trained across multiple decentralized devices or servers holding local data samples without exchanging them. This course comprehensively introduces federated learning, exploring its principles, methodologies, and applications. Participants will delve into the foundational concepts of FL, including its architecture, algorithms, models, and methods. The course will be illustrated with relevant applications and demos.
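To make the basic architecture concrete, here is a minimal sketch of federated averaging (FedAvg) on synthetic data using plain NumPy. It is purely didactic and does not rely on any particular FL framework: only model parameters travel between clients and server, never the raw data.

    import numpy as np

    rng = np.random.default_rng(42)
    true_w = np.array([2.0, -1.0])

    # Each client holds a private dataset that never leaves the client.
    clients = []
    for _ in range(5):
        X = rng.normal(size=(100, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=100)
        clients.append((X, y))


    def local_update(w, X, y, lr=0.1, epochs=5):
        """Client-side training: a few gradient steps on the local squared loss."""
        w = w.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w


    w_global = np.zeros(2)
    for _ in range(10):
        # Server broadcasts w_global; each client returns locally updated weights.
        local_ws = [local_update(w_global, X, y) for X, y in clients]
        # The server only averages parameters -- the FedAvg aggregation step.
        w_global = np.mean(local_ws, axis=0)

    print("federated estimate:", np.round(w_global, 3), "true weights:", true_w)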

It has recently been shown that privacy-enhancing techniques based on earlier approaches such as k-anonymity do not satisfy the GDPR (the European Union's privacy law) requirement of preventing 'predicate singling out,' whereas differentially private mechanisms (a mathematical notion of privacy) do satisfy this requirement. This tutorial therefore presents methods for differentially private federated learning.
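As a hedged sketch of the differential-privacy ingredient (the generic Gaussian mechanism on clipped client updates, not the lecturers' specific construction), the following shows how per-user contributions can be bounded and noised before aggregation. The parameter values are arbitrary examples, not deployment recommendations.

    import numpy as np

    rng = np.random.default_rng(0)


    def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1):
        """Clip each client's update and add Gaussian noise before averaging."""
        clipped = []
        for u in client_updates:
            norm = np.linalg.norm(u)
            clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))  # bound each user's influence
        total = np.sum(clipped, axis=0)
        noise = rng.normal(scale=noise_multiplier * clip_norm, size=total.shape)
        return (total + noise) / len(client_updates)  # noisy average released to the server


    updates = [rng.normal(size=10) for _ in range(100)]
    print(np.round(dp_aggregate(updates), 3))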

FLEXible: working with Federated Learning

Lecturer: Nuria Rodríguez-Barroso, University of Granada, Spain

Federated learning is a distributed learning paradigm designed to maintain the privacy of user data. Although it is a very popular concept, would we know how to run a federated learning experiment? In this workshop we present FLEXible, a tool developed by DaSCI for experimentation in federated environments. We will discuss the advantages of this framework compared to other existing frameworks, and we will run our first experiment with the platform.

Generative AI in the Open Source World: The Falcon Case

Lecturer: Merouane Debbah, Khalifa University, United Arab Emirates

Recently, Generative AI has emerged as a transformative technology in AI. Within the realm of open-source development, the Falcon series is a testament to the UAE's ongoing efforts in generative AI. This talk explores the journey of Falcon from its inception to its current state, highlighting the innovative contributions and challenges we have overcome. We will also discuss the landscape of Generative AI in the open-source world and our commitment to open-source within the Falcon Foundation.
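For readers who want to try the open-weight Falcon models themselves, a typical way to load one is through the Hugging Face transformers library. The model identifier and generation settings below are common-usage assumptions (and device_map="auto" requires the accelerate package), not an official recipe from the Falcon team.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "tiiuae/falcon-7b-instruct"   # example open-weight Falcon checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

    inputs = tokenizer("What does 'open-source LLM' mean?", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))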

Privacy Enhancing Technologies

Lecturer: Praneeth Vepakomma, MIT & Mohammed Bin Zayed University of Artificial Intelligence, United Arab Emirates

This talk presents a holistic view of various privacy-enhancing technologies such as differential privacy, secure multi-party computation, homomorphic encryption, oblivious transfer, and garbled circuits. In addition, it provides an algorithmic view of what it means to provide protection in line with regulatory requirements, covering issues such as copyright infringement, timing or latency attacks on devices, and the identification of bad actors on networks. It examines various tradeoffs between privacy and social-choice constructs such as influence and welfare, as well as statistical robustness. It also provides multiple methods for watermarking and fingerprinting data and models to detect infringement of terms of usage and help trigger an audit. The area of AI regulation and compliance is evolving into new territory, giving rise to pockets of open problems at this intersection that must be solved to maintain social good.
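As a toy illustration of one of these building blocks, additive secret sharing (the idea behind many secure multi-party computation protocols), the sketch below computes a sum of private inputs without revealing any individual value. It is didactic only, not a hardened implementation.

    import secrets

    PRIME = 2**61 - 1  # modulus defining the finite field used for the shares


    def share(value: int, n_parties: int) -> list[int]:
        """Split `value` into n random shares that sum to it modulo PRIME."""
        shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares


    # Three parties with private inputs (e.g., local statistics).
    inputs = [12, 7, 30]
    all_shares = [share(v, n_parties=3) for v in inputs]

    # Party j collects the j-th share from every party and publishes only the sum
    # of what it holds; individual inputs stay hidden from everyone.
    partial_sums = [sum(party_shares[j] for party_shares in all_shares) % PRIME for j in range(3)]
    secure_total = sum(partial_sums) % PRIME
    print(secure_total, "==", sum(inputs))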

Navigating Privacy Risks in (Large) Language Models

Lecturer: Peter Kairouz, Google, USA

The emergence of large language models (LLMs) presents significant opportunities in content generation, question answering, and information retrieval. Nonetheless, training, fine-tuning, and deploying these models entails privacy risks. This talk will address these risks, outlining privacy principles inspired by known LLM vulnerabilities when handling user data. We demonstrate how techniques like federated learning and user-level differential privacy (DP) can systematically mitigate many of these risks at the cost of increased computation or reduced performance. In scenarios where only moderate-to-low user-level DP is achievable, we propose a strong (task-and-model-agnostic) membership inference attack that allows us to quantify risk by estimating the actual leakage (empirical epsilon) accurately in a single training run. The talk will conclude with a few projections and compelling research directions.
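For intuition, the sketch below implements the classic loss-threshold membership-inference baseline on a small synthetic task. It is not the attack proposed in the talk, only a simple way to see how per-example losses can leak membership information; the dataset, model, and hyperparameters are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 20))
    y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
    X_train, y_train, X_out, y_out = X[:200], y[:200], X[200:], y[200:]

    # Deliberately weakly regularised model, so it memorises a little.
    model = LogisticRegression(C=100.0, max_iter=1000).fit(X_train, y_train)


    def per_example_loss(model, X, y):
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(np.clip(p, 1e-12, 1.0))


    # Lower loss => attacker guesses "member"; AUC above 0.5 indicates leakage.
    losses = np.concatenate([per_example_loss(model, X_train, y_train),
                             per_example_loss(model, X_out, y_out)])
    is_member = np.concatenate([np.ones(200), np.zeros(200)])
    print("membership-inference AUC:", round(roc_auc_score(is_member, -losses), 3))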

Models that Prove Their Answers

Lecturer: Shafi Goldwasser, UC Berkeley, USA

How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured on average over a distribution of inputs, giving no guarantee for any fixed input. We propose a theoretically founded solution to this problem: to train Self-Proving models that prove the correctness of their output to a verification algorithm V via an Interactive Proof. A Self-Proving model guarantees that, with high probability over a random input from a specified distribution, it generates a correct output and successfully proves its correctness to V. The soundness property of V guarantees that, for every input, V will not be convinced of the correctness of an incorrect output. Thus, a Self-Proving model proves the correctness of most of its outputs, while all incorrect outputs (of any model) are detected by V. We devise a generic method for learning Self-Proving models and prove convergence bounds under certain assumptions. The theoretical framework and results are complemented by experiments on arithmetic capabilities.
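The following toy analogy (not the paper's learning framework) conveys the flavor in code: a possibly unreliable "model" answers integer-division queries and attaches a certificate, and a verifier accepts only when the certificate checks out, so no incorrect answer is ever accepted.

    import random


    def model(a: int, b: int):
        """Possibly unreliable model: returns (answer, certificate)."""
        q, r = divmod(a, b)
        if random.random() < 0.2:      # simulate occasional wrong outputs
            q += 1
        return q, (q, r)


    def verifier(a: int, b: int, answer: int, certificate) -> bool:
        """Accept iff the certificate proves `answer` is the true quotient."""
        q, r = certificate
        return q == answer and 0 <= r < b and a == q * b + r


    random.seed(1)
    accepted = wrong_accepted = 0
    for _ in range(1000):
        a, b = random.randint(0, 10**6), random.randint(1, 10**3)
        answer, cert = model(a, b)
        if verifier(a, b, answer, cert):
            accepted += 1
            wrong_accepted += (answer != a // b)   # soundness: this count stays at 0
    print(f"accepted {accepted}/1000 outputs, incorrect-but-accepted: {wrong_accepted}")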

Regulation and Governance of AI 

Lecturer: Carme Artigas, United Nations, Spain

This session will cover the governance and regulatory landscape for Artificial Intelligence (AI). It will explore key principles and frameworks guiding the ethical development and deployment of AI technologies. The session will examine current regulations, including the EU's AI Act and other international standards, and their implications for businesses and developers. Topics will include challenges and opportunities in implementing these regulations and best practices for compliance and risk management. Through case studies and interactive discussions, the session will provide insights into navigating the evolving legal environment to ensure responsible and trustworthy AI innovation.

Approaching AI Regulation Compliance from a Practical Standpoint: From Principles and Standards to Certifications and Tools

Lecturers: Justo Hidalgo, Adigital, and Leticia Gómez Rivero, Minsait, Spain

In this session, participants will examine the fundamental principles and ethical standards that underpin global AI regulations and assess the necessity for these to evolve into comprehensive yet flexible risk management systems for companies implementing AI systems. The session will provide a comprehensive overview of the AI governance landscape, drawing insights from the development and market introduction of one of Europe's pioneering certifications for AI system transparency and explainability. Participants will gain practical knowledge of the latest tools and frameworks for achieving regulatory compliance, ensuring that AI systems are transparent, accountable, and fair. Through real-world case studies, they will learn strategies for navigating the evolving regulatory landscape and aligning AI development with global regulatory requirements.

This session is designed for AI professionals, policymakers, and academic researchers who are interested in applying their expertise in AI ethics and regulatory compliance in a practical and impactful way. By the conclusion of the session, participants will have acquired a comprehensive understanding of the principles and ethical standards associated with AI, as well as practical knowledge of the tools and frameworks utilized for regulatory compliance. Additionally, they will have gained insights from successful AI governance case studies and strategies for aligning AI development with global regulatory requirements.

Causal AI

Lecturer: Marcos Lopez de Prado, ADIA, United Arab Emirates

Scientific theories are falsifiable statements of the form “𝑋 causes 𝑌 through mechanism 𝑀”. Accordingly, discovering causal relations plays a fundamental role in the scientific method. In contrast, current AI algorithms primarily focus on the discovery of associations. The problem is that associations do not imply causation. Hence, current AI algorithms can potentially introduce biases into the scientific process, leading to false discoveries. This seminar discusses associational AI's potential uses and misuse in scientific research and advocates for developing causal AI tailored to scientific applications.
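A tiny simulation makes the point concrete: in the sketch below, X has no causal effect on Y, yet a shared cause Z induces a strong association that disappears once the confounder is adjusted for. All numbers are synthetic and illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    Z = rng.normal(size=n)                      # confounder (common cause)
    X = Z + rng.normal(scale=0.5, size=n)       # X is caused by Z
    Y = 2 * Z + rng.normal(scale=0.5, size=n)   # Y is caused by Z only; X plays no role

    # Naive association: slope of Y on X (spurious, driven entirely by Z).
    naive = np.polyfit(X, Y, 1)[0]

    # Adjusted estimate: coefficient of X in a regression of Y on [X, Z].
    A = np.column_stack([X, Z, np.ones(n)])
    adjusted = np.linalg.lstsq(A, Y, rcond=None)[0][0]

    print(f"naive slope ~ {naive:.2f} (misleading), adjusted X effect ~ {adjusted:.2f} (close to 0)")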

Responsible AI: From ethical principles to responsible practices

Lecturer: Atia Cortés, Barcelona Supercomputing Center, Spain

As Artificial Intelligence increasingly permeates various aspects of our daily lives, ensuring its trustworthiness becomes paramount. To do so, we need to change the way we design and evaluate AI-based systems to include the ethical, legal, socio-economic, and cultural aspects of AI (ELSEC-AI). This talk will explore fundamental ethical principles such as fairness, transparency, and accountability, and examine their practical applications in AI development and deployment. We will review frameworks and methodologies for integrating these principles into AI systems, addressing challenges such as bias, privacy, and governance. The session aims to bridge the gap between theoretical ethics and practical implementation, highlighting real-world examples and best practices for promoting AI that is not only innovative, but also consistent with societal values and human rights.

Empowering AI with Causal Reasoning

Lecturer: Darko Matovski, CausaLens, United Kingdom

Generative AI has made significant strides in recent years, but concerns around hallucinations, bias, transparency, and interpretability have hampered its adoption for critical business decision-making. To address these challenges and unlock the full potential of Gen AI, we must look to the emerging field of Causal AI. In this talk, Dr. Darko Matovski will explore how causal AI can elevate Gen AI by introducing a layer of causal reasoning that mimics human-like intelligence. By understanding the underlying causes and effects within data, causal AI can help overcome the limitations of Generative AI models. Join us as we delve into the exciting possibilities when grounding GenAI with causal AI, paving the way for more intelligent, transparent, and trustworthy AI systems.

Explaining black box machine learning model predictions using prototypes and counterfactuals

Lecturer: Jerzy Stefanowski, Poznań University of Technology and Polish Academy of Sciences, Poland

Current machine learning (ML) systems are often perceived as "black boxes" that don't provide insight into their inner workings or decision-making processes. This is the focus of explainable artificial intelligence (XAI). In the first part of the talk, we will briefly discuss these aspects in the context of the recent EU recommendations towards Trustworthy AI. We will then focus on XAI methods that offer symbolic and readable knowledge representations, such as prototypes, counterfactuals, and rules. Prototypes are representative examples defined from the learning dataset to explain model predictions. We will show their use in explaining random forest ensembles and a prototype-based neural network for text classification. In this network, we will demonstrate how to identify prototypes that link to important phrases in the input training text. Counterfactual explanations suggest how feature values of an input example should change to achieve a desired model prediction. We will present our multi-stage ensemble approach, which selects a single counterfactual based on multiple criteria, and our new method for robust explanations when the underlying ML model changes due to data shifts or retraining. In the third part, we will discuss the usefulness of discovering classification rules and selecting the best subset using Bayesian confirmation measures. This will be illustrated with examples from medical data. Finally, we will outline open problems and challenges in XAI.
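To fix ideas, the sketch below runs a naive greedy counterfactual search against a scikit-learn random forest. It illustrates only the general notion of a counterfactual explanation and is not the multi-stage ensemble method presented in the talk; the dataset, step size, and stopping rule are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X, y)


    def counterfactual(x, target=1, step=0.25, max_iter=50):
        """Greedily nudge one feature at a time until the prediction flips to `target`."""
        x = x.copy()
        for _ in range(max_iter):
            if clf.predict([x])[0] == target:
                return x                                  # desired prediction reached
            base = clf.predict_proba([x])[0][target]
            candidates = []
            for j in range(len(x)):                       # try moving each feature up or down
                for delta in (step, -step):
                    x_try = x.copy()
                    x_try[j] += delta
                    candidates.append((clf.predict_proba([x_try])[0][target], x_try))
            best_prob, best_x = max(candidates, key=lambda c: c[0])
            if best_prob <= base:
                return None                               # stuck: no single move improves
            x = best_x
        return None


    x0 = X[y == 0][0]
    cf = counterfactual(x0)
    print("original:      ", np.round(x0, 2))
    print("counterfactual:", None if cf is None else np.round(cf, 2))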

SAFE Machine Learning

Lecturer: Paolo Giudici, University of Pavia, Italy

Machine learning models boost Artificial Intelligence (AI) applications in all human activities, mainly because of their advantage in predictive accuracy over "classic" statistical learning models. However, although machine learning models may reach high predictive performance, their predictions are often not explainable, owing to their intrinsic black-box nature. Furthermore, they may not be robust, and they may use data that is not representative, generating unfairness. To better understand the opportunities and risks of machine learning methods, in this lecture we will discuss a set of statistical metrics that can assess the trustworthiness of AI applications and, specifically, measure whether AI applications are Sustainable, Accurate, Fair, and Explainable (S.A.F.E.). We will present the mathematical background of the metrics, related to Lorenz curves and the Gini inequality index, and their application to real use cases. The talk is based on the work in the paper: Safeaipackage: A Python Package for AI Risk Measurement.
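As a generic illustration of the mathematical ingredient (Lorenz curves and the Gini index), the sketch below computes a Gini coefficient with plain NumPy. It is not the API of the safeaipackage itself.

    import numpy as np


    def gini(values) -> float:
        """Gini index of a non-negative vector, computed via its Lorenz curve."""
        v = np.sort(np.asarray(values, dtype=float))
        n = len(v)
        lorenz = np.cumsum(v) / v.sum()                      # cumulative share of the total
        L = np.concatenate([[0.0], lorenz])                  # Lorenz curve at x = 0, 1/n, ..., 1
        area_under_lorenz = np.sum((L[:-1] + L[1:]) / 2) / n # trapezoidal rule
        return 1.0 - 2.0 * area_under_lorenz                 # 0 = perfect equality, 1 = max inequality


    rng = np.random.default_rng(0)
    print("uniform-ish vector:", round(gini(rng.uniform(1, 2, size=1000)), 3))   # low inequality
    print("heavy-tailed vector:", round(gini(rng.pareto(1.5, size=1000)), 3))    # high inequality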

Trustworthy AI in industrial systems: from high-level system design to AI model behaviour analysis

Lecturer: Simon Fossier, Senior Research Engineer in AI, Thales, France

Recent advances in Artificial Intelligence have opened the way for its integration into safety-critical complex systems. Designing such systems involves defining a structured process to ensure reliability, which requires extending safety analysis to AI-based sub-components. Robustness, embeddability, formal and experimental guarantees, ethical and legal aspects: many dimensions of trustworthiness are involved in designing such systems, and AI-based systems must address them despite their specificities. In particular, the Operational Design Domain (ODD) is a useful concept, and its characterization relies on a structured methodology coupling ODD space design and experimental behavior sampling.

Gemini and Gemma LLM models deep dive

Lecturer: Rafael Sánchez, GenAI/ML CE Specialist Manager, Google Cloud, Spain

This presentation will provide an in-depth exploration of Google's Gemini and Gemma models. Gemini represents the cutting edge of Google's model capabilities, featuring a context window of up to 2 million tokens. The Gemma family comprises open models including CodeGemma, PaliGemma, and RecurrentGemma. The session will cover the technical papers detailing these models and the essential supporting technologies, such as Mixture of Experts and Linear Recurrent Units (LRUs), along with some practical demonstrations.
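As a purely didactic sketch of the Mixture-of-Experts routing idea mentioned above (top-k gating over a few small expert layers), the NumPy snippet below is unrelated to the actual Gemini or Gemma implementations; all sizes and names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 16, 4, 2

    # Each "expert" is a tiny feed-forward layer; the router is a linear map.
    experts = [rng.normal(size=(d_model, d_model)) / np.sqrt(d_model) for _ in range(n_experts)]
    router = rng.normal(size=(d_model, n_experts)) / np.sqrt(d_model)


    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()


    def moe_layer(x):
        """Route a token vector to its top-k experts and mix their outputs."""
        scores = softmax(x @ router)                     # routing probabilities per expert
        chosen = np.argsort(scores)[-top_k:]             # indices of the top-k experts
        weights = scores[chosen] / scores[chosen].sum()  # renormalise over the chosen experts
        return sum(w * np.tanh(x @ experts[i]) for w, i in zip(weights, chosen))


    token = rng.normal(size=d_model)
    print(moe_layer(token).shape)   # only top_k of the n_experts are evaluated per token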

Generative AI Real Applications from an SME

Lecturer: José Carlos Calvo Tudela, Chief Innovation Officer, Nazaríes / Intelligenia, Spain

In this session, Jose Carlos Calvo will guide you through the creation of a multi-agent oriented architecture within an organization. The presentation will begin with an introduction to the operation of Retrieval-Augmented Generation (RAG), followed by an exploration of agents capable of accessing databases and APIs to query or modify information. The session will culminate in a comprehensive overview of a multi-agent oriented architecture. By the end of the talk, you will understand how to develop a multi-agent system that can interact with people through chat, emails, or digital platforms, effectively managing any type of interaction and performing complex tasks.
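A minimal sketch of the RAG step is shown below: documents are indexed with TF-IDF, the most similar ones are retrieved for a query, and an augmented prompt is assembled. The call_llm function is a hypothetical stand-in for whatever LLM API an organization uses; everything else runs as-is, and the documents are invented examples.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Invoices are processed by the finance agent within 48 hours.",
        "The support agent can reset customer passwords via the admin API.",
        "Warehouse stock levels are synchronised with the ERP every night.",
    ]

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)


    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k documents most similar to the query."""
        sims = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
        return [documents[i] for i in np.argsort(sims)[::-1][:k]]


    def build_prompt(query: str) -> str:
        """Assemble an augmented prompt from the retrieved context."""
        context = "\n".join(f"- {d}" for d in retrieve(query))
        return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"


    prompt = build_prompt("How long does invoice processing take?")
    print(prompt)
    # answer = call_llm(prompt)   # hypothetical LLM call; not included here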

Operationalising trustworthy and ethical AI in healthcare: The FUTURE-AI guideline

Lecturer: Karim Lekadir, University of Barcelona, Spain

Despite major advances in AI for healthcare, the deployment of AI technologies remains limited in real-world practice. Over the years, concerns have been expressed about the potential risks, ethical implications and general lack of trust associated with emerging AI technologies. In particular, AI tools continue to be viewed as complex and opaque, prone to errors and biases, and potentially unsafe or unethical for patients. This course will present FUTURE-AI (www.future-ai.eu), a code of practice established by an international consortium of over 115 experts, to ensure future medical AI tools are developed to be trusted and accepted by patients, clinicians, health organisations and authorities. The guideline provides structure and guidance for building AI solutions which are Fair, Universal, Traceable, Usable, Robust and Explainable (FUTURE-AI), and offers recommendations that cover the whole AI production lifecycle, from AI design and development to AI validation and deployment.

The promises and perils of AI innovation in healthcare

Lecturer: Ira Ktena, Google DeepMind, United Kingdom

AI permeates various facets of life and has already proved to be a capable assistant for numerous predictive tasks in the medical domain. Rendering such predictive AI systems safe and robust remains an unsolved challenge. This involves ensuring that the performance they achieve during the development stage is not compromised on unseen populations or under unprecedented conditions. In this tutorial, I am going to cover different applications of generative AI in healthcare and demonstrate capabilities that it can unlock to solve important problems. I will also cover risks that arise from the use of AI in this domain, as well as the latest research in trustworthy and reliable machine learning that has been developed to mitigate them.

Sustainable AI

Lecturer: Luis Seco, University of Toronto, Canada

Climate risk, water availability, urban living, architectural design… these are some of the challenges we face, and they all have one thing in common: they arise in very complex systems where traditional equation-based modeling does not apply. In this course we will examine the evolution of machine learning and AI in three phases: neural networks, Large Language Models, and Large Multimodal Models. These phases draw on ever larger databases, originally built from numbers alone, then expanded to words and textual data, and in the future including sensor data, images, and video.

Artificial Intelligence: The Green Revolution in Technology

Lecturer: Amparo Alonso Betanzos, University of A Coruña, Spain

The success of artificial intelligence (AI) has been based on the development of increasingly precise, but also more complex, models with a greater number of parameters to estimate. However, this has led to a decrease in the transparency and explainability of the models, as well as an increase in the energy cost of training and executing them. It is estimated that by 2030 AI may be responsible for more than 30% of the planet's energy consumption.

In this context, green and responsible AI emerges, characterized by smaller carbon footprints, smaller model sizes, lower computational complexity, and greater transparency. There are various strategies to achieve this, such as providing algorithms with higher quality data, developing more efficient models in execution (such as edge computing), or improving the energy efficiency of the models. Other ethical considerations are also important, such as ensuring that AI applications are free from bias, transparent, designed with privacy in mind, and that they incorporate psycho-social decision models into their design and use. These ethical measures are presented as key elements to move towards a more ethical and responsible AI, which will promote the democratization of technology and strengthen citizens' confidence in its use.

Interpretable deep learning models

Lecturer: Vincent Zoonekynd, ADIA, United Arab Emirates

Deep learning models are often seen as black boxes, and interpretability is only an afterthought. We will see, through several examples, that simple modifications to deep learning models can make them interpretable by construction.
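One example of such a construction is sketched below in the spirit of neural additive models (an illustrative assumption, not necessarily one of the lecturer's examples): the prediction is a sum of per-feature subnetworks, so each feature's learned contribution can be inspected on its own. Data, sizes, and training settings are synthetic and arbitrary.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)


    class FeatureNet(nn.Module):
        """Small MLP applied to a single scalar feature."""
        def __init__(self, hidden=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, x):
            return self.net(x)


    class AdditiveModel(nn.Module):
        """Interpretable by construction: prediction = bias + sum_j f_j(x_j)."""
        def __init__(self, n_features):
            super().__init__()
            self.feature_nets = nn.ModuleList([FeatureNet() for _ in range(n_features)])
            self.bias = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            contributions = [f(x[:, j:j + 1]) for j, f in enumerate(self.feature_nets)]
            return self.bias + torch.stack(contributions, dim=0).sum(dim=0)


    # Synthetic target with two additive effects: y = sin(3*x0) + x1^2 + noise.
    X = torch.rand(1024, 3) * 4 - 2
    y = torch.sin(3 * X[:, :1]) + X[:, 1:2] ** 2 + 0.05 * torch.randn(1024, 1)

    model = AdditiveModel(n_features=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()

    # The third feature is irrelevant, so its learned shape function should be roughly flat.
    grid = torch.linspace(-2, 2, 5).unsqueeze(1)
    with torch.no_grad():
        for j, f in enumerate(model.feature_nets):
            print(f"feature {j} contribution on grid:", [round(v, 2) for v in f(grid).squeeze().tolist()])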