Artificial Intelligence (AI) and, in particular, Machine Learning (ML) algorithms have become de facto standard pillars for creating innovative services and applications. Nowadays, they are commonly found in many aspects of daily life, in both the public and the private sector.
Indeed, they have become so pervasive that institutions have been prompted to draft specific regulations, which also take ethical principles into account.
In 2017, the G7 partners issued a Declaration highlighting the importance of the “vision of human-centric AI which drives innovation and growth in the digital economy”. One year later, in 2018, the European Commission appointed a group of independent experts to produce a document called the “Ethics Guidelines for Trustworthy AI”, followed in 2021 by a proposal for a “Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence”.
An AI system should meet a number of requirements in order to be considered “trustworthy”, including, among others, “privacy and data governance” and model “transparency”.
Privacy is often considered of utmost importance by data-owning organizations, which are frequently reluctant to share their data with other parties. Moreover, companies perceive data as a precious asset and often exploit it as an “unfair advantage” over competitors. Finally, user data often contain sensitive information that must be handled carefully to avoid privacy violations.
In these scenarios, with private raw data spread over multiple physical locations, traditional Machine Learning approaches are not always feasible, as they require the overall dataset to be stored on a single centralized server. On the other hand, when data are naturally and necessarily confined to isolated silos, each individual data owner may lack sufficient data to properly train an AI model. These considerations call for novel paradigms and alternative methodologies.
Federated Learning (FL) is a valid solution to cope with the data privacy issue: the key idea of FL is to train local AI models on local data and then aggregate the locally computed models, or their updates, into a global model. Data privacy is thus preserved, since raw data are never exchanged among the clients responsible for the local models, while the overall information is collaboratively exploited to learn the aggregated model.
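As an illustration, the following Python snippet sketches the sample-size-weighted averaging at the core of the well-known FedAvg aggregation scheme (McMahan et al., 2017). The function names and data structures are our own and do not come from any specific FL framework.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate locally trained model parameters via a weighted average,
    in the style of FedAvg.

    client_weights: one list of parameter tensors (np.ndarray) per client.
    client_sizes:   number of local training samples per client, used as
                    aggregation weights.
    """
    total = sum(client_sizes)
    n_tensors = len(client_weights[0])
    aggregated = []
    for t in range(n_tensors):
        # Weighted sum of the t-th parameter tensor across clients:
        # clients holding more data contribute proportionally more.
        agg = sum((size / total) * weights[t]
                  for weights, size in zip(client_weights, client_sizes))
        aggregated.append(agg)
    return aggregated

# Example: three clients, each holding a tiny linear model (weights + bias).
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4,)), rng.normal(size=(1,))] for _ in range(3)]
sizes = [100, 250, 50]  # local dataset sizes
global_model = fed_avg(clients, sizes)
```

Only the model parameters travel to the aggregator; the raw training data never leave the clients.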
The aspect of explainability is at the heart of so-called trustworthy AI: Recital 71 of the General Data Protection Regulation (GDPR) states that “[...] In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”.
As a consequence, industry and academia are devoting increasing attention to eXplainable AI (XAI).
The acronym FED-XAI stands for Federated learning of eXplainable AI models: the approach is conceived to address privacy and explainability jointly, and thus to provide a leap forward toward trustworthy AI.
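As a minimal sketch of the idea, not the method of the cited paper, one could federate an inherently interpretable model; here we use a simple linear regressor as a stand-in, since its aggregated coefficients remain directly readable as feature effects. All names below are illustrative.

```python
import numpy as np

def fit_local_linear(X, y):
    # Fit a local linear model by ordinary least squares;
    # the coefficients themselves serve as the explanation.
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def federate(local_coefs, sizes):
    # Size-weighted average of the local coefficient vectors: the global
    # model is still a linear model, hence still interpretable.
    return np.average(np.stack(local_coefs), axis=0, weights=np.asarray(sizes, dtype=float))

# Example: three isolated data silos drawn from the same underlying process.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])
sizes = [80, 120, 60]
local_coefs = []
for n in sizes:
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    local_coefs.append(fit_local_linear(X, y))

global_coef = federate(local_coefs, sizes)
# global_coef[:-1] are feature effects, global_coef[-1] is the intercept:
# each coefficient can be inspected and explained to a data subject.
```

The design choice matters: because both the local and the aggregated models belong to an interpretable family, the federation step preserves explainability rather than trading it away.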
We expect the FED-XAI approach to become a common presence in the future ecosystem of AI-based applications.
José Luis Corcuera Bárcena, Mattia Daole, Pietro Ducange, Francesco Marcelloni, Alessandro Renda, Fabrizio Ruffini, Alessio Schiavo: Fed-XAI: Federated Learning of Explainable Artificial Intelligence Models. In: Proceedings of XAI.it@AI*IA 2022, pp. 104–117.