Summary
AI-based decision support systems are increasingly deployed in industry, in the public and private
sectors, and in policy-making. As our society faces a dramatic increase in inequalities and
intersectional discrimination, we must prevent AI systems from amplifying this phenomenon and
instead use them to mitigate it. For these systems to be adopted, domain experts and stakeholders
need to trust the decisions they support. Fairness stands as one of the main principles of
Trustworthy AI promoted at EU level. How these principles, and fairness in particular, translate
into technical, functional, social, and legal requirements in AI system design is still an open
question. Similarly, we do not yet know how to test whether a system complies with these
principles, or how to repair it if it does not. AEQUITAS proposes
the design of a controlled experimentation environment for developers and users to create
controlled experiments for:
- assessing bias in AI systems, e.g., identifying potential causes of bias in data, algorithms,
  and the interpretation of results;
- providing, when possible, effective methods and engineering guidelines to repair, remove, and
  mitigate bias;
- providing fairness-by-design guidelines, methodologies, and software engineering techniques to
  design new bias-free systems.
The experimentation environment generates synthetic data sets with different
features influencing fairness, enabling tests under laboratory conditions. Real use cases in
health care, human resources, and challenges faced by socially disadvantaged groups further test
the experimentation platform, showcasing the effectiveness of the proposed solution.
The experimentation playground will be integrated into the AI-on-demand platform to boost its
uptake, while a stand-alone release will enable on-premise, privacy-preserving testing of
AI system fairness. AEQUITAS relies on a strong consortium
featuring AI experts, domain experts in the use-case sectors, social scientists, and associations
defending the rights of minorities and discriminated groups.
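
As a minimal illustration of the kind of bias assessment the experimentation environment targets,
the sketch below generates a small synthetic data set with a sensitive attribute and computes a
statistical parity difference for a simple threshold-based decision rule. The column names, the
threshold, and the data-generation choices are illustrative assumptions, not part of the AEQUITAS
platform itself.

    # Hypothetical sketch: synthetic data with a binary sensitive attribute
    # and a group-fairness check. Names and parameters are illustrative only.
    import random

    random.seed(42)

    def make_synthetic_record():
        """One synthetic record: a sensitive attribute ('group') and a score
        that is deliberately skewed against group B to simulate bias."""
        group = random.choice(["A", "B"])
        base = random.gauss(0.6, 0.15) if group == "A" else random.gauss(0.5, 0.15)
        return {"group": group, "score": base}

    def statistical_parity_difference(records, threshold=0.55):
        """P(decision=1 | group=A) - P(decision=1 | group=B) for a simple
        threshold rule; values far from 0 indicate disparate treatment."""
        rates = {}
        for g in ("A", "B"):
            members = [r for r in records if r["group"] == g]
            positives = sum(1 for r in members if r["score"] >= threshold)
            rates[g] = positives / len(members) if members else 0.0
        return rates["A"] - rates["B"]

    if __name__ == "__main__":
        data = [make_synthetic_record() for _ in range(10_000)]
        spd = statistical_parity_difference(data)
        print(f"statistical parity difference (A vs. B): {spd:+.3f}")

An assessment within the envisaged platform would naturally rely on its own generated synthetic
data sets and a richer set of fairness metrics; this sketch only shows the basic shape of such a
controlled experiment.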