Responsible Partner: RWTH Aachen University
To be reliable and trustworthy, AI systems must be tailored to human needs rather than forcing humans to adapt to the technology. Users trust AI systems when they function effectively and operate correctly, which requires transparency about their performance and underlying mechanisms. This aligns with the European Commission's requirements for trustworthy AI: human agency, technical robustness, privacy, fairness, and transparency. The aim of the FAIRWork project was therefore to analyze transparency from the users' perspective and to help developers implement it in their services. Our results show that, when applied correctly, transparency measures can foster trust in AI among (non-technical) end users. Global explanations of the overall process, local reasoning for individual results, background information, and communication of accuracy can all change how an (AI) decision support system is perceived. Our research in FAIRWork on transparent AI and trust identified system factors that influence trust, acceptance, and usage of AI systems both positively and negatively. These factors point to different options for implementing transparency in AI. This innovation item provides insights into which measures promote trust and how they can be applied to develop or improve AI systems.
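Two of the measures named above, accuracy communication and local reasoning for a single result, can be sketched in code. The example below is a minimal illustration with synthetic data and hypothetical feature names (not from FAIRWork): it trains a simple decision-support classifier, reports its validation accuracy to the user, and explains one individual decision by listing per-feature contributions (coefficient times feature value, which is exact for a linear model).

```python
# Minimal sketch of two transparency measures for a decision support system:
# (1) accuracy communication, (2) a local explanation of a single decision.
# Data and feature names are synthetic/hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic "suitability" label, driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Accuracy communication: tell end users how often the system is right,
# instead of presenting its suggestions as infallible.
accuracy = model.score(X_te, y_te)
print(f"Validation accuracy: {accuracy:.0%}")

# Local reasoning for one result: for a linear model, each feature's
# contribution to this particular decision is its coefficient * its value.
feature_names = ["experience", "availability", "preference"]  # hypothetical
x = X_te[0]
contributions = model.coef_[0] * x
print(f"Suggested decision: {model.predict([x])[0]}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```

For non-linear models, the same per-decision breakdown would typically come from a model-agnostic explainer rather than raw coefficients; the point here is only the shape of the user-facing output, not a specific explanation method.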
Find more information at: https://innovationshop.fairwork-project.eu/items/7/