This week, we had a compelling presentation on Intelligent Trust Services at the European Union Agency for Cybersecurity (ENISA) Trust Services Forum, part of CA Day 2024 in Heraklion. The session, delivered by my colleague Jörg Lenz, delved into the integration of artificial intelligence (AI) within digital trust products and processes, highlighting both the potential and the responsibilities that come with leveraging AI in such critical domains.
Intelligent Trust Services and AI Integration
The primary objective of incorporating AI into Intelligent Trust Services is to enhance human activities through co-intelligence. This approach aims to promote convenience and efficiency while ensuring trust across various applications and processes. During the session, Jörg emphasized the necessity of human oversight, accountability, and ownership. These elements are crucial for carefully examining use cases, addressing jurisdictional challenges, and selecting the most appropriate AI models for specific tasks.
The interactions during the coffee breaks and online underscored the community’s interest in, and the relevance of, integrating AI responsibly within trust services.
Contributions and Collaborations
In preparing for the talk, I had the honor of co-authoring the presentation with Jörg. My contributions focused on demystifying AI by addressing common myths, outlining effective tools and methodologies, and exploring the architecture of Agentic Systems. Our goal was to provide a nuanced understanding of AI’s capabilities and limitations, especially within highly regulated environments like those governed by EU Trust Service Providers.
The slides were well received and praised for their detail and informativeness. We now look forward to further discussion of Intelligent Trust Services at the Forrester Technology & Innovation Summit EMEA in London on October 10.
Community Engagement and Reflections
The event also featured valuable exchanges with peers in the field. For instance, Patrick von Braunmühl (Director of Public Affairs at Bundesdruckerei) highlighted the ongoing challenge of defining and measuring trust in AI systems. In response, I referenced Maximilian Seeth’s talk, “Can we trust AI?”, which provided a philosophical perspective on trust in AI, suggesting that reliability is a more precise term in technical contexts.
Sławomir Górniak (Cyber Security Expert at ENISA) questioned whether AI is just another buzzword. I drew parallels to our foundational principles, such as “Taking Signatures Seriously,” and discussed the balance between simplicity and necessary complexity in AI integrations.
Adrian Doerk appreciated that the trust service community is already thinking about how to take trust services to the next level.
Clemens Wanko (Head of Trust Infrastructure at TÜV) shared his appreciation for Jörg’s perspective of viewing AI as an evolution rather than a revolution within eIDAS and Trust Services. I provided additional context by referencing traditional models and the potential benefits of more complex AI solutions in specific applications such as PII redaction.
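As an illustrative aside (not part of the talk itself), the contrast between traditional models and more complex AI can be seen in PII redaction: a rule-based pass catches only fixed patterns, which is exactly where context-aware models add value. The patterns and function name below are hypothetical, a minimal sketch rather than any production approach.

```python
import re

# Hypothetical minimal rule-based redactor: masks e-mail addresses and
# phone-like digit sequences. Context-dependent PII (names, street
# addresses) is precisely what fixed patterns miss and where more
# complex AI models can go further.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    # Replace each match with a bracketed label, e.g. "[EMAIL]".
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact_pii("Contact jane@example.com or +49 30 1234567")` yields `"Contact [EMAIL] or [PHONE]"`; a nickname like “call Jane’s office” would pass through untouched, illustrating the limits of pattern-only redaction.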
Ethan Mollick’s timely reference to a paper on using large language models (LLMs) as co-intelligence partners in research echoed our approach to integrating AI in trust services. The collaborative potential between human expertise and AI-driven exploration aligns with our vision of co-intelligence, where AI augments rather than replaces human decision-making.
Moving Forward
The discussions and feedback from CA Day 2024 have reinforced the importance of responsible AI integration in digital trust services. As we continue to develop and refine Intelligent Trust Services, our focus remains on enhancing human activities through co-intelligence, ensuring trust, and maintaining accountability.
[Update 2024-10-03] Slides are available on the event page.