One of the risks of using AI systems is the black-box nature of the results produced by the models. Why were certain decisions made? What criteria were used? What information was used to produce a result?

"Explainable AI" is the umbrella term for AI systems that can rationalize their decisions in a way humans can understand. In a world rapidly moving towards automation with AI systems, building explainability into our software will be an important step towards creating transparency, earning trust, and complying with inevitable future regulations.

To supercharge our work in explainable AI, we've partnered with Markus Heid and Bastiaan van de Rakt from Deeploy, a software solution for hosting AI models with built-in explainers, giving users easily explorable explanations of results and extensive audit trails.

Do you want your next AI implementation to be fully transparent and compliant with future regulations? Let us know, and we'll tell you all about the possible solutions!

https://www.linkedin.com/posts/peterpeerdeman_one-of-the-risks-of-using-ai-systems-is-the-activity-7136299437143814144-_MkS?utm_source=share&utm_medium=member_desktop