Prof. Dr. Matthias C. Kettemann
is one of the speakers at the webinar "Explainable AI". The event is hosted by the Heinrich-Böll-Stiftung Hongkong.
About the Event
Artificial intelligence (AI) applications are often built on techniques such as neural networks and other machine learning methods, which can greatly increase their power and predictive accuracy. However, how such AI systems arrive at their decisions may appear opaque and incomprehensible to general users, non-technical managers, or even technical personnel. Algorithmic design may rest on assumptions, priorities, and principles that have not been openly explained to users and operations managers. The proposals for "explainable AI" and "trustworthy AI" are initiatives to create AI applications that are transparent, interpretable, and explainable to users and operations managers. These initiatives seek to foster public trust, informed consent, and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups.
You can find more information about the event and register here.