Explainable AI (XAI) Section
Chair: Asim Roy
Email: Asim[dot]Roy[at]asu[dot]edu
Arizona State University

WCCI 2024: AI & Regulations Section Meeting

You are invited to view the presentation slides from the AI & Regulations Section Meeting, held on 2 July at WCCI 2024 in Yokohama, Japan.

View the Presentation Slides


I am pleased to announce that the INNS Board of Governors has approved a new Section on Explainable AI (XAI). Below you will find further details about the Section (motivation, objectives, activities, and how to join it).

We held our first XAI Section social gathering and organizational meeting at WCCI 2022 in Padua, Italy (18-23 July). It took place on Tuesday evening, 19 July, from 7 pm to 10 pm in one of the conference hotel rooms.

We are all very excited about this Explainable AI Section because deep learning applications are growing, and the potential global economic impact is estimated to be in the trillions of dollars.

As the Section grows, there will be plenty of opportunities to play an active role in it. If you are interested in taking a leading role in this Section, please email me (Asim[dot]Roy[at]asu[dot]edu). In particular, we are asking for help in making industry contacts and having them sponsor this Section.

The XAI Section’s annual membership fee will be $50. I am sure the activities of this Section will be worth the fee. We hope you will sign up for this INNS Section and be part of the effort to make Explainable AI one of the most exciting research and application areas of neural networks.

 Motivation for the Explainable AI (XAI) Section

The black-box nature of deep learning models is preventing the deployment of critical AI applications in defense, medicine, and other areas because of the potential risk of wrong decisions from such models and our inability to understand why a model made a particular decision. If the models were transparent in some form, and we understood how they worked and made decisions, then such risks would largely disappear. Thus, creating some form of explainable and interpretable models out of deep learning models has become a critical research area for the machine learning community. It is also a high-priority research area at most national funding agencies, such as the NSF (National Science Foundation) and DARPA (Defense Advanced Research Projects Agency) in the US. Once we have acceptable forms of explainability or interpretability for these models, it would lead to the deployment and use of AI on a scale that we have yet to see. The economic impact of such extensive deployment of AI worldwide would be in the trillions of dollars in the years to come and would produce unprecedented changes in our society. Such is the potential of Explainable AI.

This section is all about getting involved and jointly creating this new technology of Explainable AI that the world is waiting for.

 Explainable AI (XAI) Section Objectives

The main objective of the Section on XAI is to be the focal point of scientific exchange in this emerging area. More specifically, the Section and its members will:

  • Organize workshops, special sessions, panels, and tutorials at high-profile conferences
  • Engage the industry through plenary talks, sponsorship, etc.
  • Organize special issues in leading journals
  • Initiate and maintain an e-Newsletter on XAI involving the leading industry representatives

 Structure of the Explainable AI (XAI) Section

The XAI Section is currently chaired by Prof. Asim Roy of Arizona State University, who proposed this Section to INNS.

How to join?

Here’s the link to the membership application form:

2023 XAI Section Membership Application

There will be a joint meeting of the AML Section and the newly formed XAI Section at WCCI 2022 in Padua, Italy. Register below to join us at the AML/XAI Section meeting at WCCI:

AML/XAI Section Meeting Registration