On May 11, the Ministry of Science and Technology (MOST) held a press conference to announce xCos, an explainable face verification module developed by Prof. Winston Hsu's team at the MOST Joint Research Center for AI Technology and All Vista Healthcare (AINTU). Beyond recognition, the module can further explain why the AI judges two face images to be the same identity or not. Enabling explainability in AI will not only advance AI techniques but also build people's trust.
According to the Minister of MOST, Dr. Liang-Gee Chen, MOST has been supporting four AI research centers in different research domains since 2018, after the AI Science Strategies were announced in 2017. These centers, located at National Taiwan University, National Tsing Hua University, National Chiao Tung University, and National Cheng Kung University, focus on AI technology, biomedical technology, intelligent manufacturing, applied AI research, and humanities and social sciences. Prof. Hsu's team is from NTU and focuses on AI technology; their achievement, the xCos module, can be plugged into any existing deep face verification model. Meanwhile, they are preparing to extend explainable AI (XAI) into other fields such as energy, medicine, and manufacturing.
World's Top Face Recognition
In 2011, Prof. Hsu's team developed the first face search engine for mobile devices. Since then, they have continued to tackle challenges in several domains, including cross-age and disguised face recognition. In 2018, they won first place in the disguised face recognition challenge at CVPR.
Prof. Hsu said that over the past few years his team has assisted many software companies in developing face recognition technologies through academia-industry collaboration. The finding that some AI verification results are counterintuitive inspired them to build xCos to explore the justification for AI decisions. The module provides both quantitative and qualitative reasons to explain why two face images are from the same person or not. If the model judges two face images to be the same person, the team's proposed method can clearly show which areas of the face are more representative than others by providing local similarity values and attention weights. The explainable module, xCos, also works well with common face recognition backbones such as ArcFace and CosFace.
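To make the idea concrete, the sketch below shows one way the weighted local-similarity scheme described above could look in code: each face is encoded by a backbone (such as ArcFace or CosFace) into a spatial feature grid, patch-wise cosine similarities form the qualitative explanation map, and an attention map weights them into a single verification score. This is only an illustrative approximation, not the official xCos implementation (see the GitHub repository for that); the 7x7 grid size, the random attention map, and the decision threshold are assumptions made purely for the example.

import numpy as np

def local_cosine_map(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Cosine similarity between corresponding grid cells of two (H, W, C) feature maps."""
    a = feat_a / (np.linalg.norm(feat_a, axis=-1, keepdims=True) + 1e-8)
    b = feat_b / (np.linalg.norm(feat_b, axis=-1, keepdims=True) + 1e-8)
    return np.sum(a * b, axis=-1)          # shape (H, W): one similarity per face patch

def weighted_score(sim_map: np.ndarray, attention: np.ndarray) -> float:
    """Aggregate local similarities with attention weights into one verification score."""
    w = attention / attention.sum()        # normalize the weights so they sum to 1
    return float(np.sum(w * sim_map))

# Toy usage with random arrays standing in for backbone outputs (7x7 grid, 512-D features).
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((7, 7, 512))
feat_b = rng.standard_normal((7, 7, 512))
attention = rng.random((7, 7))             # in xCos this attention map is learned, not random

sim_map = local_cosine_map(feat_a, feat_b) # qualitative explanation: per-patch similarity
score = weighted_score(sim_map, attention) # quantitative result: overall similarity score
same_person = score > 0.3                  # the threshold here is illustrative only

Visualizing sim_map and attention over the aligned face images is what lets a developer see which facial regions drove the decision.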
Demystifying the Black Box of AI
Minister Chen said that technology progresses continuously because people keep asking “why” and work to find answers. In its early stages, AI ran on rule-based systems: it was relatively easy to trace how a system made its decisions, so it was highly explainable. Nowadays, with increasing amounts of data, deep convolutional neural networks achieve higher accuracy on the face verification task. However, people have noticed that such AI cannot explain its decision-making process. Given AI's exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate.
According to PricewaterhouseCoopers, AI has a market potential of $15 trillion, yet it is difficult to know how an algorithm arrived at its recommendation or decision, hence the call for ‘explainable AI’. The computer vision community still lacks an effective method for understanding the working mechanisms of deep learning models. Because of their inherently non-linear structures and complicated decision-making processes (the so-called “black box”), their opaque reasoning could lead to serious security and privacy issues. These problems make users feel insecure about deep-learning-based systems and leave developers struggling to improve them. It is thus easy to see why XAI featured prominently in DARPA's $2 billion AI program announced in 2018.
Explainability and High Compatibility
Minister Chen added that explainable AI is clearly an important issue in AI development worldwide. In line with this trend, MOST announced its “AI R&D Guidelines” last September, emphasizing “Transparency and Traceability” and “Explainability.” In February, the European Union also published the White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, which pointed out that a lack of trust is holding back broader AI adoption. The future development of AI must earn people's trust by reinforcing transparency and explainability.
Prof. Hsu said that xCos helps people understand how decisions are made and assists developers in examining the inner workings of deep neural networks. He has released the source code on GitHub (https://github.com/ntubiolin/xcos) for the research community. He believes that with explainable techniques, people will have more confidence in accepting AI's decisions, and developers can adjust their models to improve accuracy.
Media Contact
Winston H. Hsu
Professor of CSIE, National Taiwan University
Associate Editor, IEEE Transactions on Multimedia (TMM) and IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
NVIDIA AI Lab, Founding Director
Co-founder of thingnario
CSIE, National Taiwan University
Tel: +886-2-3366-4888
E-mail: whsu@ntu.edu.tw
Shao-Ping Chiang
Officer, Department of Foresight and Innovation Policies, MOST
Tel: +886-2-2737-7982
E-mail: spchiang@nstc.gov.tw