Ministry of Science and Technology Press Release
Breaking Open the AI Black Box: A Face Recognition Module with Explainability
Date: May 11, 2020
Issuing unit: Department of Foresight and Innovation Policies
Contact: Shao-Ping Chiang, Officer
Tel: (02) 2737-7982
E-mail: spchiang@nstc.gov.tw
On May 11, the Ministry of Science and Technology (MOST) held a press conference at which the team led by Prof. Winston Hsu of the MOST Joint Research Center for AI Technology and All Vista Healthcare at National Taiwan University (hereafter the NTU AI Center) presented xCos, an explainable AI (XAI) module developed with MOST support. In addition to delivering highly accurate face recognition, the module can coherently explain why the AI produced a given result. The technology not only helps vendors at home and abroad develop AI recognition products and understand the reasoning behind AI recommendations, but also strengthens human trust in the use of AI.
Minister of Science and Technology Liang-Gee Chen said that MOST announced its AI research strategy in 2017 to position Taiwan as a hub for AI development, and since 2018 has funded AI innovation research centers at four leading universities: National Taiwan University (core AI technology and biomedical care), National Tsing Hua University (intelligent manufacturing), National Chiao Tung University (intelligent services), and National Cheng Kung University (biomedical applications). After more than two years of incubation and practice, Prof. Hsu's team at the NTU AI Center has built xCos, an explainable AI module that can be attached to existing face recognition models. At a time when countries around the world are emphasizing AI transparency, the system can directly report how similar two faces are and why, and the team is extending XAI techniques to other critical AI decision-making domains, including energy, medicine, and industrial manufacturing.
Refining World-Leading Face Recognition Technology
With long-term support from MOST, Prof. Hsu's team developed the first face search system for mobile devices back in 2011 and has kept raising the bar ever since, taking on challenges such as cross-age face recognition and disguised face recognition. In 2018, the team won the Disguised Faces in the Wild competition at CVPR, one of the world's three leading computer vision conferences, as the only entrant whose recognition rate exceeded 90 percent.
Prof. Hsu said that over the past three years the team has helped several software and hardware companies develop face recognition products through industry-academia collaboration. While designing deep models, they ran into cases where the AI's verification results contradicted human intuition, with no way to tell what the judgment was based on. To solve this, the team spent more than a year developing xCos, an explainable AI (XAI) module that can explain why two face images are judged to be the same person (or different people), exposing the decision basis inside the AI black box and supporting face recognition product development. At the same time, xCos automatically notices unnatural facial surfaces and concentrates on genuine, discriminative facial regions, which also makes it effective at recognizing disguised faces. Testing showed that it performs the same function even when paired with different face recognition software.
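To make the effect of down-weighting unnatural or occluded regions concrete, the toy example below is a purely illustrative sketch (not the team's actual implementation): the 3x3 grids of per-region similarities and attention weights are invented values chosen to show how an occluded eye region stops dragging down the overall decision.

```python
import numpy as np

# Toy 3x3 grid of per-region cosine similarities between two face images of
# the same person; the top row (eye region) is covered by sunglasses in one
# image, so its similarity values are unreliable and misleadingly low.
local_similarity = np.array([
    [-0.2, -0.1, -0.3],   # occluded eye region
    [ 0.8,  0.9,  0.7],   # nose / cheeks
    [ 0.9,  0.8,  0.9],   # mouth / chin
])

# Attention weights that a real system would learn in order to discount
# unnatural or occluded surfaces; here they are hand-set for illustration.
attention = np.array([
    [0.02, 0.02, 0.02],
    [0.15, 0.20, 0.15],
    [0.15, 0.14, 0.15],
])
attention = attention / attention.sum()   # normalize so the weights sum to 1

plain_average  = local_similarity.mean()                # ~0.49: ambiguous
weighted_score = (attention * local_similarity).sum()   # ~0.78: clear match

print(f"unweighted mean similarity: {plain_average:.2f}")
print(f"attention-weighted score:   {weighted_score:.2f}")
```

The weighted score, together with the per-region similarity and attention maps themselves, gives a developer something concrete to inspect whenever a verification result looks counterintuitive.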
Opening the AI Black Box: Let the AI Tell You Its Reasons
Minister Chen remarked that people have always clarified problems, sought breakthroughs, and found answers by repeatedly asking "why," and that this is how technology advances. Early AI was built on rule-based systems: scientists supplied the decision criteria and the computer worked through them, as in a decision tree, to reach a conclusion. With this approach it was easy to trace back why the computer produced a given result, so it was highly explainable. In recent years, however, AI has moved on to deep learning and other neural network methods. After rapidly digesting vast amounts of data, the computer sets its own rules and draws on tens of millions of conditions to produce far more accurate answers. But people have gradually noticed that such AI cannot spell out the reasoning behind its decisions, and until they get a satisfactory explanation they hesitate to rely on AI to solve problems, or even question its decisions.
The internationally renowned firm PricewaterhouseCoopers (PwC) has estimated that artificial intelligence represents a market value of USD 15 trillion, yet the current bottleneck is AI's lack of explainability. Today's AI exposes only the input data and the output result; the basis and process of the judgment in between are hard to grasp, like a black box. Only by understanding how AI reaches its judgments and confirming that its decisions are reasonable can we go on to improve and strengthen the reliability of the models. Explainable AI (XAI) has therefore become one of the major trends in AI research worldwide; for example, explainability is a key element of the USD 2 billion AI program that the U.S. Defense Advanced Research Projects Agency (DARPA) announced in 2018.
Explainability and High Compatibility: Accelerating Application Development and Industrial Upgrading
Minister Chen added that explainable AI is undoubtedly a key goal of AI development worldwide. MOST's AI R&D Guidelines, published in September 2019, emphasize the "transparency and traceability" and "explainability" of AI, and the European Union's February 2020 white paper on artificial intelligence ("White Paper on Artificial Intelligence: A European approach to excellence and trust") likewise notes that a lack of trust is the main factor holding back the broad adoption of AI. Future AI development must therefore strengthen the transparency of how AI operates by making the reasons for its decisions understandable, so that the general public can trust AI and feel at ease using it.
Prof. Hsu added that the xCos module not only lets end users understand the reasons behind a face recognition result, but also helps developers probe and inspect how their systems work. To speed up technology diffusion and real-world deployment, the team designed xCos so that it can be paired with other face recognition systems, and has released it as open source for academic, research, and industrial use at home and abroad (https://github.com/ntubiolin/xcos). The team hopes to extend the underlying ideas to other deep learning applications and is also bringing XAI to critical AI decision-making systems in other fields. For example, where an AI merely tells a power plant whether to increase generation in the next hour, XAI can add that the reason is a forecast change in the weather or a special holiday today; where an AI can say whether an X-ray shows signs of pneumonia, XAI can further explain the basis for that judgment and point out where the lesion is. Such XAI applications strengthen public trust in AI and help system developers check whether the AI's judgments are reasonable, so that models can be improved and strengthened, advancing AI technology and related industries in Taiwan and abroad.
Research Contact
Prof. Winston H. Hsu
Professor, Department of Computer Science and Information Engineering, National Taiwan University
Associate Editor, IEEE Trans. on Multimedia (TMM) and IEEE Trans. on Circuits and Systems for Video Technology (TCSVT)
Principal Investigator, NVIDIA AI Lab
Co-founder of thingnario (an industrial AI startup)
Tel: 886-2-33664888
Email: whsu@ntu.edu.tw
Press Release
May 11, 2020
xCos: Unveiling the Black Box of AI for an Explainable Face Recognition Model
On May 11, the Ministry of Science and Technology (MOST) held a press conference to announce xCos, a face verification module with explainability, developed by Prof. Winston Hsu's team at the MOST Joint Research Center for AI Technology and All Vista Healthcare (AINTU). Beyond recognition, the module can explain why the AI judges two face images to be the same identity or not. Enabling explainability in AI not only advances AI techniques but also builds people's trust.
According to the Minister of MOST, Dr. Liang-Gee Chen, MOST has supported four AI research centers in different research domains since 2018, following the AI science strategy announced in 2017. The centers, located at National Taiwan University, National Tsing Hua University, National Chiao Tung University, and National Cheng Kung University, focus respectively on core AI technology and biomedical care, intelligent manufacturing, intelligent services, and biomedical applications. Prof. Hsu's team is from NTU and focuses on AI technology; their achievement, the xCos module, can be plugged into existing deep face verification models. Meanwhile, they are preparing to extend explainable AI (XAI) into other fields such as energy, medicine, and manufacturing.
World-Leading Face Recognition, Further Refined
In 2011, Prof. Hsu's team developed the first face search engine for mobile devices. Since then, the team has continued to tackle challenges in several areas, including cross-age and disguised face recognition. In 2018, they won first place in the Disguised Faces in the Wild competition at CVPR as the only team whose recognition rate exceeded 90 percent.
Prof. Hsu said the team has assisted many software companies in developing face recognition technologies through academia-industry collaboration over the past few years. The observation that some AI verification results were counterintuitive prompted the team to take the time to build xCos and explore the justification behind AI decisions. The module provides both quantitative and qualitative reasons to explain why two face images are, or are not, from the same person. When the model judges two images to be the same person, the proposed method can clearly show which areas of the face are more representative than others by providing local similarity values and attention weights. The explainable module xCos also works well with other common neural-network face recognition backbones such as ArcFace and CosFace.
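As a rough sketch of the computation described above (an assumption-laden illustration, not the released xCos implementation: extract_local_features and attention_weights below are hypothetical placeholders standing in for a real backbone such as ArcFace or CosFace and for the learned attention branch), an explainable verification score can be built as an attention-weighted average of patch-wise cosine similarities:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Cosine similarity along the last (feature) axis."""
    num = (a * b).sum(axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return num / np.maximum(den, eps)

def extract_local_features(image: np.ndarray, grid: int = 7, dim: int = 512) -> np.ndarray:
    """Placeholder for a convolutional backbone that maps an aligned face
    crop to a grid x grid array of dim-dimensional local embeddings."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    return rng.standard_normal((grid, grid, dim))

def attention_weights(feats_a: np.ndarray, feats_b: np.ndarray) -> np.ndarray:
    """Placeholder for the learned attention branch: one weight per grid
    cell, normalized to sum to 1 (uniform here for simplicity)."""
    grid = feats_a.shape[0]
    w = np.ones((grid, grid))
    return w / w.sum()

def explainable_verify(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.3):
    feats_a = extract_local_features(img_a)
    feats_b = extract_local_features(img_b)
    local_sim = cosine(feats_a, feats_b)             # qualitative: per-region similarity map
    weights = attention_weights(feats_a, feats_b)    # which regions matter for the decision
    score = float((weights * local_sim).sum())       # quantitative: attention-weighted similarity
    return {
        "same_person": score > threshold,
        "score": score,
        "local_similarity_map": local_sim,   # visualize to see where the faces agree
        "attention_map": weights,            # visualize to see what the model looked at
    }

if __name__ == "__main__":
    face_a = np.zeros((112, 112, 3))   # stand-ins for two aligned face crops
    face_b = np.ones((112, 112, 3))
    result = explainable_verify(face_a, face_b)
    print(result["same_person"], round(result["score"], 3))
```

Returning the similarity and attention maps alongside the score is what distinguishes this kind of module from a plain verification network: the two maps can be rendered as heatmaps over the face to show why a decision was made.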
Demystifying the Black Box of AI
Minister Chen said that technology keeps progressing because people keep asking "why" and work to find solutions. In its early stage, AI ran on rule-based systems: it was relatively easy to trace how such a system reached its decision, so it was highly explainable. Today, with ever-increasing amounts of data, deep convolutional neural networks achieve higher accuracy on tasks such as face verification. However, people have noticed that this kind of AI cannot explain its decision-making process. Given AI's exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate.
According to PricewaterhouseCoopers, AI has a potential market value of USD 15 trillion, but it is difficult to know how an algorithm arrived at its recommendation or decision, which is exactly the problem explainable AI addresses. The computer vision community still lacks an effective way to understand the working mechanism of deep learning models: their inherently non-linear structures and complicated decision-making processes form a so-called black box, and decisions made for unknown reasons can lead to serious security and privacy issues. These problems make users feel insecure about deep-learning-based systems and leave developers struggling to improve them. It is easy to see, then, why XAI was an important part of the USD 2 billion AI program DARPA announced in 2018.
Explainability and High Compatibility
Minister Chen added that explainable AI is clearly an important goal of AI development worldwide. In line with this trend, MOST announced its AI R&D Guidelines last September, emphasizing "transparency and traceability" and "explainability." In February, the European Union also published its "White Paper on Artificial Intelligence: A European approach to excellence and trust", which pointed out that a lack of trust is blocking broad AI adoption. Future AI development must earn people's trust by reinforcing transparency and explainability.
Prof. Hsu said that xCos helps end users understand how decisions were made and helps developers examine what is going on inside deep neural networks. The team has released the source code on GitHub (https://github.com/ntubiolin/xcos) for the research community. He believes that with explainable techniques, people will have more confidence in accepting AI's decisions, and developers can adjust their systems to improve accuracy.
Media Contact
Winston H. Hsu
Prof. of CSIE, National Taiwan University
Associate Editor, IEEE Trans. on Multimedia (TMM) and IEEE Trans. on Circuits and Systems for Video Technology (TCSVT)
NVIDIA AI Lab, Founding Director
Co-founder of thingnario
CSIE, National Taiwan University
Tel: 886-2-3366-4888
Email: whsu@ntu.edu.tw
Shao-Ping Chiang
Officer, Department of Foresight and Innovation Policies, MOST
Tel: 886-2-2737-7982
E-mail: spchiang@nstc.gov.tw