The effectiveness of AI systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) faces challenges that demand more intelligent, autonomous, and symbiotic systems. The Explainable AI (XAI) program aims to create a suite of machine learning techniques that produce more explainable models while maintaining a high level of learning performance (prediction accuracy), and that enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
This project is a collaboration with SRI International (formerly the Stanford Research Institute) in Princeton, NJ. Our part is the development of the web interface, as well as the in-person user studies, for which we have recruited over 100 subjects so far.
Publications:
Alipour, K., Schulze, J.P., Yao, Y., Ziskind, A., Burachas, G., "A Study on Multimodal and Interactive Explanations for Visual Question Answering", In Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020), New York, NY, February 7, 2020.
Alipour, K., Ray, A., Lin, X., Schulze, J.P., Yao, Y., Burachas, G.T., "The Impact of Explanations on AI Competency Prediction in VQA", In Proceedings of the IEEE Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI 2020), September 22, 2020.