By Debbie Gregory.
Modern artificial intelligence (AI) can at times outperform the human brain, and there is no denying that AI is the future. From security forces to military applications, AI has spread its wings to encompass our daily lives as well. However, AI comes with its own limitations. Its biggest challenge is explaining to humans the how and why of its decision making; today's machines are largely unable to explain their actions to human users.
Developing Explainable Artificial Intelligence (XAI) is of interest to commercial users of AI as well as to the military. Explanations of how algorithms reach their conclusions make it easier for leaders to adopt artificial intelligence systems within their organizations.
XAI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
Last month, the Defense Advanced Research Projects Agency (DARPA) engaged 10 research teams in a multimillion-dollar program designed to develop new XAI systems.
The XAI program will pair new explanation techniques with the results produced by the machine in order to create more explainable models and results.
Techniques such as modified architectural layers, training-data design, loss functions, and optimization methods will be used to experiment with and develop interpretable models.
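As a loose illustration of what "interpretable model" means (this is a minimal sketch, not DARPA's or any team's actual code; the feature names and toy data are invented), consider a linear classifier whose learned weights can be read directly as an explanation of its decisions:

```python
# Illustrative sketch: a linear model whose learned weights double as
# an explanation of which inputs drive its decisions.

def train_linear(samples, labels, lr=0.1, epochs=200):
    """Train a tiny perceptron-style linear classifier."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else 0
            err = y - pred
            # Standard perceptron update: nudge weights toward the label.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def explain(w, feature_names):
    """Rank features by the magnitude of their learned weight."""
    return sorted(zip(feature_names, w), key=lambda t: -abs(t[1]))

# Toy data: classify "threat" from two hypothetical sensor readings.
X = [[1.0, 0.1], [0.9, 0.0], [0.1, 0.9], [0.0, 1.0]]
y = [1, 1, 0, 0]
w, b = train_linear(X, y)
print(explain(w, ["speed", "altitude"]))
```

Because the model is linear, each weight states how strongly (and in which direction) a feature pushes the decision; here "speed" comes out with the largest positive weight, which is exactly the kind of human-readable rationale XAI aims for in far more complex models.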
Model induction will also be used: treating the machine's processes as a black box and experimenting with it to develop a better understanding of how it works.
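The black-box idea can be sketched as follows (a hypothetical example, not the program's actual method; the stand-in model and the probing step size are assumptions): perturb one input at a time, re-query the model, and record which changes flip the output.

```python
# Hypothetical black-box probing: we cannot inspect the model's internals,
# so we infer feature influence by perturbing inputs and observing outputs.

def black_box(x):
    # Stand-in for an opaque model under study.
    return 1 if 2.0 * x[0] - 0.5 * x[1] > 0.5 else 0

def sensitivity(model, x, delta=0.5):
    """Estimate each feature's influence by nudging it and re-querying."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        probe = list(x)
        probe[i] += delta
        # 1 if perturbing this feature flips the decision, else 0.
        scores.append(abs(model(probe) - base))
    return scores

print(sensitivity(black_box, [0.3, 0.2]))  # → [1, 0]
```

Here only the first feature flips the decision when nudged, so a human user learns which input mattered without ever opening the model itself.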
“Each year, we’ll have a major evaluation where we’ll bring in groups of users who will sit down with these systems,” says David Gunning, program manager in DARPA’s Information Innovation Office.