
Andrew Marrington
Acting Associate Provost for Academic Affairs and Professor in the College of Technological Innovation.
"As an active researcher in Digital Forensics and Cybersecurity for over twenty years, I am excited by the applications of Generative AI. It’s essential that as we adopt AI in this fast-moving field, we ensure that we are able to explain our AI-empowered decision making and actions, especially in the context of law enforcement or court proceedings. This project will help investigators and responders to more effectively leverage AI to protect our digital ecosystem."
Cybersecurity operations and digital forensics demand not only accurate detection and analysis but also transparency, explainability, and speed in response and evidence seizure. Explainable generative Artificial Intelligence (AI) has the potential to convert reactive incident response into intelligent, adaptive, and trustworthy decision-making. This project proposes a pioneering Explainable AI (XAI) framework that combines model-agnostic explanation tools, explainability-by-design architectures, and large language models (LLMs) to produce transparent, traceable, and legally defensible narratives from AI outputs. The explainability pipeline we envision transforms black-box outputs into structured knowledge that supports real-time decision-making, courtroom-ready forensic narratives, and investigator training.

The project aligns closely with Dubai’s Cognitive Cities vision and the UAE Cybersecurity Strategy 2023–2026 by promoting autonomous and explainable threat detection, enabling forensic readiness and situational awareness across agencies, and strengthening trust in AI-enabled digital governance.
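To make the pipeline concrete, the sketch below illustrates one possible first stage: a model-agnostic explanation (permutation importance) applied to a black-box detector, with the resulting attributions structured into a prompt that an LLM could turn into an auditable narrative. This is a minimal illustration under stated assumptions, not the project's implementation; the scikit-learn classifier stands in for the detector, and the feature names, alert scenario, and prompt template are hypothetical.

```python
# Sketch: model-agnostic explanation -> structured evidence -> narrative prompt.
# The detector, features, and scenario are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical network-alert features (names invented for illustration).
feature_names = ["failed_logins", "bytes_out", "new_process_count", "off_hours_access"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: permutation importance treats the model as a
# black box and measures how shuffling each feature degrades its accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Structure the attributions so a downstream LLM (call omitted here) can be
# prompted to draft a traceable, courtroom-ready narrative from them.
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1])
evidence_lines = [f"- {name}: importance {score:.3f}" for name, score in ranked]
prompt = (
    "Explain, for a non-technical audience, why the detector flagged this "
    "activity, citing only the evidence below:\n" + "\n".join(evidence_lines)
)
print(prompt)
```

In a full pipeline, the structured evidence block, rather than raw model internals, would be what the LLM is allowed to cite, which is one way to keep generated narratives traceable back to the underlying explanation.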