Interpretable ML

Machine learning models are notorious for being “black boxes” whose decision processes are inscrutable. In threat hunting and incident response contexts, this is often unacceptable. We want to know not only what behaviors, files, and network traces our machine learning models deem suspicious, but also why they deem them suspicious, so that we can follow up and verify whether they have found evidence of an attacker. To address this problem, our Interpretable ML research focus works on inventing, prototyping, and operationalizing methods to explain the “thought processes” of our security machine learning systems. This work has resulted in multiple successful commercial model shipments and patents.
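To make the idea concrete, the sketch below shows one common, model-agnostic way to answer “why was this flagged?”: permutation feature importance applied to a toy “suspicious file” classifier. The feature names and data here are invented for illustration, and this is not the specific method used in our shipped models, just a minimal example of the kind of explanation such methods produce.

```python
# Illustrative sketch only: permutation-importance explanations for a
# hypothetical "suspicious file" classifier. Feature names and data are
# made up for demonstration purposes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical static/behavioral features a security model might consume.
feature_names = [
    "entropy", "num_imports", "has_packer_signature",
    "writes_to_startup_key", "num_outbound_domains", "file_size_kb",
]

# Synthetic stand-in for labeled telemetry (1 = malicious, 0 = benign).
X, y = make_classification(
    n_samples=2000, n_features=len(feature_names),
    n_informative=4, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's held-out accuracy drops -- a model-agnostic answer to
# "which signals drove the verdicts?"
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0,
)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:<25} {result.importances_mean[idx]:.3f}")
```

The output ranks features by how much the model leans on them, giving an analyst a starting point for verifying whether the signals behind an alert actually correspond to attacker activity.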