Analyzing Security ML Models with Imperfect Data in Production

Check out our paper to see how visualization fulfilled the operational needs of our industry research team, helping it detect and resolve frequently seen issues in our productionized security models. We described the full step-by-step design of the user interface, shared the lessons we learned, and demonstrated how we used the system. We chose multiple simple views over one complex view, supporting data scientists' workflows while keeping the interface simple for high-level users. The views focus on finding trends and anomalies in the data feeds relevant to the models. Together, the charts enabled the team to ask questions, verify hypotheses, and generate insights.

Awalin Sopan
Konstantin Berlin

A Simple and Agile Cloud Infrastructure to Support Cybersecurity Oriented Machine Learning Workflows

Generating up-to-date, well-labeled datasets for machine learning (ML) security models is a unique engineering challenge, as large data volumes, complexity of labeling, and constant concept drift make it difficult to generate effective training datasets. Here we describe a simple, resilient cloud infrastructure for generating ML training and testing datasets that has enhanced the speed at which our team is able to research and keep in production a multitude of security ML models.

Ajay Lakshminarayanarao
Konstantin Berlin

ALOHA: Auxiliary Loss Optimization for Hypothesis Augmentation

Malware detection is a popular application of Machine Learning for Information Security (ML-Sec), in which an ML classifier is trained to predict whether a given file is malware or benignware. Parameters of this classifier are typically optimized such that outputs from the model over a set of input samples most closely match the samples' true malicious/benign (1/0) target labels. However, there are often a number of other sources of contextual metadata for each malware sample, beyond an aggregate malicious/benign label, including multiple labeling sources and malware type information (e.g. ransomware, trojan, etc.), which we can feed to the classifier as auxiliary prediction targets. In this work, we fit deep neural networks to multiple additional targets derived from metadata in a threat intelligence feed for Portable Executable (PE) malware and benignware, including a multi-source malicious/benign loss, a count loss on multi-source detections, and a semantic malware attribute tag loss. We find that incorporating multiple auxiliary loss terms yields a marked improvement in performance on the main detection task. We also demonstrate that these gains likely stem from a more informed neural network representation and are not due to a regularization artifact of multi-target learning. Our auxiliary loss architecture yields a significant reduction in detection error rate (false negatives) of 42.6% at a false positive rate (FPR) of 10⁻³ when compared to a similar model with only one target, and a decrease of 53.8% at 10⁻⁵ FPR.
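The multi-target setup described above can be sketched as a weighted sum of per-target losses on a shared network's output heads. The following is a minimal NumPy illustration, not the paper's exact formulation: the loss weights, helper names, and the Poisson-style form of the count loss are assumptions for the sake of a self-contained example.

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    # Binary cross-entropy averaged over all elements; used for the main
    # malicious/benign target, per-source labels, and attribute tags.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1 - y_true) * np.log(1 - y_pred)))

def poisson_nll(counts, rates, eps=1e-7):
    # Poisson negative log-likelihood (constant log(k!) term dropped);
    # one plausible choice for a loss on multi-source detection counts.
    rates = np.maximum(rates, eps)
    return float(np.mean(rates - counts * np.log(rates)))

def total_loss(y_mal, p_mal,      # main malicious/benign target
               y_src, p_src,      # per-source malicious/benign targets
               n_det, pred_rate,  # detection counts across sources
               y_tags, p_tags,    # semantic attribute tags (multi-label)
               w=(1.0, 0.1, 0.1, 0.1)):  # weights are illustrative
    # Weighted sum of the main detection loss and three auxiliary losses;
    # all heads share the same underlying feature representation.
    return (w[0] * bce(y_mal, p_mal)
            + w[1] * bce(y_src, p_src)
            + w[2] * poisson_nll(n_det, pred_rate)
            + w[3] * bce(y_tags, p_tags))
```

In this framing, dropping the auxiliary terms (setting `w[1:]` to zero) recovers the single-target baseline the abstract compares against.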

Konstantin Berlin
Richard Harang
Cody Wild
Ethan Rudd