Author: Richard Harang
Git blame who?: Stylistic authorship attribution of small, incomplete source code fragments
MEADE: Towards a Malicious Email Attachment Detection Engine
Towards Principled Uncertainty Estimation for Deep Neural Networks
SeqDroid: Obfuscated Android Malware Detection Using Stacked Convolutional and Recurrent Neural Networks
A Deep Learning Approach to Fast, Format-Agnostic Detection of Malicious Web Content
Sophos AI WebCat ML demo
Leaner, meaner, better models with model compression
Deep learning models can produce state of the art performance for malware detection, but that performance comes at a price: […]
ALOHA: Auxiliary Loss Optimization for Hypothesis Augmentation
Malware detection is a popular application of Machine Learning for Information Security (ML-Sec), in which an ML classifier is trained to predict whether a given file is malware or benignware. Parameters of this classifier are typically optimized such that outputs from the model over a set of input samples most closely match the samples' true malicious/benign (1/0) target labels. However, there are often a number of other sources of contextual metadata for each malware sample, beyond an aggregate malicious/benign label, including multiple labeling sources and malware type information (e.g. ransomware, trojan, etc.), which we can feed to the classifier as auxiliary prediction targets. In this work, we fit deep neural networks to multiple additional targets derived from metadata in a threat intelligence feed for Portable Executable (PE) malware and benignware, including a multi-source malicious/benign loss, a count loss on multi-source detections, and a semantic malware attribute tag loss. We find that incorporating multiple auxiliary loss terms yields a marked improvement in performance on the main detection task. We also demonstrate that these gains likely stem from a more informed neural network representation and are not due to a regularization artifact of multi-target learning. Our auxiliary loss architecture yields a significant reduction in detection error rate (false negatives) of 42.6% at a false positive rate (FPR) of 10⁻³ when compared to a similar model with only one target, and a decrease of 53.8% at 10⁻⁵ FPR.
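The combined objective described in this abstract can be sketched as a weighted sum of a main detection loss and auxiliary losses. The sketch below is a simplified, hypothetical illustration, not the paper's actual implementation: the function names, the Poisson form of the count loss, and the loss weights (`w_main`, `w_count`, `w_tags`) are all assumptions chosen for clarity.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy, averaged over samples (and tags, if 2-D)."""
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def poisson_nll(rate, count, eps=1e-7):
    """Poisson negative log-likelihood for a detection-count head
    (dropping the constant log(count!) term)."""
    return float(np.mean(rate - count * np.log(rate + eps)))

def multi_target_loss(main_p, main_y,          # main malicious/benign head
                      count_rate, count_y,     # auxiliary detection-count head
                      tag_p, tag_y,            # auxiliary multi-label tag head
                      w_main=1.0, w_count=0.1, w_tags=0.1):
    """Weighted sum of the main loss and the two auxiliary losses.
    The auxiliary weights here are illustrative, not the paper's values."""
    return (w_main * bce(main_p, main_y)
            + w_count * poisson_nll(count_rate, count_y)
            + w_tags * bce(tag_p, tag_y))
```

In training, gradients from all three terms flow into a shared feature extractor, which is the mechanism the abstract credits for the more informed representation.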
Hacking Facial Recognition Systems
Facial recognition is becoming ubiquitous, but how it works is often confused with standard image classification. In this short, high-level […]