VUzzer: Application-aware Evolutionary Fuzzing

NDSS 2017. DOI: 10.14722/ndss.2017.23404

Abstract—Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper into the application but not scalable. In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers by quickly finding several existing and new bugs.
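The core idea described in the abstract, an evolutionary loop whose fitness function rewards inputs that reach rarely executed (deeper) basic blocks and whose mutations are steered by data-flow feedback such as taint-inferred input offsets, can be summarized in a short sketch. The code below is illustrative only: the names run_with_coverage, taint_offsets, and magic_bytes, as well as the exact weighting scheme, are hypothetical placeholders and do not reflect VUzzer's actual implementation or API.

```python
# Minimal sketch of an application-aware evolutionary fuzzing loop,
# assuming an instrumented target that reports executed basic blocks.
import random

POP_SIZE = 100

def fitness(coverage, block_hits):
    # Frequently hit blocks contribute little to the score; rarely reached
    # blocks contribute more, steering the search toward deeper paths.
    return sum(1.0 / block_hits.get(bb, 1) for bb in coverage)

def mutate(data, interesting_offsets, magic_bytes):
    # Prefer mutating input offsets that data-flow analysis says reach
    # comparisons; occasionally splice in immediates seen at cmp sites.
    data = bytearray(data)
    if interesting_offsets:
        off = random.choice(interesting_offsets)
    else:
        off = random.randrange(len(data))
    if magic_bytes and random.random() < 0.5:
        patch = random.choice(magic_bytes)
        data[off:off + len(patch)] = patch
    else:
        data[off] = random.randrange(256)
    return bytes(data)

def evolve(seeds, run_with_coverage, taint_offsets, magic_bytes, generations=1000):
    population = list(seeds)
    block_hits = {}  # how often each basic block has been executed so far
    for _ in range(generations):
        scored = []
        for inp in population:
            coverage = run_with_coverage(inp)          # execute instrumented target
            for bb in coverage:
                block_hits[bb] = block_hits.get(bb, 0) + 1
            scored.append((fitness(coverage, block_hits), inp))
        # Keep the fittest half as parents and refill with their mutants.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        parents = [inp for _, inp in scored[:max(1, POP_SIZE // 2)]]
        population = parents + [mutate(p, taint_offsets(p), magic_bytes) for p in parents]
    return population
```

In this sketch the feedback loop is the point: coverage weighting replaces a flat block count so that inputs exercising rare blocks survive selection, and mutation is concentrated on input bytes that the (assumed) taint analysis links to branch conditions rather than being spread uniformly over the file.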
