This is to certify that the thesis entitled CUSTOMIZABLE VULNERABILITY ANALYSIS AND CLASSIFICATION presented by BRENT LEE HOLTSCLAW has been accepted towards fulfillment of the requirements for the M.S. degree in Computer Science.

Major Professor's Signature

Date

CUSTOMIZABLE VULNERABILITY ANALYSIS AND CLASSIFICATION

By

Brent Lee Holtsclaw

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Computer Science

2008

ABSTRACT

CUSTOMIZABLE VULNERABILITY ANALYSIS AND CLASSIFICATION

By

Brent Lee Holtsclaw

Due to the many complications within the various vulnerability databases, my thesis presents a tool, VACT, which scans the vulnerability databases, searches for specified vulnerabilities by the classification given, and returns the selected vulnerabilities in a downloadable and statistical form. VACT allows one to gather customizable trend analysis results from a user-defined search of vulnerability databases. The user selects each classification used within the search. The search is conducted by comparing the classification to each vulnerability's description. Trend analysis results are returned by separating the statistics of the vulnerabilities found per year selected.

ACKNOWLEDGEMENTS

I would like to thank my parents for all of the support that I have received while obtaining my advanced degree. I am lucky to have such wonderful parents. In addition, I would like to thank my family for all of the encouragement that I received. A special thank you goes out to Caity. I would like to thank Dr. Enbody. It has been a real pleasure having him as an advisor and professor. Early in my academic career, his introduction to programming class helped to draw me into computer science. His patient, straightforward approach to teaching and advising has been very beneficial. I could not have completed my thesis without all of his knowledge and help. I would also like to thank the faculty within the Computer Science Department for all of the knowledge that I gained as a result of their teaching.

TABLE OF CONTENTS

LIST OF TABLES .......... v
LIST OF FIGURES .......... vi
INTRODUCTION .......... 1
Chapter 1: Purpose/Background .......... 4
1.1 Motivation .......... 4
1.2 Vulnerability Classification Techniques .......... 5
1.3 Vulnerability Databases .......... 14
1.4 Trend Analysis .......... 27
Chapter 2: VACT Overview .......... 35
2.1 Framework .......... 35
2.2 Classification and Search .......... 36
2.3 Trend Analysis .......... 37
Chapter 3: Classification and Search .......... 40
3.1 Strategy .......... 40
3.2 User Interface .......... 41
3.3 Functionality .......... 42
3.4 Efficiency .......... 42
3.5 Naive Bayesian Classification .......... 43
Chapter 4: Trend Analysis .......... 46
4.1 Evaluation of Features .......... 46
4.2 Efficiency .......... 49
Chapter 5: Real World Example .......... 50
5.1 Problem .......... 50
5.2 Results .......... 50
Chapter 6: Conclusion .......... 75
APPENDICES .......... 76
Setting up VACT .......... 77
VACT Code .......... 78
BIBLIOGRAPHY .......... 96

LIST OF TABLES

Table 1.1. Vulnerability classification ideas .......... 5
Table 1.2. List of abbreviations associated with vulnerability databases and identifications .......... 6
Table 1.3. CWE classifications used by NVD .......... 18
Table 1.3 continued .......... 19
Table 5.1. Summary of results and classifications from VACT pertaining to Microsoft .......... 52
Table 5.2. Summary of results and classifications from VACT pertaining to Apple .......... 53

LIST OF FIGURES

Figure 1.1. A snapshot of a node within CWE tier 1 .......... 10
Figure 1.2. Another screenshot of CWE tier 1 .......... 11
Figure 1.3. A snapshot of the upper portion of CWE tier 2 .......... 12
Figure 1.4. A snapshot of CWE tier 3 .......... 12
Figure 1.5. A screenshot of US-CERT vulnerability search feature .......... 16
Figure 1.6. A screenshot of the US-CERT search results for buffer and overflow and 2006 .......... 17
Figure 1.7. A screenshot of NVD vulnerability search .......... 20
Figure 1.8. A screenshot of the search results returned from NVD when buffer overflow is typed into the keyword search and the year 2006 is specified .......... 22
Figure 1.9. A screenshot of the search results returned from NVD when the CWE category of buffer errors is selected from the list of vulnerability classifications .......... 22
Figure 1.10. Illustration of NVD vulnerability categorization confusion .......... 23
Figure 1.11. OSVDB vulnerability search .......... 24
Figure 1.12. The search results returned when searching for buffer overflows for the year 2006 .......... 25
Figure 1.13. Secunia vulnerability search .......... 26
Figure 1.14. A screenshot of results from buffer overflow 2006 as the keyword search .......... 27
Figure 1.15. Criteria selection when using the NVD statistics query page .......... 29
Figure 1.16. The statistical results of choosing buffer errors from the Vulnerability Category and selecting the time period of January 2003 through March 2008 .......... 30
Figure 1.17. The graphical results of choosing buffer errors from the Vulnerability Category and selecting the time period of January 2003 through March 2008 .......... 31
Figure 1.18. A section of the CWE trend analysis .......... 32
Figure 2.1. A screenshot of VACT when first initialized .......... 36
Figure 2.2. Results page from VACT .......... 38
Figure 2.2 continued .......... 39
Figure 3.1. A screenshot of VACT when first initialized .......... 41
Figure 3.2. Pseudo code showing how a vulnerability's probability is computed according to the naive Bayesian classification .......... 44
Figure 4.1. Results page from VACT .......... 47
Figure 4.1 continued .......... 48
Figure 5.1. VACT results searching for Microsoft and Microsoft windows within US-CERT .......... 54
Figure 5.2. VACT results searching for Microsoft windows xp and Microsoft windows vista within US-CERT .......... 55
Figure 5.3. VACT results searching for Microsoft xp and Microsoft vista within US-CERT .......... 56
Figure 5.4. VACT results searching for windows xp and windows vista within US-CERT .......... 57
Figure 5.5. VACT results searching for xp and vista within US-CERT .......... 58
Figure 5.6. VACT results searching for apple and mac os x within US-CERT .......... 59
Figure 5.7. VACT results searching for apple mac os x within US-CERT .......... 60
Figure 5.8. VACT results searching for Microsoft and Microsoft windows within NVD .......... 61
Figure 5.9. VACT results searching for Microsoft windows xp and Microsoft windows vista within NVD .......... 62
Figure 5.10. VACT results searching for Microsoft xp and Microsoft vista within NVD .......... 63
Figure 5.11. VACT results searching for windows xp and windows vista within NVD .......... 64
Figure 5.12. VACT results searching for xp and vista within NVD .......... 65
Figure 5.13. VACT results searching for apple and mac os x within NVD .......... 66
Figure 5.14. VACT results searching for apple mac os x within NVD .......... 67
Figure 5.15. VACT results searching for Microsoft and Microsoft windows within OSVDB .......... 68
Figure 5.16. VACT results searching for Microsoft windows xp and Microsoft windows vista within OSVDB .......... 69
Figure 5.17. VACT results searching for Microsoft xp and Microsoft vista within OSVDB .......... 70
Figure 5.18. VACT results searching for windows xp and windows vista within OSVDB .......... 71
Figure 5.19. VACT results searching for xp and vista within OSVDB .......... 72
Figure 5.20. VACT results searching for apple and mac os x within OSVDB .......... 73
Figure 5.21. VACT results searching for apple mac os x within OSVDB .......... 74

INTRODUCTION

My thesis generates trend analyses from user-defined searches of online vulnerability databases. There are many challenges in extracting trends from the various vulnerability databases.
Vulnerability databases vary on which vulnerability information is kept within the database. They also differ on the way that the data is classified and how one is allowed to search through the data. In addition, no vulnerability database offers trend analysis on a user-defined search. To help provide consistency within these factors, the thesis presents a tool, the Vulnerability Analysis and Classification Tool (VACT), which combines vulnerability database search with trend analysis tools to enhance the ability of the end user to search through vulnerabilities and conduct analysis. Many different vulnerability databases exist to help raise awareness of the various known vulnerabilities. The government runs some while private organizations or universities run others. Each database is set up with different standards and capabilities. Take US-CERT, for example: its Vulnerability Notes Database contains only severe vulnerabilities. ("US-CERT Vulnerability Notes Database") Secunia, on the other hand, contains advisories and virus information. ("Search Advisory, Vulnerability, and Virus Database") Just as each database contains its own set of vulnerabilities, there are multiple vulnerability classification schemes to fulfill the various needs of researchers, developers, and systems administrators, which complicates searching for vulnerabilities that suit an individual's needs. Therefore, one goal is to provide users with a tool that allows a customizable search to harvest the desired vulnerabilities. Browsing and searching are the two main ways to find specific vulnerabilities within a vulnerability database. Neither feature follows a universal standard, but they use similar concepts. When browsing a database, one is able to view vulnerabilities according to some kind of vulnerability classification that is defined within the database. The classification schema allows the vulnerability databases to presort vulnerabilities by common characteristics. The search feature acts like a search engine, allowing the users to input a search string to find within the various vulnerabilities. The variance within searches comes from the way that the vulnerability database searches for vulnerabilities. For example, the Open Source Vulnerability Database will search for the string within titles while US-CERT will search for the keyword within the vulnerability's information. ("OSVDB: The Open Source Vulnerability Database") To take away ambiguity when searching for certain characteristics, the Vulnerability Analysis and Classification Tool allows a user the ability to not only search by keywords but to also compose a classification. Through our own trials, we have found that an efficient way to return these results is to do a string search using vulnerability descriptions. To improve the effectiveness of the tool, trend analysis from the searched results is provided. This is a feature missing from many vulnerability databases. The one exception comes within the National Vulnerability Database, which is sponsored by the National Institute of Standards and Technology. ("National Vulnerability Database") Although it will give some trend analysis pertaining to the vulnerabilities, it does not allow the user any freedom. All analysis must be picked from predefined classifications, which can be improperly calculated at times. Some organizations which are not vulnerability databases, such as Common Weakness Enumeration, also deliver a trend analysis of vulnerabilities.
The analysis done by CWE is done by the year and cannot be customized. Our main goal of trend analysis is to return graphs and statistics that can help the user better visualize the results and save the user from any low-level computation. The thesis begins by telling a quick story that motivated this project. We then provide background information about vulnerability classification. Next we provide the reader with insight into the overall capabilities of the tool by providing a breakdown of the steps and decisions that were taken when making the tool. Once we show what the tool does, we then explain how it differs from the current offerings of vulnerability searches and trend analyses. The focus is then shifted to detail the methodologies used to determine both the search and trend analysis results returned. We then describe a real life scenario and how our tool could help out. To round things out, the results of some scenarios posed within the search and trend analysis section are put forward, as well as the results to the real life scenario. Finally, the thesis is finished up with a conclusion which pulls together all of the ideas within the thesis.

Chapter 1: Purpose/Background

1.1 Motivation

The thesis was motivated by a simple question posed by Dr. Enbody: to classify the number of buffer-type vulnerabilities that occurred over the past year. He specifically wanted to obtain the number of buffer vulnerabilities, the number of compromising buffer vulnerabilities, and the number of compromising vulnerabilities. Compromising vulnerabilities were defined as vulnerabilities that would allow a user to gain control of a system or gain elevated privileges within a system. The first issue arose when we tried to find what classification techniques are used to classify vulnerabilities. After searching the literature and examining vulnerability databases online, we found that there are many different classification schemes. Not only did we not see a common classification for vulnerabilities, but we also found discrepancies within vulnerability databases. As we sorted through the initial problems, we found another problem with the data. We needed to sort through the results of the vulnerabilities to compute our own statistics and graphs. As this simple task continued to grow in complexity, it was apparent that a tool to help automate this process would be a worthwhile contribution to the community.

1.2 Vulnerability Classification Techniques

The word vulnerability has a very strong tone, as it dictates that a flaw is evident within the subject at hand. Within computer security, the term does not change meaning, as it applies to a "set of transitions which take a system from an allowed state to a disallowed state" (Bishop). Another key term that will come into play later in the thesis is an exploit. An exploit is a set of commands which take advantage of a vulnerability (Engle). Due to the complexity and abundance of various vulnerabilities and exploits that exist, vulnerability databases have been created by various entities to help share the knowledge with users.

Vulnerability Classifications (Name, Abbreviation):
Bishop
Common Weakness Enumeration, CWE
Table 1.1. Vulnerability classification ideas

Tables 1.1 and 1.2 are provided to help distinguish the concepts and databases presented throughout chapter one. Table 1.1 contains the current vulnerability classification techniques and ideas. Table 1.2 contains the vulnerability databases introduced within the chapter.
It also contains the vulnerability identifications that are used. The various databases contain vulnerabilities according to various naming conventions and standards, which are recognized by the vulnerability identifications.

Vulnerability Identifications (Name, Abbreviation):
Common Vulnerabilities and Exposures, CVE
Bugtraq
Internet Security Systems' X-Force organization, ISS X-Force
Nessus Script
Open Source Vulnerability Database, OSVDB
Snort Signature
Secunia Advisory
French Security Incidence Response Team, FrSIRT Advisory
Open Vulnerability and Assessment Language, OVAL
Computer Incident Advisory Capability, CIAC Advisory
Computer Emergency Response Team, CERT
The United States Computer Emergency Readiness Team, CERT VU
milw0rm
Common Configuration Enumeration, CCE
Common Vulnerability Scoring System, CVSS

Vulnerability Databases (Name, Abbreviation, Searchable Identifications):
Bugtraq
Microsoft Bulletins
French Security Incidence Response Team, FrSIRT
US-CERT Vulnerability Notes Database, US-CERT: CVE, CERT VU
National Vulnerability Database, NVD: CVE, CCE, CVSS
Open Source Vulnerability Database, OSVDB: CVE, OSVDB, Bugtraq, ISS X-Force, Nessus Script, Snort Signature, Secunia Advisory, FrSIRT Advisory, OVAL, CIAC Advisory, CERT, CERT VU, milw0rm
Secunia: CVE, Secunia
Table 1.2. List of abbreviations associated with vulnerability databases and identifications.

Many key terms and ideas within this thesis hinge on the previous work that was accomplished by Matt Bishop. Bishop, a professor at the University of California, Davis, specializes in computer security and vulnerability analysis. In a paper from 1999, Bishop defines five important properties of vulnerability classification:

1. Similar vulnerabilities are classified similarly. For example, all vulnerabilities arising from race conditions should be grouped together. However, we do not require that they be distinct from other vulnerabilities. For example, a vulnerability involving a race condition may require untrusted users having specific access permissions on files or directories. Hence it should also be grouped with a condition for improper or dangerous file access permissions.

2. Classification should be primitive. Determining whether a vulnerability falls into a class requires a "yes" or "no" answer. This means each class has exactly one property. For example, the question "does the vulnerability arise from a coding fault or an environmental fault" is ambiguous; the answer could be either, or both. For our scheme, this question would be two distinct questions: "does a coding fault contribute to the vulnerability" and "does an environmental fault contribute to the vulnerability." Both can be answered "yes" or "no" and there is no ambiguity to the answers.

3. Classification terms should be well-defined. For example, does a "coding fault" arise from an improperly configured environment? One can argue that the program should have checked the environment, and therefore an "environmental fault" is simply an alternate manifestation of a "coding fault." So, the term "coding fault" is not a valid classification term.

4. Classification should be based on the code, environment, or other technical details. This means that the social causes of the vulnerability (malicious or simply erroneous, for example) are not valid classes. This requirement eliminates the speculation about motives for the hole. While valid for some classification systems, this information can be very difficult to establish and will not help us discover new vulnerabilities.

5.
Vulnerabilities may fall into multiple classes. Because a vulnerability can rarely be characterized in exactly one way, a realistic classification scheme must take the multiple characteristics causing vulnerabilities into account. This allows some structural redundancy in that different vulnerabilities may lie in the same class; but as indicated in 1, above, we expect (and indeed desire) this overlap. (Bishop)

These properties help to provide a straightforward way of creating vulnerability classifications. The same properties are reiterated in another paper of which Bishop was an author seven years later (Engle). Although Bishop presents a great plan for vulnerability classification, when one really evaluates the classification rubric, it presents itself as a guideline rather than as specific classifications. Various interpretations can make classifications that are both broad and specific. One will find an assortment of interpretations when searching through the various classifications that are used within the different vulnerability databases. In addition to the work presented by Bishop, two branches of The MITRE Corporation help to provide structure to vulnerability classification. The Common Weakness Enumeration, CWE, division of The MITRE Corporation offers a community-developed dictionary of software weakness types ("Common Weakness Enumeration"), while Common Vulnerabilities and Exposures, CVE, provides a common naming convention for vulnerabilities. ("Common Vulnerabilities and Exposures") With the help of CVE and researchers, CWE continues to build a classification tree of vulnerabilities. Even though there is a current layout and structure, CWE continually accepts new research and vulnerabilities to help expand the tree to make it as comprehensive as possible. The layout is "currently using what could roughly be described as a three-tiered approach, in which (1) the lowest level consists of the full CWE List (hundreds of nodes) that is primarily applicable to tool vendors and detailed research efforts; (2) a middle tier consists of descriptive affinity groupings of individual CWEs (25-50 nodes) useful to software security and software development practitioners; and (3) a more easily understood top level consisting of high-level groupings of the middle-tier nodes (5-10 nodes) to define strategic classes of vulnerabilities and which is useful for high-level discourse among software practitioners, business people, tool vendors, researchers, etc." ("Process.") CWE produces a body of work that is the closest we have seen to completely implementing Bishop's properties. The problem is that it is not used as a database standard. Several databases, such as the National Vulnerability Database, have implemented a partial CWE list. Figures 1.1 and 1.2 illustrate the level of detail that exists within CWE. Figure 1.1 is a high-level classification with an associated description and relationships to similar classifications. Figure 1.2 illustrates a specific classification. The classification levels can also be seen within Figures 1.1 and 1.2. Figure 1.2 is referred to as a child of Figure 1.1. In addition to the description and relationships, Figure 1.2 has associated examples. Figures 1.3 and 1.4 illustrate CWE tier 2 and tier 3. The tiers represent the various levels of classification that exist within CWE.
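One convenient way to picture this tiered layout is as a tree whose nodes carry ChildOf/ParentOf relationships, as in the relationship tables of Figures 1.1 and 1.2 below. The following sketch (Python) is only an illustration of that structure, using a few node IDs and shortened names visible in those figures; it is not an official CWE data format.

```python
# A toy fragment of the CWE tree, with IDs taken from Figures 1.1 and 1.2.
cwe = {
    118: {"name": "Range Errors", "children": [119]},
    119: {"name": "Failure to Constrain Operations within the Bounds of a Memory Buffer",
          "children": [120, 125, 131]},
    120: {"name": "Classic Buffer Overflow", "children": []},
    125: {"name": "Out-of-bounds Read", "children": []},
    131: {"name": "Incorrect Calculation of Buffer Size", "children": []},
}

def descendants(node_id):
    """Follow ParentOf links downward, yielding every weakness beneath a node."""
    for child in cwe[node_id]["children"]:
        yield child
        yield from descendants(child)

# Everything grouped under the tier-1 buffer-operations node:
print([cwe[i]["name"] for i in descendants(119)])
```

Walking the tree this way is how a coarse tier-2 or tier-3 grouping can be expanded into the full set of low-level weaknesses it covers.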
[Figure 1.1 image: the CWE entry for weakness ID 119, "Failure to Constrain Operations within the Bounds of a Memory Buffer" (weakness class, status draft), showing its description, the affected memory resource, its ChildOf/ParentOf relationships (e.g. ChildOf 118 Range Errors; ParentOf 120, 123, 124, 125, 128, 129, 131, 132, 466), and related attack patterns such as overflow via environment variables and filter failure through buffer overflow.]
Figure 1.1. A snapshot of a node within CWE tier 1
[Figure 1.2 image: the CWE entry for weakness ID 131, "Incorrect Calculation of Buffer Size" (weakness class, status draft), showing its description, observed CVE examples, context notes, relationships (ChildOf 119), applicable platforms (C, C++), and a related attack pattern.]
Figure 1.2. Another screenshot of CWE tier 1
[Figure 1.3 image: the upper portion of the CWE tier 2 listing, with groupings such as Location, Configuration, Code, Source Code, Data Handling, Security Features, Time and State, Error Handling, Environment, and Motivation/Intent.]
Figure 1.3. A snapshot of the upper portion of CWE tier 2
[Figure 1.4 image: the CWE tier 3 listing: Location, Configuration, Code, Environment, and Motivation/Intent, the last split into intentionally and inadvertently introduced weaknesses.]
Figure 1.4. A snapshot of CWE tier 3
Common Vulnerabilities and Exposures, CVE, "is a list of information security vulnerabilities and exposures that aims to provide common names for publicly known problems. The goal of CVE is to make it easier to share data across separate vulnerability capabilities (tools, repositories, and services) with this 'common enumeration.'" ("Common Vulnerabilities and Exposures") In other words, CVE provides a common naming convention to reference vulnerabilities. CVE does not include any zero-day vulnerabilities. A zero-day vulnerability is a newly released vulnerability. Instead, vulnerabilities must go through a process to get onto the CVE List. After a vulnerability is discovered, it goes through "three stages: the initial submission stage, the candidate stage, and the entry stage." ("How We Build the CVE List") Their website provides a complete tutorial on the CVE List building process. The CVE List building process is stringent with the vulnerabilities that are given a common name. To encompass vulnerabilities without CVE names, other organizations offer identification to vulnerabilities both within CVE and outside of it. Security Focus features a zero-day vulnerability database, Bugtraq. The database allows users to send in all vulnerabilities that are found. Bugtraq offers an up-to-date email system to provide all subscribers a chance to view and discuss new vulnerabilities. Microsoft Bulletins features Microsoft-specific vulnerabilities.
The French Security Incidence Response Team, FrSIRT, also keeps a zero-day list of reported vulnerabilities. US-CERT provides a truncated list of vulnerabilities by selecting only the vulnerabilities which are identified as critical. Secunia offers a vulnerability list which includes both vulnerabilities and virus information. It is important to note that, in addition to their own naming conventions, each of the databases listed above still offers the CVE name for each vulnerability that complies with CVE standards. Other notable standards which are used by some of the vulnerability databases include the Common Vulnerability Scoring System, CVSS, and Common Configuration Enumeration, CCE. "The Common Vulnerability Scoring System provides an open framework for communicating the characteristics and impacts of IT vulnerabilities. Its quantitative model ensures repeatable accurate measurement while enabling users to see the underlying vulnerability characteristics that were used to generate the scores." ("NVD Common Vulnerability Scoring System Support v2.") CVSS is calculated using the following metrics: Vulnerability Severity, Access Vector, Authentication, Confidentiality, Integrity, Availability, and Access Complexity. "The CCE List provides unique identifiers to security-related system configuration issues in order to facilitate fast and accurate correlation of configuration data across multiple information sources and tools." ("About CCE")

1.3 Vulnerability Databases

Within the next section we are going to take a closer look into some of the vulnerability databases. Not only does this help one get a better picture of the types of vulnerability databases that exist, but it also shows how vulnerability databases differ. The description will feature what types of vulnerabilities can be found within the database as well as any classification schemes that are used. Another attribute of the description will be an evaluation of the search functionality within the database. The US-CERT Vulnerability Notes Database contains vulnerabilities that meet a "certain severity threshold" which is severe for all users. In other words, the database contains severe vulnerabilities for software and operating systems that many users interact with on a daily basis. One is able to view the vulnerabilities within seven predetermined metrics: Name, ID Number, CVE Name, Date Public, Date Published, Date Updated, and Severity Metric. Even though US-CERT does not offer any classification, the search feature is very good. It does a full-text search, allowing the user to input complex queries. Figure 1.5 shows a snapshot of the US-CERT vulnerability notes search page. Figure 1.6 shows a snapshot of the results returned when searching for buffer overflow 2006. The US-CERT Vulnerability Notes Database is a truncated list of CVE vulnerabilities. Therefore, one is only able to get a subset of CVE vulnerabilities as a result. Another downfall is that the results from the search, as featured in Figure 1.6, contain vulnerabilities from years other than 2006, with no indication of the number of vulnerabilities that have been returned. The user must do extra work to know how many vulnerabilities fit into the search results. In addition, extra time must be spent filtering through the results to find the ones that are from 2006.
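Until a database supports such constraints directly, that per-year cleanup has to happen on the client side. The following is a minimal sketch of the step (Python); the record layout and field names are hypothetical, not US-CERT's actual export format.

```python
from datetime import date

def from_year(records, year):
    """Keep only the records whose public date falls in the requested year."""
    return [r for r in records if r["date_public"].year == year]

# Hypothetical search results, for illustration only.
results = [
    {"id": "VU#123456", "date_public": date(2006, 6, 21)},
    {"id": "VU#654321", "date_public": date(2007, 2, 19)},
]
hits = from_year(results, 2006)
print(f"{len(hits)} of {len(results)} results are from 2006")
```

The filter also yields the missing count for free: the length of the filtered list is exactly the number the search page fails to report.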
[Figure 1.5 image: the US-CERT Vulnerability Notes search form. It accepts a case-insensitive full-text query with logical constructs such as and, or, and not, as well as parentheses and wildcards, and offers result limits, sort order (relevance, ID number, oldest or newest first), word-variation options, and example queries.]
Figure 1.5. A screenshot of US-CERT vulnerability search feature
[Figure 1.6 image: US-CERT search results for buffer and overflow and 2006, listing vulnerability notes whose dates range from 2006 through 2008.]
Figure 1.6. A screenshot of the US-CERT search results for buffer and overflow and 2006
The National Vulnerability Database, NVD, which is sponsored by the National Institute of Standards and Technology, is a very interesting database to study. It is a CVE and CCE vulnerability database. Table 1.3 shows which CWE vulnerability classifications are integrated into NVD. Table 1.3 is mainly composed of CWE tier 2 vulnerability classifications. It is interesting to note that, because the list is a truncated version, classifications such as Other and Not in CWE exist. The user is able to do a keyword search with any one of the CWE metrics from Table 1.3 specified, any of the metrics within CVSS specified, the product specified, the vendor specified, or the date specified. Figure 1.7 helps to illustrate the search and browsing features of NVD.

Authentication Issues: Failure to properly authenticate users.
Credentials Management: Failure to properly create, store, transmit, or protect passwords and other credentials.
Permissions, Privileges, and Access Control: Failure to enforce permissions or other access restrictions for resources, or a privilege management problem.
Buffer Errors: Buffer overflows and other buffer boundary errors in which a program attempts to put more data in a buffer than the buffer can hold, or when a program attempts to put data in a memory area outside of the boundaries of the buffer.
Cross-Site Request Forgery (CSRF): Failure to verify that the sender of a web request actually intended to do so. CSRF attacks can be launched by sending a formatted request to a victim, then tricking the victim into loading the request (often automatically), which makes it appear that the request came from the victim. CSRF is often associated with XSS, but it is a distinct issue.
Cross-Site Scripting (XSS): Failure of a site to validate, filter, or encode user input before returning it to another user's web client.
Cryptographic Issues: An insecure algorithm or the inappropriate use of one; an incorrect implementation of an algorithm that reduces security; the lack of encryption (plaintext); also, weak key or certificate management, key disclosure, random number generator problems.
Path Traversal: When user-supplied input can contain ".." or similar characters that are passed through to file access APIs, causing access to files outside of an intended subdirectory.
Code Injection: Causing a system to read an attacker-controlled file and execute arbitrary code within that file. Includes PHP remote file inclusion, uploading of files with executable extensions, insertion of code into executable files, and others.
Format String Vulnerability: The use of attacker-controlled input as the format string parameter in certain functions.
Configuration: A general configuration problem that is not associated with passwords or permissions.
Information Leak / Disclosure: Exposure of system information, sensitive or private information, fingerprinting, etc.
Input Validation: Failure to ensure that input contains well-formed, valid data that conforms to the application's specifications. Note: this overlaps other categories like XSS, Numeric Errors, and SQL Injection.
Numeric Errors: Integer overflow, signedness, truncation, underflow, and other errors that can occur when handling numbers.
OS Command Injections: Allowing user-controlled input to be injected into command lines that are created to invoke other programs, using system() or similar functions.
Table 1.3. CWE classifications used by NVD
A vulnerability is characterized as a “Design error" if there exists no errors in the Design Error implementation or configuration of a system, but the initial design causes a vulnerability to exist. Table 1.3 continued 19 g porisorvd try ’ DHS National Cytier Security DWISIOIII'US‘CERI National Vulnerability Dababase uutoniutiiiq VUIIIL‘IdbliIly iiiariugeme , . nty measurement, and Compliance Liletkllh‘j Impact Metrirs [scare-ms CVE and CCE Vulnerablllty Database Advanced Search [CCE ruppart is under development] NVD is the U.S. govemmontra osi of standards basgd m -__._ReSEIValues vulnerabiflty management Keyword search data. This data ”BUGS Try a vendor name. product norm. nr version umber automation Of ry I m standard vulnerabilny name ~ type in rnu pl: eyvardi separated by rpacu le’ibfity . Only vulnerabilities that match ALL keyword! in" be returned managiment, seconty ”fitment-W Vendor ABCE F.HIK LN 00 RT UWXZNI compliance (3.9. FISMA). Produa LBLELHLKHHLTMLZE Version “ Choose I Vendor or Drodud ‘ 30009 Qflll null! Btu Search start date Ahy Month VV- EMT, J 148mm Search and date AnyMo-i'h v myéa, v 33 _ . , , I W Vu'nerabllvv TYPGS‘ El Software Flaws (M) 2158 W . 317‘ Misconngurations (CCE). under development WI CV38 Version 2 Metrics: 13879 mm Last updated: Halli/08 Vulnerability Severity Am: H v CVE Publication rate: ‘ , 15 mabitlas/day Access Vector Any v Authentication Any v NVD ”my“; four mailing Confidentiality: Any 7 v’ lists to the able. For We m . ' information and g v AT"? s , V. subscription instructions Availability AW ., ease visit NVD Maili ' fists —J Access Complexity Arry v Workload Index Vulnerability Category AW 7 7 7' Vulnerabit Worldoad : may '—'—"' Show only vulnerabilities that E] US-CERT Technical Alerts have the followmg [3 us-cem Vulnerabilit Notes “mm Us associated resources: l: OVAL DefinltIOUS Whisanmrliu-tnffiu Figure 1.7. A screenshot of NVD vulnerability search Despite the positive features of NVD, there are some other features which cause confusion. The search engine provides many search options while the key word search is not very extensive. In fact, when searching for buffers overflows, there were some vulnerabilities that were identified as buffer overflow vulnerabilities that were not being returned. Figure 1.8 illustrates the search results returned when typing buffer overflow into the search with the year of 2006 specified. There are 583 vulnerabilities that are 20 returned. However, if one selects just buffer errors from 2006 as specified within Figure 1.9, only 30 vulnerabilities are returned. When limiting the buffer errors to the keywords buffer overflow, only 19 vulnerabilities are returned for 2006. As 23 classifications have been chosen to represent a subset of CWE, many of the vulnerabilities have not been set to fall into any of these categories after searching for them which leads to inaccurate results. Another problem that was found with NVD comes within the XML downloads. On the site, there is a place where one can download the various years’ worth of CVE data. The problem comes when one tries using the categorization aspects within the CVE. The categories are inconsistent with the new CWE categories. Figure 1.10 illustrates the vulnerability classifications that are found in the XML of the 2007 download. The file is not using the CWE vulnerability classification standard. Instead it lists twelve classifications which are illustrated on the left of Figure 1.10. 
Next to the twelve classifications, is the number of times that the classification is found within the vulnerabilities. Within the XML file, some vulnerabilities fall into several of the classifications while others do not fall into any of the listed classifications. The right side of Figure 1.10 shows the exact classifications that are found associated each vulnerability is associated with, as well as the number of times that classification is found within the vulnerabilities. According to Bishop, allowing multiple categories can be a good thing so this may prove to be a problem. However, NVD is using two different vulnerability classification metrics. Another concern is that 542 of the vulnerabilities are classified as unknown. 21 Sponso red by - ‘ V l NlSI- DHS National Cybor Security Division/US- CERT y N I 1.111 1 n. si 011" atrium 1., National Vulner‘ability Database autuiiiatinq vulnerability managemenMSeuity int-usurenient ("1d coniplidrite cliet king Vulnerabilit . Checklists Product Dictionary Impact Metrics, Data Feeds Statlstlcs There are 583 matching records. Displaying matches 1 through 20. NVD is the U5. ,_ "‘ ' . ; Next20M teh government repostory of a as standards based CV}; 2006- 9911 wxfimm Summary: Multiple buffer overflows in Computer Assooates (CA) BrightStor ARCserve Backup R115 Server before SP2 allows remote attackers to execute arbitrary code in the Tape Engine (tapeeng.exe) via a crafted RPC request With (1) opnum 38, which IS not properly handled in TAPEUI'IL.dll 11.538840, or (2) opnurn 37, which is not properly "“an “cm" handled In TAPEENG.dll 11.538840. “a“m" published: 12/31/2006 CVSS Sever! : 1 automation of when ‘ compiance (e. g. FISMA). Resource Status CVERQ‘LG’ 5299 m oontaInS' Summary: Stack-based buffer overflow in http.c in Karl Dahlke Edbrowse (aka Command 30603 Eli II I I'll! line editor browser) 3.1.3 allows remote attackers to execute arbitrary code by operating an FTP server that sends directory listings With (1) long user names or (2) long group names. 15° m Published: 12/31/2006 CVSS Se ' ' H h QVLJQQG 0‘19“ 3259mm Summary: Buffer overflow in the Bluetooth Stack COM Server In the widcomm Bluetooth 14148 W Stack, as packaged as Widcomm Stack 3.x and earlier on Windows, Widcomm Last updated. 00/13/08 BTStackServer 1.4.2.10 and 1.3.2.7 on Windows, Widcomm Bluetooth Communication CVEPu mtg; Software 1.4.1.03 on Windows, and the Bluetooth implementation in Windows Mobile or 15 “mural rim. 7 Windows CE on the HP IPAQ 2215 and 5450, allows remote attackers to cause a denial of Figure 1.8. A screenshot of the search results returned from NVD when buffer overflow is typed into the keyword search and the year 2006 is specified Spon riso . , \ ¥ DHS National Cyber Security Division/US- CERT . ' .. I 31... l... 1 mm 1:1hnnlur” National Vulnerability Database automating vulnerability managemeht, rity measurement, and compliance checking Vulnerabilities ' Product Dictionary Impact Metrics Data Feeds MIS on and Overview There are 30 matching records Displaying matches 1 through 20. NVD is the U.s. fi"’“—“‘ government repository of 13-93139“ flicfigij standar do has ad (y 2006 6149 vulnerabity management data. This data enables Summary: Buffer overflow in the parse_expressron function in parse-config in OpenSER 1.1.0 allows attackers to have an unknown impact Via a long st: parameter. ”mum?" °f Published: 12/26/2006 man” . cvss Severn - Huh management, security measurement, and CVE- 2006 9696 Qgflgmiiilr .9vol.gel:1§1§s compiance (9.9. FISMA). 
Summary: Double free vulnerability in Microsoft Windows 2000, XP, 2003, and Vista allows ‘ , local users to gain privrleges by calling the Messagesox function With 3 MB SERVICE NOTIFICATION message With crafted data, which sends a HardError message NVD contains: to Client/Server Runtime Server Subsystem (CSRSS) process, which IS not Vproperly handled 30603 1218!! l I '11 when invoking the UserHardError and GetHardErrorText functions in WINSRV .DLL Published: 12/21/2006 “OW CVSSSe verit (y: a 9 SMediumz Figure 1.9. A screenshot of the search results returned from NVD when the CWE category of buffer errors is selected from the list of vulnerability classifications. 22 . i ffe 4 - input buffer 602 - other 42 .npUt bu r 61 - input bound 67 - Input bound, design 5 ° input bound 74 - unknown 542 - input bound buffer 2 - 311! 15 - design, mfigfi . u n kn own 542 - design 723 - input, config 20 0 gm, 25 - input 4107 - input bound, other 1 - con 2 37 - input er, at r 1, . fi buff he ' d 95'8“ 952 - exception 271 - access, input buffer 3, - - design, race 4 - input buffer, design 3 0 mm” 4352 - race 34 - input, racel ° config 68 - input bound, exception 1 - input, any 5 ‘ . . 361 - access 202 - input, exception, env 1 exception - input, design 128 - input buffer, exception 3 0 r3 ce 39 - Input, exception 49 - exception, gm 1 - design, 3933 - input buffer, m 2 ° access 305 - design, exception 28 - access, input, design3 - access, design 49 - access, exception 7 o Other 44 - access, input 38 ° input bound buffer 2 - access. confie 3 Figure 1.10. Illustration of NVD vulnerability categorization confusion Open Source Vulnerability Databasc,OSVDB, offers an extensive vulnerability database which incorporates: Bugtraq ID, CVE ID, 188 X—Force ID, Nessus Script ID, Related OSVDB ID, Snort Signature ID, Secunia Advisory ID, FrSIRT Advisory ID, OVAL ID, CIAC Advisory, CERT, CERT VU, Security Tracker, and Milerm ID. OSVDB has made several vulnerability classifications which include Location, Attack Type, Impact, Solution, Exploit, Disclosure, OSVDB. The search feature allows the user to select any the vulnerability classifications, select a reference point, input key words to find within the title or text, the vendor or product name, or a time period. Figure 1.1 l illustrates the elaborate search capabilities. The limitation of OSVDB comes within the search. Keywords are only allowed to be found within the vulnerability titles. By allowing only keywords to be searched for within the title, one is not able to get access to all vulnerabilies. OSVDB lists it’s vulnerability classification within the title. If one picks search terms that are too general or specific for the title, they will not receive the 23 proper results. Figure 1.12 contains the three search results that are returned for buffer overflow for the year of 2006. It is hard to believe that only three of the vulnerabilities from 2006 were buffer overflows. Advanced Search l I“ iifliWor‘ds. v_ '.__ w, ,H— .u-.. .‘ 7-- -__...- q~-w_,._“ Disclosure Date Range: L. V _ RBfBl'Bl'lW! ' _- :Any: ._ _ *_ V Text: “WWW _._- ~ _ M J Vendor/product: i Vulnerability Classification Location lit-tad: Tue M Hidden D Authentication Management D c n U Physical Access Required NPM'OP ‘ D No solution B ‘ Denial of Service D Local Access Required D H . k' Workaround D Remote/Network Access Required 1:“ m? D' ' D Loss of Confidentiality Ci patch mm! n i r But all Remote Din; o u "u . D Loss oflnteority D Upgrade strum D Dialup Access Required U: . 
Figure 1.11. OSVDB vulnerability search
[Figure 1.12 image: the three OSVDB search results returned for the title query buffer overflow with disclosure dates between January 1, 2006 and December 31, 2006.]
Figure 1.12. The search results returned when searching for buffer overflows for the year 2006
Secunia offers an advisory, vulnerability, and virus database. Browsing options include historic advisories, listings by product, and listings by vendor. Categorization within Secunia includes Impact, Critical Levels, and Where. Users are allowed to input keywords as well as select an option from the different categorizations when searching through the database. Figure 1.13 shows a snapshot of the advanced search part of the website. Figure 1.14 shows the results returned when searching for buffer overflow 2006. 2006 was added into the search to find vulnerabilities only from the year 2006. However, vulnerabilities from 2007 are featured within the results in Figure 1.14. Overall, Secunia offers a great search tool combined with basic classifications.
[Figure 1.13 image: the Secunia search form for the advisory, vulnerability, and virus database, with fields for headline, body text, and CVE reference, plus browsing links for historic advisories and advisories listed by product or vendor.]
Figure 1.13. Secunia vulnerability search
[Figure 1.14 lists the Secunia advisories matched by the query, e.g. "JustSystems Multiple Products Buffer Overflow Vulnerability" (2006-12-05) and "Borland Products idsql32.dll Buffer Overflow Vulnerability" (2006-11-29), including several advisories dated 2007 and 2008.]

Figure 1.14. A screenshot of results from buffer overflow 2006 as the key word search

1.4 Trend analysis

NVD is the only vulnerability database with any sort of trend analysis. Figure 1.15 shows the trend selections that one can make. The image shows that the capabilities are basically the same as those of the search, but there is no keyword search. Therefore, the user is not able to gain any trend analysis from customized search results. Figure 1.16 demonstrates the statistical results, while Figure 1.17 shows the graphs of the results from choosing buffer errors from the Vulnerability Category and selecting the time period of January 2003 through March 2008. Figure 1.16 indicates that the statistics are not being calculated correctly. Within the statistics given, it claims that there are 29 buffer error vulnerabilities in 2006, which constitute 0% of the makeup of vulnerabilities. It is not possible for 29 vulnerabilities to constitute 0% of the vulnerabilities in 2006. The error is due to the high number of vulnerabilities in 2006 combined with rounding to whole percentages. There should still be some way to compute the actual value or provide a more accurate computation.
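To make the rounding failure concrete, the sketch below reproduces the arithmetic under the assumption that NVD rounds to whole percentages (this is assumed behavior, not NVD's actual code). The count of 29 comes from the discussion above; the 2006 total of 6,655 CVE entries comes from the NVD totals reported later in Figure 5.8.

    # Sketch of the apparent rounding problem; assumed behavior, not NVD's code.
    matching = 29    # buffer error vulnerabilities in 2006 (Figure 1.16)
    total = 6655     # CVE entries NVD lists for 2006 (cf. Figure 5.8)

    # Rounding to a whole percent loses the value entirely:
    print("%d%%" % round(100.0 * matching / total))    # prints 0%

    # Carrying two decimal places preserves it:
    print("%.2f%%" % (100.0 * matching / total))       # prints 0.44%

Simply reporting the percentage with two decimal places would resolve the apparent contradiction in Figure 1.16.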
[Figure 1.15 shows the NVD CVE and CCE Statistics Query Page, a general purpose vulnerability statistics generation engine with selections for vendor, product, search start and end dates, CVSS Version 2 metrics (severity, access vector, authentication, confidentiality, integrity, availability, access complexity), vulnerability category, and associated resources such as US-CERT Technical Alerts, US-CERT Vulnerability Notes, and OVAL queries, along with a note cautioning that statistics related to Linux distributions must be interpreted carefully.]

Figure 1.15. Criteria selection when using the NVD statistics query page

[Figure 1.16 shows the NVD Statistics Results Page for vulnerabilities that occurred after January 2003, occurred before April 2008, and have the vulnerability type Buffer Errors, with the following table of matching data:]

Year          2008   2007   2006   2005   2004   2003
# of Vulns     174    387     30     24     17     46
% of Total      9%     6%     0%     0%     1%     3%

Figure 1.16. The statistical results of choosing buffer errors from the Vulnerability Category and selecting the time period of January 2003 through March 2008

Figure 1.17. The graphical results of choosing buffer errors from the Vulnerability Category and selecting the time period of January 2003 through March 2008

CWE also offers a form of trend analysis (Christey). Because CWE is not a vulnerability database, it offers only the statistics of CVE vulnerabilities broken down into 41 classifications. The trend analysis is more of a statistical study which offers only tables.
A sample of the analysis is shown in Figure 1.18. The rows are ordered by the classifications used, while the columns indicate the totals from 2001 until 2006. Anyone who uses the provided trend analysis has no concept of which vulnerabilities fall into the classifications used in the analysis; therefore, the user would have a hard time making any changes to the classifications. One also comes to appreciate the value of a graph after reading through the large tables, as it is hard to put the tables into perspective.

[Figure 1.18 reproduces a section of the CWE "Table 1: Overall Results", covering 18,009 CVE entries from 2001 through 2006. The leading classifications include xss (13.8%), buf (12.6%), sql-inject (9.3%), php-include (5.7%), dot (4.7%), infoleak (3.4%), dos-malform (2.8%), format-string (1.7%), crypt (1.5%), and perm (1.3%), among others, each broken down by year.]

Figure 1.18. A section of the CWE trend analysis

OSVDB has announced plans for a statistics project for Google Summer of Code 2008. This project is to create a flexible framework that can provide useful statistics on vulnerabilities from OSVDB. The project should take into consideration all of the fields and classifications in OSVDB:

- Should create and generate standard/most popular graphs and charts each day and make available
- Should create statistics that allow very flexible/detailed stats to be dynamically generated on demand by user
- Some examples of statistics required:
  - # vulns based on disclosure year
  - Detailed stats based on each vuln classification option (ALL OPTIONS)
  - # of vulns by vendor
  - # of vulns by product
  - # of vulns that do not have a solution (and by vendor)
  - Time from when a vuln was discovered and then disclosed
- Create stats application that allows user to dynamically generate stats based on their own requirements
- Trend the number of vulns released per day

("OSVDB GSoC 2008 Project Ideas")

Although this has not yet been implemented, it helps to illustrate the need for vulnerability analysis. From the list of features above, VACT is able to accomplish all tasks except for finding the time from when a vulnerability was discovered and then disclosed and trending the number of vulnerabilities released per day. The user is able to use the options within VACT to accomplish all other trends. Below is a summary of the problems that were found within the vulnerability databases.
- Each vulnerability database only allows one search at a time
- There is no way to obtain a copy of the search results
- Each of the vulnerability databases returned a different number of results when searching for buffer overflow within the year 2006
- US-CERT contains a truncated list of vulnerabilities
- NVD does not follow the vulnerability classifications
- OSVDB only searches through the vulnerability title
- US-CERT, Secunia, and OSVDB do not offer trend analysis
- NVD trend analysis is not customizable

Chapter 2: VACT Overview

2.1 Framework

Customizability is an important characteristic of this project which sets it apart from other tools. One way to provide customization is to use a powerful yet simple programming language that allows the user to make some changes without the need to develop a new tool for the task being accomplished. VACT requires no programming knowledge to work; however, knowledge of programming allows one to customize the tool. Another way to aid users is to provide a flexible tool that has enough range to supply useful information to a variety of users. The purpose of VACT is to provide a quick and easy way to search through and analyze vulnerabilities. VACT brings together vulnerability classification, vulnerability search, and trend analysis. In order for this to happen, the tool needs to provide a common interface so that the user does not have to search through various vulnerability databases for vulnerabilities. For the interface to be successful, it must be able to accommodate the needs of many different users. To make this possible we rely on a simple, robust front end solution. In order to provide a solid framework, Python is the chosen programming language. It offers a powerful yet robust object-oriented environment. VACT is set up so that it must gather vulnerabilities from vulnerability databases and then parse through a large amount of text. Python does especially well with string processing and Internet retrieval.
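As a small illustration of these two strengths, a sketch only and using a placeholder URL rather than one of the sites VACT actually crawls, the Python standard library can fetch a page and scan it for a keyword in a few lines:

    import urllib2

    # Placeholder URL; VACT itself targets the US-CERT, NVD, and OSVDB sites.
    url = "http://www.example.com/vulnerabilities.html"

    # Internet retrieval: one call returns the page body as a string.
    page = urllib2.urlopen(url).read()

    # String processing: normalize case, then look for a classification term.
    if "buffer overflow" in page.lower():
        print("The page mentions a buffer overflow.")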
2.2 Classification and Search

We found a great deal of disparity within the classification schemas of the vulnerability databases. Even with all the various classification options, we were not able to select the vulnerabilities out of the databases that we wanted: we could not find a vulnerability database that could find buffer errors within the past year that allowed privilege elevations. Therefore, a main feature within the Vulnerability Analysis and Classification Tool is to allow the user to specify the classifications. Our approach allows the user to preselect all the keywords that are necessary and then perform a search. Another feature for user convenience, which no vulnerability database offers, is the ability to perform multiple vulnerability searches for comparison. Figure 2.1 shows a screenshot of the Vulnerability Analysis and Classification Tool. Within the figure, one is able to see the simplicity of the design. Section 3.2 explains the user interface in detail.

[Figure 2.1 shows the initial VACT page: an Add Variable button, start and end date selections, radio buttons for US-CERT, the National Vulnerability Database, and the Open Source Vulnerability Database, and Add Search and Find Vulnerabilities buttons.]

Figure 2.1. A screenshot of VACT when first initialized

Allowing the user to specify the classification creates a better tool than one using preclassified vulnerabilities. Even though CWE has many great features, it is constantly changing. From December 2007 to April 2008, CWE went from draft 7 to draft 9, and as the drafts changed, so did the various classification schemas used within them. The classifications given do not have all the possible vulnerabilities listed with them; therefore, one would have to classify each vulnerability according to the classification tree within the CWE draft being used. Another limitation of using CWE is that when a new classification is made, the classification is not instantly reflected within the table. Allowing users to search for vulnerabilities by making their own classifications alleviates the problems stated above, as the user now has the freedom to use CWE as a reference or make up their own classifications.

2.3 Trend Analysis

After the vulnerabilities are gathered, VACT returns the results to the user in graphical form. Figure 2.2 illustrates an example of an output page returned to a user. The user is also given an option to download a CSV file containing the vulnerabilities found to match the classification criteria. A simple set of statistics is also returned from the search. The statistics include the total number of vulnerabilities, the total number of vulnerabilities within each category, and a breakdown of which characteristics are found within the vulnerabilities. There are currently no vulnerability databases which allow a user to download the vulnerability search results or that return trend analysis information based upon the search results. Section 4.1 will explain the trend analysis functionality in further detail.

[Figure 2.2 shows a VACT results page with three analyses, each pairing a bar chart of matching vulnerabilities per year (2000-2008) with a table of the total, matching, and percent figures for each year.]

Figure 2.2. Results page from VACT

Chapter 3: Classification and Search

3.1 Strategy

As one has gathered from the background, two key components of the Vulnerability Analysis and Classification Tool are the classification and search features. The implementation of these tasks consists of three parts. The first is gathering the user's vulnerability classification schema that will be searched. The next is the behind-the-scenes work of gathering the database information. The third and final task is performing the search upon the gathered data. Before going into detail on how the search is configured, I will first discuss the search strategy being used. String search was chosen on the basis that it offers a simple yet effective search, and the vulnerability descriptions offer enough detail to make it work. When making this choice there were a few tradeoffs that needed to be considered.
Vulnerabilities are submitted to the various databases by various researchers and organizations, so the wording is not always consistent between vulnerability descriptions. Another important consideration is whether the descriptions accurately depict the vulnerability. Before someone submits a vulnerability and offers a description, they must have enough knowledge of the situation to know where the error is occurring. Therefore, we argue that if one has enough knowledge to find and submit a vulnerability, then that person is able to accurately describe where the application is failing.

3.2 User Interface

The Vulnerability Analysis and Classification Tool uses a web-based interface to interact with the user. It is composed of an HTML form which relies on JavaScript to help add words and new search boxes. As seen in Figure 3.1, the user interface allows the user to input classification variables for a search, add another search, and submit the search queries. For any individual search, the user is allowed to specify the vulnerability database, the time frame to search, and any classification words or phrases that should be used within the search. To add a word or phrase, the user must input it into the associated textbox and select Add Word. The Add Word button calls a JavaScript function which adds the word or phrase into the HTML form. Finally, the user is allowed to submit the search. Clicking the Submit button sends the associated form of search variables to the Python search and analysis code.

[Figure 3.1 shows the same initial VACT page as Figure 2.1.]

Figure 3.1. A screenshot of VACT when first initialized

3.3 Functionality

Upon reception of the search form, the database variables must be obtained. There are two main methods of obtaining the vulnerability information from the vulnerability databases: crawling through the web site to gather the necessary information, and downloading a premade file meant for download. Crawling through the site is necessary because some sites do not allow one to download the information that is available. Once the vulnerabilities are downloaded, no matter what the source, they are put into a Python dictionary. The dictionary uses the vulnerability name followed by the date as a key, and a tuple containing the vulnerability description as a set paired with the date as the value. The search function accepts the dictionary of vulnerabilities as well as the list of classification words. The function loops through each vulnerability, checking for each classification term in the list. If a classification term is not found within the vulnerability description, the vulnerability is removed from the dictionary. Once the search has completed, we send the results to the trend analysis.
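The storage layout and search loop just described can be sketched as follows. This is an illustrative reconstruction from the description above, not the actual VACT source (which appears in the appendix), and the sample entries are made up.

    # Key: (vulnerability name, date).
    # Value: (set of words from the description, date).
    vulns = {
        ("VU#123456", "2006-03-01"):
            (set("buffer overflow in example service".split()), "2006-03-01"),
        ("VU#654321", "2007-07-15"):
            (set("cross site scripting in example app".split()), "2007-07-15"),
    }

    def search(vulns, classifications):
        """Remove every vulnerability whose description lacks any term."""
        for key in list(vulns.keys()):       # copy the keys so we may delete
            description = vulns[key][0]
            for term in classifications:
                if term not in description:
                    del vulns[key]
                    break
        return vulns

    matches = search(vulns, ["buffer", "overflow"])
    print("%d vulnerability matched" % len(matches))   # prints: 1 vulnerability matched

Storing the description as a set makes each membership test a constant-time operation, which is one reason the searches reported below complete in seconds.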
3.4 Efficiency

The runtime of VACT is dominated by the amount of data that must be downloaded. To illustrate the runtime of the search function, several test scenarios were set up. The first test set uses the US-CERT vulnerabilities, nearly 2,400 in all, which must be downloaded using the web crawler method. The second test set contains the 2007 NVD vulnerabilities, over 6,000 in all. The third test set contains the NVD vulnerabilities from 2003-2007, over 22,000 vulnerabilities. Downloading the first set of vulnerabilities took an average of close to 1 minute and 30 seconds, while downloading the second set from NVD took an average of 1 minute and 45 seconds. The third set took the longest, at 5 minutes, to download. Therefore, we recommend a preconfigured download if at all possible. Composing the dictionary from each set of vulnerabilities took an average of one second.

Using the same vulnerabilities within the setup above, tests were run to see how long it would take to return the search results after the vulnerabilities are gathered. In the tests, various trials were done by varying the classifications from 1-6 words and the number of searches from 1-6 searches. For example, one test contained four searches consisting of (buffer and overflow), (buffer), (overflow), and (exception, microsoft, the, a, and, .dll). This example, like all other searches, returned the results of all searches within two seconds for each trial set of vulnerabilities. Another test was run on the third set to make sure that there would be no problems searching the data: the third set was run with twelve search words, and the search returned within one second.

3.5 Naive Bayesian Classification

String search is a fast and efficient way to match keywords, yet there is a limitation to the string search: there are some vulnerability classifications which cannot be summarized within a reasonable number of keywords. To help with such cases, the Vulnerability Analysis and Classification Tool includes a generic naive Bayesian approach. Naive Bayesian classification is an algorithm that uses the probability of occurrence to classify an attribute. The naive Bayesian classifier takes as input a CSV file returned from the string search. The descriptions of each classification are combined into a dictionary with the words as keys and the numbers of occurrences as the values. Once the dictionary is filled, the common words are stripped out. The algorithm is then ready to find words that occur only once and only in one class. When such a word is found, it is tagged as unknown. The unknown words are added together as a special case that handles words that do not belong to a given class. With the words all sorted and categorized according to classification, the next step is to compute the probabilities of each class. P(c) is computed for each class by taking the number of occurrences of the class and dividing it by the sum of class occurrences. P(word | c) is computed for each word within the class by taking the occurrences of the word and dividing by the total words in the class. Now that the probability of a classification and the probability of a word within a classification are computed, the algorithm is ready to classify vulnerabilities. Figure 3.2 shows pseudo code used to compute the probability that a vulnerability description belongs to a classification. After computing the probability for each classification, the best one is returned and the vulnerability is associated with that classification.

    For each class:
        Pclass = P(c)
        For word in sentence:
            Pclass = Pclass * P(word | c)

Figure 3.2. Pseudo code showing how a vulnerability's probability is computed according to the naive Bayesian classification
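Fleshing out Figure 3.2, the sketch below shows the whole classification step as described above. The training counts are invented for illustration, and the handling of unseen words through the unknown bucket is simplified relative to VACT's version.

    # word_counts[c] maps each word seen in class c to its occurrence count.
    word_counts = {
        "buffer":    {"overflow": 8, "boundary": 3, "unknown": 1},
        "injection": {"sql": 6, "query": 4, "unknown": 1},
    }
    class_counts = {"buffer": 11, "injection": 10}

    def p_class(c):
        # P(c): occurrences of the class over all class occurrences.
        return float(class_counts[c]) / sum(class_counts.values())

    def p_word(word, c):
        # P(word | c): occurrences of the word over total words in the class,
        # falling back to the special "unknown" bucket for unseen words.
        counts = word_counts[c]
        total = float(sum(counts.values()))
        return counts.get(word, counts["unknown"]) / total

    def classify(description):
        best_class, best_p = None, 0.0
        for c in class_counts:
            p = p_class(c)                     # Pclass = P(c)
            for word in description.split():
                p *= p_word(word, c)           # Pclass = Pclass * P(word | c)
            if p > best_p:
                best_class, best_p = c, p
        return best_class

    print(classify("sql query boundary"))      # prints: injection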
Naive Bayesian classification is offered as an aide to help enhance the string search. Naive Bayesian, like any other approach, has its limitations. The largest limitation is that it is confined to the classifications that are given to the algorithm: if only two classifications are given, any vulnerability must fall into one of the two. When running the algorithm after computing the probabilities of the classifications, the user may find that a vulnerability does not fall into its original classification. Despite its limitations, naive Bayesian classification presents users a way to find new words and vulnerability classifications.

Chapter 4: Trend Analysis

4.1 Evaluation of Features

The second key element within the thesis is the trend analysis, as there is currently no implementation which encompasses the user's search results. The trends given are simple and efficient. They help to give the user a basic overview of how the classification schema presented fits in with the vulnerabilities. The trend analysis works by processing the results that are returned by the search and turning them into graphs, statistics, and a CSV file. The results page for searching US-CERT for buffer overflow, buffer, and overflow for the years 2000-2008 can be seen in Figure 4.1. Accompanying each search is a graph with the number of matching vulnerabilities, that is, the vulnerabilities that match the classification as defined within the search. The table associated with each graph shows the total number of vulnerabilities found per year, the number of matching vulnerabilities found per year, and the percent of vulnerabilities which match per year. If a user clicks on the word Analysis, a CSV file containing the matching vulnerabilities becomes available; for example, the CSV for Analysis 1 would contain 601 entries. The CSV file contains the identification number of the vulnerability within the database searched, the date the vulnerability was published, and the description of the vulnerability.

[Figure 4.1 shows the results page described above: for each of the three searches, a bar chart of matching vulnerabilities per year from 2000 through 2008 and a table of the total, matching, and percent figures, with 601, 611, and 643 total matches respectively.]

Figure 4.1. Results page from VACT

The graphs are computed with the aid of PyChart (Saito), a graphing utility written by Yasushi Saito. VACT contains a function linking the search results to the graphing utility, and once the images are created, they are displayed within the results page. Referring back to Figure 4.1, one can see that the graphs given are a breakdown of the vulnerabilities over time. A bar chart is plotted for each vulnerability classification searched, in terms of years, and the results from each search are also combined into one large bar chart. The CSV file is made by writing the categories used within the search to file. The basic format includes the vulnerability id, vulnerability date, and vulnerability description. Because the dictionary key is the vulnerability name and date, with the vulnerability description as the value, it takes only one loop through the values to create the CSV. While looping through to create the CSV, the vulnerabilities are counted by their associated year, so creating the graphs becomes as easy as counting the values contained within each list. The set of statistics returned includes the breakdown of vulnerabilities per year as well as the total vulnerabilities found.
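A minimal sketch of the CSV writing and year counting follows, reusing the illustrative dictionary layout from Section 3.3; the file name and sample entry are placeholders, not VACT's actual output.

    import csv
    from collections import defaultdict

    # Key = (vulnerability id, date), value = (set of description words, date).
    results = {
        ("VU#123456", "2006-03-01"):
            (set("buffer overflow in example service".split()), "2006-03-01"),
    }

    per_year = defaultdict(int)
    writer = csv.writer(open("analysis1.csv", "wb"))  # "wb" suits Python 2's csv
    for (vuln_id, date), (words, _) in results.items():
        # Basic format: vulnerability id, date, description (the stored words).
        writer.writerow([vuln_id, date, " ".join(words)])
        per_year[date[:4]] += 1                       # count by published year

    print(dict(per_year))                              # e.g. {'2006': 1}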
4.2 Efficiency

In the last chapter, we found that VACT is limited by the time it takes to download the vulnerability lists from each database being searched. Even with the download time being the dominant factor, we would like the trend analysis to be as efficient as the searching. The trend analysis tests use the same sets of vulnerabilities that were used for the search tests (the US-CERT vulnerabilities, the NVD 2007 vulnerabilities, and the NVD 2003-2007 vulnerabilities). Each set of vulnerabilities was used as a result set that was passed to the trend analysis. Generating the statistics and writing the CSV file took less than 20 seconds, and creating the graphs for each of the sets took an average of 5 seconds.

Chapter 5: Real World Example

5.1 Problem

Every now and again an article appears stating which operating system provides the best security. Many times the writer of the article bases this claim on the number of known vulnerabilities that exist within the various operating systems; at other times, an author bases it on how fast vulnerabilities have been patched. One is able to use the Vulnerability Analysis and Classification Tool to verify whether such an author is giving accurate results.

5.2 Results

When searching for the results, I was faced with some interesting problems. Should the results come from US-CERT, because it deals only with severe vulnerabilities; from NVD, because it offers all CVE vulnerabilities; or from OSVDB, because it offers the most vulnerabilities? To show the flexibility of VACT, and why authors might report various results, I decided to compute the results from each database. Another problem lies in the classification used to find the resulting set of vulnerabilities. The sets of vulnerability classifications that were decided upon are illustrated within Tables 5.1 and 5.2. Both tables provide a summary of the results obtained using VACT: Table 5.1 covers the Microsoft classifications, while Table 5.2 covers the Apple classifications. Figures 5.1-5.21 display the results returned by VACT. The data obtained from VACT illustrates a vast difference not only between the databases, but also within the classification that is used for the search. There is currently no quick and easy way to achieve the same results within the individual vulnerability databases. Within each vulnerability database, the search would need to be run a minimum of 13 different times using the various classifications indicated within Tables 5.1 and 5.2. After each search, the user would need to record the results returned to gather the statistics, and would then need to compute the trends. Even after going through this process, the user would not have lists of the vulnerabilities found by each classification.
When viewing Table 5.1, one will find that Microsoft classifies 474 vulnerabilities within US-CERT, 852 vulnerabilities within NVD, and 771 vulnerabilities within OSVDB. One will also find this variance throughout the search results. To gain an accurate scope of the vulnerabilities, one would need to look through the CSV files to find the discrepancies within the vulnerabilities returned in order to obtain an overall count of vulnerabilities for both Microsoft and Apple.

NVD
Search String                 Results   Percent of Total   VACT Figure
microsoft                        1116               4.08          5.8
microsoft, windows                432               1.58          5.8
microsoft, windows, xp            260               0.95          5.9
microsoft, windows, vista          46               0.17          5.9
microsoft, xp                     552               2.02          5.10
microsoft, vista                   46               0.17          5.10
windows xp                        399               1.46          5.11
windows vista                      55               0.20          5.11
xp                               1887               6.90          5.12
vista                              67               0.25          5.12

US-CERT
Search String                 Results   Percent of Total   VACT Figure
microsoft                         422              19.19          5.1
microsoft, windows                131               5.96          5.1
microsoft, windows, xp             32               1.46          5.2
microsoft, windows, vista           2               0.09          5.2
microsoft, xp                     131               5.96          5.3
microsoft, vista                    2               0.09          5.3
windows xp                         40               1.82          5.4
windows vista                       6               0.27          5.4
xp                                390              17.74          5.5
vista                               6               0.27          5.5

OSVDB
Search String                 Results   Percent of Total   VACT Figure
microsoft                        1128               2.71          5.15
microsoft, windows                376               0.90          5.15
microsoft, windows, xp            167               0.40          5.16
microsoft, windows, vista          40               0.10          5.16
microsoft, xp                     490               1.18          5.17
microsoft, vista                   40               0.10          5.17
windows xp                        276               0.66          5.18
windows vista                      51               0.12          5.18
xp                               2079               5.00          5.19
vista                              72               0.17          5.19

Table 5.1. Summary of results and classifications from VACT pertaining to Microsoft

NVD
Search String                 Results   Percent of Total   VACT Figure
apple                             524               1.92          5.13
mac os x                          442               1.62          5.13
apple, mac os x                   264               0.97          5.14

US-CERT
Search String                 Results   Percent of Total   VACT Figure
apple                             139               6.32          5.6
mac os x                           64               2.91          5.6
apple, mac os x                    54               2.46          5.7

OSVDB
Search String                 Results   Percent of Total   VACT Figure
apple                             351               0.84          5.20
mac os x                          477               1.15          5.20
apple, mac os x                   102               0.25          5.21

Table 5.2. Summary of results and classification from VACT pertaining to Apple

[Figures 5.1 through 5.21 each pair bar charts with per-year tables of the total, matching, and percent values for the searches summarized in Tables 5.1 and 5.2.]

Figure 5.1. VACT results searching for Microsoft and Microsoft windows within US-CERT

Figure 5.2. VACT results searching for Microsoft windows xp and Microsoft windows vista within US-CERT
Figure 5.3. VACT results searching for Microsoft xp and Microsoft vista within US-CERT

Figure 5.4. VACT results searching for windows xp and windows vista within US-CERT

Figure 5.5. VACT results searching for xp and vista within US-CERT

Figure 5.6. VACT results searching for apple and mac os x within US-CERT

Figure 5.7. VACT results searching for apple mac os x within US-CERT

Figure 5.8. VACT results searching for Microsoft and Microsoft windows within NVD
Figure 5.9. VACT results searching for Microsoft windows xp and Microsoft windows vista within NVD

Figure 5.10. VACT results searching for Microsoft xp and Microsoft vista within NVD

Figure 5.11. VACT results searching for windows xp and windows vista within NVD

Figure 5.12. VACT results searching for xp and vista within NVD

Figure 5.13. VACT results searching for apple and mac os x within NVD
Figure 5.14. VACT results searching for apple mac os x within NVD

Figure 5.15. VACT results searching for Microsoft and Microsoft windows within OSVDB

Figure 5.16. VACT results searching for Microsoft windows xp and Microsoft windows vista within OSVDB

Figure 5.17. VACT results searching for Microsoft xp and Microsoft vista within OSVDB

Figure 5.18. VACT results searching for windows xp and windows vista within OSVDB
Figure 5.19. VACT results searching for xp and vista within OSVDB

Figure 5.20. VACT results searching for apple and mac os x within OSVDB

Figure 5.21. VACT results searching for apple mac os x within OSVDB

Chapter 6: Conclusion

The Vulnerability Analysis and Classification Tool offers a unique way to find basic statistics on sets of vulnerabilities. There is currently no vulnerability database that is able to provide statistical results for a user's vulnerability classification schema. By providing a customizable schema and a basic framework, VACT can suit the needs of various users. The tool also saves the user disk space by downloading the needed vulnerabilities at each run; the tradeoff is that the download time becomes the constraint on the tool's runtime.

APPENDICES

Setting up VACT

The steps listed below will help set up VACT.

1. Obtain a copy of Python. VACT was tested and run using Python 2.5.
2. Obtain a copy of Mod Python. Instructions for the setup and installation of Mod Python can be obtained at www.modpython.org.
3. Create a csv folder within the web directory. Give the folder APACHE_RUN_USER and APACHE_RUN_GROUP permissions. I found both to be www-data.
4. Download and install PyChart (Saito).
5. Copy files into the web directory.

VACT Code

initial.py

####
# Call init to print out the initial user page for VACT
# Be sure to have the javascript.js file within the same directory
####
def init():
    text = """<html xmlns="http://www.w3.org/1999/xhtml">