  Where's my ROI? - The financial benefits of using JHawk

Any development manager who has managed the delivery of a software system will recognise the technical and productivity benefits of using JHawk. However, in any organisation it often falls to that same development manager to make the business case for purchasing and using JHawk. This section is designed to help you make that case.

Static analysis - approach, costs and benefits

The primary benefit of using a tool in the assessment of the quality of your code is that it provides an objective analysis. Just looking at code is purely subjective. Looking at any code (yours or someone else's) will be prone to bias (consciously or unconsciously). By using an objective tool you are taking one of the most important steps in any quality control system - 'Management by fact'.

Another approach to analysing code quality is to use visual inspection (Code Reviews). This is time consuming and typically involves the use of the project's most valued (and expensive) resources - the most experienced programmers.

Capers Jones has published a series of annual studies analysing the effectiveness of various code quality approaches. His studies are useful for a number of reasons - they are written by a person whose line of business is helping companies improve the quality of their code, they cover a very wide range of products, they cover a long period of time and, most importantly, they attempt to quantify the costs and benefits of different approaches. In the most recent study Capers Jones' analysis of 4 different test cases using combinations of three quality approaches (Static analysis (S), Code Inspection (I) and manual Testing (T)) produced the following results -

Case | Test types | Cost of Testing ($) | Cost of Repairs ($) | Total Cost ($)

The two lowest overall testing costs are for the cases (C1 and C3) where static analysis was involved: in the first case approximately $108000 was saved (compared to the most expensive case, C4) and in the second approximately $100000 was saved. In each case $12772 was the cost of the static analysis, so the cost of the analysis was covered around 8 times over by the savings.

Where we can make a direct comparison (i.e. where the only difference is the use of static analysis) the total cost savings are $33000 (C1 vs C2) and $151000 (C3 vs C4). In the case of C3 vs C2 we see that inspection is more effective than static analysis, reducing the total cost by $11000.
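The arithmetic behind the "covered 8 times" claim can be sketched directly from the figures quoted above. This is a minimal illustration only; the class and method names are invented for the example, and the savings figures are simply those quoted from the Capers Jones comparison.

```java
// Sketch of the ROI arithmetic quoted above, using the figures from the
// Capers Jones comparison ($12772 analysis cost, ~$108000 and ~$100000 saved).
public class StaticAnalysisRoi {
    // Returns how many times over the savings cover the cost of the analysis.
    static double roiMultiple(double savings, double analysisCost) {
        return savings / analysisCost;
    }

    public static void main(String[] args) {
        double analysisCost = 12772.0;  // quoted cost of static analysis per case
        System.out.printf("C1 vs C4: savings cover cost %.1f times%n",
                roiMultiple(108000.0, analysisCost));
        System.out.printf("C3 vs C4: savings cover cost %.1f times%n",
                roiMultiple(100000.0, analysisCost));
    }
}
```

Both multiples come out at roughly eight, which is where the "covered 8 times" figure comes from.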

In the Capers Jones study (page 7) he mentions a number of actions that development managers can take which he suggests have a greater than 90% chance of success in reducing delivered defects. The first three of these are -

  • Formal Inspections (Requirements, Design, and Code)
  • Text static analysis
  • Code static analysis (for about 25 languages out of 2,500 in all)

Capers Jones' analysis also shows significant reductions in defects at delivery at higher SEI CMMI levels (page 12). Two important process areas are introduced at CMMI Level 2 - MA (Measurement and Analysis) and SAM (Supplier Agreement Management). JHawk can play a critical role in both of these by providing an objective quality measure that can be used to assess internally produced and externally supplied (e.g. outsourced) code. The ability to tailor JHawk to a managed quality program means that it can grow with your process definition as you move on to CMMI Level 3, which requires finer levels of control and definition of processes.

A report from 2004 (The Business Case for Software Quality - Bender RBT, Inc) lists a number of observations on the attitudes of companies to testing. Even though this report is from 2004, my own experience and what I read of others' experience suggest that little has changed.

  1. Test automation is not fully deployed
  2. Testing is rarely fully integrated into the development life cycle
  3. Most organizations' test processes are not very disciplined (for example, they are not measurable or repeatable)
  4. Testing is perceived to take too long, cost too much and delay the project
  5. Q/A and test are the first groups to be downsized in budget cuts

There is a 'chain of consequence' in these findings. If you don't do 1 you will find it difficult to do 2. If you haven't done 1 and 2 then it will be very difficult to achieve 3. If you are using a labour-intensive process as a result of 1, 2 and 3 it is hardly surprising that testing is perceived as in 4, and if the feelings mentioned in 4 are widespread there is no reason that you wouldn't do 5.

    JHawk - not just static analysis

JHawk contributes more than just static analysis to the code quality regime. By using the range of metrics produced, users can pinpoint code that is more likely to have errors. This allows code reviews to be 'directed', making sure that susceptible code is inspected thoroughly. Reviewers who have to analyse code within a given time period may well end up rushing their reviews as the deadline approaches - JHawk can push susceptible code to the front of the review queue.
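One way to 'direct' reviews is simply to order the review queue by a risk-related metric exported from the analysis. The sketch below is purely illustrative - the class names and metric values are invented, and this is not part of any JHawk API - but it shows the idea of reviewing the riskiest code first.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: given per-class metric values (e.g. exported from a
// metrics tool), rank the classes so the most defect-prone candidates are
// reviewed first. Names and values are invented for illustration.
public class ReviewQueue {
    static List<String> prioritise(Map<String, Integer> complexityByClass) {
        List<String> queue = new ArrayList<>(complexityByClass.keySet());
        // Highest cyclomatic complexity first, so the riskiest code is
        // reviewed before deadline pressure sets in.
        queue.sort(Comparator.comparing(complexityByClass::get).reversed());
        return queue;
    }

    public static void main(String[] args) {
        Map<String, Integer> metrics = new HashMap<>();
        metrics.put("OrderParser", 41);
        metrics.put("AuditLogger", 7);
        metrics.put("PricingEngine", 23);
        // Prints the classes in descending order of complexity.
        System.out.println(prioritise(metrics));
    }
}
```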

A number of the metrics produced by JHawk (particularly those at class and package level) relate directly to potential design defects. Detecting these early is critical, as design defects are more difficult to fix and have wider implications due to the amount of code that they can affect.

The professional versions of JHawk provide the ability to create your own metrics. This allows the creation of metrics more finely tuned to the type of application being analysed. These metrics do not need to be related to code artefacts; they can use data from defect databases, code repositories, or anything else that you can access via Java code. This allows you to create an integrated view of those factors that may affect the quality of the software that you produce.
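A custom metric of this kind might, for example, combine a code metric with defect history pulled from a bug tracker. The sketch below is an invented illustration of the idea - the formula, weighting and names are assumptions for the example, not JHawk's actual extension API.

```java
// Hypothetical custom metric combining a code metric with data from outside
// the code base (here, a defect count per class from a bug tracker).
// The formula and weighting are invented for illustration.
public class RiskMetric {
    // A simple weighted "risk score": complexity plus a penalty for each
    // historical defect recorded against the class.
    static double riskScore(int cyclomaticComplexity, int pastDefects) {
        final double DEFECT_WEIGHT = 5.0;  // assumed weighting; tune per project
        return cyclomaticComplexity + DEFECT_WEIGHT * pastDefects;
    }

    public static void main(String[] args) {
        // A class with cyclomatic complexity 12 and 3 logged defects.
        System.out.println(riskScore(12, 3));  // 12 + 5*3 = 27.0
    }
}
```

In practice the weighting would be calibrated against your own defect history rather than fixed up front.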

JHawk also provides many reassurances of the quality of the product itself, all of which contribute to reducing the total cost of using it.

• We practice what we preach. We use JHawk in our day-to-day development to ensure that our code meets the highest standards of quality. To be honest, it would be embarrassing if we didn't.
    • JHawk has been constantly available from Virtual Machinery as a commercial product since 2000. It has been under development since 1996 when it was initially used to analyse Smalltalk code.
• JHawk is in use across the world in development environments of all types and sizes - commercial and in-house software development. It is also one of the most popular tools in the academic study of Java software metrics.
    • Comprehensive documentation reduces time spent learning JHawk.
• JHawk is simple to install and configure and is written entirely in Java. JHawk has been tested in Windows, Linux, AIX and Mac OS X environments but should run anywhere that Java runs. JHawk can run as a standalone application and at the command line as part of an automated build process.
• JHawk produces an interchange format that can be used by the JHawk Data Viewer to compare metrics over time (e.g. across different builds).

We hope that this information is useful to you. Please feel free to contact us if you have any questions.



    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.



    All Content © 2020 Virtual Machinery   All Rights Reserved.