# Custom violations evaluator (incubating)

The plugin uses a ViolationsEvaluator to determine what to do with the results collected from all the active tools (if any). The built-in behaviour is provided by the DefaultViolationsEvaluator, which you can read more about below. The plugin's violations evaluation behaviour is not fixed, and it can be customised by providing an implementation of the ViolationsEvaluator interface.

## The DefaultViolationsEvaluator

The plugin has a default mechanism to decide whether a build passes or fails. This mechanism is exposed through the penalty closure:

```groovy
staticAnalysis {
    penalty {
        maxErrors = 0
        maxWarnings = 10
    }
    //...
}
```

This closure instructs the plugin to use a DefaultViolationsEvaluator that will count the number of errors and warnings and compare them against the set thresholds. For more details, see the Configurable failure thresholds documentation.
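Conceptually, the default evaluation reduces to a threshold comparison along these lines. This is a simplified sketch for illustration only, not the plugin's actual implementation; `maxErrors` and `maxWarnings` stand in for the values set in the penalty closure:

```groovy
// Simplified sketch of what DefaultViolationsEvaluator does — illustrative only,
// not the plugin's actual source code.
evaluator { Set allViolations ->
    int totalErrors = allViolations.sum { it.errors } ?: 0
    int totalWarnings = allViolations.sum { it.warnings } ?: 0
    if (totalErrors > maxErrors || totalWarnings > maxWarnings) {
        throw new GradleException(
            "Violations limit exceeded: $totalErrors errors (max: $maxErrors), " +
            "$totalWarnings warnings (max: $maxWarnings)")
    }
}
```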

## Creating a custom violations evaluator

To provide a custom evaluator, implement the ViolationsEvaluator interface and assign that implementation to the evaluator property of the staticAnalysis closure. The evaluator can also be provided as a closure:

```groovy
staticAnalysis {
    evaluator { Set allViolations ->
        // add your evaluation logic here
    }
    //...
}
```

The evaluator is invoked after all the collectViolations tasks have completed, and it is the last step in the execution of the plugin's main task, evaluateViolations.

The evaluation logic can be any arbitrary function that respects this contract:

* The evaluator receives a set containing all the Violations that have been collected by the tools (one per tool)
* If the build is to be considered successful, the evaluator runs to completion without throwing exceptions
* If the build is to be considered failed, the evaluator throws a GradleException

Anything that respects this contract is valid. For example, a custom evaluator might:

* Collect all the report files and upload them somewhere, or send them to Slack or an email address
* Use the GitHub API to report the issues on the PR that the build is running on, à la GNAG
* Only break the build if there are errors or warnings in one specific report
* Or anything else you can think of

For example, this custom evaluator fails the build if more than five PMD errors are found:

```groovy
evaluator { Set allViolations ->
    allViolations.each { violation ->
        if (violation.name == "PMD" && violation.errors > 5) {
            throw new GradleException("PMD Violations exceeded")
        }
    }
}
```
The properties you can read from each Violations object are:
* `name`: the name of the tool; possible values are "PMD", "Checkstyle", "Spotbugs", "KTlint", "Detekt" and "Lint"
* `errors`: the number of errors found during the analysis
* `warnings`: the number of warnings found during the analysis
* `reports`: a list of the generated report files
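Putting these properties together, a purely diagnostic evaluator that never fails the build but logs what each tool found could be sketched as follows (based only on the properties listed above):

```groovy
evaluator { Set allViolations ->
    allViolations.each { violations ->
        println "${violations.name}: ${violations.errors} errors, ${violations.warnings} warnings"
        violations.reports.each { report ->
            println "  report: $report"
        }
    }
    // No exception is thrown, so the build is always considered successful.
}
```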

Please note that the presence of an evaluator property makes the plugin ignore the penalty closure and its thresholds. If you want to provide behaviour on top of the default DefaultViolationsEvaluator, your own evaluator can run its logic and then delegate the threshold counting to an instance of DefaultViolationsEvaluator that you create.
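Such a delegating evaluator could be outlined as below. Treat this as a hypothetical sketch: the constructor arguments of DefaultViolationsEvaluator, the exact name of its evaluation method, and the `uploadReportsSomewhere` helper are assumptions here, so check the plugin source for the real signatures:

```groovy
// Hypothetical outline — verify the DefaultViolationsEvaluator constructor and
// the ViolationsEvaluator method signature against the plugin source.
evaluator { Set allViolations ->
    // Your custom logic first (uploadReportsSomewhere is a hypothetical helper):
    uploadReportsSomewhere(allViolations)

    // Then delegate the threshold counting to an instance you created
    // (here assumed to expose an evaluate(Set) method):
    defaultEvaluator.evaluate(allViolations)
}
```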