User Guide

Getting Started

Please refer to the Install Guide for detailed information on getting Code Dx up and running.

Please note that for .NET analysis, Code Dx requires the installation of the .NET runtime, FxCop (Code Analysis), and CAT.NET. See the Installing .NET Tools section in the Install Guide for instructions on how to install these tools.

Code Dx Quick Start

  1. Launch the downloaded Code Dx installer for your platform

  2. Customize the installer defaults as needed; see the Install Guide for details

  3. Once the installation is complete, the installer will open your Code Dx installation in your default browser

  4. In the login area, sign in using the admin credentials you configured in the installer and accept the license agreement

  5. Once Code Dx is open in your browser, you should see this:

  6. Open the Project List page. At this point, no projects are present in Code Dx. The next step is to create a project by selecting the New Project button and entering the project name.

  7. Then select New Analysis, which is located to the right of the project name.

  8. Select Add File, then upload your own Java source, JVM binaries, C/C++ source, PHP source, Scala source, Ruby on Rails source, JavaScript source, Python source, and/or .NET binaries/source. Note: source and binary files must be uploaded in specific file formats. If you would like, you can use one of the sample datasets recommended in the next section.

Sample Datasets

If you would like to use sample code for testing purposes, the following datasets are all intentionally vulnerable applications used for educational and training purposes. They're referenced by their primary language, although some of them are multi-language.

For the following datasets, you can configure your new project's Git Config to fetch the source directly from GitHub using their git URL.

For other datasets, we recommend that you browse GitHub for different projects and scan some of them for testing purposes. Here are some queries to get you started:

Session Management

Logging In

Code Dx is a web-based application. The first thing you should do is navigate to Code Dx in your browser and log in. If this is the first time visiting the site after installation, the only usable login credentials will be the Super User's credentials, as configured during installation.

If Remember Me is checked, the server will remember your session until you explicitly log out. This means that even if you leave the site and come back, or if the server restarts, you will not need to log in again. The Remember Me option can be disabled entirely or configured to just remember the username, so please keep in mind that the behavior of this option might vary depending on how Code Dx is configured by your administrator.

Once more users are added to the Code Dx system, they will be able to log in using this same form.

Log in as the Super User (for this guide, the Super User's username is admin). Once logged in, the Home page will display the User's name ("admin") at the top next to the user icon, and the log-in form will be gone. Note there are now additional page links to visit.

Logging Out

Logging out can happen in one of two ways. The first is by selecting the Logout option from the user sub-menu of the navigation menu:

The second is an automated logout once your session expires. If you leave the Code Dx site for a certain period of time (this is configuration dependent but is usually 30 minutes) you will be automatically logged out. If you select the Remember Me option when logging in, Code Dx will remember you on that computer for your next site visit, but only if this option is enabled by your administrator.

Code Dx Administration

Admin users have access to the Admin (Manage Site) page, where they can easily manage Code Dx.

The Admin page uses a tabbed layout with a tab for each major management section.

At the top of the page, the License Management section shows information about your current Code Dx license.

When you first arrive on the Admin page, the Projects tab will be active. As you click through to other tabs on the Admin page, your browser will remember the most recent tab you used, and will automatically activate that tab if you revisit the page later.

Many of the lists on the Admin page (e.g. Users, User Groups, Projects) can be filtered by typing in the "Filter <items>" textbox at the top of each respective list.

Projects Administration

The Projects section lists all of the projects in your Code Dx installation. You can create new projects and configure them. There are some differences between the list of projects shown on the Admin and Project List pages:

The Config dropdown menu (the gear button next to each project) provides options for configuring a project. The options in the dropdown are the same as they are on the Project List page, assuming the Project List page is viewed by a user with full permissions.

Projects are deleted by clicking the trash can icon to the right of the project name and confirming the deletion.

For details about the project configurations, please see the Project Management section.

Project Metadata Fields

Project Metadata Fields are an Enterprise-only feature which allows users of Code Dx to create and configure custom metadata for their projects. A form on the Admin page allows for the definition of custom fields which may be used by any project in your Code Dx installation. This section covers how to define those fields. For information about entering values for those fields on projects, refer to the Project Metadata section under Project Management.

Assuming you have a Code Dx license that supports the Project Metadata feature, select the Project Metadata Fields tab on the Admin page.

after selecting the Project Metadata Fields tab

Note: the first time you view the Project Metadata Fields tab, no fields will be defined yet.

To create a field, press the Create a field button.

the "Create a field" form

The Field Type dropdown menu lets you pick how users will enter values into the field:

When you pick "Dropdown" as the Field Type, you must also define at least one "choice" by clicking the Add a choice button.

Creating a dropdown field, need to click "Add a choice"

Each choice must have a Display - this is what a user will pick from the dropdown menu when filling in the field. ID is optional - it's most useful when programmatically consuming metadata. For example, the Dropdown field named "Criticality", shown in the first screenshot, uses the numbers 0 through 4 as IDs and more human-readable values as Displays. For a project configured as "Medium" criticality, the XML would include the number 2 alongside the word "Medium" in the project metadata section.

Creating a dropdown field, one choice created, a second one on the way

Fill in the ID however you see fit, or not at all, but note that if you do specify an ID, it needs to be different from the other IDs for that field.

Filling in the same ID in the same field is an error

You can add, edit, and remove choices for Dropdown fields while editing that field. Don't forget to click OK when you're done. The OK button is located to the right of the Dropdown name.

Sometimes when you try to save a field after having removed choices, a conflict occurs:

Users Administration

The Users section lets admins add new users, control whether they are admins, and reset passwords. Users can be marked as inactive or deleted. The Super User's admin and active states may not be modified.

There are three ways to add users to Code Dx:

To change the type of user being created or added, click the arrow on the right of the Create Local User button and select the desired user type.

Note that the SAML user type will not be displayed if SAML is unconfigured or if you are not using an Enterprise license for Code Dx. The Download SAML Metadata link shown behind the dropdown menu is only visible if SAML is available.

Adding a local user is simple. Just click the Create Local User button to open the New User form. Enter the name and password for the user you want to create, then click Create User.

After adding a few more local users, the Users List will look like this.

To reset an existing user’s password, click the key icon to the right of the Active button and enter the new user password.

Adding an LDAP user is easy as well (note that you need to have LDAP configured in order to add LDAP users – see the Install Guide for instructions on how to configure Code Dx for LDAP integration). Just click the arrow on the right of the Create Local User button, click LDAP, and click the new Add LDAP User button to open its corresponding form.

Since the user already exists in your LDAP system, the only required step is to let Code Dx know that they exist by adding their sign-in name as a new LDAP user. The sign-in name will vary based on the userSearchTemplate used in your LDAP config. For example, with a template of sAMAccountName={0},ou=Users,dc=initech,dc=com and a user with sAMAccountName=blumbergh, their sign-in name (and the name to register) would be blumbergh. Once you've added Bill to Code Dx as an LDAP user, he can log into Code Dx with his Initech password instead of having to remember a new password just for Code Dx.
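As a rough illustration of that substitution, the sketch below shows how a sign-in name fills in the {0} placeholder of a userSearchTemplate to form a full distinguished name. This is purely illustrative; the real lookup is performed by Code Dx's LDAP integration, not by user code.

```python
# Illustrative sketch only: Code Dx performs the real LDAP lookup
# internally. This shows how a sign-in name fills the "{0}"
# placeholder of a userSearchTemplate to form a full DN.
def resolve_dn(user_search_template: str, sign_in_name: str) -> str:
    return user_search_template.replace("{0}", sign_in_name)

template = "sAMAccountName={0},ou=Users,dc=initech,dc=com"
print(resolve_dn(template, "blumbergh"))
# sAMAccountName=blumbergh,ou=Users,dc=initech,dc=com
```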

Adding a SAML user is similar to adding an LDAP user (note that you need to be using an Enterprise license for Code Dx and have SAML configured - see the Install Guide for instructions on how to configure Code Dx for SAML integration). Click the arrow on the right of the Create Local User button, click SAML, and click the new Add SAML User button (the name of this button may change depending on your configuration; this option will not appear if SAML is unconfigured).

Enter the display name of the user to add. This may be different from the username used to sign in to your SAML portal. Check with your SAML provider or other existing SAML-integrated services to determine the display names for your users. For example, Bill Lumbergh may sign in to your SAML portal using [email protected], but his display name reported by the SAML provider might be Bill Lumbergh. In this case, the new user's name would be Bill Lumbergh. (Note that the attribute for the display name can be set manually in your SAML configuration.)

You can easily make any user an admin or change whether or not they are active with a simple switch. Select the trash can icon to the far right to delete a user.

In the screenshot above, Milton has been marked as inactive, and Bill Lumbergh has been made an admin. Users whose Admin switch has a red background have administrative privileges within Code Dx; the other users do not. Similarly, when an Active button has a blue background, the corresponding user is active; otherwise, the user is inactive. Being inactive is similar to being deleted in that the user cannot log into Code Dx. Note that any activity performed by inactive and deleted users is still recorded by Code Dx.

User Project Roles

The Users section allows admins to set roles for users on projects. To set roles for a user, click the gear icon and select the Configure Roles... option in the dropdown.

This will open the Project Roles modal for that user.

This modal will list projects and show the roles the user has on those projects. See the User Role section for more information about configuring roles.

User Filtering

The Users section includes a "Filter users" option, which can behave as a basic text search or as a search via a user's properties. Each word entered, separated by a space, acts as its own filter that a user's name must contain.

In the above screenshot, searching for "a" and "b" as separate words will show all users with an "a" and a "b" in their display name.

Filtering Users by Properties with Keywords

User search allows for specially-formatted, pre-defined keywords to filter users. These filters start with a $, generally of the form "$keyword". The following keywords are supported:

Searching with multiple keywords will show users that match all keywords. In the above screenshot, only active, local users with "ta" in their name are displayed.

Note: If searching for a user whose name starts with $, you can use $$ to search for that name. Searching for $$test will show users with a name containing $test.

Filter Negation

Keyword filters can be negated by placing a ! in front of the search term, e.g. !$type:local. Negation has no effect on text filters. Searching for !$is-active !$type:ldap will show all inactive, non-LDAP users.
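Taken together, the filter rules above can be modeled roughly as follows. This is an illustrative sketch, not Code Dx's implementation: the field names (name, active, type) and the handling of unknown keywords are assumptions.

```python
# Hypothetical model of the user-filter semantics: space-separated
# terms, "$keyword" filters, "$$" as an escape for a literal "$",
# and "!" negation that applies only to keyword filters.
def matches(user: dict, query: str) -> bool:
    for term in query.split():
        negate = term.startswith("!")
        if negate:
            term = term[1:]
        if term.startswith("$$"):
            # "$$" searches for a literal "$" in the name
            if term[1:] not in user["name"]:
                return False
        elif term.startswith("$"):
            if term == "$is-active":
                ok = user["active"]
            elif term.startswith("$type:"):
                ok = user["type"] == term.split(":", 1)[1]
            else:
                ok = True  # unknown keywords are ignored in this sketch
            if ok == negate:  # negation flips keyword filters only
                return False
        else:
            # plain text: each word must appear in the display name
            # (a leading "!" is simply ignored for text terms)
            if term not in user["name"]:
                return False
    return True

milton = {"name": "milton", "active": False, "type": "local"}
print(matches(milton, "!$is-active $type:local mil"))  # True
```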

API Keys Administration

API Keys can be generated for use with the Code Dx API. Typically one key would be generated for a specific purpose, such as integrating with a specific tool or plugin. This would allow for fine-grained control over each API key’s active/inactive state, as well as setting specific user roles for each key.

Clicking the Create New Key button opens a form where you can enter a name for the new API key:

Entering the new name and pressing Enter (or clicking the Create button) will create the new API key and display it in the Key listing:

The key can be regenerated at any point in time by clicking on the wrench icon and can be deleted by clicking on the trash can icon.

Managing roles for each API key is done from the user roles configuration form, just as with regular users.

For more information on Code Dx API capabilities, please read the Code Dx API Guide.
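As a sketch of how a key might be supplied to the REST API from a script, the example below builds a request carrying the key in a header. The "API-Key" header name and the "/api/projects" endpoint are assumptions made for illustration; the Code Dx API Guide is the authoritative reference.

```python
import urllib.request

# Hypothetical sketch: the header name and endpoint path are
# assumptions; consult the Code Dx API Guide for the real details.
def build_api_request(base_url: str, api_key: str) -> urllib.request.Request:
    req = urllib.request.Request(base_url + "/api/projects")
    req.add_header("API-Key", api_key)  # key sent with every request
    return req

req = build_api_request("https://codedx.example.com/codedx",
                        "0123-fake-api-key")
print(req.full_url)
```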

User Groups

User Groups are a feature that allows permissions to be assigned to users in bulk. Think of a user group as a "team", e.g. "Project X Developers" or "Organization Y Managers".

User Groups section overview

You can create a user group by clicking the New Group button at the top of the User Groups tab on the Admin page.

Enter a name for the new group and submit the form...

User Groups 'new group' form

and the new user group will appear in the list below.

User Groups after creating a new group

User Group Members

Once the group is created, you can add members by clicking the 0 Members button on its right side. Doing so opens a modal that lets you add and remove members. Select a user from the dropdown, then click the Add Member button to add the selected user to the group; the member will appear in the list above. Click the red "x" button next to a user to revoke their membership in the group (this does not delete the user). Click the "x" in the upper-right corner of the modal, or anywhere in the dark background outside the modal, to dismiss it.

User Groups members management modal

User Group Configuration

The "gear" button on the right of the user group lets you access its configuration menu. From there, you can Rename the group, assign it as a sub-group (Set Parent Group), Configure LDAP, Configure Roles, or Delete the group.

User Group config dropdown

User Group - Rename

To rename a group, pick the Rename option from the group's config menu, type the new name, then press enter. Press escape while renaming to cancel the rename action and keep the old name.

User Group - Set Parent

To assign a group as a sub-group of another group, select the Set Parent Group option of the group that will be the sub-group. Then from the Select a parent group dropdown, select the parent group, then click OK.

Selecting a parent group for the 'Managers' group

To make a sub-group a top-level group again (undoing the previous action), select the group's Set Parent Group option again, click the "x" in the Select a parent group dropdown to clear its selection, then click OK. In other words, set the group's parent to "no parent" to make it a top-level group.

Clearing a sub-group's parent to make it a top-level group

If a user group becomes the parent of at least one sub-group, a button will appear next to the {num} Members button, indicating the number of sub-groups belonging to that group. An additional button with a chevron icon will appear to the left of the parent group's name. Clicking either of these two buttons will expand and collapse the group's list of sub-groups.

User Group - Configure LDAP

Note: This section requires a valid LDAP configuration in your codedx.props file. See the Install Guide for more information. This feature may be presented when LDAP is configured but is only compatible with providers that create a user attribute containing group membership for Code Dx to check, such as Active Directory.

To manage mapping of LDAP groups to Code Dx groups, pick the LDAP Config option from your group's config menu. Here you can provide a comma-separated list of LDAP group names.

User Group LDAP configuration modal

Note: The names for your LDAP groups are pulled from the DN path attribute specified in your codedx.props file. Please see the Install Guide on LDAP group mapping for more information on how to configure Code Dx appropriately.

When an LDAP user signs in, their LDAP groups will be checked against your mappings configured on this page. The user will be added to the Code Dx group if they are a member of any of the listed LDAP groups, and removed otherwise.

If users are added to or removed from an LDAP group, their membership will not be updated until they sign out and back in. You can force a manual refresh of all groups' LDAP members by clicking the Refresh button on this page.

Note: If the LDAP Group Names textbox is empty, a membership refresh will have no effect on that group.

User Group - Configure Roles

Assigning roles to a user group will cause each of the assigned roles to be inherited by members of that group or any of its sub-groups. For example, you could create a "Developers" group, add 10 users as members, then grant the Read and Update roles to the "Developers" group for several projects. The effect would be the same as if you individually granted each of those 10 members access to each of those projects, but with much less effort required on your part.

Assigning roles to a user group

Note that for any given user that "inherits" a role from a user group, that role will appear on that user's individual role configuration modal as an orange bar above the role button. For example, "Milton" could be a user of Code Dx with no roles explicitly granted to him on any project. If Milton is a member of the "Developers" user group, and that group has the Read role for the "WebGoat" project, Milton will be able to see the "WebGoat" project despite his lack of explicit permissions for that project.
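The inheritance behavior can be modeled as a walk up the group tree: a user holds their directly granted roles plus every role granted to any group they belong to or to any ancestor of those groups. The sketch below is an illustrative model under assumed data shapes, not Code Dx's actual implementation.

```python
# Hypothetical model of role inheritance through a user-group tree.
# Roles are (project, role) pairs; parent_of maps sub-group -> parent.
def effective_roles(user_roles, user_groups, group_roles, parent_of):
    roles = set(user_roles)
    for group in user_groups:
        g = group
        while g is not None:  # walk up to the top-level group
            roles |= set(group_roles.get(g, ()))
            g = parent_of.get(g)
    return roles

parent_of = {"Java Developers": "Developers"}
group_roles = {"Developers": {("WebGoat", "Read")}}

# A member of the "Java Developers" sub-group inherits the parent
# "Developers" group's Read role on WebGoat, despite having no
# explicitly granted roles.
print(effective_roles([], ["Java Developers"], group_roles, parent_of))
# {('WebGoat', 'Read')}
```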

User Group - Delete

User groups can be deleted at any time by clicking the Delete option from that group's config menu. Before the group is actually deleted, a modal will pop up to confirm your intent. For user groups that have sub-groups, there are two "modes" of deletion:

  1. Delete the current group, all of its sub-groups, and all of their sub-groups, and so on. (In other words, delete the whole group tree starting from the current group)
  2. Delete only the current group. All of its sub-groups will become top-level groups, and all of the sub-sub-groups will be unaffected.

Delete confirmation for a user group with sub-groups

For user groups without sub-groups, the modal will only have one Delete button. Clicking it will simply delete the current group.

Manual Entry Configuration

The Manual Entry Configuration section allows Code Dx administrators to define custom values which can be entered into certain fields in the Manual Results form. Note that this section will only appear for users with an Enterprise edition of Code Dx.

The *Manual Entry Configuration* section

There are two sections: Detection Methods and Allowed Tools.

Configuring Detection Methods

Code Dx provides built-in detection methods, i.e. ways to describe how a finding was discovered. These built-in detection methods reflect the types of tools currently supported by Code Dx. For manual entry, you may wish to specify your own custom detection methods. To do so, click the Add Detection Method button, fill in a name, and click OK.

A custom detection method has been added

You can rename your custom detection methods at will. The built-in detection methods cannot be edited in any way (indicated by the lock icon next to their edit/delete buttons).

You can delete your custom detection methods without hassle as long as they are not in use (i.e. as long as no manually-entered results use them as their detection method). If a custom detection method is in use when you try to delete it, you will have to choose a replacement. All results using that detection method will be edited to use the replacement instead. This will likely trigger the recorrelation prompt, since detection method is one of the correlation criteria.

Deleting a custom detection method, and choosing a replacement

Configuring "Allowed Tools"

The Allowed Tools section lets you define which tools will appear in the Tool dropdown in the Manual Result form. The names you add to this list do not necessarily need to correspond to a tool known to Code Dx, or even a real tool, for that matter; you can enter any tool name you want. To add a name to the list, click the Add Allowed Tool button, fill in a name, and click OK. To remove a name from the list, click the red delete button to its right. Unlike the Detection Methods delete, you don't need to pick a replacement to delete an item in this list; this list only controls which tools are available when creating a new manual result, not which tools exist.

The "Allowed Tools" section, with a few tools added

Add-In Tools

The Add-In Tools section appears when the Tool Orchestration Service is enabled. See the Tool Orchestration Configuration section of the Install Guide for instructions to enable this feature.

An add-in tool is based on a scan request file that you define and register with Code Dx. A scan request file contains the instructions that the tool service needs to invoke an application security testing tool on a k8s cluster and ingest its output into Code Dx.

The Add-In Tools section of the Admin page lets you manage the list of application security testing tools that can run on your cluster. Code Dx registers the four tools shown below at install-time.

Add-In Tools

You can use the Create New Tool button to add a tool registration; see the Walkthrough: Add Tool section for an example. You can remove a registration by clicking its trash can icon.

Add-In tools must be enabled on a per-project basis, and a registered tool starts in a disabled state. See the Customize Add-In Tools section to learn how to enable a tool for a specific project. You can also use the Default enabled toggle to enable a tool for every project, excluding those where it was explicitly disabled. Avoid enabling tools by default when they include project-based settings.

Clicking the wrench icon opens the Add-In Tool Registration window where you can edit registration details.

Add-In Tools

You can change the tool's name by editing the window title and clicking OK, but you must click Done to save a tool name change. Tool names must be unique, so pick a name that is not already in use.

Some add-in tools, like DAST tools, do not require an analysis input. Code Dx will offer to run them with each new analysis. Others require an input file, and Code Dx will scan a file to build a list of tags describing its contents. Tool registration data lets Code Dx select appropriate add-in tools to run. The Matched Tags section lets you associate content tags with an add-in tool. Select the Tag type and Language, or Runtime, and click Add Tag to link a tool with a content type.

The TOML Spec section includes the scan request file content that defines an add-in tool. See the Scan Request File section to learn more about scan request files.

Once you have finished editing registration details, click Done to save your changes.

License Management

Code Dx requires a valid license to run.

When you purchase or evaluate Code Dx, you will receive (usually via email) a new license (a blob of letters and numbers) with instructions about the installation. These instructions can also be found in the Install Guide.

If your license expired or is invalid, the License Management page is automatically displayed with a message indicating the problem. The message is located in the header. Please see the Install Guide for instructions for requesting a new license.

Once a license is installed, its summary will be displayed at the top of the Admin page.

In the example above, the license summary includes the company name (Code Dx, Inc.), product (Enterprise), user-count restriction (50), number of active users (23), project limit (20), number of existing projects (16), and expiration date with timestamp (Tue May 02, 2023 at 14:33:15 Eastern Standard Time).

Your license may have a user-count restriction. The user restriction limits the number of active user accounts managed by Code Dx, regardless of whether they are Code Dx local users, LDAP users, or SAML users. The "admin" account that was selected for your Code Dx installation does not count against this limit. In the example below, [email protected], Michael, Peter, and Samir all count against the user limit because they are active users. However, Milton does not count against the user limit because he is inactive, and codedx is not included because it is the "admin" account for the Code Dx installation.

If the system reaches the user-count limit, an error notification will be displayed when creating or reactivating users. This can be remedied by deactivating or deleting users that no longer have a need to log in and use Code Dx. Alternatively, arrangements can be made with our sales team to upgrade and replace the current license with one that has a larger user limit.

If your license restricts the number of projects and you get close to the limit, a warning notification will be displayed. If the project limit is reached, an error notification will be shown. In either case, you may want to contact our sales team for a new license that allows more projects.

Machine Learning (ML) Control Panel

Note: This section is only applicable to Code Dx Enterprise users with the Machine Learning Triage Assistance add-on. You must be an admin to both view and interact with the interfaces depicted in the sections that follow.

The Machine Learning Control Panel, hereafter referred to as the "control panel", allows admins to manage Code Dx's machine learning capabilities. The control panel is divided into two sections: the ML Service Management section and the Excluded Projects section.

ML Service Management

The ML Service Management section of the control panel allows admins to manually trigger the training of the prediction model. To make use of machine learning, Code Dx requires that at least 100 findings have been actively triaged. These triaged findings can come from multiple projects. If you have not met this requirement, then you will be presented with a statement detailing how many findings you have actively triaged and a statement detailing the minimum requirements.

Code Dx's machine learning capabilities can be enabled or disabled by clicking on the switch in the top right corner of the section.

The ML Service Management section of the control panel transitions back and forth between two states. The first is an Idle state, which means that Code Dx is not currently training a prediction model. The second is a Working state, which means that Code Dx is currently training a prediction model. If the section is in an Idle state, then either a Build Prediction Model or Update Prediction Model button will be present. Specifically, Build Prediction Model will be present if a prediction model has not been trained, and Update Prediction Model will be present if a prediction model has been trained.

As can be seen in the images above, the section is in an Idle state. Code Dx will begin training a prediction model when the Build Prediction Model or the Update Prediction Model button is clicked. This will transition the section into a Working state. If you were presented with a Build Prediction Model button, then you will be presented with a Building Prediction Model message when training begins. Otherwise, you will be presented with an Updating Prediction Model message.

Once training has completed, the section will transition back to an Idle state and will display when the last training session completed, how long it took, and whether it succeeded.

If machine learning is enabled, Code Dx is capable of automatically (re)training a prediction model so that it does not have to be done manually. The ML Service Management section will automatically transition from an Idle state to a Working state when Code Dx automatically initiates a training session. Whether automatic updates of the prediction model occur at all, and the time at which they occur, are configurable via your codedx.props file (see the Install Guide for more details).

Excluded Projects

The Excluded Projects section allows admins to configure which projects should not be considered when Code Dx trains a prediction model.

To exclude a project, either select it from the selector widget next to the Add Exclusion button or type its name into the widget.

Clicking on the Add Exclusion button will exclude the selected project from prediction models that Code Dx trains. All excluded projects will be listed below the selector widget.

Excluded projects can be filtered by project name.

You can also re-include an excluded project, so that it may be considered during training, by clicking the red trash can button in the same row as that project.

My Settings Page

The My Settings page allows you to manage certain aspects of your Code Dx account. To reach the My Settings page, while logged in, open the dropdown menu in the upper right corner of any Code Dx page, then click the "My Settings" link.

My Settings link in the dropdown menu

The My Settings page provides different functionality in tabs:

Changing your Password

The Password tab on the My Settings page allows you to change your password. To change your password, just fill in the form and click the Change Password button. If successful, a "Password changed" notification will appear as a confirmation.

If you forget your password, you will need to ask your Code Dx administrator to reset it for you.

Personal Access Tokens

The Personal Access Tokens tab on the My Settings page allows you to manage your Personal Access Tokens. A personal access token is used as a form of Bearer Authentication with Code Dx's REST API, where any request providing a personal access token is assumed to be made by you. See the API Guide for more information about using a personal access token with the Code Dx REST API.

personal access tokens overview

Creating a New Personal Access Token

On the Personal Access Tokens tab, click the Generate new token button to open the New Personal Access Token form. In the form, you must pick a name for the new token and choose which roles the token will inherit from you.

The name is only visible to you; it is used to identify the token in your list of personal access tokens.

For role inheritance, you can choose between "Inherit all of my permissions" and "Inherit specific roles". If you choose "Inherit all of my permissions", the token may be used to perform any action you would be able to perform if you had authenticated via your username and password, with the exception of manipulating your personal access tokens. If you choose "Inherit specific roles", the token will be restricted to only the roles you select from the buttons below. A personal access token can never perform an action that you could not perform. For example, if you only have up to the create role on a particular project, but you select up to the manage role for the token's inheritance, the token will still not be able to perform actions associated with the manage role on that project, because it is limited by your own permissions. Conversely, even if you have the manage role on a particular project, if you only select up to the create role for the token's role inheritance, then the new token will not be able to perform actions associated with the manage role on that project.
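That capping rule amounts to taking the lesser of two roles under an ordering. The sketch below assumes the Read < Update < Create < Manage ordering implied by this guide's examples; it is a model of the rule, not Code Dx's implementation.

```python
# Hypothetical model: a token's effective role on a project is the
# lesser of the role selected for the token and the role its owner
# actually holds. The ordering below is assumed from this guide.
ORDER = ["Read", "Update", "Create", "Manage"]

def effective_role(user_role: str, token_role: str) -> str:
    # lower index = fewer permissions; take the minimum of the two
    return ORDER[min(ORDER.index(user_role), ORDER.index(token_role))]

print(effective_role("Create", "Manage"))  # Create - capped by the user
print(effective_role("Manage", "Create"))  # Create - capped by the token
```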

Once you have filled out the form, click the Generate Token button to create the token.

a filled-out version of the New Personal Access Token form

If successful, the form will disappear and you will be given a chance to copy the token. The token's format is secret-token: followed by 40 random characters. You should copy the token and store it somewhere safe, such as a secure password manager. For convenience, clicking the clipboard icon to the right of the token will copy the token onto your clipboard. Note that your browser may not support clipboard manipulation. If it works, a green check mark will appear next to the token; if not, you'll need to highlight the token and manually copy it. Once you leave this view, you won't be able to see the raw token again. If you lose it, you'll need to generate a new token and update any plugins or scripts that were using that token.

a new personal access token

Personal Access Token List

Once you have created some personal access tokens, they will appear in the Personal Access Tokens tab as a list. Whenever one of your personal access tokens is used to access the Code Dx REST API, its "last used" time is updated. This is reflected in the list view, and is a good way to gauge whether your tokens are being used in the way you expect. You can revoke a token immediately by clicking the red delete button next to it.

example list of personal access tokens

Project Management


The following are the key terms used throughout Code Dx and this guide.

Projects contain any number of Findings. Findings can either be created manually, or automatically via Analysis. Results (either from tools during Analysis, or from manual entry) are processed, correlated, and associated with Findings.

The Project List

The Project List page presents a list of Code Dx projects that currently exist. To access the Project List page, just click the Projects link in the page header after logging in. If this is your first time using Code Dx, the Project List may be empty. Admin users can create new projects by clicking the button labeled New Project.

Click the button to open the New Project form.

Create a new project by entering a name and clicking the Create Project button. The new project appears in the project list.

The Findings link opens the Findings page, which provides an overview of the analysis results. It focuses on a powerful filtering system, triage workflow, and issue tracking, with links to drill into more details via the Finding Details page.

The Config dropdown menu gives a user with the manage role the ability to configure a project. The options are Rename, Analysis Config, Project Metadata, Git Config, Jira Config, User Roles, Tool Config, and Tool Connectors. For details about these options, refer to the sections that follow.

A project's config menu

For convenience, once you add certain configurations from the Config dropdown, the corresponding menu icons will appear directly on the project as buttons that open that config's form. If you have many projects, this feature will help you quickly find which of your projects have a particular configuration. It will also save a click by allowing you to skip going through the Config dropdown.

The following Config options will appear on a project as buttons, once configured:

A project with some config buttons

Once a project is created it is recommended to assign one or more users to it and give them the manage role. This enables them to create and archive analyses and control the project configurations. If you're using Code Dx Enterprise, someone with the create role can access the Tool Connectors option in the Config dropdown menu; however, this is the only option available to them and they can only run the tool connectors.

When the Tool Orchestration Service is enabled, the Config menu will include a Tool Orchestration item where you can select Configure to access the Tool Service Configuration page and View Analyses to open the Orchestrated Analyses page.

Tool Orchestration Config Menu

Filtering Projects

When you have many projects, scrolling to the one you are looking for can be tedious. To help with this, the project filter lets you filter the Project List by name. To filter, enter your text into the Filter projects text box. The Project List will update as you type. In the screenshot below, the word "sample" has been entered, and the Project List displays only those projects that have the text "sample" (case-insensitive) in their project names.

Using the Project Filter to show Projects with "sample" in the name

You can clear your text by pressing the "X" button on the Filter projects text box.

Users of Code Dx Enterprise will also have access to advanced filters, which operate on Project Metadata. To expand the advanced filter section, click the "advanced" link to the right of the project name filter input. Note that if you have not configured any Project Metadata Fields, the "advanced" link will not appear. If you have, an input for each field will appear. Enter your search criteria in the respective inputs to filter the project list. Text fields behave similarly to the basic (project name) filter: as long as some part of a project's metadata on that field matches your input, that project matches. Dropdown fields operate on exact matches, for example, "Projects with 'High' Criticality". Tag fields operate on the principle, "if a project's tags contain at least one of the tags in my filter input, that project matches." In the example below, we filter on projects where the Project Owner field is "Richard Hendricks".

Using the advanced project filter to filter on project metadata

Project Groups

Projects may be repositioned in a hierarchy, where one project may become the parent (or group) containing another project. This functionality may be accessed via the Move into option in a project's config menu: Click Move into, then select a "parent project" from the dropdown that appears (or clear the dropdown to pick "no parent project"), and click the OK button.

selecting a parent for a project

Once you move one or more projects into a "parent" project, the parent project can be considered as a "project group". The Project List UI will display project groups as a summary of all findings for all projects in that group, including the group project itself. Project groups in the UI can be expanded to show their respective "child" projects using the chevron (>) button next to the name. Note that a project group is still a project, and can still have findings of its own. The summary of findings specific to the parent project will appear above the child projects when you expand the group. There is no inherent limit to how deeply-nested projects can be. A child project can have its own child projects, and so on.

a project group

Analysis Configuration

The Analysis Configuration dialog is used to control two analysis-related settings for a given project. Users with the manage role for a project can access the Analysis Configuration dialog from the Project List page. Locate the project to be configured, open the Config dropdown, and select Analysis Config from the menu.

Sample Analysis Config Dialog


As files are analyzed with Code Dx, each one is remembered as an analysis input. As more and more analyses are performed with a project, these analysis inputs could start to pile up. The Auto-Archival setting in the Analysis Configuration dialog controls how old analysis inputs are handled.

By default, auto-archival is enabled. As new inputs are analyzed, old inputs of the same type will be archived. For example, suppose two analyses are performed in series on a project, each supplying a SpotBugs results file. In this scenario, the SpotBugs results file provided for the second analysis is perceived as "newer", so it will replace the SpotBugs results from the first analysis. The analysis input for SpotBugs results in the first analysis will be archived. Any findings that were present in the first file but not the second will have their statuses changed to Gone as part of this process.
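The effect on findings can be sketched as a simple set difference. The finding IDs below are hypothetical, and the function is an illustrative model rather than Code Dx's actual implementation:

```python
def gone_after_archival(old_input_findings, new_input_findings):
    """Findings reported by the archived (older) input but absent from
    the newer input of the same type; these are marked Gone."""
    return sorted(set(old_input_findings) - set(new_input_findings))

first_run = {"CDX-1", "CDX-2", "CDX-3"}   # hypothetical finding IDs
second_run = {"CDX-2", "CDX-3", "CDX-4"}  # CDX-1 no longer reported
print(gone_after_archival(first_run, second_run))  # -> ['CDX-1']
```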

With auto-archival disabled, the two SpotBugs results files will both remain present. This can be useful if you wish to provide one SpotBugs results file for one part of your application and a different one for another part. Both files may be analyzed without interfering with each other. A downside to this approach is that without manual management of the analysis inputs, they will begin to pile up, potentially degrading the performance of filters and other interactions. You can manually archive old inputs from the Analysis Inputs List.

Prevent Correlation

If the Prevent tool result correlation option is checked, then multiple tool results will not be added to a finding. This will give you a separate finding for every issue reported by a tool. Tool results will still be associated with rules according to the selected Rule Set; however, when multiple instances of the same issue occur at the same location, they will not be merged.

Enable Hybrid Analysis

If the Enable hybrid analysis option is checked, then additional steps will be performed during analysis to enable hybrid analysis. If you upload files that have Java Source or Java Binary files in them, Code Dx will analyze the structure of these files (gathering information about their classes and methods) which will later be used to perform hybrid correlation. Note that this extra analysis is time-consuming; the larger the project, the longer the analysis. Because of this, the Enable hybrid analysis option is unchecked by default.

Finding Lifecycle

If the Allow gone findings to be reopened option is checked, then findings will be reused and have their status set to Reopened if they reappear later at the same location. With this option disabled, a new finding will be created instead.

If the Reopen resolved findings when updated option is checked, then findings set to a resolved status (i.e., Ignored, False Positive, Fixed, Mitigated) will have their status changed to Reopened if new data is brought in from a tool (not matching previously seen data). Findings set to Fixed will be changed to Reopened if reported again, regardless of whether the data is new (since this signals that the issue has not actually been fixed).

Rule Set Associations

The Rule Set Associations section of the Analysis Config dialog allows you to select the Rule Set that will be used to correlate similar tool results into Findings. By default, new projects will use the built-in "Code Dx Rules" set. The "Don't use any Rules" option is available in case you don't want tool results to be mapped to rules. More information on Rule Sets can be found in the Rule Sets section of this guide.

Users with the admin role can use this section to manage Rule Sets by creating, cloning, or deleting them.

Pointing out the Add and Clone buttons for Rule Sets

Adding a Rule Set via the Add button will initialize a blank Rule Set.

"Add Rule Set" Form

A cloned Rule Set will be initialized as a copy of the "parent" set. This can be useful if you want one project to use mostly the same correlation logic, but with a few alterations from another project. Also note that the default Code Dx Rules set is read-only. To make modifications to it, create a clone of it, then make the modifications to the clone instead.

"Clone Rule Set" Form

To modify an existing Rule Set (or simply view it, in the case of the read-only Code Dx Rules set), click the pencil button (or the eye button) next to the Rule Set's name. The Rule Set editor will open in a new tab.

Reminder: When making changes to the Analysis Configuration dialog, make sure to press the OK button to save them.

Reminder 2: Since a project's configured Rule Set determines the manner in which results are correlated, changing that configuration necessitates an update of the correlation. This happens when the configured Rule Set for a project is modified in any way, or the Analysis Configuration is changed to use a different Rule Set. When this happens, the Findings page will display a notification prompting users to trigger a re-correlation.

Trigger Re-Correlation

Host Scope Associations

Note: this section is only applicable to Code Dx Enterprise users with the InfraSec add-on.

Host Scopes are sets of projects that share host information with each other. They allow the Host Normalization process to determine which hosts are actually the same hosts within a Host Scope. Selecting any of the Host Scopes in the associations list will associate the current project with that Host Scope, and implies that the current project's host information belongs to the selected Host Scope. Clicking on the Manage Host Scopes button will navigate you to the Hosts Page where you can manage your Host Scopes.

Host Scope Associations Example

Zip Content Rules

When a zip-like file (e.g. Zip, Jar, War, etc) is uploaded to a Code Dx project, that project's Zip Content Exclusion Rules and Zip Content Identification Rules determine how entries in that zip (and possibly entries in other zip-like files nested within the main zip) will be treated by bundled tools using that file as input.

Exclusion Rules determine which zip entries will be ignored by bundled tools.

Identification Rules determine the perceived source of the zip entries, as either "library code" or "custom code" (third-party or first-party, respectively). Many tools will only be interested in "custom code", and others (like component analysis tools) will only be interested in "library code".

Proper configuration of these rules can drastically reduce the number of unwanted findings, e.g. by avoiding analyzing files from a third-party library whose code you cannot directly modify.

By default, all entries in a zip-like file will be included, and their role ("library code" or "custom code") will be automatically guessed by Code Dx.

Default configuration for Zip Content Exclusion/Identification Rules

The two Zip Content Rule sections in the Analysis Configuration form share a common format. Each row represents a rule, where files matching that rule's pattern will be subject to the decision chosen from its respective dropdown menu. Later rules (further down the list) take precedence over earlier rules. The first rule will always use ** as its pattern, since it is the fallback for all zip entries. Its pattern may not be changed, but its decision may be changed. The patterns must be Glob Patterns, e.g. **.java matches any .java file in any folder. Patterns should use forward slashes (/) to denote directories instead of backslashes (\\) even on Windows.

The / button at the right of the pattern input can be clicked to add a nested pattern which will apply to files nested in a zip-like file matched by the first pattern. For example, in a project where a .war file is typically uploaded, one might configure a pattern to match a particular .jar file inside that .war file, then click the / button to configure a pattern to match certain .class files inside that particular .jar. To undo adding a nested pattern, mouse over the > icon to the left of its text input. The icon will become a delete button, which can be clicked to remove the nested pattern.

To remove an entire rule, click the red delete button (with a trash can icon) to the right of the rule.

Zip Content Exclusion Rules

Exclusion rules can be used to allow tools to completely ignore certain entries in an uploaded zip file. A typical use-case for this is to avoid analyzing test files. In the example below, all .java files in any subdirectory of the src/test/java directory in the main zip will be excluded. Note that there is no leading ** before src, so it will only match if the src folder is at the "top level" of the zip. For example, the pattern won't match a file like other/src/test/java/Foo.java, but it will match a file like src/test/java/Foo.java.

Example zip content exclusion rule with src/test/java/**.java excluded
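Assuming the glob semantics described above (`**` spans directory separators while `*` stays within one path segment), the leading-`**` nuance can be sketched with a small pattern translator. This is an illustrative approximation, not Code Dx's actual matcher:

```python
import re

def glob_to_regex(pattern):
    """Translate a simplified glob pattern into an anchored regex.
    Assumed semantics: '**' matches across '/' boundaries, '*' does not."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")      # '**' crosses directory separators
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")   # '*' stays within one path segment
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("".join(out) + r"\Z")

rule = glob_to_regex("src/test/java/**.java")
print(bool(rule.match("src/test/java/Foo.java")))        # True
print(bool(rule.match("other/src/test/java/Foo.java")))  # False: no leading '**'
```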

Zip Content Identification Rules

Identification rules can be used to direct the attention of bundled tools to the correct sections of an uploaded zip file. For example, many projects will contain a mix of first-party and third-party code, and without insider knowledge, there generally isn't a way for Code Dx to know which is which.

A poignant example of this is when uploading a .war file to be analyzed. The typical internal structure of a .war file includes many third-party libraries as .jar files, and often the custom (first-party) code is compiled into another .jar file and placed alongside the third-party .jars. In this case, Code Dx has no way to distinguish whether each individual .jar file is first or third party, so it will generally assume that all of them are first-party. This can lead to analysis tools becoming overwhelmed and running out of memory, causing the analysis to fail; even if a tool doesn't fail, it will produce a large number of unactionable results from analyzing code in those third-party .jars.

In the example below, a configuration has been made to address a particular case like the one described above. It starts by assuming any .jar file is "Library Code", by combining the **.jar pattern with the Mark as "Library Code" decision. The next rule uses a more specific pattern to match a file named my-custom-code.jar and identify it as "Custom Code". This rule overrides the previous one because it comes after it. Next, the user realized that they had some third-party library classes embedded in their "custom code" .jar file, so they configured a rule to mark those specific files as "Library Code". This was done by first entering **my-custom-code.jar as the pattern, then clicking the / button to add a nested pattern, then entering **third-party-stuff-inside-that-jar.class as the nested pattern.

Example zip content identification rules

Tool Configuration

During analysis, tool results are identified by a tool result type, which is a combination of the tool's name, any number of "groupings" (e.g. categories), and a name. The Tool Configuration page allows users with the manage role on a project to enable and disable tool result types for that project. Results whose tool result types are disabled by configuration will be ignored during analysis.

Users with the manage role on a project can access the Tool Configuration page for that project via the Project List page. On that page, locate the desired project, open the Config dropdown menu, and click the Tool Config link from the dropdown. Admin users can also access the Tool Configuration page from a similar dropdown on the Admin page.

Overview of the Tool Configuration Page

Tool result types are organized in a hierarchy, grouped by tool, then category (or categories), then name. Tool result types can be enabled and disabled at any level of their hierarchy by clicking the respective on/off switch. For example, it is possible to completely disable a tool by clicking the switch next to that tool in the Tool Configuration page. Some entries will be disabled by default. The default enabled state is carefully selected by the Code Dx team to provide the best results for Code Dx users. However, this can be overridden at any time from this page by simply re-enabling the desired tool result types. Clicking on an entry (aside from its toggle switch) will expand (or collapse) it, showing all of its sub-entries.

Note that any changes made on this page are project-wide, impacting all users of the project.

For instance, the following screenshot shows the Experimental group within SpotBugs disabled by default.

Disabling the 'Experimental' category under SpotBugs
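The hierarchical on/off behavior can be sketched as follows. The result type names are hypothetical, and this is an illustrative model rather than Code Dx's implementation:

```python
# Explicit switch settings at each hierarchy level; levels without an
# explicit entry default to enabled in this sketch.
switches = {
    ("SpotBugs",): True,
    ("SpotBugs", "Experimental"): False,  # e.g. disabled by default
}

def is_enabled(path):
    """A tool result type is effectively enabled only if every level of
    its tool -> category -> name hierarchy is switched on."""
    return all(switches.get(path[: i + 1], True) for i in range(len(path)))

print(is_enabled(("SpotBugs", "Experimental", "SomeResultType")))     # -> False
print(is_enabled(("SpotBugs", "Correctness", "AnotherResultType")))   # -> True
```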

Code Dx comes with a large set of predefined tool result types, based on the results generated by a collection of open-source tools. When Code Dx encounters a new type of tool result, it will create a corresponding entry based on the result's raw tool code. These entries are referred to as "observed", and are marked with an eye icon.

If a change to the tool configuration would cause existing tool results to be disabled, it does not immediately remove those results. Instead a notification will appear, indicating the number of results that would be affected, prompting the user to purge those results. Clicking the Purge button in the notification will remove any tool results that still exist despite being disabled. Doing so is highly recommended, as having fewer tool results will improve the performance and responsiveness of Code Dx. If you do not purge disabled tool results, they will remain present in the project, and will continue to appear in filters and affect future analyses.

The 'Purge' notification

User Roles Configuration

To manage user roles for a project, click the User Roles option from that project's configuration menu (on the Project List page or the Admin page). Each role designates a set of specific actions that a user and/or user group is allowed to take on a project.

The User Roles dialog will appear. In this view, there are tabs for users and for user groups. On each tab, there is a row for each user or user group. Each button represents a role which that user or user group has in that project. All roles are assigned per user or user group, per project, meaning that a user's or user group's roles for one project are not necessarily the same as the roles for another project. If a user is marked as admin or inactive, the view will display a marker next to their name to show that fact. Note that user groups cannot be admin or inactive.

The different roles are as follows:

Clicking one of the role buttons in the User Roles dialog will give the corresponding user or user group all roles up to (and including) that role. For example, giving a user or user group the create role will also grant the read and update roles. Clicking the X button will remove all of that user's or user group's roles. Admin users automatically inherit all roles, but can also be granted roles explicitly.
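The "up to and including" behavior can be sketched using the role ordering implied by the example above (this is an illustrative model, not Code Dx source):

```python
# Roles in ascending order of privilege, per this guide's example.
ROLE_ORDER = ["read", "update", "create", "manage"]

def granted_roles(clicked_role):
    """Clicking a role button grants every role up to and including it."""
    return ROLE_ORDER[: ROLE_ORDER.index(clicked_role) + 1]

print(granted_roles("create"))  # -> ['read', 'update', 'create']
```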

If a user or user group is inheriting roles, the inherited roles will display as an orange bar above the corresponding role button.

If the Grant these permissions to sub-projects checkbox is enabled, any roles users or user groups have for this project will be inherited in this project's sub-projects.

Git Configuration

To manage the git configuration for a project, click the Git Config option from that project's configuration menu, on the Project List page or the Admin page.

The Git Configuration popup will appear. The form inside is used to tell Code Dx to use a Git repository as the subject of analysis for this project. Once configured, Code Dx will automatically include the contents of the configured repository as an input for each analysis with this project.

The form (shown above) has two fields: Repository URL and Branch. The Repository URL should be filled out with the URL that you would use to clone the repository. The Branch field should be filled with the name of the branch in that repository that you want Code Dx to analyze. If left blank, Code Dx will assume you mean the "master" branch, which is the main branch for most Git repositories.

For many projects, setting up a Git configuration is as easy as copying the repository's URL into the form. For example, if you wanted to analyze the contents of the open-source WebGoat repository, you would find the clone URL on the side of the GitHub repository page, and copy it into the Repository URL field of the Git Configuration form.

Code Dx will verify the repository's existence and determine whether it needs credentials to connect. For public (open-source) repositories, no credentials are required, and you can press the Ok button to save and close the form. If this is the case, you may skip to Saving the Git Configuration; otherwise, read on.

Git Credentials

Some Git repositories are private, and require credentials for access. Code Dx supports two forms of authentication: HTTP and SSH. Depending on the URL in the Repository URL field, Code Dx will automatically determine which type of credentials is required.
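As a rough sketch of how the credential type follows the URL form (the exact detection logic isn't documented here, so treat this as an assumption):

```python
def credential_type(repo_url):
    """Guess which credential type a repository URL implies."""
    if repo_url.startswith(("http://", "https://")):
        return "HTTP"   # username/password, or a personal access token
    return "SSH"        # e.g. scp-style git@host:path URLs, or ssh://

print(credential_type("https://github.com/user/repo"))   # -> HTTP
print(credential_type("git@github.com:user/repo.git"))   # -> SSH
```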

HTTP Credentials

HTTP credentials are a username and password. For GitHub repositories, these will generally be your GitHub account name and password. GitHub also supports creating "Personal access tokens" (see https://github.com/settings/tokens), which can be used in place of a password.

SSH Credentials

SSH uses a pair of files known together as a "keypair", or separately as a "private key" and "public key". For Code Dx to connect to a repository via SSH, it needs your "private key". The system in charge of the repository's security will also need your "public key".

If you are trying to access a private GitHub repository, visit your SSH Keys page at https://github.com/settings/ssh to register your SSH key with GitHub. GitHub also provides help with SSH-related issues at https://help.github.com/categories/ssh/

Some users will already have an SSH keypair on their computer. The two files are generally located in <userhome>/.ssh/ and will be named id_rsa for the private key, and id_rsa.pub for the public key. It is possible to use this pair, but you may want to generate a separate pair for use with Code Dx.

Once you have located or generated a keypair, copy the contents of the private key file into the Private Key field of the form.

When generating a keypair, you have the option to provide a "passphrase" for the private key. If you do this, Code Dx will need that passphrase in order to use your private key. Enter it in the Key Passphrase field of the form.

Two-Factor Authentication with GitHub

If you need to connect to a GitHub repository, and your GitHub account has two-factor authentication set up, you cannot use your regular username and password to authenticate. To connect over HTTP (e.g. https://github.com/user/repo), you will have to set up a Personal Access Token and use it in place of your regular password. You can still connect over SSH (e.g. git@github.com:user/repo.git) as usual.

Saving the Git Configuration

Once you have entered a URL, an optional Branch, and entered whatever Credentials are necessary, you can click the Ok button to save the configuration. Doing so will close the form and tell Code Dx to obtain a local clone of the configured repository. Depending on the size of the repository, the length of its history, and your network connection, the clone operation may take anywhere from seconds to hours. Once started, a progress bar will be displayed underneath the project's title in the Project List page.

The "cloning" job has several subtasks, so you will see the progress bar fill up several times. When the job is complete, the progress bar will turn blue, wait for a couple of seconds, then slide out of view.

Once the clone is ready, the New Analysis page will automatically include the latest contents of the configured branch of the configured repository as an input. See the Analyses section for more detail.

Issue Tracker Configuration

Code Dx allows you to associate findings with issues or work items in an issue tracker, either by creating a new issue or work item, or by identifying an existing issue or work item.

To configure an Issue Tracker for a project, select the Issue Tracker Config option from that project's Config menu on either the Project List page or the Projects list section on the Admin page. That will bring up a dialog. Note: the "Code Dx -> *", "* -> Code Dx", "Status Mapping", and "Auto Create" tabs are visible in Code Dx Enterprise only. Only Jira has the "Status Mapping" tab.

Issue Tracker Configuration Modal

Enter the URL for your Issue Tracker server (including the "http://" or "https://", even if you're using an IP address), as well as the credentials for the user in whose name the issues or work items will be created. Jira uses a username/password or an Account Email/API Token. GitLab uses a Personal Access Token with the "api" scope. Azure DevOps uses a Personal Access Token with "Read" permission for the "Graph" and "Project & Team" scopes, and "Read, Write, Manage" permission for the "Work Items" scope. ServiceNow uses a username and password. Then click the Verify button. Code Dx will connect to the server and retrieve a list of projects the user can access. Select the project you want from the dropdown menu. Code Dx will periodically query the issue tracker server to refresh the status of all issues or work items associated with a given project; the Refresh Interval specifies the number of minutes between refreshes (the default is 60 minutes). Click Ok to save your configuration.

If you delete the issue tracker configuration for a given project, all of the issue or work item associations tied to the findings in that project are deleted. None of the issues or work items on the issue tracker server itself are affected.

Advanced Field Configuration

Note: this section is only applicable to Code Dx Enterprise users.

When creating an issue or work item from Code Dx, we provide several standard fields (e.g., summary, description). But many issue trackers provide more than just a few fields for issues or work items and can be configured to require these additional fields when creating an issue or work item. Issue trackers also sometimes allow the creation of custom fields on a per-project or per-server basis. Code Dx provides for this situation through "Advanced Fields". Jira users should note that Code Dx supports all of Jira's "Standard" custom fields; while many of Jira's "Advanced" custom fields will also work correctly, some are implemented via third-party plugins and cannot be fully supported. These fields will still appear and can be used if the correct format is known, but they should be left empty otherwise.

If you're using Code Dx Enterprise, you can create template expressions for any of the fields available when creating an issue or work item for the configured issue tracker server. These expressions will be applied to the relevant Code Dx finding (or findings) when you create an issue or work item, which allows Code Dx to pre-populate the field with data from the finding, according to your specification. More technical users should be advised that the template language is the JavaScript Handlebars library and that all of the template expressions are Handlebars Expressions.

Code Dx will use its own default values for the Summary (Jira), Title (Azure DevOps, GitLab), or Short Description (ServiceNow) and Description fields if none are specified.

Users should also note that because fields can be given template expressions, which won't be evaluated until a finding is available, the validation that can be done on the fields is limited. The issue tracker field mappings are an advanced feature, and it is incumbent upon the user to make sure that the default values and expressions entered will produce valid values for the relevant issue tracker field types.

Expression Basics

The template engine will use the text you provide as-is, but it will treat anything inside pairs of double braces ({{ }}) as an expression to be evaluated using the active finding or findings. Code Dx defines five basic data objects that can be used in the template expressions:

These objects can be used to construct expressions containing data from the active findings. For example,

Finding {{finding.id}} has {{finding.severity.name}} severity

will, when applied to a Code Dx finding with ID 1 and High severity, produce the text:

Finding 1 has High severity
Finding Objects

The following fields are available on all finding objects (each element in the allFindings array, finding, and common). Fields marked as optional may be omitted or set to null; all other fields will be present. The only exception to this rule is the common object, where any value not shared by all findings will be set to null, regardless of whether it is optional.
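The common object's null rule can be sketched as follows. The field names below are hypothetical, and the function is an illustrative model only:

```python
def common_fields(findings):
    """Keep a field's value only if every finding shares it; otherwise
    the common object exposes null (None) for that field."""
    first = findings[0]
    return {
        key: value if all(f.get(key) == value for f in findings) else None
        for key, value in first.items()
    }

findings = [  # hypothetical finding objects
    {"severity": "High", "status": "New"},
    {"severity": "High", "status": "Escalated"},
]
print(common_fields(findings))  # -> {'severity': 'High', 'status': None}
```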

Project Object

Expression Logic

Most Handlebars expressions can be used. Some basic examples are given here, but much more information is available in the Handlebars documentation. Specific sections of interest are Expressions, Block Helpers, and Built-in Helpers.

Boolean Expressions

You can add basic boolean logic to your expressions by using the if helper.

For example, the expression

{{#if finding.detection.isDast}}
    This finding is a DAST finding.
{{else}}
    This finding is not a DAST finding.
{{/if}}

will result in

This finding is a DAST finding.

when evaluated on a DAST finding, and will result in

This finding is not a DAST finding.

when evaluated on any other finding.

Iterating Lists

You can iterate over arrays by using the each helper.

For example, the expression

{{#each allFindings}}{{this.id}}, {{/each}}

will result in

1, 2, 3, 4,

when evaluated on a group of findings with the IDs of 1, 2, 3, and 4.

Understanding and utilizing {{#each}} is important, because as you can see in the above summary of the properties of the finding objects, many of the properties are arrays and therefore can't simply be accessed directly—you need to iterate over them and access each property inside the loop.


Code Dx includes the #if, #unless, #each, and #with helpers provided by Handlebars. Several additional helpers are provided as well.


The formatDate helper allows you to format a date by specifying a format string. For example:

{{formatDate finding.firstSeenOnDate 'YYYY-MM-DD'}}

will take the creation date for the finding and convert it into an ISO-8601 compliant format. You can use the symbols below to create your format string:

The formatDate helper uses Moment.js under the hood, so you can look at its documentation for more formatting symbols.
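For a rough sense of what the format string does, here is a Python sketch that maps a few Moment.js-style tokens onto strftime directives (the token map and date value are illustrative assumptions, not the helper's actual implementation):

```python
from datetime import date

# A few Moment.js tokens mapped to Python strftime directives
TOKEN_MAP = {'YYYY': '%Y', 'MM': '%m', 'DD': '%d'}

def format_date(value, fmt):
    for token, directive in TOKEN_MAP.items():
        fmt = fmt.replace(token, directive)
    return value.strftime(fmt)

print(format_date(date(2021, 3, 9), 'YYYY-MM-DD'))  # 2021-03-09
```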


The makeOxfordList helper assists in generating an Oxford list from an array of elements. The body will be evaluated for every item in the list.

For example, to list all of the Code Dx Finding IDs, the template

{{#makeOxfordList allFindings ',' 'and'}}{{this.id}}{{/makeOxfordList}}

will result in

39955, 39956, 39939, and 39940

when evaluated on a group of findings with the IDs 39955, 39956, 39939, and 39940.
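The joining behavior can be sketched in plain Python (an illustration of the output shape only; the real helper evaluates its Handlebars body once per item):

```python
def make_oxford_list(items, sep=',', conj='and'):
    """Join items with the separator, inserting the conjunction before the last."""
    items = [str(i) for i in items]
    if len(items) <= 1:
        return ''.join(items)
    if len(items) == 2:
        return f'{items[0]} {conj} {items[1]}'
    return f'{sep} '.join(items[:-1]) + f'{sep} {conj} {items[-1]}'

print(make_oxford_list([39955, 39956, 39939, 39940]))
# 39955, 39956, 39939, and 39940
```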


The formatLocation helper formats a location object into a human-readable version similar to those displayed elsewhere in Code Dx.

For example, the template

{{formatLocation finding.location}}

will result in


when evaluated on a finding located in WEB-INF/classes/org/owasp/webgoat/lessons/AbstractLesson.java on line 182 (columns 25-50).

By default, the short name of the location is used, and any column numbers are omitted. You may opt to show the complete location information by passing true as the second parameter to this helper. For example, the template

{{formatLocation finding.location true}}

will result in


in the previous example.


The formatCWE helper creates a more informative representation of a finding's CWE (if one is available). Note: When using this helper, the last two parameters are optional. The true/false parameter determines if a link to MITRE will be available. The trackerType parameter will default to the issue tracker type currently being processed.

For example,

{{formatCWE finding.cwe true trackerType}}

will result in

CWE 78 - Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') ([MITRE|https://cwe.mitre.org/data/definitions/78.html])

when evaluated on a finding associated with CWE-78.

If the second argument provided is true, a MITRE link for the CWE will be included (formatted properly for the issue tracker being used).

If no CWE is present on the finding, this helper will evaluate to an empty string if the second argument is false, or to "No Common Weakness Enumeration information available" if the second argument is true.
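The string the helper produces can be approximated in Python as follows (the function name and the Markdown fallback branch are assumptions for illustration; only the Jira wiki-markup form is shown in the example above):

```python
def format_cwe(cwe_id, name, link=True, tracker='Jira'):
    """Build a 'CWE <id> - <name>' label, optionally with a MITRE link."""
    if cwe_id is None:
        return ('No Common Weakness Enumeration information available'
                if link else '')
    text = f'CWE {cwe_id} - {name}'
    if link:
        url = f'https://cwe.mitre.org/data/definitions/{cwe_id}.html'
        if tracker == 'Jira':
            text += f' ([MITRE|{url}])'   # Jira wiki-markup link syntax
        else:
            text += f' ([MITRE]({url}))'  # assumed Markdown-style fallback
    return text

print(format_cwe(78, "Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')"))
```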


The stripHtmlMarkup helper takes an HTML string and returns a copy with all HTML tags removed, and newlines/spaces inserted as necessary while attempting to preserve the native formatting. By default, HTML escape sequences will be converted in the result; pass false as the second parameter to prevent this. Whitespace-equivalent escape sequences (e.g., &nbsp;) will simply be replaced with a space regardless of the second parameter value.

For example, a finding with a description of:

    Cross-site scripting vulnerabilities occur when:
        <li>Data enters through an untrusted source</li>
        <li>The data is included in dynamic content without being validated for malicious code</li>

and a template with:

{{{stripHtmlMarkup finding.descriptions.general.content true}}}

will result in

Cross-site scripting vulnerabilities occur when:

- Data enters through an untrusted source
- The data is included in dynamic content without being validated for malicious code
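A simplified Python sketch of tag stripping, using the standard library's HTMLParser, shows the general idea (it only special-cases <li> tags and ignores escape-sequence handling, so it is far less complete than the real helper):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collect text content, turning each <li> item into its own '- ' line."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_starttag(self, tag, attrs):
        if tag == 'li':
            self.parts.append('\n- ')
    def handle_data(self, data):
        self.parts.append(data.strip())

def strip_html(html):
    parser = TagStripper()
    parser.feed(html)
    return ''.join(p for p in parser.parts if p)

# prints the text with each <li> item on its own '- ' line
print(strip_html('Intro:<li>Data enters</li><li>Not validated</li>'))
```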

Known Issues/Limitations:


The makeMarkupFromHtml helper takes an HTML string and reformats it into appropriate markup for the current issue tracker. The helper uses the following formats:

The helper takes three arguments (the first is required, the other two are optional): the HTML to convert; the name of the issue tracker type to target (use trackerType if not manually overriding); and a boolean flag (true/false) indicating whether to pre-clean the HTML of any non-textual elements (defaults to false). The pre-cleaning flag is usually not necessary, but if you encounter formatting issues, enabling it may alleviate them.

For example, a finding with a description of:

    Cross-site scripting vulnerabilities occur when:
        <li>Data enters through an untrusted source</li>

and a template with:

{{{makeMarkupFromHtml finding.descriptions.general.content}}}

will approximately result in:

(If Jira)
h1. Explanation

Cross-site scripting vulnerabilities occur when:

## Data enters through an untrusted source
(If Azure DevOps)
<h1>Explanation</h1><p>Cross-site scripting vulnerabilities occur when: <ol><li>Data enters through an untrusted source</li></ol></p>
(If ServiceNow)
Cross-site scripting vulnerabilities occur when:

- Data enters through an untrusted source
(If GitLab)

## Explanation

Cross-site scripting vulnerabilities occur when:

1. Data enters through an untrusted source

Keep in mind that {{ }} will HTML-escape the helper's output while {{{ }}} will not - for this helper, the {{{ }}} literal form is typically appropriate.

Known Issues/Limitations:

count and countDistinct

The count and countDistinct helpers will return the number of items in the specified array. count includes duplicate values, whereas countDistinct counts only unique values.

For example,

{{count allFindings}} findings

will result in

23 findings

when evaluated on a group of 23 findings.
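In Python terms, the two helpers behave like len() versus len(set(...)) (illustrative only; the field values below are invented):

```python
def count(items):
    """Count every item, duplicates included."""
    return len(items)

def count_distinct(items):
    """Count only unique values."""
    return len(set(items))

severities = ['High', 'High', 'Medium', 'Low']
print(count(severities), 'values,', count_distinct(severities), 'distinct')
# 4 values, 3 distinct
```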

minBy and maxBy

The minBy and maxBy helpers will cause the provided expression body to be evaluated against the object with the lowest or highest value. The path to this value is provided as an argument.

For example,

{{#minBy allFindings "severity.key"}}
    The lowest severity is on finding {{id}} with severity of {{severity.name}}.
{{/minBy}}

will result in

The lowest severity is on finding 10 with severity of Info.

when evaluated on a group of findings, where the lowest severity finding has the ID of 10 and severity of info.

In the case of a tie (i.e., multiple findings or results sharing the minimum or maximum value), the first item with that value will be used. Because findings are ordered as outlined in Finding Objects, this will be the finding or result with the highest severity and lowest numeric ID.
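The selection logic corresponds to Python's min()/max() with a key function, which likewise return the first item on ties (the findings data below is invented for illustration):

```python
# Hypothetical findings, pre-sorted as Code Dx would supply them
# (highest severity first, then lowest ID)
findings = [
    {'id': 3, 'severity': {'key': 4, 'name': 'High'}},
    {'id': 7, 'severity': {'key': 2, 'name': 'Medium'}},
    {'id': 10, 'severity': {'key': 1, 'name': 'Info'}},
]

def by_path(path):
    """Build a key function that walks a dotted path like 'severity.key'."""
    keys = path.split('.')
    def getter(obj):
        for k in keys:
            obj = obj[k]
        return obj
    return getter

# min()/max() return the FIRST item on ties, matching the helper's behavior
lowest = min(findings, key=by_path('severity.key'))
print(f"The lowest severity is on finding {lowest['id']} "
      f"with severity of {lowest['severity']['name']}.")
# The lowest severity is on finding 10 with severity of Info.
```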

Nested Arrays

These helpers can also handle nested arrays in a more advanced use case. This is signified by adding a .[] at any point the helper should continue iterating over inner arrays. The expression body's context will be at the inner-most object, and the parent object(s) may be referenced if desired.

Here are two examples to illustrate these points:

{{#maxBy allFindings "results.[].severity.key"}}
    Finding {{../../id}}, Result {{descriptor.name}}, Severity {{severity.name}}
{{/maxBy}}

will result in

Finding 10, Result My Weakness, Severity Medium

when evaluated against a group of one or more findings where the highest severity result across all of the findings is a result on finding 10, with a descriptor name of 'My Weakness' and medium severity.

{{#maxBy allFindings "results.[].metadata.cvssV3"}}{{metadata.cvssV3}}{{/maxBy}}

will result in

9.8

when evaluated against a group of findings where the highest CVSS V3 metadata entry on any of the findings' results is 9.8.

Notice that the finding ID is accessed using {{../../id}}. You may have been expecting {{../id}} to fetch the finding ID, since the finding is the parent of the result that was selected. However, the array of results itself is the immediate parent, and the parent of that is the finding, so ../.. is used.
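The nested-array traversal can be sketched by flattening results while keeping a reference to each result's parent finding, mirroring how ../.. climbs back up (invented data, illustrative only):

```python
# Hypothetical findings whose results each carry a CVSS V3 score
findings = [
    {'id': 4, 'results': [{'metadata': {'cvssV3': 5.0}}]},
    {'id': 10, 'results': [{'metadata': {'cvssV3': 9.8}},
                           {'metadata': {'cvssV3': 3.1}}]},
]

# Flatten results while remembering each result's parent finding,
# mirroring "results.[].metadata.cvssV3" with ../.. access to the finding
pairs = [(finding, result)
         for finding in findings
         for result in finding['results']]
finding, result = max(pairs, key=lambda p: p[1]['metadata']['cvssV3'])
print(finding['id'], result['metadata']['cvssV3'])  # 10 9.8
```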

Enumerable Fields

Custom fields that are represented as one of a set of enumerable values (e.g., a set of radio buttons or a dropdown menu) can be configured to be pre-populated by selecting the enumerable Code Dx field from the available dropdown menu. The currently defined enumerable fields are:

Once you select a Code Dx enumerable field, you'll see a table with a row for each possible value, along with a dropdown containing the possible values of your custom field. Simply choose from the dropdown which of your custom values you want to use for each Code Dx value. The Static Value option is there if you wish to define a single value for the Jira field, regardless of the values in the Code Dx finding.

Issue Tracker Two-Way Sync

Code Dx can be configured to automatically update issue or work item fields in response to any changes to a finding within Code Dx. This is configurable on the "Code Dx -> *" tab.

Each field listed on the "Code Dx -> *" tab will have a "Keep synced" checkbox located to the right of the field's title. Simply enable this option to have Code Dx push updates to editable fields for issues when the issue's or work item's associated Code Dx finding has changed.

Issue Tracker Keep Synced Option

Code Dx can also be configured to watch specific issue or work item fields and update associated findings accordingly. This is configurable on the "* -> Code Dx" tab. Currently, only single select dropdowns and radio button fields can be mapped to affect Code Dx finding Triage Status and/or Severity Override.

Issue Tracker Two-Way Sync Examples

Below is an example configuration for Two-Way Sync with Jira.

Simple Mappings

In the picture above, we have defined how Code Dx should update associated Jira Issues when the status of a finding is changed. For example, when the finding's status is changed to "Fixed", Code Dx will update the associated Issue's "Resolution" and "Code Dx Finding Status" fields to "Fixed".

Simple Jira to Code Dx Mappings

In the picture above, we have defined how changes to Jira Issues should affect the associated Code Dx finding. When an Issue's "Code Dx Severity" field is changed, Code Dx will set the associated finding's severity to the appropriate value. The same can be said for an Issue's "Code Dx Finding Status" field and a finding's status.

Automatic Status Updating

Note: this section is only applicable to Code Dx Enterprise users who are configuring a Jira integration.

Code Dx can be configured to automatically update Jira issue statuses in response to status changes within Code Dx. This is configurable on the "Status Mapping" tab.

Issue Tracker Status Mapping

When automatic status updating is enabled, a list of Code Dx triage statuses will be shown, along with a drop down to pick the associated Jira status. These mappings are optional; if one is not selected, no action will be taken on findings with that status.

After configuring status mappings, any time the status of a finding is updated, the associated Jira issue will be updated according to the mapping (if applicable). If a transition is not available, then no action will be taken. If a transition requires some input for a field, Code Dx will attempt to use any defined mappings in the "Code Dx -> Jira" tab that are marked "Keep synced" to satisfy those requirements. In the above example of a Two-Way Sync configuration, if an Issue were to require a value for the "Resolution" field, Code Dx would use the appropriate value based on the status of the finding. Additionally, if multiple findings are associated with the same Jira issue, the Jira status will only be updated if all findings map to the same status.

Automatic Issue Creation

Code Dx can be configured to automatically create issues or work items based on a number of different criteria. This is configurable on the "Auto Create" tab.

Issue Tracker Auto Create

The picture above shows the Auto Create configuration tab. By default, Auto Create is disabled. To enable Auto Create, Jira and GitLab users should check the box labeled "Automatically create issues for findings", Azure DevOps users should check the box labeled "Automatically create work items for findings", and ServiceNow users should check the box labeled "Automatically create incidents for findings".

After enabling Auto Create, the rest of the form will be enabled and further configuration options are available.

Issue Configuration

Issue Tracker Auto Create Issue Info

The above picture shows the section of the Auto Create tab where users can configure the following:

Finding Grouping

The "Finding Grouping" section allows users to either have Code Dx create one issue or work item per finding, or group multiple findings together per single issue or work item. If "Multiple findings per ticket, grouped by..." is selected, the drop down below will be enabled.

Issue Tracker Auto Create Grouping

The selection(s) made here determine how findings are grouped. Multiple selections are allowed, and the order of the selections matters. For example, if "Location" is selected first and "Severity" is selected second, Code Dx will first group findings by their Location and then by their Severity. So, if you had two findings at the same location but with different severities, these findings would be associated with different issues or work items.
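The grouping behavior amounts to building a composite key from the selected properties, in order. A Python sketch with invented findings:

```python
from collections import defaultdict

# Hypothetical findings: two share a location but differ in severity
findings = [
    {'id': 1, 'location': 'a.java:10', 'severity': 'High'},
    {'id': 2, 'location': 'a.java:10', 'severity': 'Low'},
    {'id': 3, 'location': 'b.java:20', 'severity': 'High'},
]

# Grouping by Location then Severity builds a composite key in that order;
# findings 1 and 2 land in different groups (so, different tickets)
groups = defaultdict(list)
for f in findings:
    groups[(f['location'], f['severity'])].append(f['id'])

print(dict(groups))
# {('a.java:10', 'High'): [1], ('a.java:10', 'Low'): [2], ('b.java:20', 'High'): [3]}
```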

Ticket Summary

The "Ticket Summary - Template" field determines what Code Dx uses for the summary or title when issues or work items are created. This field supports the same templates used on the field mappings tab (i.e., the "Code Dx -> *" tab). For example, if you want Code Dx to create issues or work items whose summary displays the finding's location and severity, you may have a Ticket (Work Item) Summary configured like:

Issue Tracker Auto Create Summary

The "Insert placeholder..." control under the input will help in determining what kind of template expression to use.


Issue Tracker Auto Create Filtering

Pictured above are the options one can use to filter which findings should have issues automatically created.

The filters are:

If a filter is left completely blank, then that filter will not be used and all findings will be considered. For example, if you leave the Severity filter completely blank (i.e., nothing is checked), all severities will be considered.

Tool Connectors

Tool Connectors allow Code Dx to pull results directly from external tools, without the manual work of exporting the results from those tools and uploading the results into Code Dx. Users with the manage role can configure a connection to their tools one time, and have Code Dx take care of the rest.

Code Dx currently supports connectors for the following tools:

The Tool Connectors dialog for a project can be accessed from its respective Config menu on the Project List page or the Admin page.

On a new project, no tool connectors will be configured.

Tool Connectors dialog with no configured connectors

In the image above, there are no tool connectors configured. Clicking any of the links in the bottom section will open a form to configure a connector for the link's respective tool. For this example, we'll configure a new Checkmarx connector by clicking the New Checkmarx Connector link.

A blank Checkmarx Connector configuration form

Each tool connector configuration form will have a common set of fields:

Note that credentials entered for tool connector configurations will be stored (encrypted, but still reversible) by Code Dx. Cautious users may wish to create one-off accounts in a tool, with the sole purpose of connecting Code Dx to that tool. This helps prevent actual users' credentials from being exposed if the Code Dx server is somehow compromised.

When entering credentials like passwords, a Verify button will sometimes appear on a field. When it does, the user must click the Verify button in order to continue on to fields that depend on the password (e.g., the project dropdown). This avoids inadvertently locking out a user by attempting to log in while the user is still typing their password.

Connector configuration before password validation

This is how the form looks after you click the Verify button.

Connector configuration after password validation

Once all of the fields are completed, press the OK button to save the configuration and return to the connectors list. In the image below, three connectors have been configured.

Tool Connectors dialog with three configured connectors

Each configured connector has three buttons:

After pressing the Run Now button, the Tool Connectors dialog will close, a new analysis will begin in the background, and a notification will display.

A tool connector analysis has started

Qualys VM Tool Connector

Note: this section is only applicable to Code Dx Enterprise users with the InfraSec add-on.

In addition to the tool connector fields mentioned above, the Qualys VM connector has two unique form configurations to choose from. The default form configuration has customization options including severity types, Asset Group Titles, IP Ranges, and an Include findings last seen field. Include findings last seen is a required field and determines how far back to consider vulnerabilities that will be pulled into Code Dx. Asset Group Titles and IP Ranges are optional fields that act as filters; for example, if you provide an IP range, only vulnerability information for that range will be pulled into Code Dx. If both fields are left blank, all vulnerability information in Qualys will be pulled into Code Dx. Multiple IPs can be specified by separating them with commas, and an IP range can be specified by separating its endpoints with a hyphen.

Qualys VM tool connector configuration
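The comma/hyphen syntax for the IP Ranges filter can be illustrated with a small Python parser (a sketch of the accepted input shape, not Qualys's or Code Dx's actual validation logic):

```python
from ipaddress import ip_address

def parse_ip_filter(spec):
    """Split a comma-separated filter into single IPs and (start, end) ranges."""
    singles, ranges = [], []
    for part in spec.split(','):
        part = part.strip()
        if '-' in part:
            # A hyphen separates the endpoints of a range
            start, end = (ip_address(p.strip()) for p in part.split('-', 1))
            ranges.append((start, end))
        else:
            singles.append(ip_address(part))
    return singles, ranges

singles, ranges = parse_ip_filter('10.0.0.5, 10.0.1.1-10.0.1.50')
print(singles, ranges)
```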

To access the second form configuration, check the Import data using a Report Template option. This form will present you with a Report Template dropdown and a Check on report every field. Both fields are required for this configuration. The Check on report every field determines how often Code Dx will interface with Qualys to get the status of the report being analyzed. The Report Template dropdown is populated with report templates that have been configured for your Qualys VM subscription. Code Dx will request that Qualys generate a report using the selected report template, and once the report has successfully been generated, it will be imported into Code Dx.

Qualys VM tool connector report template

Project Metadata

Project Metadata is an Enterprise-only feature which allows users of Code Dx to enter values into Project Metadata Fields for any project they have the manage role for.

The Project Metadata dialog can be opened by clicking the Project Metadata... option in the project's Config menu on either the Project List page or the Projects List section on the Admin page.

It is up to an admin user to define the fields; once defined, they will be available to every project in your Code Dx installation. Below is a screenshot of the Project Metadata dialog after some example fields have been created by an admin user.

Example Project Metadata with blank fields

And a screenshot of the same Project Metadata dialog with some values filled in.

Example Project Metadata with filled fields

Each field has a Reset button (with a circular arrow icon) to the right, which will reset the field back to its saved state. If you make any edits to that field and want to undo them, just click the Reset button.

Each field also has a Clear button (with an "X" icon) to the right, which will clear any value in that field. (Note that the Reset can also undo clears, as long as they aren't saved yet.)

There are four types of fields:

Tool Service Configuration

When the Tool Orchestration Service is enabled, the Tool Service Configuration page can be opened from the Config menu by selecting Tool Orchestration and Configure. Here you can configure tool orchestration for an entire Code Dx project. The page has three sections: Manage Certificates, Project Secrets, and Customize Add-In Tools; all but Customize Add-In Tools are initially collapsed.

Manage Certificates

This section lets you manage a list of certificates that tool orchestration components should trust.

Manage Certificates

The Manage Certificates section lets a tool or the Tool Orchestration Service handle applications that use a self-signed certificate or a certificate issued by a certificate authority that's not well-known. Click +Add File and specify a certificate file to update the list. You will see your certificate in the list after an upload completes. Code Dx will give you the option to overwrite an existing certificate file or cancel your upload if you choose a file that's already in the list. The upload time will appear in the list under each certificate filename to help you manage your certificates.

Click the trash can icon to remove a certificate from the list and prevent future access by tool orchestration components. Removing a certificate will not remove it from any in-progress orchestration-related activity.

Project Secrets

This section lets you manage data that you can share with one or more tool orchestration components that may require account credentials, keys, or other types of sensitive data.

Project Secrets

Click Add New Secret to start generating data for your secret, specify a name, and click OK to define your list of fields. To add a field, click Add Field. To add a sensitive field, click Add Sensitive Field. Specify a name for the field, and click OK. With sensitive data entry, your value is masked as you type, and you must confirm the correct value by entering it twice. Also, sensitive values are write-only and cannot be retrieved from the API of the Tool Orchestration Service. When you have finished specifying fields and field values, click the save icon button to store your secret.

Project secrets get stored as Kubernetes Secrets, so it's recommended that you follow the Kubernetes guidance on encrypting secret data at rest (https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).

You can edit field values at any time, and you can undo an edit that's in progress by clicking the undo icon to the right of the field value. Project secrets support limited editing; you cannot change a secret's name, add or remove fields, or change the data entry mode for a field value. To make those kinds of changes, delete the secret with the trash can icon and recreate it with Add New Secret.

Project secrets are meant to be used with add-in tools, so Code Dx will display a warning icon to highlight those that are not yet assigned to a tool. You can learn about assigning secrets to tools in the Customize Add-In Tools section.

Unassigned Project Secrets

Customize Add-In Tools

This section lets you customize tools that you previously registered on the Admin page. Code Dx shows your list of registered tools on the left, and you can select a tool to enable/disable it, assign one or more project secrets, or adjust tool behavior by changing any custom TOML configuration data the tool can read.

Customize Add-In Tools

What you can customize will vary by tool. For example, the default Security Code Scan tool has no project-specific TOML configuration, and it does not read project secrets, but the Enabled/Disabled toggle lets you customize whether it's available for your project.

Alternatively, the ZAP tool lets you use project secrets to make authenticated requests during scanning. The interpretation of project secret data is tool-dependent - ZAP, for example, will ignore any secrets with missing username or password fields.

Customize Add-In Tools - ZAP

The Config box shows any project-specific TOML configuration for the tool you selected. You can see in the above ZAP example how it includes authentication settings and a target field that determines where ZAP begins scanning.

You should avoid using the Default enabled feature available on the Admin page for tools with project-specific TOML configuration unless you have a process for specifying project configuration before a tool runs. Tools with insufficient configuration details typically fail when run.

You must click Save to persist any changes, and you must save any changes you want to keep before selecting another tool.

Customize Checkmarx Add-In Tool

The Code Dx Checkmarx Add-In has the following project-specific configuration.

baseUrl = ""
projectId = 0

checkScanStatusDelay = 60

Use the Customize Add-In Tools feature to specify values for the above configuration before enabling the tool. Refer to the following table for an explanation of the configuration parameters.

Parameter Description Example
baseUrl the base URL endpoint for the Checkmarx scanner (default="") "https://cxprivatecloud.checkmarx.net"
projectId the Checkmarx-assigned ID of a project created by the Checkmarx software at the base URL (default=0) any integer value greater than 0
checkScanStatusDelay the delay in seconds between requests to fetch scan status (default=60) 60

Note that you may have constant values for both checkScanStatusDelay and baseUrl, so you can use the Admin page to specify values that will not vary by project.
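Putting the table together, a filled-in configuration might look like the following (the projectId value is hypothetical; the base URL is the example from the table):

```toml
baseUrl = "https://cxprivatecloud.checkmarx.net"

# Hypothetical project ID; must match a project created in your Checkmarx instance
projectId = 5

checkScanStatusDelay = 60
```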

You must also provide an account credential that authorizes use of the Checkmarx software at the base URL. The Checkmarx Add-In Tool expects to find a project secret named checkmarx-project-credential that includes both a username and a password field. The credential must grant permission to start a new scan in the configured Checkmarx project and to generate a Checkmarx report with scan findings.

Customize ZAP Add-In Tool

The ZAP Add-In has the following project-specific configuration.

target = ""

runActiveScan = false

minRiskThreshold = 0
minConfThreshold = 0

type = "none"
loginIndicatorRegex = ""

formURL = ""
formUsernameFieldName = ""
formPasswordFieldName = ""
formAntiCrossSiteRequestForgeryFieldName = ""
formExtraPostData = ""

authenticationScriptContent = ""

Use the Customize Add-In Tools feature to specify values for the above configuration before enabling the tool. Refer to the following table for an explanation of the configuration parameters.

Parameter Description Example
target the URL where the scan starts (default="") "http://host.docker.internal/contosou"
runActiveScan the decision to run an active scan (default=false) true
minRiskThreshold the minimum risk code for ZAP report findings (default=0) 1
minConfThreshold the minimum confidence for ZAP report findings (default=0) 1
type the authentication type: none, formAuthentication, or scriptAuthentication (default=none) formAuthentication
loginIndicatorRegex the regex used to indicate a successful login request (default="") '\QSet-Cookie: .AspNetCore.Identity.Application=\E'
formURL the URL of the login form for forms authentication (default="") "http://host.docker.internal/contosou/account/login"
formUsernameFieldName the login form's username field name (default="") "Email"
formPasswordFieldName the login form's password field name (default="") "Password"
formAntiCrossSiteRequestForgeryFieldName the anti-XSRF token field name (default="") "__RequestVerificationToken"
formExtraPostData the extra data to include with login request (default="") "RememberMe=false"
authenticationScriptContent the ZEST script for script authentication (default="") See ZAP documentation

When you have ZAP authentication configured, you can provide account credentials by creating project secrets that include both a username and a password field. The ZAP scanner will send authenticated requests using each credential it finds. Be sure to specify the correct username and password with each credential so that ZAP can log on successfully.
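As an illustration, a form-authentication setup assembled from the example values in the table above might look like this (all values are the table's examples, not a tested configuration; your installation's actual TOML layout may differ):

```toml
target = "http://host.docker.internal/contosou"
runActiveScan = true

minRiskThreshold = 1
minConfThreshold = 1

type = "formAuthentication"
loginIndicatorRegex = '\QSet-Cookie: .AspNetCore.Identity.Application=\E'

formURL = "http://host.docker.internal/contosou/account/login"
formUsernameFieldName = "Email"
formPasswordFieldName = "Password"
formAntiCrossSiteRequestForgeryFieldName = "__RequestVerificationToken"
formExtraPostData = "RememberMe=false"
```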

Customize Burp Suite Add-In Tool

The Burp Suite Add-In must be specialized before use by adding your licensed, activated copy of Burp Suite to the base Docker image that Code Dx provides. Note that your use of the Burp Suite Add-In Tool must comply with your Burp Suite license agreement. Take the following steps to specialize the Burp Suite Add-In.

  1. Copy your Burp Suite JAR file to a new directory named burp-suite
  2. Build a Docker image that derives from what's provided by Code Dx
  3. Activate Burp Suite and save your changes in a new Docker image

You can accomplish the above steps for Burp Suite Professional by first creating a new burp-suite directory.

C:\>mkdir burp-suite && cd burp-suite

Download your Burp Suite JAR file into the burp-suite directory. In this example, the JAR file is named burpsuite_pro_v2.1.03.jar.

  Directory of C:\burp-suite

09/19/2019  11:12    <DIR>          .
09/19/2019  11:12    <DIR>          ..
08/13/2019  11:28       301,048,274 burpsuite_pro_v2.1.03.jar

Here's a Dockerfile you can use to build a Docker image derived from what Code Dx provides.

FROM codedx-burpsuiterunnerbase:v1.0
COPY burpsuite_pro_v2.1.03.jar /opt/codedx/burpsuite/bin/burpsuite_pro_v2.1.03.jar

Use the above Dockerfile contents to create a file named "Dockerfile" in the burp-suite directory.

  Directory of C:\burp-suite

09/19/2019  14:00    <DIR>          .
09/19/2019  14:00    <DIR>          ..
08/13/2019  11:28       301,048,274 burpsuite_pro_v2.1.03.jar
09/19/2019  14:00               189 Dockerfile

Create the Docker image by running the following command from the burp-suite directory.

C:\burp-suite>docker build -t codedx-burpsuiterunner-unactivated:v1.0 .
Sending build context to Docker daemon  301.1MB
Step 1/2 : FROM codedx-burpsuiterunnerbase:v1.0
 ---> 03be80c2bc9f
Step 2/2 : COPY burpsuite_pro_v2.1.03.jar /opt/codedx/burpsuite/bin/burpsuite_pro_v2.1.03.jar
 ---> e3d6aa0106b0
Successfully built e3d6aa0106b0
Successfully tagged codedx-burpsuiterunner-unactivated:v1.0

Next, you must use your new Docker image to start a container where you can run Burp Suite. Follow the instructions to activate your license. Do not shut down the container.

C:\burp-suite>docker run -it --name burpsuite codedx-burpsuiterunner-unactivated:v1.0 sh
$ java -jar burpsuite_pro_v2.1.03.jar

When you have installed and activated your license, run a separate command to create a snapshot with the license activated.

C:\burp-suite>docker commit burpsuite codedx-burpsuiterunner-licensed:v1.0

Add your new codedx-burpsuiterunner-licensed Docker image to a private Docker registry that Code Dx can access. Now you can shut down and remove the burpsuite container you created.

C:\burp-suite>docker stop burpsuite -t 0 && docker rm burpsuite

The last step is to edit the Burp Suite TOML config on the Admin page: find the request.imageName parameter and replace the codedx-burpsuiterunnerbase:v1.0 value with <private-registry-name>/<repository-name>/codedx-burpsuiterunner-licensed:v1.0, substituting appropriate values for <private-registry-name> and <repository-name>.

The Code Dx Burp Suite Add-In has the following project-specific configuration.

name = ""
urls = [""]
includeSimpleScope = []
excludeSimpleScope = []
namedConfigurations = ["Never stop audit due to application errors"]
apiPort = 2727

Use the Customize Add-In Tools feature to specify values for the above configuration before enabling the tool. Refer to the following table for an explanation of the configuration parameters.

Parameter Description Example
name the name associated with your scan (default="") must be the empty string for Burp Suite Professional
urls the URL(s) to scan, use a comma to separate multiple URLs (default=empty list) "https://host.docker.internal/contosou"
includeSimpleScope the list of items to include in the scan's scope (default=empty list) "https://host.docker.internal/"
excludeSimpleScope the list of items to exclude from the scan's scope (default=empty list) "https://host.docker.internal/contosou/department"
namedConfigurations the Burp Suite named configurations to use with the scan (default="Never stop audit due to application errors") any configuration name that Burp Suite recognizes
apiPort the port number where the Burp Suite API will be made available (default=2727) any unused, valid port number
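
Putting the example values from the table together, a project-specific configuration might look like the following (the host.docker.internal URLs are illustrative values taken from the table, not requirements):

```toml
name = ""
urls = ["https://host.docker.internal/contosou"]
includeSimpleScope = ["https://host.docker.internal/"]
excludeSimpleScope = ["https://host.docker.internal/contosou/department"]
namedConfigurations = ["Never stop audit due to application errors"]
apiPort = 2727
```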

Note that you can use the Admin page to specify values for parameters, like apiPort, that will not vary by project.

You must also provide an API key and a hashed API key value that authorizes use of Burp Suite's REST API. The Burp Suite Add-In Tool expects to find a project secret named burp-suite-api-key that includes both a key and a hashed-key field. You can create an API key and its related hashed key value by using the Burp Suite application (see API Keys under User Options, Misc, REST API). Create an API key and then save your user options to a file (Burp menu, User options, Save user options) so that you can find the "hashed_key" value for the key you created. Here's an example of a hashed_key for the API key named 'key':

-->                    "hashed_key":"F/mmTIwXcY/YkZm4SYyyFgglu82zBDeesm8LD7IQNtM=",
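
Burp Suite computes the hashed key for you, so you should always take the hashed_key value from the saved user options file. As a sanity check, the value appears to be the Base64-encoded SHA-256 digest of the key; assuming that scheme, you can reproduce it with a short script (the key value below is hypothetical):

```python
import base64
import hashlib

def burp_hashed_key(api_key: str) -> str:
    """Return the Base64-encoded SHA-256 digest of the key (assumed scheme)."""
    digest = hashlib.sha256(api_key.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# "example-api-key" is a hypothetical key; compare the output against the
# hashed_key in your saved user options file to confirm the scheme matches.
print(burp_hashed_key("example-api-key"))
```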

Orchestrated Analyses

When the Tool Orchestration Service is enabled, the Orchestrated Analyses page can be opened from the Config menu by selecting Tool Orchestration and View Analyses. Here you can view the portion of an analysis that's orchestrated on your Kubernetes (k8s) cluster. Keep in mind that a Code Dx analysis may include bundled tools for which k8s support is unavailable; the Orchestrated Analyses page will not include information about those tools.

Orchestrated Analyses

Every orchestrated analysis for a project will appear in a list on the left. When a project has no orchestrated analyses, you will see the message pictured below.

No Orchestrated Analyses

Code Dx will automatically select the most recent analysis when you visit the page. Selecting an analysis lets you see summary information, including analysis status and start time. An orchestrated analysis has a unique, numerical identifier and executes as a multi-step workflow running on Kubernetes. Under the status information, you will find collapsed sections that represent each workflow step. Steps labeled with an ID and tool name represent tools running on your cluster. They differ from system steps, like prepare, which support the overall workflow.

An orchestrated analysis that completes successfully will show a Success status, also represented with a green checkmark icon in the analysis list. Failed analyses will show a Failed status and a red exclamation icon. The summary information for failed analyses will include a Status reason field that may provide further information. Failed steps may also include a Message field describing why a step failed to complete successfully.

Orchestrated analyses abandoned by previous Code Dx instances continue to run to completion. Code Dx will show you the message pictured below when there's an orchestrated analysis whose results will be entered into Code Dx as a brand new analysis.

Orchestrated Analysis Not Tracked

Viewing Logs

Every workflow step includes one or more logs, and you can expand a step section to reveal a log viewer with support for live updates showing log data available from the Kubernetes API. Code Dx shows you the main log by default, but you can view log data from other available sources using the dropdown pictured below.

View Orchestrated Analysis Logs

For completed tool steps, you can click Download Logs to fetch every tool log referenced by the tool's registration data. Download Logs will be unavailable while a tool run is in progress. Keep in mind that add-in tool authors may write log data that's unavailable via the Kubernetes API, so downloaded logs may include data that's not shown with live updates on the Orchestrated Analysis page.

Some steps of an orchestrated analysis may repeat in an attempt to recover from unexpected failures. How often they repeat, and with what delay in between, is step-dependent. When log data is available for multiple tries, you will see a tabbed log viewer. Each tab will show you the log details for a specific attempt.

Tabbed Log Viewer


Code Dx lets you stop orchestrated analyses from running to completion. Click Terminate to submit a request to cancel an analysis.

Cancel Orchestrated Analysis

It may take a few moments before an analysis displays a terminated status, but you will see immediate feedback indicating that your termination request has been submitted, and you will not be able to submit additional termination requests.


This section explains the analysis capabilities of Code Dx. Both the Code Dx Enterprise and Stat! products come with bundled tools to scan the applications of interest to you. The languages we support and expected inputs for the built-in scanners are described in the Built-in Code Scanners and the Built-in Dependency Scanners sections. In addition to the bundled tools, Code Dx Enterprise can import the results of several commercial and open source tools. The supported tools and generic input formats are described in the Importing Scan Results section. There are a number of different options to configure and run analyses for Code Dx: manually using the web interface; from the IDE or Jenkins plugins; automatically (such as from your continuous integration server) using the API. These are all detailed in the Starting Analyses section.

Incremental Analysis

As of Code Dx version 2.0, analysis is done incrementally. This means that as new analysis inputs (files) are added to a project, any findings associated with them are added to the project. Prior to version 2.0, the entire set of files was replaced with the new set of inputs.

With this change to incremental analysis, the life of a finding becomes tied to the inputs in which it was reported. When the last input contributing to a finding is archived, the finding itself is marked as 'Gone' and hidden by default (see View Options).

Analysis inputs can be archived manually or automatically. For more information on archival, see Auto-Archival.

Built-in Code Scanners

Code Dx analyzes C/C++, Java, JavaScript, JSP, .NET (C#, VB), PHP, Scala, Python, and Ruby on Rails applications. For all supported languages, Code Dx will analyze the source using bundled tools built specifically for a target language. For applications built with any combination of the supported languages, Code Dx will run the appropriate checkers on the provided source.

For Java applications, Code Dx supports scanning compiled bytecode. In fact, the preferred approach for Java projects is to upload both source and bytecode to Code Dx in the supported file format described in the bullets below. This yields the best coverage for issue detection.

For .NET applications, Code Dx supports scanning compiled DLLs. It is also recommended that the source be uploaded. This will provide better source location information and will allow for viewing the source while looking at finding details. Note: If you choose to upload an entire Visual Studio solution folder, there may be duplicates of the built DLLs and third-party DLLs. This will cause a longer analysis time and possibly incorrect results if some DLLs are stale. To achieve the best results, upload a zip that contains only the DLLs and PDB files for the binaries you wish to analyze. Upload the source as a separate zip.

Code Dx accepts application inputs in the following zip archive formats:

Note that Code Dx enforces a single source zip archive per analysis. So even though Code Dx supports multiple languages, the expectation is that they will all be packaged in a single .zip archive to enable consistent path correlation across all the checkers. Although source and bytecode inputs can be uploaded in separate files, they do not have to be split up. A single .zip file containing C/C++ source, Java source, Java bytecode, .NET DLLs, .NET source, PHP source, Scala source, Ruby on Rails source, Python Source, and JavaScript source is perfectly acceptable.

Bundled Tool Versions

Tool Version Release Date
Brakeman 4.3.1 6/7/2018
CAT.NET (user-installed) 6/26/2009
Checkstyle 8.32 4/26/2020
Cppcheck 1.88 6/29/2019
Dependency-Check 6.0.3 11/3/2020
ESLint 7.15.0 12/5/2020
FxCop (user-installed) 10+ N/A
Gendarme 2.11.0 N/A
JSHint 2.10.2 3/13/2019
PHP CodeSniffer 3.4.2 4/10/2019
phpcs-security-audit 2.0.0 2/20/2018
PHPMD 2.8.2 2/24/2020
PMD 6.20.0 11/29/2019
PMD GDS Security 2.22.0 11/30/2019
Pylint 2.4.4 11/13/2019
Scalastyle 2.12-1.0.0 8/20/2017
SpotBugs 4.0.3 5/12/2020
SpotBugs Find Security Bugs 1.10.1 10/29/2019

Built-in Dependency Scanners

Code Dx also scans input to check for dependencies with known vulnerabilities. The following are checked:

Importing Scan Results

Code Dx Enterprise supports importing the results of commercial and open source application security testing (AST) tools, in addition to a couple of generic tool result listing formats. The list of supported tools for scan imports includes the built-in ones mentioned in the previous section. If one of the tools you want to import is not supported, please let us know; in the meantime, you can convert your data to the generic Code Dx Input XML format. The schema definition for this format and an example can be accessed via the download icon in the Code Dx header.

SAST Tools

The following are the supported SAST tools and import formats supported by Code Dx Enterprise:

DAST Tools

The following are the supported DAST tools and import formats supported by Code Dx Enterprise:

IAST Tools

The following are the supported IAST tools and import formats supported by Code Dx Enterprise:

Mobile Tools

The following are the supported Mobile tools and import formats supported by Code Dx Enterprise:

InfraSec Tools

The following are the supported Infrastructure tools and import formats supported by Code Dx Enterprise with the InfraSec add-on:

Threat Modeling Tools

The following are the supported Threat Modeling tools and import formats supported by Code Dx Enterprise:

Component Tools

The following are the supported Component tools and import formats supported by Code Dx Enterprise:

Container Tools

The following are the supported Container tools and import formats supported by Code Dx Enterprise:

Cloud Infrastructure Tools

The following are the supported Cloud Infrastructure tools and import formats supported by Code Dx Enterprise:

AppDetective Pro Support

When generating a Check Results Report in AppDetective Pro, you will be given options for which fields to include. For best results, we recommend including every field. However, at a minimum, the following fields are required:

If any of these required fields are excluded, you will receive an error when uploading the report to Code Dx and analysis of the file will not be allowed.

AppSpider Support

Code Dx accepts the VulnerabilitiesSummary.xml file from AppSpider. This file is output as part of the report generation process within AppSpider. The following instructions describe how to generate a report and locate the summary XML file:

  1. Run a new scan or open an existing scan in AppSpider
  2. Generate a report by clicking the Generate Report button on the scan toolbar Generate Report
  3. Locate the generated report on disk - the default location is Documents/AppSpider/Scans; however, this location is configurable within AppSpider
  4. Within the report folder, there will be a VulnerabilitiesSummary.xml file - this is what should be uploaded to Code Dx for analysis

CodeSonar Support

The preferred means of importing CodeSonar results into Code Dx is to use the CodeSonar Tool Connector. But in situations where the machine running Code Dx and the machine running CodeSonar cannot communicate with each other, the CodeSonar-Scrape utility helps bridge the gap.

CodeSonar-Scrape is a command-line utility that you can use to generate a Zip file that Code Dx understands as CodeSonar results. You provide it with the URL of your CodeSonar server, the name of the project you want to import into Code Dx, and optionally your username and password. It will then find all of the "warnings" associated with that project and download them into a Zip file, which you can then upload to Code Dx. Results imported in this manner will include descriptions, tracing information, and links back to CodeSonar's hub for warning details and category documentation. Detailed instructions for this tool can be found in the CodeSonar-Scrape User Guide. If you need CodeSonar-Scrape or have questions on the topic, please contact us.

Parasoft Support

Code Dx accepts the XML SATE reports produced by Parasoft tools, which can be generated using either the GUI or the CLI.

To generate the report from the GUI:

  1. Run a scan
  2. Click the Test Progress and summary tab and click the Generate Report button in the toolbar
  3. The Report & Publish dialog will open; select the Preferences button
  4. In the next dialog, click the Format dropdown and select XML SATE (Static Analysis Tool Exposition)
  5. Click Apply
  6. Click OK
  7. In the Report & Publish dialog check the option to open in browser
  8. Click OK
  9. The generated report will appear above the Test Progress and summary tab; the location of the file on disk will be displayed in the report tab

To generate the report from the CLI you will first need to create a file containing the proper report preferences, one setting per line. The bare minimum settings are:

You can also complete steps 1 through 5 above and export your report settings. To export the settings:

  1. Select Parasoft -> Preferences from the toolbar
  2. In the dialog that opens, select Parasoft (the root)
  3. In the section titled "Configure settings", click the share link
  4. In the new dialog, enter a filepath into the text box. This will be where your settings file will be located
  5. Check the "Reports" option
  6. Click OK

Once you have your report settings file, you will need to run the CLI and add the following options:

The report to upload to Code Dx will be at path/where/report/should/go/filename_report.xml or path/where/report/should/go/filename_sate.xml

SARIF Support

Code Dx strictly supports the v2.1.0 SARIF spec as outlined here and detailed here. New formats will be added explicitly; support for v2.1.0 does not imply support for v2.1.1, etc.

Note that all ingested SARIF results will be classified as "SAST", regardless of whether they were generated by a "Container Analysis" tool or some other tool category.


Code Dx support for SARIF currently does not include the following notable features:

Results with Multiple Locations

SARIF results containing multiple locations will be split into duplicate results, one for each location. If multiple codeFlows are specified and none have a sink matching the result location, the codeFlow sinks will be treated as the effective locations and the result will similarly be split. This may cause a mismatch between the location reported by the SARIF result and the location used by Code Dx.
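
The splitting rule can be sketched in a few lines (this illustrates the behavior described above, not Code Dx's actual implementation; the rule id and location shapes are hypothetical):

```python
# Illustration of the splitting rule: one SARIF result with N locations
# becomes N results with one location each.
def split_result(result: dict) -> list:
    locations = result.get("locations", [])
    if len(locations) <= 1:
        return [result]  # nothing to split
    return [{**result, "locations": [loc]} for loc in locations]

multi = {
    "ruleId": "example/rule",            # hypothetical rule id
    "locations": [{"id": 1}, {"id": 2}],  # two locations -> two results
}
print(split_result(multi))
```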

Empty/Undetected Tool Results

Some tools will output empty files if no results were found, which cannot be detected by Code Dx as any particular format. This will prevent resolution of findings in Code Dx if the tool had previously generated results. This can also occur if your results file begins with mostly build errors, which Code Dx cannot use to recognize a given file format. For tools that may output empty results files or files with many errors, you can add a Code Dx-specific header to the file:

##tool = X

This will force Code Dx to recognize the given file as though it came from the specified tool X. The name of the tool is case-sensitive. This is supported for the following tools:

Tool Header Value
ErrCheck ErrCheck
Error Prone error-prone
GoCyclo GoCyclo
GoLint GoLint
GoSec GoSec
IneffAssign IneffAssign
Jlint Jlint
JSHint JSHint
Microsoft Code Analysis Microsoft Code Analysis
Pylint Pylint
Staticcheck Staticcheck
Vet (go vet) Vet

For example:

##tool = GoCyclo

This file will always be detected as a gocyclo results file.

Starting Analyses

There are a number of different ways to prepare and initiate an analysis within Code Dx:

Note that only users with the create role for projects can initiate new analyses.

Starting Analyses Manually from the Web Interface

Analyses can be prepared and initiated manually from the Code Dx web interface. To do so, the first step is to go to the Project List page, find the project that you want to run the analysis for, and click the New Analysis button.

This will take you to the New Analysis page.

To add a file to the page, you can use the Add File button. A file picker dialog will open and you may select one or more files, as is shown in the next image.

Alternatively you can drag the files over the same button area. When dragging and dropping, the page will change to display the drop region:

Please note the drag-and-drop functionality is not supported by all browsers.

As you add files to the page, they will be uploaded to the Code Dx server for identification. Once the server has identified the file contents, the page will update to display the detected content along with any errors or warnings about the contents.

In the image above, a zip file containing Java .class files was added, and tagged as a Java Library. Based on this content, Code Dx identified Dependency-Check and SpotBugs as the tools to run on this file. Each tag in the Detected Content and Tools to Run sections can be disabled. If desired, click the checkbox on the tag to disable (or re-enable) that tag. Sometimes, disabling a content tag will make Code Dx decide that a certain tool is no longer applicable to that file. Disabling a tag in the Tools to Run section will tell Code Dx not to run that tool, even though it is applicable to that file.

In the image above, a second zip file was added, containing .java files as well as some C# source files and .NET (CLR) compilation artifacts. The file was tagged as C# Source, Java Source, and CLR Binary. Code Dx identified five different tools to run on that file. Additionally, since both files have been tagged as a "Library", Code Dx won't allow an analysis. This can be solved by disabling the CLR Library tag on the new file. In this example, since we are only interested in the Java-related contents of the project, we disable the C# Source tag as well.

With the two tags unchecked, the warnings and tools that were only applicable to those tags have disappeared, and Code Dx will once again allow the analysis to start.

Once ready, click the Begin Analysis button at the bottom of the files area to start the analysis of those files. If for some reason there is a problem with the files, the Begin Analysis button will be replaced by one or more messages indicating what is wrong. You'll have to address whatever problems are mentioned there before starting an analysis.

Analysis is conducted as a "job". The work order is placed in the job queue and will be executed once enough resources are free. Often, the time spent in the queue is negligible, but you may still see a brief flash of a message stating that the analysis has been queued. Once the analysis job is finished queueing, the analysis will begin. The page will display a timer to indicate the current duration of the analysis.

The actual duration of the analysis depends on many factors.

Once the analysis has been queued, it is safe to leave the page. The analysis will continue in the background. It is still advisable to keep the page open in order to see any warnings or errors that might occur during the analysis. Occasionally a tool will fail, or some other unexpected problem will arise. Depending on the problem, the analysis might fail, or a message will be added to the New Analysis page.

If the analysis completes successfully, the analysis timer will become a link to the Findings page. Any currently-opened Findings pages will be updated to reflect the latest analysis results.

Inputs from Git Repositories

If you set up a Git configuration for a project, the New Analysis page will automatically include the latest contents of the configured branch of the configured repository as an input.

Normally, Code Dx will update the local clone and check out the appropriate branch before sending the files to the analysis. If you set up your configuration to use the master branch, it will fetch the latest changes from master. As development is done on that branch, analysis of that branch will change along with the contents. But if you want to analyze a specific point in the repository, you can tell Code Dx to use a specific tag or commit by clicking on the underlined section of the input.

Fill in the field with a tag name or a commit hash, and click the Use this button.

Starting Analyses Manually from the IDE Plugins

Code Dx offers plugins for Visual Studio and Eclipse. These plugins provide many features for viewing and interacting with the results of Code Dx analyses from the comfort of a developer's familiar environment. Among the features offered by the IDE plugins is the ability to initiate a scan directly from the development environment. This simplifies the process of packaging the relevant source and compiled artifacts (when applicable), since it is largely automated beyond some basic configuration options. For more details on how to initiate analyses from the IDE plugins, please see the Plugins Guide's relevant sections for the Visual Studio and Eclipse plugins.

Starting Analyses Automatically Using the API

Code Dx offers an expanding API to interface with the system's functionality programmatically. The ability to push files to Code Dx for analysis is exposed by the API. This enables automated integration scenarios such as continuous integration: a post-build step can be added to build jobs to automatically push the source and compiled artifacts to Code Dx for analysis. This type of setup is strongly recommended for development teams to catch potential issues in their codebases early for quick remediation. "Test early and often" is advice that most certainly applies to static analysis. Code Dx also offers a Jenkins plugin to facilitate use in a continuous integration context.

In order to use an API key for automated analyses, the key must be assigned the create role for the project. The API call to push the files and initiate the analysis is documented in the API Guide.

Tool Orchestration

When the Tool Orchestration Service is enabled, Code Dx can orchestrate analyses that run in whole or in part on your Kubernetes (k8s) cluster. See the Tool Orchestration Configuration section of the Install Guide for instructions to enable this feature.

A Code Dx analysis may run one or more built-in code scanners. Many of those tools can run on your Kubernetes cluster when you enable the tool orchestration feature. Those that cannot, like Dependency Check, will continue to run on the Code Dx web server.

The following table shows which bundled tools Code Dx can run on your k8s cluster.

Bundled Tool Tool Orchestration Support
Brakeman Yes
CAT.NET (user-installed) No
CheckStyle Yes
CPPCheck Yes
DependencyCheck No
ESLint Yes
FxCop (user-installed) No
Gendarme Yes
JSHint Yes
PHP Code Sniffer Yes
Pylint Yes
Retire JS No
ScalaStyle Yes
SpotBugs Yes

Code Dx also includes the following tool orchestration capabilities that run only on your k8s cluster.

A single Code Dx analysis can have tools running both on the web server and on multiple nodes of your k8s cluster. All tool outputs get combined into one analysis that either succeeds or fails as a whole, provided the Code Dx web server remains online throughout the analysis.

If the Code Dx web application unexpectedly restarts, a built-in fail-safe lets Code Dx receive k8s analysis results from abandoned orchestrated analyses. Code Dx will lose any results from bundled tools in this case, so a restart of the Code Dx web application is one scenario where Code Dx may process results from a partially completed analysis. When Code Dx detects an orchestrated analysis that it is not tracking, you will see the message pictured below on the Orchestrated Analyses page.

Orchestrated Analysis Not Tracked

You can configure Code Dx to run additional tools by implementing other add-in tools.

Resource Requirements

When the Tool Orchestration Service is enabled, Code Dx can create orchestrated analyses that run one or more application security testing tools where each tool has access to its host's memory and CPU resources. Using Kubernetes (k8s) tools, you can control the memory and CPU capacity available to analyses. You can also improve k8s scheduling outcomes by requesting CPU or memory capacity for specific tools or projects. Resource requirements can also include a node selector and pod toleration, with taint effects NoSchedule and NoExecute, to influence further where tools run on your cluster.

The resource requirements feature cannot be configured using the Code Dx user interface, but you can use the k8s kubectl command to define configuration maps (configmaps) that cover a specific scope determined by a special naming convention. The tool service will look for and read optional configmaps to determine how resource requirements apply to a specific tool run.

Resource requirements containing CPU and memory instructions translate to k8s resource requests and limits and fit with any other related k8s configuration, such as a resource limit defined for a k8s namespace. You can specify resource requirement data by using the following configmap field names:

There are four types of configmaps that can contain resource requirements:

The Code Dx deployment creates the Global Resource Requirement, which provides default resource requirements for tools across every Code Dx project. Global Tool requirements override Global requirements for specific tools. Project requirements override both Global and Global Tool requirements by providing default resource requirements for tools associated with a given project. Lastly, Project Tool requirements override other requirements by specifying values for a specific tool in a specific project.

Here's an example of how the scopes can overlap:

Global Resource Requirement:

Global Tool Resource Requirement:

Project Resource Requirement:

Project Tool Resource Requirement:

Here are the effective resource requirements resulting from the above:

Effective Resource Requirement:

The naming convention determines the scope of a resource requirement configmap:

where ProjectID is the integer value representing the Code Dx project identifier and ToolName is the tool name converted to an acceptable k8s resource name by the following rules:
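
Judging only from the examples that follow (ESLint becomes eslint and MyTool becomes mytool), a rough sketch of such a conversion might look like this (a hypothetical approximation, not Code Dx's exact rules):

```python
import re

def to_k8s_resource_name(tool_name: str) -> str:
    """Approximate the conversion: lowercase, then collapse runs of
    non-alphanumeric characters into single hyphens (assumed rules)."""
    return re.sub(r"[^a-z0-9]+", "-", tool_name.lower()).strip("-")

print(to_k8s_resource_name("ESLint"))  # eslint
print(to_k8s_resource_name("MyTool"))  # mytool
```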

Example 1 - Project Resource Requirement

To create a resource requirement for all tool runs of a Code Dx project represented by ID 21, create a file named cdx-toolsvc-project-21-resource-requirements.yaml and enter the following data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cdx-toolsvc-project-21-resource-requirements
data:
  requests.cpu: "1"
  limits.cpu: "2"
  requests.memory: "1G"
  limits.memory: "2G"

Note: You can find a project's ID at the end of its Findings page URL. For example, a project with ID 21 will have a Findings page URL that ends with /codedx/projects/21.

Run the following command to create the configmap resource (replacing the cdx-svc k8s namespace, if necessary).

kubectl -n cdx-svc create -f ./cdx-toolsvc-project-21-resource-requirements.yaml

Example 2 - Global Tool Resource Requirement

To create a Global Tool resource requirement for ESLint, create a file named cdx-toolsvc-eslint-resource-requirements.yaml and enter the following data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cdx-toolsvc-eslint-resource-requirements
data:
  requests.cpu: "2"
  limits.cpu: "3"
  requests.memory: "4G"
  limits.memory: "5G"

Run the following command to create the configmap resource (replacing the cdx-svc k8s namespace, if necessary).

kubectl -n cdx-svc create -f ./cdx-toolsvc-eslint-resource-requirements.yaml

Example 3 - Node Selector

To create a Global Tool resource requirement for running a tool named MyTool on cluster nodes labeled with canrunmytool=yes, create a file named cdx-toolsvc-mytool-resource-requirements.yaml and enter the following data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cdx-toolsvc-mytool-resource-requirements
data:
  nodeSelectorKey: canrunmytool
  nodeSelectorValue: "yes"

Run the following command to create the configmap resource (replacing the cdx-svc k8s namespace, if necessary).

kubectl -n cdx-svc create -f ./cdx-toolsvc-mytool-resource-requirements.yaml

Scan Request File

An add-in tool is based on a scan request file that you define and register with Code Dx. A scan request file contains the instructions that the tool service needs to invoke an application security testing tool on the k8s cluster and ingest its output into Code Dx. Scan request files use the TOML file format. You can specify any valid TOML content in your tool's scan request file provided you specify the request table, which is a reserved section with the following parameters.

Key Description Required?
imageName The name of the Docker image containing your add-in tool Yes
workDirectory The work directory where your add-in tool can find tool inputs Yes
shellCmd The Bourne shell command to invoke your add-in tool Yes
resultFilePath The path to the result file produced by your add-in tool Yes
logFilePaths An array of log files produced by your add-in tool No
preShellCmd An optional command to run prior to invoking the shellCmd No
postShellCmd An optional command to run after invoking the shellCmd No

A tool run ends in an error when shellCmd, preShellCmd, or postShellCmd returns a non-zero exit code. When the tool service runs an add-in tool, it creates the following directory structure at the path specified by the value of the workDirectory key.

Content Description
/ca-certificates A directory containing zero or more certificates that should be considered trusted certificate authorities
/config/request.toml A copy of the tool's scan request file, including any project-specific configuration
/input A directory containing an optional input file
/volume-secret A system directory required for storing tool outputs
/workflow-secrets Zero or more workflow secrets associated with an add-in tool's project configuration

When the tool service invokes an add-in tool, it provides the tool with a copy of its scan request file, so the file is a convenient place to store configuration data. After you register an add-in tool, Code Dx lets you edit TOML content outside the request table on a per-project basis, so you can have key values that vary by project. For example, a DAST tool might have a scan request file with a key whose value indicates the URL from which to start a scan; the URL can vary from one Code Dx project to the next.
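
For instance, a minimal scan request file for such a DAST tool might look like the following, where everything outside the request table (here, startUrl) is hypothetical project-specific configuration, and the image name, paths, and command are placeholders:

```toml
startUrl = "https://example.com"    # hypothetical per-project key

[request]
imageName = "my-registry/my-dast-tool:1.0"
workDirectory = "/opt/mytool/work"
shellCmd = "/opt/mytool/run.sh"
resultFilePath = "/opt/mytool/work/output/results.xml"
logFilePaths = ["/opt/mytool/work/logs/tool.log"]
```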

Walkthrough: Add Tool

You can add an application security testing tool to the list of tools that Code Dx can run on a Kubernetes cluster by completing the following tasks:

  1. Implement a command/script/application that automates a tool
  2. Package the capability into a Docker image that can be invoked from a Bourne shell
  3. Define a scan request file, specifying (at a minimum) the key values of the Code Dx request table
  4. Register the add-in tool
  5. Enable the tool for specific projects, configuring any project-specific key values defined in the scan request file

This walkthrough will show you how to create, register, and enable an add-in tool that automates Security Code Scan, a static code analysis tool for .NET. The Security Code Scan add-in tool is automatically installed when you enable the Code Dx Tool Orchestration feature, but you can use this walkthrough to learn how to add a new add-in tool whose output must be transformed to the Code Dx XML Schema.

Tool Automation

Your first task will be to create a PowerShell Core script that can automate Security Code Scan. We will use a script that defines two parameters: a path to an input archive containing C# source, and a path to an output file with findings that Code Dx can ingest.

Create a directory called SecurityCodeScan. Download SecurityCodeScan.ps1 to the directory. The PowerShell Core script you downloaded takes the following steps to automate Security Code Scan.

  1. Unpack the source code in the input file provided by Code Dx
  2. Add the SecurityCodeScan.VS2017 project reference to each source code project file
  3. Run dotnet build
  4. Translate the findings from the build results into the generic Code Dx XML format

The last step is required because Code Dx does not support ingesting Security Code Scan findings directly. If you were automating Checkmarx, a tool whose output Code Dx can read, then Step 4 would be unnecessary. You will handle Step 4 with a separate script, so download SecurityCodeScan-Results.ps1 to your SecurityCodeScan directory.

Tool Packaging

To package the Security Code Scan automation, you must create a Docker image capable of both running PowerShell Core scripts and compiling .NET Core 2 code. Adding PowerShell Core to a Docker image based on microsoft/dotnet:2.2-sdk creates a suitable environment.

Download Dockerfile.txt to your SecurityCodeScan directory, and run the following command from that directory to generate a Docker image that can automate Security Code Scan.

docker build -t codedx-securitycodescanrunner:v1.0 -f ./Dockerfile.txt .
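For reference, the Dockerfile is conceptually along the following lines. This is a hedged sketch only; the downloaded Dockerfile.txt is authoritative, and the PowerShell Core installation steps shown here are assumptions:

```dockerfile
# Sketch only -- the downloaded Dockerfile.txt is authoritative
FROM microsoft/dotnet:2.2-sdk

# Install PowerShell Core from Microsoft's Debian package feed (assumed steps)
RUN wget -q https://packages.microsoft.com/config/debian/9/packages-microsoft-prod.deb \
    && dpkg -i packages-microsoft-prod.deb \
    && apt-get update \
    && apt-get install -y powershell \
    && rm packages-microsoft-prod.deb

# Copy the automation scripts and create the work/output directory
# referenced by the scan request file
COPY SecurityCodeScan.ps1 SecurityCodeScan-Results.ps1 /opt/codedx/securitycodescan/script/
RUN mkdir -p /opt/codedx/securitycodescan/work/output
```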

Scan Request File

The docker build command from the previous section created a Docker image named codedx-securitycodescanrunner:v1.0 that contains the SecurityCodeScan.ps1 script in the /opt/codedx/securitycodescan/script directory. The following scan request file content describes how to run SecurityCodeScan.ps1 on an input provided by Code Dx.

[request]
imageName = "codedx-securitycodescanrunner:v1.0"
workDirectory = "/opt/codedx/securitycodescan/work"
shellCmd = '''
  source=$(ls /opt/codedx/securitycodescan/work/input)
  pwsh /opt/codedx/securitycodescan/script/SecurityCodeScan.ps1 \
      "/opt/codedx/securitycodescan/work/input/$source" \
      "/opt/codedx/securitycodescan/work/output/securitycodescan.output.xml"
'''
resultFilePath = "/opt/codedx/securitycodescan/work/output/securitycodescan.output.xml"

The value of the imageName key is codedx-securitycodescanrunner:v1.0, the Docker image you created. The workDirectory key value is /opt/codedx/securitycodescan/work, a directory that already exists because your Dockerfile established a /opt/codedx/securitycodescan/work/output directory to store SecurityCodeScan.ps1's result. Code Dx uses the work directory to store add-in tool data.

The shellCmd key value is the Bourne shell script Code Dx will run to invoke your add-in tool. SecurityCodeScan.ps1 requires two parameters: an analysis input file and an output file. Code Dx puts the analysis input file in the input directory, a sub-directory of the work directory. The analysis input file parameter comes from a search of that directory, and the output file will be /opt/codedx/securitycodescan/work/output/securitycodescan.output.xml. The value of the resultFilePath key directs Code Dx to the add-in tool output and must match the script's output file parameter.
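The input-discovery portion of the shellCmd can be sketched in isolation as follows; the temporary directory and file name here are stand-ins for the real work directory contents:

```shell
# Simulate the work directory layout the tool service creates
work=$(mktemp -d)
mkdir -p "$work/input" "$work/output"
: > "$work/input/source.zip"        # stand-in for the analysis input file

# Code Dx does not tell the tool the input file's name, so the
# shellCmd recovers it by listing the input directory
source=$(ls "$work/input")
echo "input file: $source"
```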

In this example, you did not use the optional scan request file keys. The logFilePaths key is unnecessary because SecurityCodeScan.ps1 writes log information to stdout, and the shellCmd does not require any pre- or post-commands, which you could otherwise provide with preShellCmd and postShellCmd.

Code Dx Registration

Registering your add-in tool with Code Dx is the next step. Log on to Code Dx as an administrator, open the Admin page, and find the Add-In Tools section shown below.

Admin Page - Add-In Tools

Click Create New Tool to open the Add-In Tool Registration window. Click the New Tool label in the window title area, replace the text with a meaningful name (such as Security Code Scan, if that name is not already in use), and click OK. Security Code Scan is a SAST tool requiring an analysis input, so you must associate your add-in tool with one or more types of content that Code Dx can detect. Select Source Code for Tag type, select C# for Language, and click Add Tag so that Code Dx will offer to run Security Code Scan whenever it detects an analysis input file containing C# source. Lastly, specify the contents of your scan request file in the TOML Spec section. If your window looks like what follows (tool name aside), click Done to save your add-in tool registration.

Note: Once created, the name of an add-in tool cannot be changed. As an alternative, you can create a new add-in tool with the desired name, copy the TOML declaration and tag bindings, and delete the old add-in tool. Any project-specific properties or other customizations of the old tool will not be carried over automatically and must be manually reapplied to the new tool.

Admin Page - New Add-In Tool Example

Enable Add-In Tool

Your add-in tool is now registered in a disabled state. To enable your add-in tool for a specific project, open the Tool Service Configuration page, find your add-in tool in the Customized Add-In Tools section, toggle the Disabled/Enabled switch to enabled, and click Save. You can also use the Default enabled toggle to enable a tool for every project, excluding those where it was explicitly disabled.

Below the Disabled/Enabled toggle is a box where you could edit any project-specific TOML content, which is scan request file content outside the request section (Security Code Scan has none).

Tool Service Configuration - Customize Add-In Tool

Code Dx will offer to run enabled add-in tools whenever it detects an analysis input containing C# source code in the project where you enabled the add-in.

New Analysis - Security Code Scan

Users will have the option to deselect your add-in tool when they start a new analysis. For example, Code Dx does not distinguish between C# source code for .NET Core and .NET Framework, and since your add-in tool runs on Linux, it supports .NET Core code only. A user configuring a new analysis with .NET Framework source code (not .NET Core source code) could deselect your add-in tool in that case.


The Findings page gives an overview of the findings in a project, focusing on a powerful filtering system, triage workflow, and issue tracking, with links to drill into more details via the Finding Details page. To access the Findings page, click the "Findings" button next to a project in the Project List page.

To access a version of the Findings page which aggregates all projects, click the "Findings" link in the top navigation bar (next to the Code Dx logo and the "Projects" link).

To access a version of the Findings page which aggregates all projects within a specific Project Group, there are two routes:

This section is structured around the various user interface elements on this page that contribute towards the triage process.

CWE Support

The Common Weakness Enumeration (CWE) is a community effort led by MITRE to provide a common language for expressing software weaknesses.

Code Dx leverages the CWE to provide correlation across the diverse set of testing tools it supports. Code Dx also allows you to define your own correlation logic via the Rule Set page. This allows you to correlate based on a group of CWEs or tool specific rule codes.

Code Dx uses the CWE identifier specified by the tool; in cases where the tool does not provide a CWE, the Code Dx team has done that mapping for you.

CWE information is readily available in Code Dx. On the Findings page you can search by CWE or filter by CWE. This includes grouping CWEs by various standards such as OWASP Top 10 or CWE/SANS Top 25. The CWE identifier is also shown in the Findings Table and you can hover on that identifier to get the full CWE name.

CWE information is also provided on the Finding Details page. There you can see the full CWE name for the aggregated finding. For each individual tool result, the CWE used for each tool is also provided. In both cases a link to MITRE's CWE website is provided.

Finally, all reports (CSV, XML, PDF, Nessus, and AlienVault/NBE) contain CWE information.

CWE Version

This version of Code Dx uses CWE Version 4.0. As new CWE versions are released, Code Dx will include them in its next version update. Note: Since the CWE represents root weaknesses in code rather than exploited vulnerabilities, the taxonomy is not updated as frequently as the CVE.

Filtering Findings

The filters are interactive bar charts that show the distribution of various properties of all findings in the displayed project. Each bar has a check box next to it that lets you filter on that value. Some filters have a tree structure, where certain elements can be expanded to reveal more elements. These elements will have a triangle next to them which you can click to expand or collapse them.

As you check and uncheck boxes, the entire page will update to match the current filter state. When the page first loads, all filters are in an "off" state, and the page displays data for every finding in the project.

When the page is first loaded, certain filters will be expanded by default while others will be in a collapsed state. Clicking the arrow to the left of each filter will toggle the collapse or expand state.

Expanded filters have sorting options as well. Clicking the sort button in the filter header will open a menu containing the possible sort choices.

Filters can be resized vertically by dragging the bar at the bottom of the widget.

Filter vertical resize example

Filters can be resized horizontally by dragging the vertical bar between the filters area and the table area. While dragging, a shadow of the vertical bar will follow the mouse, indicating how wide the filters area will be. Once you release the mouse, the filters area will change size accordingly.

Filter horizontal resize example, before

Filter horizontal resize example, after

Filter Breadcrumbs

As you activate filters in the Findings page, the page will update and filter breadcrumbs will appear. The breadcrumbs show an overview of what your current filter state is; they also let you turn off bits of the filter by clicking the X in each orange box.

Type Filter

The Type Filter shows which types of findings are contained in the project.

Project Filter

The Project Filter shows which projects the findings are contained in. This filter only appears on "aggregated" versions of the Findings page, i.e. for "All Projects", or for a project group with its members. The Project Filter has two grouping modes; one to display a flat list of projects, and another to show a tree view of the projects. You can switch the grouping mode by selecting it from the first dropdown menu in the filter's header.

Project Filter - 'flat' grouping on the left, 'tree' grouping on the right

Tool Filter

The Tool Filter breaks down findings by their associated tool results' types. Tool result types typically follow a "Tool" » "Category" » "Name" hierarchy, the same hierarchy used in the Tool Config page.

Detection Method

The Detection Method Filter categorizes findings based on the method used to detect them. Only the categories that apply to your project will be displayed. The detection methods currently supported are:

Severity Filter

The Severity Filter shows the distribution of findings by how severe they are reported to be. Code Dx maps all reported severities to a scale of Info, Low, Medium, High, and Critical. Some tools don't report a severity, so they are represented as Unspecified.

Location Filter

The Location Filter shows where each finding is located, reflecting the directory and file hierarchy of the codebase. Location categories that may apply to your project are files, URLs, and logical locations.

For .NET results, in some cases (especially if PDB files are not uploaded), source locations may not be available. Instead, a Logical Locations item will be shown. Under it will be locations organized by namespace, class, and method.

Container Image Filter

The Container Image Filter shows the names of container images that were discovered in Container Analysis results. Images without an associated name are not shown in the filter.

Age Filter

The Age Filter shows how old each finding is, i.e. how long since the finding was first seen in an analysis. The Age Filter displays a set of pre-defined age ranges, although users of the REST API may customize the ranges for their own use.

Tool Overlaps Filter

The Tool Overlaps Filter breaks down findings based on the degree of correlation of their associated tool results. For example, was a finding detected by 1 tool, 2 tools, or more? Were the 2 tools SpotBugs and PMD, or JSHint and PMD? Actual correlation logic is determined by the project's Analysis Configuration.

Standards Filter

The Standards Filter shows the distribution of findings based on several industry standards. Various standards are supported and can be selected using the standard button in the filter header.

Sample of the standards menu on the Standards Filter

Note: the Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) versions 3.10 and 4.0, Health Insurance Portability and Accountability Act (HIPAA), MISRA (Motor Industry Software Reliability Association) C (2012) and C++ (2008), National Institute of Standards and Technology (NIST) 800-53, OWASP Mobile Top Ten, and Payment Card Industry Data Security Standard (PCI DSS) version 3.1 standards are available in Code Dx Enterprise only.

Trace Session

The Trace Session Filter appears once you have collected some execution trace data in the current project. It shows how many findings were encountered during a particular trace session, grouped by tracer agent. Note that because findings may (and often will) be encountered by multiple trace sessions, the count displayed for the agent will typically not be equal to the sum of the counts of the trace sessions belonging to it.

Status Filter

The Status Filter shows the distribution of each finding's triage status. Initially, all findings in a project are set to New, but findings' statuses can be changed.

Issue Tracker Association

If a project has an Issue Tracker Configuration, the Issue Tracker Association Filter will be shown. You can filter on findings that have no issues and those that do. Findings are broken down by whether or not an issue is associated, then which issue tracker type (Jira, Azure DevOps, ServiceNow, GitLab, etc) the issue is associated with, then the issue's status, then the specific issue. Note that terminology can differ between different issue trackers (e.g. "issue" vs "work item", and "status" vs "reason"), but Code Dx tries to use "issue" and "status" when a generic term is needed.

Issue Tracker Resolution

If a project has an Issue Tracker Configuration, the Issue Tracker Resolution Filter will be shown. You can filter on findings that have or do not have issues and findings with issues that have or have not been resolved. Findings are broken down by whether or not an issue is associated, then which issue tracker type (Jira, Azure DevOps, ServiceNow, GitLab, etc) the issue is associated with, then whether or not the issue is resolved, then by resolution, and finally by the specific issue.

Predicted Status Filter

The Predicted Status filter is only shown if machine learning is enabled on the Machine Learning Control Panel. Filtering options include filtering against findings with Predicted Status of Escalated, False Positive, or Unknown, as well as filtering against Prediction Confidence, which ranges from 0 to 100 percent. Selecting multiple predicted statuses to filter on will include any finding that has any one of the selected predicted statuses. Selecting a subrange for prediction confidence will include any finding that has a predicted status matching one of the selected statuses as well as a prediction confidence that exists in the selected subrange (inclusively).

Note: This filter is only available in Code Dx Enterprise with the Machine Learning Triage Assistance add-on.

Time Filters

A Time Filter is a special type of filter which groups findings by analysis, i.e. the filter will decide which analysis each finding belongs to, and display the groupings as a barchart. The analysis number or analysis date will make up the X axis, while the finding count makes up the Y axis.

Unlike the other filters, a Time Filter may not be resized vertically. Instead, as more analyses are displayed it will grow horizontally, eventually adding a horizontal scrollbar.

Making a filter selection with a Time Filter is done by drag-selecting a range in the selection area above the barchart:

A time filter, before and after a selection has been made

The X axis of each Time Filter can be toggled between "Ordinal" and "Time".

Ordinal scale uses the analysis number (e.g. 1st, 2nd, ...) to determine where each analysis is placed on the X axis. It allots the same amount of horizontal space for each analysis, making it a reliable way to visually separate one analysis from another. Ordinal scale is the default mode for each Time Filter when the page loads.

Time scale uses the start time of the analysis to determine where each analysis is placed on the X axis. Since the lifespan of a project may be very long, and several analyses may be clustered close together, bars may overlap when using time scale mode. This scale mode is most useful when you want to highlight a particular date range, and separating individual analyses is not desired.

The different time filter scale modes

As you hover over the selection area of a Time Filter, you will see a tooltip indicating the X value that your cursor is hovering over. In time scale mode, the X value is a date and time. In ordinal scale mode, the smaller text indicates the "physical" selection range, while the larger text indicates the rounded range, which will be used as the actual filter selection. For example, in the image below the physical selection is "7.2 to 10.8", which encapsulates a filter selection of analyses "8 to 10". Once you click and drag to make a selection, the tooltip will expand to show you the "min" and "max" of your selection.

An existing selection can be altered by clicking and dragging either of the paddles (the purple and green shapes at either end of the selection) to resize, or the area between the paddles to pan. Double-clicking a paddle will move it all the way to the beginning or end of the chart. Double-clicking the area between the paddles will clear the selection (or you can simply click the "clear filter" button just above it).

Time filter selection tooltips

As you hover over bars in a Time Filter's chart area, a tooltip will appear to display information about the currently-hovered analysis. The information includes the start time, duration, number of associated findings (the height of the bar), the analysis number (e.g. the 10th analysis in that project), and the Name (see below for more details about analysis names).

Time filter analysis tooltips

Note that when an analysis has a name, the circle below its bar in the barchart will be filled in; when it has no name, the circle will only have a dotted outline. Users with the update role can edit analysis names by clicking the pencil icon next to the name. Naming an analysis can be useful if you want to associate it with a particular release version of your software, a Git commit, a Jenkins build, or any number of things.

As an added convenience, analysis names allow you to write Markdown-style links, e.g. [link text](link url):

Time filter analysis name using a link
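For example, a hypothetical analysis name that links to a CI build could be entered as:

```
Release 2.1.0 ([build #42](https://ci.example.com/myapp/42))
```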

Note that the contents of the analysis tooltip are rearranged depending on the scale mode; in ordinal mode the analysis number is shown as the title while the analysis date is shown in the body, and in time mode the analysis time is shown as the title while the analysis number is shown in the body.

Time filter analysis tooltip arrangement is different in time scale mode

First Seen

The First Seen filter is a Time Filter which groups findings based on the analysis during which the finding was first seen, i.e. the analysis that introduced the finding. This filter can be useful for answering questions like "how many findings were introduced by release 2.1.0?" Note that the First Seen filter is similar to the Age Filter, except that instead of grouping by age ranges, it groups by analysis.

The "First Seen" filter

Last Modified

The Last Modified filter is a Time Filter which groups findings based on the analysis during which the finding was last modified. (For findings modified by users or other non-analysis interactions, it picks the most recent analysis at the time of the modification.) An example usage of this filter could be in combination with the Status Filter to answer the question "How many findings have been fixed since release 2.1.0?" In this example, you would unhide the "completed" triage statuses by using the View button.

The "Last Modified" filter

The search area in the Filters section allows you to search for findings by CWE ID, Finding ID, Finding Location, or Type/Tool. Searches made via the search widget will affect the page in the same way that Filters do. The default search option is Finding Location.

Search widget location

To search, select the search type from the dropdown, enter your search criteria in the text box, then press Enter (or click the magnifying glass). Note that if you type but do not trigger the search for several seconds, a popup message will appear, reminding you to do so.

Searching by Location

The default "search by" option is Location and it is case-sensitive. When searching by Location, the criteria can be any part of a file path. For example, to look for Findings in the webapp/javascript folder, enter webapp/javascript. To search Findings in files with the .java extension, enter .java. You can use * to indicate a wildcard, e.g. a search for src/*.java will match locations like src/main/java/Example.java. If you want to have the literal asterisk (*) as part of your search, use \*. If you want to have a literal backslash (\) as part of your search, use \\.

Searching for findings in the webapp/javascript folder
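As an illustration only (not Code Dx's actual implementation), a Bourne shell case glob behaves much like the Location search's * wildcard, since the criteria may match any part of the path:

```shell
# Illustration only: approximate the Location search with a shell glob.
# '*' in the criteria matches any run of characters; the surrounding
# '*'s let the criteria match any part of the path.
matches_location() {    # usage: matches_location <path> <criteria>
  case "$1" in
    *$2*) return 0 ;;
    *)    return 1 ;;
  esac
}

matches_location "src/main/java/Example.java" "src/*.java" && echo "match"
matches_location "webapp/javascript/app.js" ".java" || echo "no match"
```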

Searching by CWE ID

When searching by CWE, the criteria should be a number, or a comma-separated list of numbers. For example, to search for findings with a CWE of 91, simply enter 91. To search for findings with a CWE of either 91 or 94, enter 91, 94. Note that ranges (e.g. 100 - 200) are currently not supported.

Searching for findings with CWE 133

Searching by Finding ID

When searching by Finding ID, the same formatting rules apply as with the CWE search. To search for Finding 123, enter 123. To search for Findings 123, 456, and 789, enter 123, 456, 789. Note that the search will not look for Findings from other projects.

Searching for finding 11005

Searching by Type / Tool

When searching by Type / Tool, the criteria can be any text (case-insensitive) which may appear in the name or grouping of a Rule or Tool descriptor. For example, searching for "inject" by Type / Tool can match Rules like "SQL Injection", and Tool descriptors like PMD / Security / Possible SQL Injection. Wildcards are not supported.

Searching for "inject"

Searching by Host

When searching by Host, the criteria can be any text (case-insensitive) which may appear in the Host Info section of a Finding's Evidence section. This of course includes the value displayed in the Host column of the Findings Table. For example, searching for "123" by Host can match Findings whose Host IP Address is "", or whose set of Ports includes port 123, or port 1234. Furthermore, searching for "MYCOMPANY" by Host can match Findings whose Host's Hostname is "internal.mycompany.com".

Searching for "ab:cd" by Host

Searching by Tool-Specific Fields

Results from some enterprise tools will have tool-specific data attached to them. For example, Veracode provides a "Flaw ID". Many of these fields may be used as criteria in the search area, as long as they are present in the project. If you upload data from an enterprise tool, you may notice one or more options in the "search by" dropdown, related to that tool. When searching with these options, you must enter an exact match; e.g. if the Veracode Flaw ID is 123, you must enter "123" (without the quotes) in the search input. Note that if any tool-specific metadata is present on a finding's results, you can see it on that finding's Details.

Searching for tool-specific metadata

Bulk Operations

Bulk operations are actions that affect all findings that are currently selected. When the checkbox in the top left of the findings table is selected, these actions will apply to all findings currently displayed.

Note that bulk operations (aside from reporting) are not available on an aggregate version of the Findings page. In addition, the checkbox selection column in the findings table will be hidden.

Change Status

The Change Status dropdown menu is available to users with the update role. It allows users to change the triage status of all findings currently selected.

Bulk Operations - Change Status

Generate Report

The Generate Report button opens the Generate Report dialog. The dialog is used to select and customize a report. Several report types are supported, including PDF, CSV, XML, Nessus, and AlienVault/NBE.

To generate a report, select one of the report types and configure the associated option(s), then click the Generate Report button. Code Dx will trigger a background task to create your report.

Bulk Operations - Report Generating

When the background task finishes generating your report, a link will be provided to download and view it.

Bulk Operations - Report Complete

PDF Report

You can customize the PDF report in several ways. There are options to include or exclude a simplified or detailed executive summary section; finding details (with or without source code); tool details; and comments that appear in the Activity Stream (on the Finding Details page). The "Result details" section contains these options: "Include result provided details" and "Include HTTP requests and responses". Please note the "Include result provided details" option must be selected if you want to include the HTTP requests and responses in the PDF report.

If you'd like your company logo to appear on the cover sheet, please contact your Code Dx administrator to configure it for you.

Bulk Operations - PDF Report Options

CSV Report

The CSV report provides options allowing you to select which columns will be included in the generated file.

Bulk Operations - CSV Report Options

XML Report

Customizations for the XML report include options to enumerate standards violations for each finding, to provide source code snippets, and to include copies of the rule descriptions for each finding.

Note: There is a limit of eight lines of code per source snippet for each finding. When the limit is exceeded, no source code is provided.

Bulk Operations - XML Report Options

Nessus Report

Code Dx Enterprise users will be able to select the Nessus report. This reporter generates a report in the Nessus format, which can be imported by many applications.

The default host and MAC address fields are required, while the operating system and NetBIOS name fields are optional. When exporting a finding that doesn't contain any request data, the default host value will be used.

Bulk Operations - Nessus Report Options

AlienVault/NBE Report

Code Dx Enterprise users will be able to select the AlienVault/NBE report. This reporter generates an NBE report that is compatible with AlienVault.

The report options require that a host address (IPv4) be specified for inclusion in the report.

Bulk Operations - NBE Report Options

Issue Tracker Integration

If a project has an Issue Tracker Configuration, the Issue Tracker dropdown menu will be available, allowing users with the update role to interact with the configured issue tracker. Code Dx currently supports Jira, Azure DevOps, ServiceNow, and GitLab. For Jira and GitLab users, the options are create issue, associate with existing issue, and remove association. For Azure DevOps users, the options are create work item, associate with existing work item, and remove association. For ServiceNow users, the options are create incident, associate with existing incident, and remove association. The examples below assume Jira is the currently configured issue tracker.

Creating New Issues

To create a new issue, click the Jira dropdown menu and select the Create issue... option.

A dialog will open.

All of the fields are editable. Required fields will have a red asterisk by their name.

If you have Code Dx Enterprise, the template expressions that were defined when configuring the issue tracker will be used to pre-populate the relevant fields with data from the active findings. Code Dx provides default templates for the Summary and Description fields.

The Description field will be pre-populated with a brief description for each Finding. Jira descriptions can be set to allow for the use of WikiMarkup. Code Dx takes advantage of that to make the descriptions more readable from within Jira.

Associating with Existing Issues

For Jira users, associate a finding with an existing issue by clicking the Jira dropdown menu and selecting the Use existing issue... option. For Azure DevOps users, associate a finding with an existing work item by clicking the Azure DevOps dropdown menu and selecting the Use existing work item ... option. For ServiceNow users, associate a finding with an existing incident by clicking the ServiceNow dropdown menu and selecting the Use existing incident ... option. For GitLab users, associate a finding with an existing issue by clicking the GitLab dropdown menu and selecting the Use existing issue... option.

Enter the issue key, work item ID, or incident number that you want to associate with the finding(s). Clicking outside the textbox or pressing Enter will cause Code Dx to look up the issue, work item, or incident in question. If Code Dx finds it, and it belongs to the same Jira, Azure DevOps, ServiceNow, or GitLab project (or the same ServiceNow instance) configured for this Code Dx project, its summary will be displayed so you can confirm that you've entered the one you want. Click OK to associate the finding (or findings) with that issue or work item.

Refreshing Issue Status

Code Dx will regularly check the Issue Tracker server to refresh the status for all of the issues, work items, or incidents associated with findings in a given project. The interval at which the check is done is configurable in the Issue tracker configuration. However, you can also manually trigger a refresh of all the issues, work items or incidents on the Findings page, by clicking the Refresh Issues, Refresh Work Items, or Refresh Incidents button.

Removing Issue Associations

You can remove the issue, work item, or incident associations for all of the findings in the current filter by using the Jira, Azure DevOps, ServiceNow, or GitLab dropdown menu and selecting the Remove association option. Note this only removes the association in Code Dx; it doesn't change the issue, work item, or incident in the Issue Tracker.

Findings Table

The Findings Table shows a concise representation of each individual finding. The number in the ID column is the unique identifier assigned to each finding and the text for the ID doubles as a link to the finding's details.

Users with the update role in a project can use the dropdown menu in the Status column to change the current status of a finding.

Projects often have more findings than can be displayed in the Findings Table all at once. Because of this, the table is split into pages. By default, each page shows 25 findings. Users can change the number of findings per page using the Show button, seen below.

The Findings Table columns can be hidden or displayed using the dropdown menu in the upper right corner of the table. This is done by toggling the column name.

In the menu, visible columns have a check mark to the left of the column name. Hidden columns can be made visible again by selecting them in the menu.

Flow Viz

The Flow Viz is a categorical breakdown of the findings in a project. By default the Flow Viz is collapsed to the left side of the Findings page. Clicking the "Flow Viz" button will pull the view out from the side, bringing it in front of the Filters. Clicking the button again will collapse it.

Each row represents different values in a category. In this example, the severity category has values for Low, Info, Medium, and High.

Each path (colorful, curvy lines) represents a set of findings that have values matching each category value that the path passes through. Hovering the mouse over one of the paths will reveal more information about that path.

The black boxes with white circles at the side of each row are draggable. You can use them to re-order the rows in the Flow Viz, updating the visualization in real time.

Analysis Inputs List

The Analysis Inputs List is a widget on the Findings page that shows the files that were provided to Code Dx for analysis. It can be shown by clicking the Show Inputs button found in the Findings page header.

Analysis Inputs Example

The Analysis Inputs List is broken down first by analysis, then by file. For example, when viewing a project in which two analyses had been performed, there would be a section for each analysis. Analyses are ordered by date, with the most recent analysis shown at the top of the list, and the oldest analysis at the bottom.

Within each section, individual entries represent files. For example, if a "spotbugs-results.xml" file had been uploaded to Code Dx during one analysis, a corresponding entry would appear in the section for that analysis. Each entry has three main parts: input name, tool result summary, and archive button.

Input Name

The first part of an entry shows the file's name and the name of the tool it came from. For auto-generated tool outputs (i.e. files generated by Code Dx's bundled tools), the name of the analyzed file will be shown instead of the name of the auto-generated temporary file. Next to the names, a download link allows users to download a copy of the file.

Tool Result Summary

The second part of an entry shows a summary of the tool results originating from that file. Note that due to result correlation and other factors, the total tool result count will not necessarily match the total finding count. Next to the tool result count for each entry, a bar chart shows a breakdown of the tool results by severity. The highest-severity results are shown in red, while the lowest-severity results are shown in gray. You can hover over each bar to see the severity it represents, and see the number of tool results belonging to that severity.

Archive Button

Users with the create role for a project have the ability to archive an analysis input using the Archive button located on the far right of each entry. Tool results from archived inputs will be removed. Any finding whose last tool result was removed in this manner will have its triage status automatically changed to Gone. Normally archival is done automatically (see Auto Archival).

When you click the Archive button for an analysis input, you will be prompted to confirm your choice.

Analysis Input Archive Confirmation

When you confirm, the archival will be performed. The page will update to reflect the updated tool result and finding counts.

Analysis Inputs After Archiving

Adding Manual Results

Code Dx Enterprise users with the create role for a project will have access to the Add Result button located in the page header. This allows you to add manual results to Code Dx (during a manual code review for instance), as opposed to the ones automatically discovered by tools. Clicking on the Add Result button will trigger the following form to appear.

Information entered under the Contextual Information section describes the result itself. Expanding the General Information section of the form will allow values to be specified that will be shared among all manual results of the same name. Contextual information will override general information if specified. Note that this form creates results, which can be thought of as "evidence" for a finding. Multiple results may be correlated to a single finding. As with tool results, two manual results will typically be correlated if they have the same CWE, Location, and Detection Method, even if their names are different.

If the result name entered matches a rule in the current rule set, then the manual result will be associated with the general information for that rule. In this case, the general information can only be changed by revising the rule set. Both the general and contextual information will be included on the details page.

The Tool field allows the user to state that the manually-entered result actually came from a tool. The options available to this field are configured on the admin page, in the Allowed Tools section.

The Host field allows the user to describe the "host" on which the result was discovered. This normally pertains to results with the Network Analysis detection method, but could also relate to Dynamic Analysis. Host data entered in this field is considered "raw" data (as opposed to the "normalized" data seen on the Hosts page). Raw host data may be joined with "normalized" host data through a process called "host normalization". By default, the "Include Host data for this result" checkbox is unchecked. Check it to expand the host data editor.

The CVE field allows the user to enter any number of CVEs that correspond to the result. By default, no CVEs are included. To start adding CVEs, click the Add a CVE button. When typing in a CVE text box, you can optionally start by only typing the numbers; the text box will fill in the rest for you. If your Code Dx server is able to access the internet, it can check whether the CVEs entered by the user are real CVEs in the CVE database. This verification comes in the form of a checkmark or an "x" on the CVE textbox. Blank or invalid CVEs will be ignored when submitting the form.
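The textbox behavior described above can be sketched as a small check. This is an illustrative sketch, not Code Dx's actual implementation; the helper names are hypothetical, and only the syntactic CVE-YYYY-NNNN shape is checked here (Code Dx additionally verifies entries against the CVE database when it has internet access).

```python
import re

# CVE identifiers follow the pattern CVE-YYYY-NNNN, where the sequence
# number is at least four digits.
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def normalize_cve_id(text):
    """Mimic the textbox: if the user types only the numeric portion
    (e.g. "2021-44228"), fill in the "CVE-" prefix for them."""
    text = text.strip()
    if re.fullmatch(r"\d{4}-\d{4,}", text):
        return "CVE-" + text
    return text.upper()

def is_well_formed_cve(text):
    """Syntactic check only; blank or malformed entries would be
    ignored when the form is submitted."""
    return bool(CVE_PATTERN.match(normalize_cve_id(text)))
```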

Once you’ve completed the form, clicking the Add Result button at the bottom will dismiss the form and update the Findings page with the new finding. A notification will appear, indicating the ID of the finding to which the result was correlated. To delete or edit a manually added finding, click on the finding's ID in the Findings Table to access its details view. The result will appear in the Evidence section, where there will be buttons to edit and delete it.

View Options

The View menu in the header provides options for how the Findings page will look. The options are "Color-blind friendly mode" and a set of "Hide findings marked as..." toggles, one for each of the "completed" triage statuses.

View Menu

Color-blind friendly mode

Users with colorblindness should have little trouble with most of the widgetry on this page, with the possible exception of the Flow Viz. The Color-blind friendly mode switch in the View menu changes that visualization's color scheme to a palette with fewer colors and higher contrast.

Comparison of the Flow Viz with and without "Color-blind friendly mode" turned on

Hide findings marked as...

There are several options in the View menu labelled "Hide findings marked as...". The purpose of "hiding" or "unhiding" findings is to exclude or include the associated findings from the Findings page. The "completed" triage statuses in these options are "Gone", "Fixed", "Mitigated", "False Positive", and "Ignored". When turned ON, each setting will cause the findings marked with the particular triage status to be excluded from the page. This affects the table, filters and counts throughout the Findings page. When turned OFF, the findings associated with that status will be included on the page. The default settings are ON.

Note: Findings marked as "Gone" will generally have no tool results associated with them. This can lead to a somewhat confusing scenario where the "total findings" count will be greater than the "total tool results" count when the "Hide findings marked as Gone" setting is off.

Machine Learning

Note: This section is only applicable to Code Dx Enterprise users with the Machine Learning Triage Assistance add-on and requires that machine learning is enabled.

Users of Code Dx may review findings and change their statuses. When a finding's status has been changed, we say that the finding has been actively triaged. The act of actively triaging a finding is considered a past triaging decision. Code Dx is capable of learning from users' past triaging decisions in order to make predictions about findings that have yet to be actively triaged. More details are given in the sections that follow.

Actionability of a Finding

We use the terms Actionable and Non-Actionable to denote findings that are “real” issues and “not-real” issues, respectively. A finding is said to be Actionable if it was actively triaged to be Fixed, Escalated, Mitigated, or Assigned, if it has a status of Gone, or if it has an issue tracker association. A finding is said to be Non-Actionable if it was actively triaged to be False Positive or Ignored.

Training a Prediction Model

In order for Code Dx to make predictions for findings, users will need to train a prediction model. Training a prediction model will collect all relevant data for findings that have been actively triaged and use that data to learn from users' past triaging decisions. See Machine Learning Control Panel for more information about how to train a prediction model.

Predicted Status and Prediction Confidence

When Code Dx makes a prediction for a finding, it is determining a Predicted Status for it. A Predicted Status for a finding is its Actionability. If Code Dx predicts that a finding is Actionable, its Predicted Status is Escalated, since Code Dx thinks it's a real issue. If Code Dx predicts that a finding is Non-Actionable, its Predicted Status is False Positive, since Code Dx does not think it's a real issue. Every prediction Code Dx makes has a Prediction Confidence, which represents how certain Code Dx is of its Predicted Status relative to the one it did not predict. Keep in mind that this is only a prediction of a finding's Actionability, and it may not be correct.
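Assuming the underlying model produces a probability that a finding is Actionable (an assumption about the internals; the guide only describes the outward behavior), the mapping to a Predicted Status and Prediction Confidence might be sketched like this:

```python
def predicted_status(p_actionable):
    """Map a model's probability that a finding is Actionable to a
    (Predicted Status, Prediction Confidence) pair. The confidence is
    the probability of the predicted class relative to the other."""
    if p_actionable >= 0.5:
        return ("Escalated", p_actionable)          # predicted Actionable
    return ("False Positive", 1.0 - p_actionable)   # predicted Non-Actionable
```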

Requirements for Making Predictions

Code Dx will only attempt to make predictions for findings if a prediction model has been trained. See Machine Learning Control Panel for more information about how to train a prediction model.

When will Code Dx Make Predictions?

Code Dx will make predictions for findings in the following situations:

In these situations, all predictions are being made automatically. During the first and third situations, predictions are automatically made for every finding in Code Dx. During the second situation, a prediction is only made for the single manually created result. Since predictions are made automatically, a user may note that predictions for findings might differ between reviewing sessions.

Predicted Status Column

Every value in this column consists of a Predicted Status and a Prediction Confidence.

Finding Details

To access the details for a single finding, navigate to the Findings page, locate a finding in the Findings Table, and click the link in the ID column.

Details Summary

The Details Summary in the header gives a quick overview of the finding and the file where it is located. If the finding is associated with a CWE, the CWE is noted, with a link to the official MITRE CWE site.

The summary area also has "jump links". One link will scroll the source viewer to the location of the finding in the file. The other link (which appears once you scroll down the page) will bring you back to the top of the page.

Severity Override

Code Dx does its best to pick a reasonable severity for each finding, but if a user (with the update role) disagrees with an individual finding's severity, they have the ability to override it. The severity override popup can be accessed by clicking the severity icon for a finding, either in the Finding Details page's header, or in the Findings page's findings table.

The Severity Override Popup on the Finding Details Page

Once the popup is opened, simply click one of the options to set the override. The popup will close and the new setting will be applied. When a finding has an overridden severity, the white border around its severity icon will be green instead.

A Finding with Overridden Severity

Here's the same finding shown with its severity override popup on the Findings page.

Severity Override Popup on the Findings Page

Train Now Button

Code Dx integrates with Secure Code Warrior by linking developers to training modules that they can use to learn secure coding practices. Code Dx provides a button that, when clicked, redirects the user to a relevant training module.

If no training module is available for a finding, then the button will be disabled.

Activity Stream

The Activity Stream area has widgets that let you change the status of the finding as well as comment on it. As users change the status and comment on a finding, messages appear in the activity stream, with newer messages at the top. Users may edit or delete their own comments.


Description

The description information shown by Code Dx can come from a variety of sources, with varying levels of detail. At a high level, descriptions are divided into "general" and "contextual".

The main "Description" section of the details page is a "general" description. Most of the time, the main description comes from a Rule Set. When a finding matches up to a rule, the main description section will use that rule's description. For findings created by observed tool results (i.e. types of findings that Code Dx doesn't already know about - see the Tool Configuration section), if the tool result does not match a rule, the general description may be created from that tool result, as long as the tool result provides one. This will often be the case with enterprise-grade tools such as Fortify and Veracode.

The "Description" section

The finding itself will not have a "contextual" description. This will instead be found on the individual results shown in the Evidence section. The "general" and "contextual" descriptions for results will be shown in the Tool Rule Description and Contextual Description sections of their display area, respectively (see below).

Training Video

Code Dx integrates with Secure Code Warrior by providing training videos that developers can use to learn secure coding practices. Code Dx will present these videos on the Details page when they are available.

Training Video

NOTE: The training videos use the video/mp4 MIME type. Some browsers do not support it and the user may see errors or controls that do not function. Please refer to your browser's documentation for a possible solution.


Evidence

The Evidence section of the Finding Details page shows the raw results that make up a finding. (Note: results can originate both from analysis tools and from manual entry.) Each result in the Evidence section will be displayed in its own subsection, with the result's "type" as the header. The screenshot below shows two results from two different tools which both describe a SQL Injection vulnerability in the same location.

The Evidence section showing two SQL Injection results

Each result in the evidence section has a handful of fields shown:

Some enterprise-grade tools report additional information that may appear in this section. One example of this is Veracode's Flaw ID. When these additional fields are present in a project, some of them will also become available in the Finding Search on the Findings page.

HTTP Activity

HTTP Activity section in Finding Details

The HTTP Activity section shows any details Code Dx knows about the HTTP request and response associated with a DAST result. If tracing was active while the request and response were created, Code Dx may have additional information in the form of Traced Execution Details.

The table at the top of the HTTP Activity section enumerates the "variants" of request/response that were reported with the result. Some tools will attack the same URL with different variations of query parameters and form parameters to try to find vulnerabilities, then report each variant as part of the same result. Other tools will report each variation as its own result, but if Code Dx sees that everything else is the same, it may join them together under a single result. Often, there is only one variant reported, as is the case in the screenshot above. In cases where there are multiple variants, click the different rows of the variants table to show the details for that variant in the sections below.

For each variant, the details are as follows:

Traced Execution Details

If tracing was active when the HTTP request was made, Code Dx will have information about which methods were run during that request. This information is fairly complex, so it gets its own page; this section will link to it.

Request Tab

The details of the HTTP request are broken down here:

Response Tab

The details of the HTTP response are broken down here:

Metadata Tab

Some tools will report extra "metadata" with their HTTP activity. When applicable, this data will be shown in the Metadata Tab as a table.

Source Display

The Source Code area shows the contents of the file where the finding is located. The "on line 100" link (shown in the screenshot below) will scroll the source display to the exact lines of the finding, which are highlighted in dark grey in the line number gutter. Severity markers in the gutter denote other findings in the same file. When multiple findings are present on a single line, the severity marker shows the highest severity at that line. If you hover your mouse over any highlighted lines, a popup containing links to the Finding Details pages for the other findings will appear.

Source Search

Searching within the Source Code area is separate from your browser's default search function. (For performance reasons, the Source Code view does not render the entire source file at once, so the browser might not be able to find lines that are not currently in view.) Click in the Source View first. Note: in the shortcuts below, use Cmd in place of Ctrl if you're on a Mac. Use Ctrl+F to open the search dialog, type in your search, and press Enter. You can jump to the next result by pressing Enter again, or by pressing Ctrl+G. You can jump to the previous result by pressing Shift+Enter, or Ctrl+Shift+G. You can also jump to a specific line by pressing Alt+G, typing a line number, and pressing Enter.

Issue Tracker

If a project has an Issue Tracker Configuration, the Create issue and Use existing issue buttons will be shown for Jira and GitLab users, Create work item and Use existing work item for Azure DevOps users, and Create incident and Use existing incident for ServiceNow users. Users with the update role are allowed to interact with the configured issue tracker.

Creating an Issue

You can click the Create issue, Create work item, or Create incident button, which will open a dialog.

The dialog functions the same way as the dialog opened from the Bulk Operations area of the Findings Table, except the Description field will be pre-populated with information about this finding.

Associating with Existing Issues

Click the Use existing issue, Use existing work item, or Use existing incident button to associate this finding with an existing issue, work item, or incident. A dialog will open.

The dialog functions the same way as the dialog opened from the Bulk Operations area of the Findings Table.

Refreshing Issue Status

You can select the refresh icon to manually trigger a refresh of the issue or work item.

Removing Issue Associations

Clicking the trash can icon removes the association between the finding and its related issue or work item. Note this only removes the association; it doesn't touch the issue or work item itself.

Predicted Status

Note: This section is only applicable to Code Dx Enterprise users with the Machine Learning Triage Assistance add-on.

A finding’s prediction is included on the Finding Details page only if machine learning is enabled on the Machine Learning Control Panel. Each prediction is presented as a Predicted Status and a Prediction Confidence. Users can set a finding’s Status to its Predicted Status by clicking on the Use Prediction button, which is next to the finding’s prediction. Note that a finding's prediction may not be correct.

Project Dashboard

Dashboard Overview

The Project Dashboard provides a managerial overview of a project, displaying a set of analytic and trend data which are automatically updated as you use Code Dx. To reach the Project Dashboard page, click the "Dashboard" link on the project from the Project List page.

When viewing the Project Dashboard for the parent of grouped projects, you will have the option to include data from the child projects by using the roll-up feature. To do this, enable the Include child projects switch on the top right corner of the Project Dashboard page.

Roll-up Feature

You can also access the Project Dashboard for "all projects" by clicking the "Dashboard" tab while on the aggregate "All Projects" Findings page.

The Project Dashboard is broken down into several sections:

Code Dx Risk Score

Code Dx Risk Score Overview

The Code Dx Risk Score section of the Project Dashboard provides a letter grade to indicate the overall "quality" of the project. The letter grade is based on a percentage score, which is the average of the Custom Code Score, Component Score, and Infrastructure Score. Each score is weighted evenly, but note that an Infrastructure Score is not available for all projects.
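As a rough sketch of the averaging described above: the even weighting of the sub-scores comes from the text, while dropping a missing Infrastructure Score from the average (rather than counting it as zero) is an assumption, as is the helper name.

```python
def risk_score(custom_code, component, infrastructure=None):
    """Average the available sub-scores evenly. The Infrastructure
    Score is not available for all projects; when absent (None), it is
    assumed to be dropped from the average rather than counted as 0."""
    scores = [s for s in (custom_code, component, infrastructure) if s is not None]
    return sum(scores) / len(scores)
```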

Each of the "num_" values mentioned above refer to findings in the project which haven't been triaged (i.e. findings whose triage statuses haven't been marked as one of the "resolved" statuses like "Fixed" or "False Positive"). In the case of "volume", they refer to the number of findings. In the case of "variety", they refer to the number of distinct types of findings. Only critical, high, and medium severity findings are counted against the Code Dx Risk Score.

Next to the letter grade, the specific percentage score is displayed alongside a spark-line that shows the general trend of the project's Code Dx Risk Score over the past week.

The individual scores for the Custom Code Score and Component Score are shown by a pair of "fill bars" next to the letter grade, below the overall score percentage.

Open Findings

Open Findings Overview

The Open Findings section shows the overall "triage status" of the project.

A waffle chart shows a severity-age breakdown of the untriaged findings in the project. Different colors indicate different severities, as indicated by the legend. The number of dots of each color indicates the (rounded) percentage of findings in the project which have that severity; for example, 19 purple dots means 19% of the untriaged findings have "critical" severity. Transparency indicates the relative age of the findings, as shown in the legend: a lighter (more transparent) shade of a severity color indicates relatively new findings of that severity, while a darker (more opaque) shade indicates relatively old ones.
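The dot counts amount to rounded percentages; a minimal sketch (the helper name and data shape are hypothetical, and the real chart additionally encodes age via transparency):

```python
def waffle_dots(severity_counts):
    """One dot per rounded percent of untriaged findings per severity."""
    total = sum(severity_counts.values())
    return {sev: round(100 * n / total) for sev, n in severity_counts.items()}
```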

Clicking on the severity labels in the waffle chart's legend will cause the chart to focus on that severity, fading the other severities from view. Clicking again on the same label will reset that focus, returning the visualization to its normal state.

Hovering the mouse cursor over the severity labels in the waffle chart's legend, or over the colored dots in the waffle chart itself will cause the chart to temporarily focus on that severity. This effect is similar to the click effect described in the previous paragraph, but the effect does not persist if the mouse leaves the area that caused the focus. Hovering the mouse over the chart will also show a tooltip containing a summary of the respective hovered severity.

Open Findings Hover Tooltip

Below the waffle chart is a fill-bar indicating the percentage of findings which have been triaged (i.e. set to Fixed, Mitigated, or False Positive), out of the total number of findings in the project, excluding findings that are marked "Gone".

Findings Count Trend

Findings Count Trend Overview

The Findings Count Trend section of the Project Dashboard shows a breakdown of findings by "detection method" over time.

The Findings Count Trend visualization uses a stacked area chart, with "date" as the X axis, and total finding count as the Y axis. By default, an area for each detection method is shown, so that the stacked areas' total height indicates the total number of findings at a given date. Clicking one of the detection method labels in the legend will cause the visualization to focus on the respective detection method, hiding the other areas and moving the focused area to the bottom of the visualization. Clicking again on the same detection method label in the legend will remove the focus effect, returning the visualization back to its default state.

Hovering the mouse cursor over the visualization will cause a vertical line to snap to the nearest date, updating the legend to reflect the finding counts at that date. While the mouse cursor is not over the visualization, the vertical line will snap to the latest date, causing the legend to reflect the most recent finding counts.

Findings Count Trend Hover

On the top-right of the trend graph is a calendar icon, which can be clicked to bring up a menu for selecting a date range.

Findings Count Trend; Date range selector

Selecting one of these range values will automatically refresh the graph to the selected range. For larger date ranges, each point in the graph can represent multiple dates by taking the average of data samples involved.
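The averaging for larger date ranges can be sketched as simple bucket downsampling (an illustrative sketch; the actual bucket sizes Code Dx uses are not specified here):

```python
def downsample(samples, bucket_size):
    """Collapse consecutive daily samples into one averaged point,
    so a single point on the graph can represent multiple dates."""
    return [
        sum(samples[i:i + bucket_size]) / len(samples[i:i + bucket_size])
        for i in range(0, len(samples), bucket_size)
    ]
```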

Findings Count Trend; Date range year selection

Average Days to Resolution

Average Days to Resolution Overview

The Average Days to Resolution section of the Project Dashboard shows the average number of days it takes for a new finding in the project to be resolved. In this context, resolution means the finding either becomes "Gone" (because developers fixed the issue, and a new analysis did not encounter the same finding), or its triage status was set to one of the "resolved" statuses: False Positive, Fixed, Ignored, or Mitigated.

For each severity, the average number of days it takes to resolve a finding of that severity is displayed in a badge. Initially, each badge will display "N/A"; since no findings have been resolved, there is no "average" time. A colored bar below the badges acts as a legend, and hovering the mouse cursor over a badge causes it to become highlighted with that severity's respective color.
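A minimal sketch of the per-severity average (the data shape and helper name are hypothetical): unresolved findings contribute nothing, matching the "N/A" shown when no finding of a severity has been resolved.

```python
from datetime import date

def avg_days_to_resolution(findings):
    """findings: iterable of (severity, created, resolved) tuples, where
    resolved is None for findings that are still open. Severities with
    no resolved findings are absent from the result (displayed as "N/A")."""
    totals = {}
    for severity, created, resolved in findings:
        if resolved is None:
            continue  # still open: contributes nothing to the average
        days, count = totals.get(severity, (0, 0))
        totals[severity] = (days + (resolved - created).days, count + 1)
    return {sev: days / count for sev, (days, count) in totals.items()}
```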

As a rule of thumb, teams may wish to prioritize addressing higher-severity findings, so team leads will want to see a lower number of days-to-resolution for higher-severity findings.

Code Metrics

Code Metrics Overview

The Code Metrics section of the Project Dashboard displays a set of metrics for the project's codebase, broken down by language.

On the left of the section, a legend shows:

The colors assigned to each language are purely aesthetic, and are chosen using the same color scheme that GitHub uses.

By default, the "Overall" group is selected, so the metric areas to the right will show stats for the whole codebase. Clicking one of the languages, or the "Other" group in the legend will cause the metric areas to display language-specific stats. Clicking on the "Overall" group will return the display to its default state.

Code Metrics, with a specific language selected

When focused on a particular language, each metric will show an "X / Y" value instead of the usual "Y". The "Y" indicates the metric's value for the entire codebase, and the "X" indicates the metric's value for the subset of the codebase which is written in the focused language.

Each metric area will also show a sparkline indicating that metric's trend over the past week. The sparklines will be colored blue for "good" changes, and red for "bad" changes.

List of Code Metrics

Analysis Frequency

The Analysis Frequency section of the Project Dashboard offers a summary of the project's most recent analyses.

At the top of the section, a text blurb describes when the latest analysis occurred, and how long it took. The rest of the section is broken down into three tabbed sections:

Activity Monitor

Activity Monitor overview

The Activity Monitor section of the Project Dashboard shows a "calendar heatmap" which represents the analysis activity on the project over the past year. The far left represents dates from a year ago, and the far right represents recent dates. Stepping down a column of the chart, each bubble represents a day of the week, with Sunday at the top, and Saturday at the bottom. Hovering the mouse cursor over any of the bubbles in the chart will cause a tooltip to display the bubble's respective date, and the number of analyses that were run that day.

The analysis activity is broken down by different types of analyses, e.g. Static and Dynamic. The legend items below the visualization represent these different analysis types (i.e. "Detection Methods"). Note that any given analysis may result in findings of different detection methods, depending on what files were uploaded. Clicking the legend items below the visualization will cause the visualization to focus on the legend item's respective detection method. This can cause the number of analyses shown in the tooltip to change. For example, three analyses may have been run on a given day, but only two of those analyses resulted in data from Dynamic Analysis. In this case, if the "Overall" legend item was selected, the tooltip would show "3 analyses on {date}", but when the "Dynamic Analysis" legend item was selected, the tooltip for that same bubble would show "2 analyses on {date}".

Activity Monitor; selected "overall" with hover tooltip

Activity Monitor with a different analysis type selected

The visualization uses brightness to indicate more or less analysis activity for each given day, as indicated by the legend above the visualization. A darker shade of color indicates more analyses, and a lighter/whiter shade of color indicates fewer analyses.

Created vs. Resolved

Created vs. Resolved overview

The Created vs. Resolved section of the Project Dashboard shows the dueling trend of new findings that are added to the project, findings that are resolved by the team, and the difference between the two.

This section is broken into two pieces: the graph and the table. Both represent the same data.

The graph is broken into two pieces: the "duel" and the "trend".

The "duel" section shows the number of created findings (in red) versus the number of resolved findings (in green). By default, the graph will show an accumulation of these numbers, starting from the date at the far left of the graph. The icon in the upper-right corner of the Created vs. Resolved section opens a menu which allows you to toggle between "accumulated" and "daily" counts in the "duel" section. "Daily" counts show the exact number of created and resolved findings for any given day. The colored area between the lines in the "duel" section of the graph indicates which line is higher. A green fill means more findings were resolved as of that day (if using "accumulated" counts), or resolved on that day (if using "daily" counts).

The "trend" section of the graph shows the difference between the red and green lines of the "duel" (in blue). The "duel" and the "trend" graphs have their own separate Y axes representing cumulative finding counts, and count difference, respectively. The two graphs share the same X axis, which represents the date.

When hovering over the graph with the mouse cursor, a vertical line will snap to the nearest date to the mouse, causing the legend above the graph to update its numbers to reflect that date. The corresponding row in the table to the right of the visualization will be highlighted, and the table will auto-scroll to that row if necessary. Similarly, hovering over the table will cause the same changes, depending on which row in the table is hovered.

By default, the Created vs. Resolved section shows the accumulated number of findings since the beginning of the summary time window. Click the graph icon in the upper-right corner of the section, and select "Show daily counts" to switch the graph to Daily mode. Daily mode shows the change in values on a day-to-day basis. Accumulated mode can be considered the Integral of Daily mode, and Daily mode can be considered the Derivative of Accumulated mode.

Created vs Resolved; Daily mode
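The relationship between "daily" and "accumulated" counts can be sketched in a few lines of Python (the counts below are made up for illustration; this is not Code Dx's implementation):

```python
from itertools import accumulate

# Hypothetical daily counts of created and resolved findings.
daily_created = [5, 0, 3, 2]
daily_resolved = [1, 4, 0, 6]

# "Accumulated" mode is the running sum (integral) of "daily" mode.
acc_created = list(accumulate(daily_created))    # [5, 5, 8, 10]
acc_resolved = list(accumulate(daily_resolved))  # [1, 5, 5, 11]

# The "trend" line is the difference between the two accumulated lines;
# a negative value means more findings have been resolved than created.
trend = [c - r for c, r in zip(acc_created, acc_resolved)]  # [4, 0, 3, -1]

# Recovering "daily" mode from "accumulated" mode (the derivative).
recovered = [b - a for a, b in zip([0] + acc_created, acc_created)]
assert recovered == daily_created
```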

On the top-right of the graph is a calendar icon, which can be clicked to bring up a menu for selecting a date range.

Created vs Resolved; Date range selector

Selecting one of these range values will automatically refresh the graph to the selected range.

For larger date ranges, each point in the graph can represent multiple dates by taking the sum of data samples involved.

Created vs Resolved; Year range selection with accumulated counts

Created vs Resolved; Year range selection with daily counts

Top Finding Types

Top Finding Types overview

The Top Finding Types section of the Project Dashboard shows the top 10 types of findings in the project, by number of open findings.

The visualization uses a Stream Graph to represent the relative volume of the top finding types (Y axis) over time (X axis). Each stacked area of a given color represents a specific type of finding, e.g. "SQL Injection". The height of each area represents the number of findings of that type on a given day.

The table to the left of the visualization acts as a legend, where each of the finding types is labelled, and has a colored fill-bar indicating the respective finding type's percentage share of the project.

Hovering the mouse cursor over an item in the table to the left of the visualization will highlight the corresponding area in the visualization. Similarly, hovering the mouse cursor over an area in the visualization will highlight the corresponding item in the table. Clicking an item will cause that item to become "focused". Click the item again to undo the focused state, or click another item to change to another focused state.

Top Finding Types with a focused selection

As with many of the other dashboard sections, hovering the mouse cursor over the visualization will cause a vertical line to snap to the date nearest to the mouse. When this happens, the table to the left of the visualization will update to reflect the percentages for that day.

Click the graph menu in the upper-right corner to access the "layout" options. By default, the graph uses "stream" layout. Switch to the "stack" layout to rearrange the items into a stack, such that the bottom of the stack aligns with the "0" on the Y axis. Note that with the "stream" layout, the Y axis's meaning differs from date to date, so no axis numbers will be displayed.

Top Finding Types, stacked layout

On the top-right of the graph is a calendar icon, which can be clicked to bring up a menu for selecting a date range.

Top Finding Types; Date range selector

Selecting one of these range values will automatically refresh the graph to the selected range.

For larger date ranges, each point in the graph can represent multiple dates by taking the average finding counts of data samples involved.

Top Finding Types; Year range selection with stream display

Top Finding Types; Year range selection with stacked display
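The averaging described above can be sketched as follows (the counts and bucket size are hypothetical; the actual bucketing logic Code Dx uses may differ):

```python
def downsample_avg(samples, bucket_size):
    """Collapse a series into buckets, averaging the samples in each bucket."""
    return [
        sum(bucket) / len(bucket)
        for bucket in (samples[i:i + bucket_size]
                       for i in range(0, len(samples), bucket_size))
    ]

# Daily counts for one finding type over six days, drawn as two points
# when a large date range forces three-day buckets.
counts = [9, 12, 6, 4, 4, 1]
downsample_avg(counts, 3)  # [9.0, 3.0]
```

For Created vs. Resolved, the same bucketing would take the sum of each bucket rather than the average, since those are event counts rather than levels.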

Hybrid Correlation

Hybrid Correlation, at its core, enables results from DAST tools to be correlated with results from SAST tools. This gives better visibility into how findings may actually be exploited in the wild, as well as helping to identify test cases for those findings. Code Dx currently has support for two forms of Hybrid Correlation: one that uses a runtime agent for the JVM, and one that can infer possible execution patterns.

Trace Based Hybrid Correlation

Trace based hybrid correlation tracks actual code execution as DAST tools are running, and uses that information to correlate based on whether a matching SAST result was executed while fulfilling a request made by a DAST tool. This correlation mode requires the use of a runtime JVM agent. Java and JSP are the only languages supported for tracing at this time.

Instrumentation Page

The Instrumentation Page is the hub where you manage instrumentation (i.e. tracing) of your applications during DAST scans. To reach this page, you must first enable hybrid analysis for your project, then run an analysis including a zip (or zips) containing binary and source files.

Note: Currently, Code Dx supports instrumentation of Java apps, so a binary zip should contain .class files (typically found inside .jar or .war files), and a source zip should contain .java files.

Once you have analyzed the appropriate files with Enable hybrid analysis turned on, an Instrumentation link should appear on your project, next to the New Analysis link. Click it to reach the Instrumentation Page.

Showing the link to the instrumentation page from a project

Application Inventory

The Application Inventory is a view representing the structure of your project. The information collected during analysis about the methods and classes you uploaded will be displayed here. Classes and methods are displayed in a hierarchy, starting with generalized groups like "Classes", "JARs", and "JSPs", then drilling into packages. Packages may contain subpackages, which will be shown if you expand the package by clicking the + button to its left. Expanded packages can be collapsed by clicking the - button. Each group/package will display a number in the method count column, indicating how many methods are present in that item. A % Coverage column will show "0%" for each group and package, but the number will grow once you start instrumenting your application.

Application Inventory for OWASP Benchmark, with no active selection

Code Treemap

Further detail (i.e. classes and methods) is made available by selecting items in the Application Inventory. All of the groups, packages, classes, and methods will be rendered in the Code Treemap area as a treemap.

Application Inventory for OWASP Benchmark, with a treemap for the selected items

Take care when selecting groups and packages with a very high method count, as rendering a large number of items can cause your browser to become sluggish or even unresponsive. The application shown in the screenshot above is OWASP Benchmark, excluding its third-party dependencies. There are about 12,000 methods shown - it took about 4 seconds on a relatively powerful computer for the initial render to complete.

Each of the blue items in the Code Treemap represents a method. Methods are organized into groups by their parent class, which are in turn organized into groups by their parent package, and so on. The gray items with titles represent packages or groups.

Hovering over items in the Code Treemap with your mouse will cause a tooltip to appear, showing the hierarchy of the hovered item.

The tooltip shown while hovering over an item in the Code Treemap

You can click on nodes in the Code Treemap to open a source view similar to the one found on the Finding Source Display. Note that because the treemap is generated primarily from binary files, this will include files from third-party libraries included in the zip file that you analyzed. Typically, you won't have the corresponding source files for these libraries; clicking a node that you don't have source for will do nothing.

Showing related source from a method that was clicked in the Code Treemap

Click the "X" at the upper-right corner of the source view to close it, to show the treemap again.

View Menu

You can control certain aspects of the Code Treemap using the View menu in the upper-right corner of the window.

The Instrumentation Page's View menu

The size of items in the treemap can be based on either the number of lines of code or number of bytecode instructions. Most of the time, this option won't matter, but it can help to distinguish "dense" methods which have many operations packed into few lines of code.

The level of detail shown in the treemap can be shifted to either methods or classes. Shifting to classes can help the browser render more of the Application Inventory at once, since it will no longer need to render each method. Classes are represented as green boxes in the Code Treemap, but otherwise have the same behavior as methods.

Agent List

The Agent List is your starting place for interacting with tracing/instrumentation of your applications. Each agent represents a single application, and has a unique ID. When you configure your application to use tracing, you provide the ID of an agent which you created in the Agent List. The agent will send that ID alongside the trace activity it reports to Code Dx, which is how Code Dx can tell which project the traced activity belongs to.

When you first visit the Instrumentation Page, there will be no agents in the Agent List. Most projects will have only 1 agent, but there's nothing preventing you from creating more.

To create your first agent, just click the New Tracer Agent button, enter a name, then press Enter. The new agent will appear, pushing the New Tracer Agent button down.

A newly-created tracer agent

You'll notice a message saying there are no active sessions.

Trace Sessions

A trace session serves two purposes: to signify that Code Dx is interested in trace activity from an application, and to delimit a round of tracing. For example, if you run a DAST tool like OWASP ZAP, you might start a trace session, run ZAP, then finish the trace session. Trace data sent to Code Dx during a trace session will be associated with that trace session. An agent can have only one session active at once; to start a new session while one is already active, you must first finish the active session.

A newly-started trace session

A newly-created session will have a scrolling background to indicate that it is "active". When you finish the session, the background will become plain and unanimated. The "eye with slash" button on the left side of the session toggles selection of that session for trace coverage purposes, which is elaborated below.

The text in the middle of the session describes the start and end time (or "in progress"), and the percentage of known code in your application that has been encountered during that session. The "coverage" property is explained in more detail below.

The "wi-fi" icon in the lower-right corner of the session is a connection state indicator. When you run your application with a tracer agent properly connected, the connection state indicator will change. Hold your mouse over the icon for a tooltip explaining the current state. The states are as follows:

The menu in the upper-right corner of the session is used to rename, delete, or finish the session. When you finish a session, you are given the option to immediately create the next session, or simply finish it, leaving no active session.

Dialog for finishing a trace session

Once a session is finished, it may be archived. An Archive option will appear in the menu in place of the Delete option, allowing you to manually archive the session. Sessions may also be automatically archived, if configured to do so.

Archival of a session is intended to reduce the amount of "useless" data stored within Code Dx. Tracing causes huge amounts of data to be generated and stored, as Code Dx collects trace activity (i.e. information about what methods/classes were encountered, and when). Since the primary purpose of this data is to associate DAST findings (i.e. weaknesses discovered in relation to certain URLs/http requests) with their corresponding code, once you upload the DAST findings associated with that session, Code Dx can discard any of the trace activity data that it collected for requests unrelated to those DAST findings.

Aggregate coverage data for a trace session is saved even once that session is archived, but when you upload a new binary zip, that coverage data becomes outdated, since it was associated with the old binary zip.

Archived sessions will be hidden by default, but can be revealed by selecting the Show archived sessions option in the agent's gear menu. An archived session will have a "box" icon where the agent connection indicator used to be.

Showing archived trace sessions


Aside from enabling Hybrid Correlation between DAST and SAST findings, tracing is also used to collect run-time code coverage statistics. All of the coverage information collected by Code Dx is based on binary methods, i.e. the compiled versions of methods/functions in your running application. As you run your application with tracing configured, it will report back to Code Dx with information about which methods were run. The set of encountered methods is compared to the set of "known" methods (those methods that Code Dx was able to discover by analyzing a binary zip that you uploaded and analyzed) to create a "percent coverage" stat.

Note: the set of "known" methods will change any time you upload and analyze a new binary zip file, causing the coverage percentages to reset to 0%. The reason is that if you upload a new version of your application, the coverage you had before is only applicable to the old version of your application.
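As a rough sketch of how such a percentage could be computed (the method names and the set-based approach here are illustrative, not Code Dx's actual implementation):

```python
# Hypothetical sets of method signatures. "Known" methods come from the
# analyzed binary zip; "encountered" methods are reported by the tracer agent.
known = {"com.example.A.run()", "com.example.A.stop()",
         "com.example.B.handle()", "com.example.B.render()"}
encountered = {"com.example.A.run()", "com.example.B.handle()"}

def percent_coverage(known, encountered):
    # Only methods present in the current binary upload count toward coverage.
    return 100.0 * len(known & encountered) / len(known) if known else 0.0

percent_coverage(known, encountered)  # 50.0

# Uploading a new binary zip replaces the "known" set, so previously
# encountered methods no longer apply and coverage resets to 0%.
new_known = {"com.example.A.run(int)", "com.example.C.init()"}
percent_coverage(new_known, set())  # 0.0
```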

Coverage percentages are available on the Instrumentation Page in several forms:

Coverage: Total Code

The "Total code coverage" blurb in the page header indicates the total aggregate percentage of code covered for all tracing of the latest uploaded version of your project. In this case, "code" refers to all binary methods from the zip file you uploaded to Code Dx. If you included any of your third-party libraries in that zip, their methods will be included in this count. Note that a relatively simple app may have a greatly inflated number of "total methods" due to its third-party dependencies.

Coverage: Custom Code

The "Custom code coverage" blurb attempts to exclude third-party methods from the "Total code coverage" number. Since there is no single way to distinguish "custom" code from "third-party" code once it is compiled into a zip and uploaded to Code Dx, Code Dx makes an educated guess; any methods for which you uploaded the corresponding source code will be considered "custom code". Typically, this number will be larger than the number shown next to "Total code coverage", and will never be smaller. In the case of the screenshot below, no third-party libraries were included in the uploaded zip file, and source was included for every method, so the numbers of "total" and "custom" methods are the same.

Coverage: By Session

Code Dx will track coverage by trace session as well. This can be helpful when using DAST tools to see how much of your application they discover. If you start a session before running a DAST scan, then finish the session, the coverage for that session will perfectly indicate the code that the scan caused to run. While the session is running, its coverage percentage will not live-update (for performance reasons), but you can click the "refresh" icon next to the percentage to update it on demand. When you finish the session, its coverage percentage will automatically update. A finished session's coverage percentage will not change (until you upload a new version of your application, which will cause all coverage percentages to reset to 0%).

Coverage: Application Inventory

The "% Coverage" column of the application inventory will show the total code coverage for each group/package in the inventory. By default, this means it will show all coverage, regardless of which trace session that coverage occurred during. If you click the "eye" icon for a trace session, that trace session becomes highlighted. If any trace sessions are highlighted, the coverage percentages shown in the application inventory will be related to those specific sessions. In this way, you can drill down into the details of your application, e.g. to see how much of a specific package was covered by one of your DAST scans.

Coverage: Treemap

Once you select some items in the Application Inventory to render a treemap, you can select trace sessions to cause their coverage to be highlighted in that treemap. In the image below, the "Quick Scan" session is highlighted, causing several of the methods in the treemap to be darkened. The darkened methods are ones that were encountered by tracing during the highlighted session.

Coverage: Summary

The following image points out the locations of the different coverage indicators described above. It depicts the "OWASP Benchmark" application, after scanning a small handful of its several hundred pages.

Overview of the various coverage indicators

Trace Execution Page

The purpose of the Trace Execution Page is to show a summarized view of all code that was executed during tracing, for a single HTTP request/response, related to a specific finding. You can reach this page by following the Trace link from the HTTP Activity section of the Finding Details Page.

The data shown on this page is known as a "trace execution stack tree", or "stack tree" for short.

A stack tree is a tree structure where the "root" is the method that starts handling the HTTP request, and the child nodes are any methods called by that method. For any given method call, e.g. "method A calls method B", there will be a node in the stack tree for method A, with a child node for method B. If, in this example, method B proceeded to call method A, a new node representing method A would be added as a child node to method B. If method A calls method B many times, method B will still only be present once in method A's list of child nodes.

Another way to describe a stack tree is to superimpose the "stack trace" of the program at all steps, removing duplicates.
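As an illustration, a stack tree like this could be built by superimposing recorded call stacks and collapsing duplicate children (a sketch with made-up method names, not Code Dx's actual data structure):

```python
def build_stack_tree(stacks):
    """Superimpose stack traces into one tree, deduplicating child nodes.

    Each stack is a list of method names from the request-handling root
    down to the deepest call. Repeated calls to the same child collapse
    into a single node.
    """
    tree = {}
    for stack in stacks:
        node = tree
        for method in stack:
            node = node.setdefault(method, {})
    return tree

# Hypothetical stacks captured while handling one HTTP request:
# doGet calls query twice and render once; query calls sanitize.
stacks = [
    ["doGet", "query", "sanitize"],
    ["doGet", "query", "sanitize"],
    ["doGet", "render"],
]
build_stack_tree(stacks)
# {'doGet': {'query': {'sanitize': {}}, 'render': {}}}
```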

The Trace Execution Page shows two views of the trace execution stack tree: a graphical visualization and a textual tree-list.

Overview of the trace execution page

Trace Execution Visualization

The visualization in the top half of the Trace Execution Page shows the "root" node at the far left, with its child nodes branching off to the right. Each node is colored based on its presence in (or relevance to) findings.

Many nodes will also show a colored bar (or bars) across their left edges. These bars indicate that their respective node is "on the path" to a node of that color. I.e. if a node has an orange bar on its side, then one of its children, or one of its children's children (and so on) will be orange.

You can find this explanation by hovering your mouse over the "info" icon shown at the upper-right corner of the visualization.

Often, the stack tree will have more levels than can be shown on the screen at once. (In fact, that's the main purpose of the colored bars on the side). You can navigate between levels of the tree by clicking on nodes in the visualization. Clicking a node will cause that node to become the new "focus", rearranging the visualization to behave as if that node was the root of the tree. Clicking a node will also auto-expand the list view and highlight the corresponding item, and open a source viewer for that node.

Trace Execution List

The view shown in the bottom-left quarter of the Trace Execution Page is a tree view of the stack content. Instead of the rendered boxes shown by the visualization, each node in this view is a textual representation of the corresponding method signature. This view should feel relatively familiar to developers, as it more closely resembles a stack trace than the visualization does. Each method shown in this view will have a "file" icon next to it. The "blank file" icons mean there is no corresponding source available in Code Dx - this is typical of third-party library methods and framework methods. The "text file" icons (dark background with some lines in them) mean there is source available. Clicking these icons will open the source viewer on the right, to show the source of the corresponding method.

Execution Source View

The source view shown in the bottom-right quarter of the Trace Execution Page will automatically open to the source of the methods you click in either the visualization or the tree list. It behaves similarly to the one on the Finding Details Page, showing related findings in the line number gutter. It also puts a green highlight on any line of code that was executed during the specific request/response cycle that the page is focused on. This finer-grained level of code coverage can give a better insight into what happened in more complex methods, where "this method was called" may not suffice.

Java Tracer Agent

Trace Based Hybrid Correlation requires the use of a runtime tracer agent while DAST scans are being run. This agent traces code execution and collects data to determine what sections of code have been encountered. It also injects a unique identifier into every HTTP response sent by servlets in the JVM that the agent is installed in. This enables Code Dx to determine what source locations were hit for each DAST result and make correlation decisions accordingly.

The Java agent currently supports Apache Tomcat versions 6, 7, 8, and 9, and Eclipse Jetty versions 7, 8, and 9; running on Java 8 or Java 9. Windows, Linux, and macOS are supported.

For best results, you should be working with a version of the application that hasn't had any sort of debug information stripped (specifically, file names and line numbers).

Performance Impact

Due to the nature of keeping track of all code executed, the Code Dx tracer agent may cause a significant reduction in performance. The impact will be much more noticeable when there is an active trace session. It is not recommended to use the tracer agent on production systems, and it may be advisable to run with the tracer agent installed only when performing DAST scans you intend to use with hybrid correlation.


The configuration for the agent is provided as part of the -javaagent argument to Java. At a minimum, the configuration must give an identifier to the agent and tell it how to contact your installation of Code Dx.

These options are provided as key=value pairs, separated by a semicolon. For example, a configuration string of id=e40fe9aa-4b86-48d3-ac9f-c0a3d1ef025b;codedx=https://codedx.local/codedx/ would configure the agent with an identifier of "e40fe9aa-4b86-48d3-ac9f-c0a3d1ef025b" and tell it to report to the Code Dx installation at https://codedx.local/codedx/.
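As an illustration, the key=value pairs could be parsed like this (a sketch only; the agent's actual parsing code is not shown here):

```python
def parse_agent_config(config):
    """Split a 'key=value;key=value' agent configuration string."""
    options = {}
    for pair in config.split(";"):
        # partition splits at the first "=", so values may themselves
        # contain "=" characters without breaking the parse.
        key, _, value = pair.partition("=")
        options[key] = value
    return options

cfg = "id=e40fe9aa-4b86-48d3-ac9f-c0a3d1ef025b;codedx=https://codedx.local/codedx/"
parse_agent_config(cfg)
# {'id': 'e40fe9aa-4b86-48d3-ac9f-c0a3d1ef025b',
#  'codedx': 'https://codedx.local/codedx/'}
```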

Configuration Options

There are a handful of options that can be provided for the agent, some of which are required:


Installing the agent requires placing the Code Dx Agent JAR file in an accessible location on the web server and adding a -javaagent argument to the Java process used to run the servlet container.

The agent can be downloaded by clicking "Download Java Tracer Agent" on the download menu.

Download Java Tracer Agent

codedx-agent.jar should be placed in a location readable by the user that the servlet container is executed as.

The argument to add to the Java process takes the form of -javaagent:/path/to/codedx-agent.jar=configuration (e.g., -javaagent:/opt/codedx-agent.jar=id=e40fe9aa-4b86-48d3-ac9f-c0a3d1ef025b;codedx=https://codedx.local/codedx/). This argument should be placed along with any other options provided to Java; see the Java documentation for more information. You may need to wrap this argument in double quotes or escape special characters (such as spaces or semi-colons) if they will be passed through a command line interpreter.

SSL Configuration

If you're accessing your Code Dx installation over HTTPS with a self-signed certificate, that certificate must be added to the keystore of the Java installation that the agent runs on in order for the agent to successfully connect. The need for this would be characterized by a "PKIX path building failed: unable to find valid certification path to requested target" error at agent startup.

You may do so with the keytool utility provided with Java:

  1. Locate the cacerts file for the Java installation that the agent will run on. This is found at [Java-installation-folder]/lib/security/cacerts.
  2. Open a command prompt or terminal. It may be necessary to run the command prompt as an administrator (on Windows) or to execute the following command as root (on Linux or macOS) to allow the cacerts file to be modified.
  3. Run keytool -printcert -rfc -sslserver [your Code Dx server] | keytool -import -keystore "/path/to/lib/security/cacerts" -storepass changeit -alias "Code Dx" -noprompt.
  4. The agent should now be able to connect to Code Dx successfully.

If you receive a "java.security.cert.CertificateException: No name matching [hostname] found" error, your SSL certificate may be for the wrong host name (e.g., example.com). In this case, you will need to generate a proper certificate for the Code Dx server.

Apache Tomcat

To configure Tomcat, make the appropriate changes, depending on your platform, and then restart Tomcat. For further details on configuring Tomcat, please refer to the Tomcat Documentation.


To configure the agent for Tomcat running in a *nix environment (such as Linux or macOS), modify (or create, if it doesn't exist) <tomcat_folder>/bin/setenv.sh to add the Java agent argument to CATALINA_OPTS. For example, add the following line:

export CATALINA_OPTS="$CATALINA_OPTS \"-javaagent:/path/to/codedx-agent.jar=id=...;codedx=...\""

If you are not running Tomcat as a service on Windows, modify (or create, if it doesn't exist) <tomcat_installation_folder>/bin/setenv.bat to add the Java agent argument to CATALINA_OPTS. For example, add the following line:

set CATALINA_OPTS=%CATALINA_OPTS% "-javaagent:/path/to/codedx-agent.jar=id=...;codedx=..."
Windows Service

The recommended way to configure Tomcat if it is installed as a Windows Service is to use the Tomcat Service Manager. This can be found at Apache Tomcat › Configure Tomcat on the start menu. Go to the Java tab, and in Java Options, add the Java agent argument on its own line.

Windows Tomcat Service Configuration

Eclipse Jetty

Via Command Line

You may add the Java agent argument to the command line when executing Jetty. For example:

java "-javaagent:/path/to/codedx-agent.jar=id=...;codedx=..." -jar start.jar
Via Configuration

Jetty supports the use of configuration files to control startup as well. You may edit start.ini or add a new ini file in start.d/ (e.g., start.d/codedx.ini), depending on how you prefer to organize your configuration.

You will utilize the --exec option to specify the Java agent as a JVM argument. For example, your ini file may contain the following lines:

--exec
-javaagent:/path/to/codedx-agent.jar=id=...;codedx=...
After making this change, restart Jetty. Please see the Jetty documentation for more information on utilizing and organizing the start configuration files.

Known Limitations

Complex Constructors

If there is any branching logic before the call to the superclass constructor (<init>), the runtime agent will be unable to instrument that constructor. This means that hybrid correlation will be unavailable for any findings occurring in that constructor.

The standard javac Java compiler typically will not allow such constructors to be created (the first statement of the constructor in Java must be the super(...) call). It appears that such constructors are primarily generated by the compilers for other languages that run on the JVM, such as Groovy.

If a constructor is detected that is too complex to instrument, a DEBUG log will be emitted stating the class and constructor signature(s) that were not instrumented. TRACE logs will also be emitted with exact details of what was encountered that is causing problems. Enabling logging at one of those levels will allow you to see these details.

Large Methods

Large methods may fail to instrument if they become larger than 64KB. If this occurs, an ERROR log will be emitted and the entire class will not be instrumented. This means that hybrid correlation will be unavailable for any findings occurring in that class.

Data Flow

Scans from certain static analysis tools provide the code path traversed for a given result. Trace Based Correlation will compare data flow, if it is available, with the actual runtime execution to help cut down on false positive correlations.

Agentless Correlation

Code Dx is also capable of performing Hybrid Correlation without the use of execution tracing. Unlike tracing, which uses a dynamic analysis approach, agentless correlation uses its own static analysis approach on source code and binaries to correlate SAST and DAST results. It can expand upon correlations made through tracing, and create correlations in the absence of any tracing session. No configuration steps are required to make use of agentless correlation. The only requirement is uploading source code at some point in an analysis.

Correlation Performance Impact

Agentless Correlation greatly expands the set of possibilities that must be considered to create a hybrid finding. Since exact code paths aren't provided, many inferred paths are created and evaluated during correlation. This can greatly impact the speed of correlation during analysis.


Agentless Correlation is applied automatically to any project that has Hybrid Correlation enabled and has uploaded source code.

Source Code

Agentless Correlation relies on the availability of source code to detect endpoints and their locations within a codebase. From this alone, DAST and SAST results that occur at an endpoint handling function can be correlated.

Only the source code that declares and implements endpoints is required. Source code for dependencies and utility libraries is not necessary, unless they declare and implement endpoints.

Endpoint detection is supported for a specific set of languages and web frameworks. These are:

Effectiveness of endpoint detection can vary depending on the use of plugins and unconventional endpoint routing methods within the source code.


Binaries for your application can also be uploaded to improve Agentless Correlation. If binaries are available, a call graph can be generated and explored to find code paths from a detected endpoint to SAST result locations. All relevant binaries for your application - the compiled application and its dependencies - should be uploaded with debug symbols for the best results.

Hybrid Correlation through call graph analysis is supported for binaries on the following runtime environments:

Known Limitations

Agentless Correlation explores a set of possible execution paths from an endpoint to find correlations with a code location. These explored paths may be inactive or incomplete due to undetected endpoints, inheritance and strategy patterns, anonymous functions, or other features for a given language and web framework.

The tracing approach finds correlations through real examples of execution paths to a given code location. Agentless Correlation is more convenient to use than tracing-based correlation at the cost of accuracy and capability to correlate.

Rule Sets

The Rule Set Page is accessed via the Rule Set Associations section of a project's Analysis Configuration dialog. When you access the Rule Set page, you will be able to view and sometimes edit a set of rules that can be used to determine how different types of findings will correlate with each other.

Each Rule Set has Rules, and each Rule has Criteria and identifying information.

Rule Sets are, as the name implies, a set of Rules. Each Rule acts as a strategy for combining results from different tools and providing standard information for the finding. Within a Rule, a set of Criteria can be defined, forming the underlying logic for the Rule. The identifying information for a Rule can optionally include a Severity, CWE, and Description which will be shared by Findings created from that rule. For example, a general "SQL Injection" rule may be created to capture specific results from multiple tools and provide a shared description, making it easier to locate and recognize standard vulnerabilities.

When result data is uploaded to a Code Dx Project, as long as that project's Prevent Correlation setting is not enabled, its associated Rule Set will be responsible for determining which types of results represent the same types of problems. In this case, Rules will be applied during ingestion, when findings are created from tool results. If there are multiple tool results belonging to the same rule and they occur at the same location, they will all be associated with the same finding. Whether a tool result "belongs" to a Rule is determined by that rule's Criteria.

After Changing Rule Sets

Since a project's configured Rule Set determines the manner in which results are correlated, changing that configuration necessitates an update of the correlation. This is needed whenever the configured Rule Set for a project is modified in any way, or the Analysis Configuration is changed to use a different Rule Set. When this happens, the Findings page will display a notification prompting users to trigger re-correlation.

Trigger Re-Correlation

Rule Identifying Information

The identifying information for a rule includes severity, CWE, and a description. These fields are all optional; when provided, they will alter the corresponding values for findings associated with that rule.

Each rule's identifying information is collapsed by default. To expand it, click the button to the right of its name.

Expand/Collapse Rule identifying information

If you have the admin role, you can edit an existing rule's identifying information (aside from the read-only Code Dx Rule Set).

To rename a rule, click on its name to open a renaming input. Enter a new name then press Enter.

Rename Rule

To change the severity, CWE, or description for a rule, expand the identifying information section, then click the pencil icon next to the corresponding header. This will activate an inline form allowing you to make changes to the value. Once you've set the desired value, click the Save button to apply the change. Click Cancel to discard your changes without saving.

Edit Rule Details

You can add criteria to editable rules via the forms at the bottom of each rule's criteria list.

Rule Criteria

A rule's criteria control which tool results will be matched with a rule. Note that each criterion can only appear once in a Rule Set. If you attempt to add a criterion that already exists in a different rule, you will be given the option to move the criterion out of that rule, or just cancel. Users with the admin role can edit the criteria for each rule.

Criteria can be created for rules using the add criterion buttons for that rule. These buttons are located at the bottom of the criteria list.

Add Criterion Buttons

Criteria can be deleted from rules using the delete button for that criterion. The button is hidden until you hover over the criterion in a rule's criteria list.

Delete Criterion Button

Tool Criteria

The Add Tool Criterion form allows you to create criteria that operate on a tool result's type. An individual tool criterion specifies a tool, category, and code. It will match tool results whose raw values match the values specified by the criterion.

Example Tool Criterion

Note: The exact values for the tool criterion fields vary depending on what is reported by the tool. One way to discover these values is to look at the Finding Details page for existing findings in Code Dx. The Tool, Tool Category, and Tool Code are displayed in the Tool Details for each associated tool result.

Example Tool Details

The category and code fields are both optional. Omitting both will create a criterion that matches all results from the specified tool. Omitting just the code will create a criterion that matches all results from the specified tool marked as part of the specified category. Some tools do not specify a tool category; in these cases, the tool category field will need to be left blank. Note: leaving the tool category field blank does not act as a wildcard, so if the tool specifies categories, they must be included in all rule criteria.

CWE Criteria

The Add CWE Criterion form allows you to create criteria that operate on a tool result's CWE. By specifying a CWE ID value, a CWE criterion will match tool results with that CWE value.


Note: this section is only applicable to Code Dx Enterprise users with the InfraSec add-on.

When Code Dx ingests Network Security results, the location of those results is typically expressed in terms of a "host", with the level of detail varying from tool to tool. The Hosts page is Code Dx's location for interacting with host data directly, outside the context of Findings or Projects. Users will be able to access the Hosts page, but the Associated Projects column will only populate for projects they belong to. Only Code Dx users with admin privileges will be able to create, edit, or delete host information. You can access the Hosts page via the link in the top navigation area.

Navigate to the Host Page

Host Scopes

What is a Host Scope?

A Host Scope is effectively just a set of projects that share host information with each other. A Host Scope can be used to model a network where each project in a Host Scope contains vulnerability information for a vulnerable application that is housed on a potentially vulnerable host. Alternatively, one can simply use Host Scopes to isolate host information to particular sets of projects so that overlapping pieces of host information between Host Scopes don't interact with each other during Host Normalization and Finding Correlation. Each Host Scope can have multiple projects attached to it, but each project can only be linked to one Host Scope. See Host Scope Associations for more information on how to set up projects with Host Scopes.

Managing Host Scopes

The Manage Host Scopes button is in the top right corner of the Host Scopes page.

Clicking on this button will bring down a menu that will allow you to manage each Host Scope that you have created.

By default, the Global Host Scope will be the only one available. However, you can create new Host Scopes by clicking New Scope.

Clicking on New Scope will replace the button with a text field, enabling you to name the Host Scope that you are creating.

Clicking OK will create your Host Scope and it will appear alongside the existing Host Scopes in the Host Scope Management menu. You may also click Cancel if you do not wish to create a Host Scope at that time.

On the right-hand side of the Host Scopes management menu, you will see the Import, Export, and Delete buttons.

Clicking Import will allow you to import a custom set of host information into the Host Scope for which you clicked Import. Code Dx currently only supports importing hosts defined in a .json file. Hosts are expected to be provided as JSON Objects mapping field-types to arrays of values, of the form: field-type: [values...]
Code Dx currently supports the following field-types: Hostname, FQDN, NetBIOS Name, IP Address, MAC Address, Operating System, Ports. Every value for a field-type is simply a string, except for Ports, which is a special case expecting each value to be another JSON Object with the following structure:

"Ports": {
    "Port": <port_number>,
    "Protocol": <port_protocol>,
    "State": <port_state>
}
Note that the Import button for non-selected Host Scopes will be disabled by default as you can only import hosts into the selected Host Scope.

Clicking Export will provide you a .json file containing all the normalized hosts in the Host Scope for which you clicked Export. The structure of the .json file matches that of the structure required for importing hosts into a Host Scope.

Clicking on the trash bin icon will bring up a window allowing you to confirm that you would like to delete the relevant Host Scope. Deleting a Host Scope will delete all normalized host information belonging to that Host Scope. To delete a Host Scope, you will first need to delete any projects associated with that Host Scope.

You can confirm that you would like to delete the relevant Host Scope by clicking OK. Clicking Cancel will bring you back to the Host Scope management menu and will not delete the relevant Host Scope.

Clicking on the radio button next to the name of your Host Scope will allow you to focus the Hosts page to the host information present in that Host Scope. Your selection is saved, so if you were to navigate to another page and back, the Hosts page will still be focused on the information from the Host Scope that you had selected prior to leaving the page.

Hosts Table

Selecting a Host Scope in the Host Scope management menu will populate the Hosts Table with normalized host information from the selected Host Scope. Normalized hosts are sets of hosts reported by different tools that are correlated to each other. Thus the information that the table is populated with is an aggregation of the host information from various tools that are referencing the same host.

Viewing Host Information

Each row in the table contains all of the host information that Code Dx is aware of for a particular host, and each column in the table is a set of values appropriate for a particular field of interest. Currently, Code Dx only displays FQDN, NetBIOS Name, IP Address, MAC Address, Operating System, Open Ports, Environment, and Associated Projects.

Clicking View in the top right corner of the page will bring down a menu consisting of each column name with a corresponding switch.

If a switch in this menu is disabled, that column will not be displayed in the table. For example, below we disabled the switch for the "Associated Projects" column.

Consequently, we can no longer see this column in the table.

Manually Adding and Editing Hosts

Code Dx allows you to manually add new hosts to the selected Host Scope. The Create Host button is to the top right of the Hosts Table.

Clicking on it will bring up an interactive table where you can define the host that you are adding.

Clicking Add Value will produce a text field where you can enter a new value for the column you're editing. Note that a default value will be shown if the text field is empty and serves as an example of a valid value.

There is some validation applied to the IP Address, MAC Address, and Open Ports fields. Typing an invalid value will cause the text field to be highlighted in red. Invalid values are ignored when you click OK to create the host, so they will not appear in the Hosts Table even if the host itself is created successfully.
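Code Dx's exact validation rules aren't documented here, but the checks on these fields are of a familiar kind. The Python sketch below shows the sort of validation that could apply to each field; the specific rules (IPv4/IPv6 parsing, colon-separated MAC format, port range 1-65535) are assumptions, not Code Dx's implementation.

```python
import ipaddress
import re

# Colon-separated MAC address, e.g. "00:1A:2B:3C:4D:5E" (assumed format)
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def is_valid_ip(value):
    """True if value parses as an IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

def is_valid_mac(value):
    """True if value is a colon-separated MAC address."""
    return bool(MAC_RE.match(value))

def is_valid_port(value):
    """True if value is an integer in the 1-65535 port range."""
    try:
        return 0 < int(value) <= 65535
    except (TypeError, ValueError):
        return False
```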

If you do not wish to include a value on a host, you may click the trash bin located at the far right of the cell to delete the value from consideration.

You can include any number of values for any particular column in the editor. Note that Associated Projects is not present in the editor. This is because Associated Projects is a derived field, computed by determining which projects a host appears in. Also note that any columns that are missing from the Hosts Table as a consequence of disabling them in the "View" menu will still be shown while editing hosts.

Code Dx also allows you to manually edit existing normalized hosts in the Hosts Table. Clicking the button in the rightmost column of the Hosts Table opens a drop-down menu. The first element in the drop-down menu is "Edit Host".

Clicking on "Edit Host" will bring up the same interactive table that appears when you're manually adding a new host, except now you will see it in the table, as opposed to above it.

Code Dx also allows you to delete existing normalized hosts. In the same drop-down menu in which "Edit Host" appears, you will also see "Delete Host".

Clicking on it will prompt you with a message detailing the consequences of deleting a host.

You may confirm the delete by clicking the "Delete" button. Clicking Cancel will bring you back to the Hosts Table without deleting the host. Note that only the normalized host is deleted when clicking Delete. No host information acquired from results during an analysis will be lost.

When creating or editing a host, you may end up introducing values for field-types that Code Dx considers "identifying". Identifying fields for Hosts are the FQDN, Hostname, NetBIOS Name, IP Address, and MAC Address fields. If you introduce a value for an "identifying" field-type and it already exists on a host in the current Host Scope, clicking on "OK" will cause the editor to expand to include two new non-interactive tables.

The first new table shows the host you tried to add, and the second shows all existing hosts that share values in "identifying" field-types with it. Clicking Merge will join the host you tried to add together with those hosts. Clicking Cancel will bring you back to the editor without joining the host with any other hosts. Note that you will be unable to edit the host you're trying to add until you click either Merge or Cancel.
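The merge behavior described above is essentially deduplication keyed on the identifying fields. The sketch below illustrates the idea in Python; it is a conceptual model under stated assumptions (hosts as dicts of field-name to value lists, merge as a union of values), not Code Dx's actual logic.

```python
# Identifying fields, per the documentation above.
IDENTIFYING_FIELDS = {"FQDN", "Hostname", "NetBIOS Name", "IP Address", "MAC Address"}

def shares_identifying_value(host_a, host_b):
    """True if the two hosts share any value in an identifying field."""
    for field in IDENTIFYING_FIELDS:
        if set(host_a.get(field, [])) & set(host_b.get(field, [])):
            return True
    return False

def merge_hosts(host_a, host_b):
    """Combine two hosts into one normalized host by unioning field values."""
    merged = {}
    for field in set(host_a) | set(host_b):
        merged[field] = sorted(set(host_a.get(field, [])) | set(host_b.get(field, [])))
    return merged
```

For example, a newly added host with IP Address "10.0.0.15" would be flagged as sharing an identifying value with an existing host carrying the same address, and merging the two would produce a single host with the union of their field values.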

Filtering Hosts

Code Dx also allows you to filter the hosts that appear in the Hosts Table. Above the table, you should see a text field.

This is the "Generic" filter. Any value entered here will restrict the table to hosts where at least one value, in any field-type, matches the filter text.

Clicking advanced, which is next to the "Generic" Filter, will bring up the Advanced Filters sidebar.

These filters are each specific to a single host field-type; providing a value for one will display only hosts that have that value in the corresponding column. Note that each filter accepts only one value, so attempting to filter by multiple values for a particular field-type (or in the "Generic" filter) will not work.

Visual Log

Overview of the visual log page, with example log data

The Visual Log Page provides a helpful UI for certain events and errors that administrators might be interested in, for auditing purposes. It is important to note that the log file generated by a running Code Dx installation is not the same as the visual log. Most notably, arbitrary exceptions that appear in the log file will typically not appear on the Visual Log Page. This document will provide details about what does appear in the visual log.

To reach the Visual Log Page, Admin users and Project managers can use the gear menu to find a link directly to the page.

Navigation to the visual log page

Visual Log Messages

An entry in the visual log contains several useful parts:

An expanded visual log entry with labels for its various parts

As noted in reference to the Timestamp, the visual log is ordered in reverse-chronological order, so that the newest events will be at the top. As you scroll through the log, you'll eventually encounter a Load More button which will load the next chunk of the log. You can keep scrolling and clicking Load More until you reach the end of the log (the earliest event). If new events happen while you are on the Visual Log Page, rather than interrupting your view by immediately appearing and pushing the UI around, a notification will appear at the top of the page, prompting you to click it to reload the log from the beginning. You are free to ignore the prompt; once you click it, the log will reload as if you refreshed the page (any Load More progress will be reset).

Dismissed Entries

Clicking the Dismiss button on a visual log entry will "dismiss" it, sending it into a semi-ignored state. At first, the dismissed entry will switch to an alternate appearance, but won't be immediately removed from your view.

A dismissed visual log entry

Once a reload is triggered (e.g. by refreshing the page, clicking the "click to reload" prompt, changing filter states, or changing view menu settings), dismissed entries will be hidden from view. Toggling the Include dismissed log entries setting in the View menu makes them visible again, displayed with the checkered background style.

Visual Log Page View menu

Visual Log Filter

The Visual Log Page offers a filtering capability that allows users to easily select a subset of the log.

The Visual Log Filter, blank

By default, the Log Filter section will contain a single blank filter block. Each filter block contains three optional criteria:

For example, you could set a filter in order to only view failed-login events where someone attempted to log in as the "admin" user.

Visual Log Filter, example

Clicking the Or... button below the filter will add an extra filter block, allowing you to set alternate criteria. In the image below, the filter will select failed-login events related to the "admin" user, OR any event related to "Project A" and the "John Doe" user.

Visual Log Filter, example 2

Although all log Types will be available for selection in the filter, those types may not always be present in the log. For example, non-admin users will only be allowed to view log events that are directly related to a project they manage, and so they inherently won't be able to see e.g. failed-login events because those events are never associated with projects. Also, the successful-login event is not recorded by default. See the Visual Log Configuration section in the install guide to enable recording of that event type.