Once your Dataset has been created, you will want to take action to extract valuable insights into model performance and accuracy.

With the Dataset Detail page, you can:

View your Dataset Assets

You can view and take action on all assets in your Dataset by viewing the Assets tab on your Dataset Detail page.

Using the checkbox next to each asset, you can perform various bulk actions, such as:

  • Bulk Delete: Deletes the selected assets

  • Bulk Annotate: Opens the Annotate tool with your selected assets

You can also hover over each asset row to perform individual actions on that item.

Annotate and Label Dataset Assets

Each asset in your Dataset must include a Ground Truth file that is used as a reference for what is truly accurate when evaluating your model's performance. The Benchmark application contains a powerful Annotation and Labeling tool that allows you to quickly and effortlessly label your own data and save that data in the VTN standard for evaluation.

Inside the Dataset Detail, you can access this tool by clicking the Annotate and Label icon at the top of your page or by selecting individual assets from your Assets tab.

Learn More about the Annotate and Label Tool

Export Dataset and PDF Report

If you would like to export your entire Dataset and its annotations as a CSV, select the Export option at the top of your page and choose Export Dataset CSV.

If you have conducted an Experiment on your Dataset, you can export the aggregated results of all your Experiments into a PDF report by clicking the Export PDF Report under the Export option.
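Once downloaded, the exported CSV can be processed with any standard tooling. Here is a minimal sketch of parsing such a file with Python's standard library; the column names (`asset_id`, `asset_name`, `ground_truth`) are illustrative assumptions, as the actual columns depend on your Dataset's data type and annotations.

```python
import csv
from io import StringIO

# Hypothetical sample of an exported Dataset CSV. Real exports will
# have columns determined by your Dataset's type and annotations.
SAMPLE_CSV = """asset_id,asset_name,ground_truth
a1,clip_001.mp4,hello world
a2,clip_002.mp4,good morning
"""

def load_dataset_csv(text):
    """Parse an exported Dataset CSV into a list of row dicts."""
    return list(csv.DictReader(StringIO(text)))

rows = load_dataset_csv(SAMPLE_CSV)
```

When reading a downloaded file instead of a string, open it with `newline=""` and pass the file object directly to `csv.DictReader`.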

Editing Dataset

Select the Pencil option at the top of your Dataset to open the Dataset Edit popup. This popup will allow you to change various information on your dataset, including:

  • Dataset Name and Description

  • Tags

  • Data Type, Class, and Capability

  • Dataset Assets and Features

Delete Dataset

Use the Trashcan option in your Dataset option bar if you would like to permanently delete this Dataset. Please note that this action cannot be undone, and all information will be lost.


Experiments allow you to select from available Cognitive Engines in your account and compare and evaluate metrics across your dataset. Each Experiment you create for this Dataset will be available in the Experiments Tab.

Clicking an Experiment will open the Experiment Detail page.

View Experiment Results

Create an Experiment


You can view aggregated and historical metrics on model performance across capabilities by using the Metrics page within the Dataset Detail. This page displays a summary of all Experiments that have been run on this Dataset and allows you to easily view and understand model performance over time.

Accuracy or Speed

You can control the metrics available on each chart by using the toggle at the top of the Metrics page. When switched, each chart will show you aggregated metrics based on model Accuracy or Speed.

Score Cards

Score Cards at the top of the page offer a quick look at the average, highest-, and lowest-performing models across both Accuracy and Speed.
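The Score Card summary boils down to a simple reduction over per-model scores. The sketch below illustrates that idea with hypothetical model names and scores; it is not the Benchmark application's actual implementation.

```python
from statistics import mean

# Hypothetical aggregated scores per model (e.g. accuracy).
scores = {"engine-a": 0.89, "engine-b": 0.78, "engine-c": 0.93}

def score_cards(scores):
    """Summarize scores the way the Score Cards do:
    average score, plus the highest- and lowest-performing model."""
    return {
        "average": mean(scores.values()),
        "highest": max(scores, key=scores.get),
        "lowest": min(scores, key=scores.get),
    }

cards = score_cards(scores)
```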

Overall Performance

This chart displays an aggregated accuracy value per model for all experiments run on this dataset.

Note that additional chart metrics may be available in the dropdown, depending on the capability of your dataset.
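Conceptually, the Overall Performance chart averages each model's accuracy across every experiment run on the dataset. A minimal sketch of that aggregation, using hypothetical experiment results rather than real Benchmark data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-experiment accuracy results for two models.
results = [
    {"model": "engine-a", "accuracy": 0.91},
    {"model": "engine-a", "accuracy": 0.87},
    {"model": "engine-b", "accuracy": 0.78},
]

def aggregate_accuracy(results):
    """Return the mean accuracy per model across all experiments."""
    by_model = defaultdict(list)
    for r in results:
        by_model[r["model"]].append(r["accuracy"])
    return {model: mean(vals) for model, vals in by_model.items()}

aggregated = aggregate_accuracy(results)
```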

Performance by Record

This chart can help you evaluate models based on individual assets from within your Dataset. By selecting an asset group, you can view the accuracy percentage of each model.

Selected Engine Details

This section displays a list of all models that have been historically run against this dataset.
