Comparing runs

The Comparison UI of the Timefold Platform helps you assess and compare multiple model runs at a high level, enabling better-informed decisions about your planning problems. Whether you’re testing new goals, new configurations, or different scenarios, the comparison view brings clarity to the impact of each change.

When to use the Comparison UI

The Comparison UI supports several key use cases:

Goal alignment

Solve the same planning problem with different optimization goals, and compare the outcomes. This helps you understand the trade-offs between competing priorities and make informed decisions about what matters most.

Learn more: Balancing different optimization goals.

Benchmarking

Compare solutions under different termination settings or hardware configurations. This helps you assess the quality and speed of the model and identify potential areas for improvement.
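For example, run the same dataset with a 5-minute and a 30-minute termination limit: comparing the resulting scores shows whether the extra solving time meaningfully improves the plan, while the calculation metrics (see What we compare below) show whether a hardware change actually raised the move evaluation speed.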

What-if scenario testing

Make simulated changes to your planning problem, such as increasing workload or modifying resource capacity, and compare the outcomes. This helps you make strategic operational decisions with confidence.

Follow-up over time

Compare multiple runs for the same business unit over time to track how workload or service quality evolves. This helps you monitor operations and spot trends.

The Comparison UI is designed for high-level analysis. It’s not intended for comparing specific employee metrics or resource assignments. For detailed plan visualizations, use the Model Run Visualizations instead.

Starting a comparison

You always start a comparison from a Model Runs Overview. Select two or more of your runs and click Compare at the bottom.

Figure 1. Selecting runs to start a comparison

From the Comparison UI you can:

  • Re-order columns by dragging the handle icon in the column header.

  • Edit or remove a column by clicking the pen icon in the column header. Editing lets you change which runs the column uses and give the column a different name.

  • Click Add column at the top right of the table to add more columns to the comparison.

Figure 2. Comparison UI with add/edit column options

How comparisons work

Each column in the comparison represents either:

  • A single run: Best when you want to see detailed metrics for each run.

  • A run set: Best when you want to look for patterns and trends across multiple runs. When a column represents multiple runs, the values shown will be averages. Click the pen icon on a column and select multiple runs to aggregate them in a single column.
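
To build intuition for what a run-set column shows, here is a minimal sketch (illustrative Python, not Platform code) of one metric collapsing into the averaged cell value and the distribution behind it (see Distribution tooltips under Options and tips below):

    from statistics import mean, median

    # Illustrative only: total travel time (minutes) for five runs in one run set.
    travel_minutes = [412, 388, 430, 401, 395]

    cell_value = mean(travel_minutes)   # the averaged value shown in the column cell
    tooltip = {                         # the distribution revealed on hover
        "min": min(travel_minutes),
        "max": max(travel_minutes),
        "median": median(travel_minutes),
        "average": cell_value,
    }
    print(f"cell: {cell_value:.1f}", tooltip)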

Use tags to categorize your runs and scenarios. When editing a column in the Comparison UI, you can then filter by these tags to quickly find the relevant runs.

Learn more: Searching and categorizing runs for auditability.

Figure 3. Dialog to configure a run set, showing filtering by tag
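
If you submit runs through the API (see API usage), tagging them at submission time makes them easy to pull into a run-set column later. The sketch below is illustrative only: the base URL, endpoint path, payload shape, and tags field are all hypothetical placeholders, so consult the API usage page for the actual contract.

    import requests

    API_KEY = "..."                           # your Timefold Platform API key
    BASE_URL = "https://app.timefold.ai/api"  # hypothetical base URL
    MODEL_ID = "field-service-routing"        # hypothetical model id

    def submit_tagged_run(dataset: dict, tags: list[str]) -> str:
        """Submit a planning problem with tags; path and payload shape are assumed."""
        response = requests.post(
            f"{BASE_URL}/models/{MODEL_ID}/runs",   # hypothetical endpoint
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"input": dataset, "tags": tags},  # 'tags' field is an assumption
        )
        response.raise_for_status()
        return response.json()["runId"]             # assumed response field

    dataset = {"vehicles": [], "visits": []}        # placeholder problem input

    # Tag the baseline and the what-if scenario so both are one tag
    # filter away when you configure a run-set column.
    submit_tagged_run(dataset, tags=["scenario:baseline"])
    submit_tagged_run(dataset, tags=["scenario:extra-capacity"])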

You can compare up to:

  • 10 columns in total.

  • 50 runs per column.

What we compare

Each comparison shows different types of metrics to help you interpret and analyze your results.

  • Output metrics:
    These are the main results of your planning problem and are usually the most important to compare. They are ordered by the priorities defined by your model.

  • Input metrics:
    These describe the size or scope of the problem (e.g. number of tasks, employees, vehicles). They help you put the output metrics in context.

  • Score:
    The hard, medium, and soft scores of the runs.

  • Calculation metrics:
    These relate to the solving process (e.g. number of moves, move evaluation speed), and help you assess solver performance.

  • Run info:
    Displays metadata about each run, such as the Run ID and dates.
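
Scores compare level by level, hard constraints first: for example, a run scoring 0hard/0medium/-120soft satisfies all hard constraints and beats one scoring 0hard/0medium/-150soft, while any run with a negative hard score represents an infeasible plan, regardless of its soft score.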

Options and tips

  • Only show differences:
    Toggle "Only show differences" to hide rows where metrics are the same across all columns.

  • Difference versus baseline:
    The first column serves as the baseline: the other columns show their differences from it in dimmed text. To compare against a different baseline, reorder the columns so your preferred baseline comes first. Hover over a difference to see it as a percentage.
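    For instance, if the baseline column shows 480 minutes of total travel time and another column shows 432, that column displays a difference of -48, which the hover state reveals as -10%.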

Figure 4. Example comparison showing some differences and hover state
  • Distribution tooltips:
    If you’re comparing a run set, you’ll see underlined average values. Hover to reveal the min, max, median, and average.

Figure 5. Example of tooltip showing min, max, average, and median values
  • Sharing results:
    You can generate a PDF export of your comparison. (We recommend using landscape orientation for better readability.)

Example questions the Comparison UI can answer

For example:

  • How did the staffing efficiency change this month compared to last?

  • What would happen if we added more part-time employees?

  • How does the total driving time change when we prioritize early deliveries more?

  • Is the solution quality significantly better when using a longer solving time?

  • What’s the impact on service SLAs if we lose a service team?

Still rolling out

This feature is still being rolled out. If it’s not available in your workspace yet and you’d like access, don’t hesitate to reach out to us. We’re happy to help you get started.
