Interpreting model run results

In this document you will learn how to view the results of a model run in the Platform’s UI, interpret its metrics, analyze the scores, and decide on possible next steps to use and tweak the proposed planning solution.

Model run overview page

When a run has completed, the Model Run Overview page gives you a summary of the results from the run. Find this page by clicking the tile of a model, and then picking a run from the Run overview table.

The overview page has the following sections:

  • Sidebar on the right, with (in this order):

    • The run’s status.

    • The run’s metrics.

    • The run’s properties.

  • Main section, with (in this order):

    • Optionally, any error or warning messages.

    • The score graph and the hard, medium, and soft scores.

    • The input metrics.

    • The list of constraints with their score analysis.

Sidebar

Run status and errors/warnings

A run’s status is indicated at the top right in the sidebar.

The solver can be in one of several states:

  • Scheduled (SOLVING_SCHEDULED): The data has been received and is in the queue to be solved.

  • Started (SOLVING_STARTED): The input data is being converted into the planning problem and augmented with additional information, such as a distance matrix (if applicable).

  • Solving (SOLVING_ACTIVE): The planning problem is currently being solved.

  • Incomplete (SOLVING_INCOMPLETE): A full solution was not found before the run was terminated.

  • Completed (SOLVING_COMPLETED): Solving has completed and no further solution will be generated.

  • Failed (SOLVING_FAILED): An error has occurred and solving was unsuccessful.

If there were any errors or warnings related to the run (e.g. input validation), the overview page will show them.
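
The same status values are also exposed through the API (see Using the API below). As a minimal sketch, assuming the Python requests library, with placeholder values for the base URL, authentication header, and response shape, you could poll a run until solving has finished:

    import time

    import requests

    BASE_URL = "https://<your-tenant>/api/models/<model>"  # placeholder; see your model's API Spec page
    HEADERS = {"Authorization": "Bearer <api-key>"}        # auth scheme is an assumption; check your setup
    TERMINAL_STATES = {"SOLVING_COMPLETED", "SOLVING_INCOMPLETE", "SOLVING_FAILED"}

    def wait_for_run(run_id: str) -> str:
        """Poll the run status endpoint until solving has finished."""
        while True:
            response = requests.get(f"{BASE_URL}/{run_id}/run", headers=HEADERS)
            response.raise_for_status()
            status = response.json()["status"]  # response shape is illustrative
            if status in TERMINAL_STATES:
                return status
            time.sleep(5)  # still SOLVING_SCHEDULED, SOLVING_STARTED, or SOLVING_ACTIVE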

Run timeline

The timeline widget shows the different stages the run has been in, and for how long. Hover over each of the stages to see when they started and how long they took.

Figure 1. Run timeline example

Run metrics and optimization gain

Each model defines its own metrics, which reflect the problem domain. Metrics give an indication of the quality of the provided solution.

Example 1. Field Service Routing metrics
For the field service routing model, the metrics might be the mileage driven, the overall time vehicles spent travelling, or the number of visits that were left unassigned. Read more.
Example 2. Employee Scheduling metrics
For the employee scheduling model, the metrics could include the number of assigned and unassigned shifts. Read more.

The sidebar shows the metric values of the final solution, and also indicates the optimization gain.

Figure 2. Field Service Routing metrics and optimization gain example

Optimization gain is defined as the difference between the last solution and the first solution. When looking at optimization gain, it’s important not to look at specific metrics in isolation, but to put them in the context of the other metrics.
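
As a toy illustration with hypothetical metric names and values, the gain for each metric is simply the change from the first solution to the last:

    # Hypothetical metric values for the first and the last solution of a run.
    first = {"travel_time_hours": 181.5, "unassigned_visits": 12}
    last = {"travel_time_hours": 152.0, "unassigned_visits": 0}

    for metric in first:
        gain = last[metric] - first[metric]  # negative means the metric decreased
        print(f"{metric}: {first[metric]} -> {last[metric]} (gain: {gain})")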

Run properties

Below the run’s metrics we show the other properties of the run:

  • Any tags added to the run. You can easily add more tags or edit existing ones. Use tags to make runs easier to find and compare.

  • The move speed. This indicates how quickly Timefold is exploring different solutions, and serves as an indicator of Timefold’s performance.

  • The ID and the runtime.

  • The time the run was submitted, started, and completed.

Main section

Score graph and scores

The score of a model run is an indication of its quality. The higher the score, the better the constraints are met, and the more optimal the provided solution is.

We distinguish between hard constraints, medium constraints, and soft constraints and compute scores for each.

Hard constraints

Hard constraints are the basic rules of the domain and must never be broken. If they are broken, the solution isn’t even feasible.

Medium constraints

Medium constraints usually incentivise Timefold to assign as many entities as possible. They are used by Timefold to allow for overconstrained planning.

Soft constraints

The soft constraints of a model represent the optimization objectives. They can be broken, but the more they are satisfied, the more optimal a solution is.

Timefold optimizes for a higher hard constraint score first (to find a feasible solution), then a higher medium constraint score (to assign as much as possible), and then a higher soft constraint score (to optimize the solution). The scores are the sums of each of the constraint scores, grouped by type.
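
In other words, scores compare lexicographically: any hard-score improvement outweighs every medium and soft difference. A small sketch, assuming Timefold’s usual score notation (e.g. 0hard/-2medium/-45soft):

    def parse_score(score: str) -> tuple[int, int, int]:
        """Parse a score string like '0hard/-2medium/-45soft' into a tuple."""
        hard, medium, soft = score.split("/")
        return (int(hard.removesuffix("hard")),
                int(medium.removesuffix("medium")),
                int(soft.removesuffix("soft")))

    # Python compares tuples lexicographically, mirroring the optimization order:
    # feasibility (hard) first, then assignment (medium), then optimization (soft).
    assert parse_score("0hard/-5medium/-999soft") > parse_score("-1hard/0medium/0soft")
    assert parse_score("0hard/0medium/-90soft") > parse_score("0hard/-1medium/0soft")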

The graph below shows the evolution of the scores for hard, medium, and soft constraints during the model run. Click the expand button on the right of the chart to see each score on its own graph with Y-axis values.

Figure 3. Score Graph

When you hover over the score graph, you’ll see the values of each of the scores and the metrics of the solution at that point in time, as well as the difference from the first solution. By exploring the evolution of scores and metrics, you’ll get a glimpse into the dynamics of the model: how it balances all of the different constraints and what the effect on the metrics is.

Input metrics

Below the score graph is an overview of metrics related to the input. They give an indication of the size of the planning problem you’ve submitted and help put the run metrics in context.

Score analysis

Below the input metrics is a list of all constraints defined by the model. The constraints that aren’t fully satisfied are presented first, ordered by type and then score.

By default, constraints that are fully met are hidden. Click Show satisfied constraints to reveal all constraints.

For each constraint we show:

  • Its name.

  • Its type: hard, medium or soft.

  • The impact: whether it’s a penalty or a reward.

  • The matches: How often this constraint wasn’t fully met.

  • The weight: How much weight this constraint was given. See Balancing different optimization goals to tweak this.

  • The associated score.

The constraints are grouped logically, so it’s easier to understand which constraints are related.

Figure 4. Employee Scheduling Constraint List

The image shows constraints that aren’t fully satisfied for a run of the employee scheduling model.
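
The same analysis is available programmatically through the score-analysis endpoint (see Using the API below). A sketch, reusing the placeholders from the polling example above; the response field names are assumptions, so check the model’s API Spec page for the actual schema:

    import requests

    # BASE_URL, HEADERS, and run_id as in the polling sketch above.
    response = requests.get(f"{BASE_URL}/{run_id}/score-analysis", headers=HEADERS)
    response.raise_for_status()

    for constraint in response.json()["constraints"]:  # field names are illustrative
        if constraint["matches"] > 0:  # skip fully satisfied constraints
            print(constraint["name"], constraint["type"], constraint["weight"], constraint["score"])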

Planning solution output and visualization

A visual representation of the plan can be found on the Visualization page. This visualization lets you spot-check the quality of the output.

The full details of the solution can be found under Output. You can download the output as a JSON file with the full details of the plan.

Using the API

The information from this overview page is also available via the model’s API.

  • The /{id} endpoint returns the best solution, including its metrics.

  • The /{id}/run endpoint returns the status of a run and any validation errors or warnings.

  • The /{id}/score-analysis endpoint returns a list of the constraints, their scores, matches, and justifications.

For more information about the API endpoints, go to a model’s API Spec page.
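
For example, the same JSON that’s available under Output in the UI can be fetched and saved with the first endpoint. A sketch, again reusing the placeholders from the earlier examples:

    import json

    import requests

    # BASE_URL, HEADERS, and run_id as in the polling sketch above.
    solution = requests.get(f"{BASE_URL}/{run_id}", headers=HEADERS)
    solution.raise_for_status()

    # Save the best solution, including its metrics, for further processing.
    with open("plan.json", "w") as f:
        json.dump(solution.json(), f, indent=2)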

What’s next? Tweaking the planning solution

Now that Timefold has provided you with an optimized plan for your planning problem, there are several ways you can further tailor the solution to your business needs.

Changing the optimization goal

When there is a feasible solution (meaning all hard constraints are met), Timefold further optimizes for soft constraints. By default, each of these constraints is given the same importance, but you can change the optimization goals.

Use Configuration parameters and profiles to change the optimization goals for your run.

Compare to other runs

The Model Runs Overview page shows a table with the latest runs of a model. By default we show the scores of each of the runs, as well as the first few metrics. Use the search functionality to compare specific runs.

Click Manage columns to customize which columns are shown on the overview page. You can pick which of the model metrics to compare.

Plan around fixed segments

Timefold models allow you to pin certain segments, so you can fully customize a plan. Maybe there is an exception where you want to make sure a certain shift is done by a specific employee, or a certain visit is done by a specific vehicle. If you provide Timefold with pinned segments, it will honour those while planning around them.
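
As a purely illustrative fragment (the exact schema varies per model; check the model’s API Spec page), a pinned shift in the input could look like this:

    # Hypothetical employee scheduling input fragment; "pinned" is an
    # illustrative field name, not necessarily the model's actual schema.
    shift = {
        "id": "2030-04-01-early",
        "employee": "Ann",   # keep this exception exactly as specified ...
        "pinned": True,      # ... and let Timefold plan the rest around it
    }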

Solve for longer, or shorter

A run ends either when a time limit is reached, or when the score no longer changes. The score graph of a run gives an indication of whether it is worth solving for longer; whether the extra time spent solving is worth it depends on your business needs.
