AI legislation compliance
As the European Union’s Artificial Intelligence Act comes into effect and other countries prepare similar legislation, business leaders increasingly seek clarity on how AI solutions align with existing regulations.
At Timefold, we understand these concerns. That’s why we want to transparently explain how our technology complies with such legislation. This document focuses on the EU Artificial Intelligence Act (EU AI Act), as it is binding law.
No training data, no learning models
Unlike generative AI or predictive ML models, Timefold does not learn patterns from large training datasets. Instead, it relies on pre-programmed algorithms and rules to find optimal solutions in a fully deterministic way.
Because Timefold does not require “training data” as defined in Article 3(29) of the EU AI Act, many compliance obligations under the AI Act simply do not apply. There are:
- No data governance issues tied to biased or insufficient training datasets.
- No black-box behavior stemming from learned parameters or opaque statistical models.
- No need to document or audit training data quality, since none is used.
This inherent transparency and determinism eliminate a major concern many organizations have when adopting AI tools.
The only data involved is the input data for a specific planning problem, which the user provides (e.g. a list of jobs to schedule and the available resources). How we cover the security and privacy of that data is described in Legal and privacy.
Transparent and explainable by design
Timefold is built for explainability. Every decision made by our solution can be traced back to pre-programmed rules. With our explainability features, you can always understand why a particular schedule was generated. This makes Timefold a powerful tool for organizations that need to maintain transparency and accountability in decision-making.
The AI Act emphasizes transparency as an important characteristic of an AI system. With Timefold:
- All constraints and scoring logic are explicitly configured, and this logic does not change post-deployment based on user input.
- Outputs are consistent and reproducible.
- Users can fully analyze the outputs with our explainability features.
This aligns perfectly with the AI Act’s transparency considerations.
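To make these properties concrete, here is a minimal, hypothetical sketch in plain Python (it does not use Timefold’s actual API; all names are illustrative): constraints are explicit functions, scoring is fully deterministic, and a per-constraint breakdown shows why a schedule scores the way it does.

```python
# Hypothetical illustration (not Timefold's real API): explicitly
# configured constraints produce a deterministic, explainable score.

def overlap_penalty(schedule):
    """Hard constraint: two jobs must not share a resource and time slot."""
    seen = set()
    penalty = 0
    for job, (resource, slot) in schedule.items():
        if (resource, slot) in seen:
            penalty += 1  # each clash costs one hard point
        seen.add((resource, slot))
    return penalty

def lateness_penalty(schedule, due):
    """Soft constraint: penalize jobs scheduled after their due slot."""
    return sum(max(0, slot - due[job]) for job, (_, slot) in schedule.items())

def explain(schedule, due):
    """Return a per-constraint score breakdown: the basis of explainability."""
    return {
        "no_overlap (hard)": overlap_penalty(schedule),
        "minimize_lateness (soft)": lateness_penalty(schedule, due),
    }

schedule = {"job1": ("truck_a", 1), "job2": ("truck_a", 1), "job3": ("truck_b", 3)}
due = {"job1": 1, "job2": 2, "job3": 2}

breakdown = explain(schedule, due)
print(breakdown)  # the same input always yields the same breakdown
```

Because nothing is learned or sampled, rerunning the scoring on the same input always produces the same result, and each penalty traces back to one named, human-written rule.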
Is Timefold an "AI system" under the Act?
The EU AI Act defines AI systems broadly to include rule-based, logic-driven, and optimization algorithms. Timefold qualifies because it performs reasoning and decision-making based on input provided by the user and hard-coded constraints when solving a planning problem.
However, while Timefold fits within the scope of the Act, the deterministic, rule-based nature of the solver algorithms ensures that it avoids the Act’s strictest regulatory burdens typically reserved for more opaque or adaptive AI systems.
Be mindful of human implications
Most Timefold use cases fall under operational optimization, such as logistics, manufacturing, and resource planning, which is not considered high-risk under the AI Act.
However, if Timefold is used in certain HR applications, like allocating shifts in a way that significantly affects workers’ rights, it may fall under the high-risk AI category defined in Annex III of the Act.
In those specific contexts, Timefold users may be subject to additional obligations, such as:
- Performing a risk assessment.
- Ensuring human oversight of AI-driven decisions.
- Providing documentation about the system’s design, constraints, and outputs.
Timefold makes this process straightforward by providing a transparent, configurable platform that simplifies compliance. Our technology exists to support human planners, giving them the tools to override decisions and steer the optimization process; it is not meant to replace human intuition and empathy.
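Human oversight of this kind is often implemented by "pinning" assignments a planner has locked, so the optimizer may not change them. The sketch below is a toy illustration of that idea in plain Python; the names and logic are hypothetical, not Timefold’s actual pinning mechanism.

```python
# Illustrative sketch of human oversight via "pinning": assignments a
# planner locks are never changed by the (toy) optimizer below.
# Names and logic are hypothetical, not Timefold's actual API.

def assign(jobs, resources, pinned):
    """Round-robin assignment that respects human-pinned decisions."""
    schedule = dict(pinned)  # pinned assignments are taken as-is
    free = [j for j in jobs if j not in pinned]
    for i, job in enumerate(free):
        schedule[job] = resources[i % len(resources)]
    return schedule

jobs = ["job1", "job2", "job3"]
resources = ["nurse_a", "nurse_b"]

# A human planner overrides the optimizer for job2:
pinned = {"job2": "nurse_b"}
schedule = assign(jobs, resources, pinned)
assert schedule["job2"] == "nurse_b"  # the human decision always survives
print(schedule)
```

The key property for oversight is that the human decision is an explicit input that the automated process must honor, not a suggestion it may overrule.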
Why Timefold is a compliance-friendly AI solution
Whether or not your Timefold implementation falls under the high-risk category, our platform is inherently aligned with the AI Act’s principles:
- No hidden biases: no training data is used.
- Full transparency: clear logic, rules, and traceable decisions.
- Non-adaptive: no self-modifying behavior post-deployment.
These attributes significantly reduce the compliance burden and allow organizations to focus on outcomes without worrying about unpredictable AI behavior.