Our Approach to AI Act Compliance

You have likely heard about the EU AI Act and are wondering how it applies to ActivTrak. We understand that, and so we’ve prepared some detailed FAQs below to answer the questions we expect you’ll have.  

If you’re looking for our top-line view on the AI Act, though, it’s this: we believe the AI Act is the best thing ever to have happened to our industry. Why? Because we wholeheartedly endorse the Act’s objectives to promote human-centric and trustworthy AI innovation.  

To that end:

  • ActivTrak does not develop and has no intention to develop AI technology prohibited by the AI Act.
  • ActivTrak is fully confident we can (and will) comply with the AI Act’s requirements as they come into effect over the next couple of years.
  • ActivTrak believes in adopting best-practice AI development standards and implementing in-product controls that support our customers’ compliance.

That’s the overview, but if you’re interested in additional details, continue reading.

Contents

  • What is the AI Act?
  • Is the AI Act already in force?
  • How does the AI Act regulate AI?
  • Who does the AI Act apply to?
  • Does the AI Act apply to ActivTrak?
  • Does ActivTrak use prohibited AI?
  • How does the AI Act apply to ActivTrak?
  • What next steps is ActivTrak taking?
  • Who can I contact if I have further questions?

What is the AI Act?

The AI Act (EU Regulation 2024/1689) is a new law adopted by the European Union that lays down rules to promote the uptake of human-centric and trustworthy AI while protecting against the risks AI can present, such as risks to individuals’ health, safety, or fundamental rights (including the rights to privacy, freedom of expression, and equality of opportunity).

Is the AI Act already in force?

Yes - in part. The AI Act entered into force in August 2024, and its requirements take effect over a two-year period. Notable milestones include: 

  • February 2025: the Act’s requirements for organizations to roll out AI literacy programs and cease prohibited AI practices took effect. 
  • August 2025: the AI Act introduces new rules for general-purpose AI models.
  • August 2026: most of the AI Act’s remaining provisions come into force, including rules on certain types of high-risk AI systems and on AI systems with heightened transparency requirements.

How does the AI Act regulate AI?

The AI Act creates rules for four different tiers of AI. These are:

  • Prohibited AI: Certain categories of AI systems are considered to present unacceptable risks and are therefore banned. They include AI systems that deploy subliminal or purposefully manipulative or deceptive techniques, AI systems that exploit vulnerable groups, and social scoring systems, among others. The prohibition also covers AI systems that use biometrics to infer emotions in the workplace - more on that later. Article 5 of the AI Act sets out the full list of prohibited AI practices.

  • High-risk AI systems: These are AI systems that could present risks to health, safety, or individuals’ fundamental rights unless they are developed and used with careful controls in place. The Act recognizes two broad types of high-risk AI systems: product-based AI systems (such as AI used in cars, machinery, or radio equipment) that are already subject to the specific product regulatory laws listed in Annex I to the AI Act, and other types of AI systems that may also present risks, which are listed in Annex III of the AI Act. This second category includes AI systems that use biometrics for emotion recognition and AI systems for monitoring or evaluating staff or allocating tasks based on staff behavior.

  • AI systems with transparency requirements: The AI Act imposes transparency requirements to inform individuals when they are interacting with, are subject to, or are exposed to generative content from certain AI systems. These can include, for example, requirements to label synthetic content produced by generative AI systems. Transparency requirements also apply to AI systems that use biometrics for emotion recognition or biometric categorization. Article 50 of the AI Act sets out the rules for when and how these transparency requirements apply.

  • General-purpose AI models:  The AI Act imposes specific requirements for developers of general-purpose AI models, like large language models, including requirements to publish a summary of the training data used to train the model and to have a copyright compliance policy. These requirements are set out in Chapter V of the AI Act.

Any AI system or AI model that does not fall into one of the above tiers is outside the AI Act’s rules.

Who does the AI Act apply to?

The AI Act applies to any organization that develops AI systems or general-purpose AI models (the Act calls these organizations “providers”) and to any organization that uses AI systems (the Act calls these organizations “deployers”).

The specific rules that apply depend on whether an organization is a provider or a deployer, and on which risk tier the AI falls into (see the tiers described above). Most obligations under the AI Act fall to AI providers, especially providers of high-risk AI systems.

Does the AI Act apply to ActivTrak?

Yes. Our platform incorporates AI within its workforce management, productivity monitoring, and planning tools, and we sell it to customers in the EU. The AI Act will apply both to ActivTrak (as the “provider” of our platform) and to our customers who use it in the EU (as the “deployers” of our platform).

Does ActivTrak use prohibited AI?

Because our platform enables organizations to monitor their employees’ productivity, including spotting signs of staff disengagement or burnout, we are often asked whether this use is contrary to the AI Act’s prohibition against “the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions.” The short answer is NO: using ActivTrak is not a prohibited AI practice under the AI Act.

The European Commission’s guidelines on prohibited AI practices under the AI Act make clear that this prohibition applies only when a system processes biometric data to infer a person’s emotional state (e.g., by monitoring their vocal or facial expressions). ActivTrak does not use biometric data: it is a data-driven workforce analytics platform that tracks digital activity, analyzes work patterns, and identifies early signs of disengagement. It therefore does not meet the criteria for this prohibition.

In addition, the Commission’s guidance and the AI Act (at Recital 18) clarify that detecting physical states such as fatigue (relevant to detecting employee burnout) does not qualify as inferring emotions. 

Consequently, ActivTrak’s platform is not a prohibited AI system under the AI Act.

How does the AI Act apply to ActivTrak?

ActivTrak enables our customers to monitor and evaluate their employees’ productivity and engagement. It also supports workforce task allocation by helping customers identify when staff are underutilized or overutilized. This potentially places our platform within the AI Act’s high-risk category.

However, even when an AI system appears high-risk on its face, the Act provides that it will not in fact be high-risk if “it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making” (Article 6(3)). This exemption does not apply where the AI system performs profiling of individuals.

Our platform does not make decisions itself; instead, it is designed to keep a customer “human in the loop” who reviews the data and analysis we provide and decides how to act on it. Therefore, we believe there are good arguments that our platform is not, in fact, high-risk.

Currently, there is no regulatory guidance on interpreting the Act’s high-risk requirements or exemptions. For this reason, and because we recognize the need to provide certainty to our customers, we intend to operate on a “best practice” basis and implement many of the compliance measures the AI Act requires of high-risk AI systems, even though we think there are good arguments that we are not high-risk.

We will adapt our approach as necessary when further guidance is published.

What next steps is ActivTrak taking?

The AI Act’s requirements for high-risk AI don’t take effect until August 2026, so we are reviewing the measures applicable to high-risk AI and determining whether and how best ActivTrak will implement those measures.  

Among other things, we either have taken or intend to take the following measures:

  • Implement a risk-management system to identify, evaluate, and mitigate any risks that might arise from using AI within ActivTrak, consistent with the requirements of Article 9 of the AI Act.
  • Implement good data governance standards to ensure our training, validation, and testing data are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose, consistent with the requirements of Article 10 of the AI Act.
  • Prepare technical documentation for the ActivTrak platform, addressing the requirements of Article 11 of the AI Act.
  • Enable event logging on our platform, consistent with the requirements of Article 12 of the AI Act.
  • Prepare instructions for use for customers of the ActivTrak platform and otherwise fulfill the transparency requirements of Article 13 of the AI Act.
  • Ensure ActivTrak has appropriate human-machine interface tools so that it can be effectively overseen by humans while in use, consistent with Article 14 of the AI Act.
  • Design and develop ActivTrak to achieve an appropriate level of accuracy, robustness, and cybersecurity throughout its lifecycle, consistent with Article 15 of the AI Act.
  • Otherwise implement a quality management system, documented in the form of written policies, procedures, and instructions, to ensure the ongoing safety, quality, and compliance of ActivTrak throughout its lifecycle, consistent with Article 17 of the AI Act.

Who can I contact if I have further questions?

If you have further questions, we’re happy to help. Please contact us at support@activtrak.com.
