6 things you should know and consider regarding the EU AI Act

The European Artificial Intelligence Act (AI Act) came into force on August 1, 2024. The law aims to promote the responsible development and use of artificial intelligence in the EU.

This blog is intended to provide companies with non-binding information about the EU AI Act; it should not be understood as an official statement or procedural instruction from Serviceware SE.

1. What is the EU AI Act?

The EU AI Act is a new European Union regulation governing the development and use of artificial intelligence (AI) in Europe. It aims to promote innovation while protecting the fundamental rights and security of citizens.

Background: EU Regulation and EU Directive

In contrast to an EU directive, where it is up to the individual countries to enact legislation to achieve an objective, an EU regulation is a binding legal act that must be implemented in full by all EU countries.


The regulation sets out clear requirements that AI developers and operators must fulfill depending on the specific use of the AI. At the same time, the regulations are intended to reduce the administrative and financial burden for companies.

This makes the EU AI Act the first comprehensive set of regulations on AI that defines standards for the use of AI systems in order to identify, address and manage potential risks and ethical challenges.

2. For whom is the EU AI Act important?

The EU AI Act is aimed at AI developers and AI operators, i.e. both providers of AI technologies or AI-supported technologies and companies or organizations that integrate AI-based solutions such as automated AI assistants into their service offerings.

This means that even if you do not develop your own AI technology, you are responsible for its use as an operator or user of the technology in the context of the EU AI Act. The regulation applies to companies of all sizes, including small and medium-sized enterprises.

3. What are the most important regulations in the EU AI Act?

Basically, the EU AI Act is about protecting fundamental rights while at the same time ensuring equal opportunities for the EU in the global race for innovation.

The regulation follows a four-stage, risk-based approach: the higher the risk an AI application poses to security, fundamental rights or personal rights, the stricter the requirements. These range from no further obligations (minimal risk), through transparency requirements, to an outright ban on certain fields of application. Documentation and transparency obligations likewise scale with the risk level.

4. The four risk levels of the EU AI Act are defined as follows:

 

Unacceptable Risk
Definition: AI systems that pose a clear threat to people's fundamental rights are prohibited.
Examples: Systems that enable authorities or companies to assess social behavior (social scoring).

High Risk
Definition: Strict requirements apply to AI systems classified as high-risk, e.g. with regard to risk mitigation systems, high-quality data sets, clear information for users, human oversight, etc.
Examples: AI-based medical software or AI systems for recruitment.

Low Risk
Definition: When humans and machines interact, users must be clearly informed that they are dealing with a machine. Content generated by AI must be labeled as such.
Examples: Chatbots.

Minimal Risk
Definition: Most AI systems are not subject to any specific obligations, but companies can voluntarily draw up additional codes of conduct.
Examples: AI-supported IT security products and AI-supported video games.

Source: AI Act enters into force - European Commission

The regulation came into force on August 1, 2024. It is now up to the member states to define responsibilities and initiate the concrete implementation phase.

The bans on certain AI systems will come into force as early as February 2, 2025.

You can find a timetable for implementation here: Timetable for implementation | EU law on artificial intelligence (information without guarantee)

From August 2025, penalties may be imposed, particularly if the documentation requirements for the use of AI and the transparency rules are not complied with. In principle, penalties of up to 35 million euros or up to 7% of annual turnover are envisaged.

5. What is Serviceware's position on the EU AI Act as an AI provider?

Serviceware supports the goals of the EU AI Act to make AI safer and more transparent.

Serviceware has been developing leading AI-based service management solutions in cooperation with the Technical University of Darmstadt for six years. In addition to machine learning algorithms, the AI-native Serviceware platform also uses the leading large language model from OpenAI, which is hosted in Europe on the Microsoft Azure platform.

Generative AI is used both in dialogue design (keyword: digital assistants) and in the automation and optimization of processes and workflows as well as in knowledge management. The possibilities of process automation extend not only to IT service management, but also to other service areas such as HR service management.

Currently (as of November 2024), Serviceware classifies the technologies used in the AI-native Serviceware platform at the lowest ("minimal") or second-lowest ("low") of the risk levels outlined above (see below, using the chatbot as an example). Serviceware does not currently see any special transparency or labeling obligations for its customers under the EU AI Act that would be associated with a "high" or "unacceptable" risk classification.

However, when using the Serviceware Solution Bot or chatbot, Serviceware recommends indicating that it is an interaction with a machine.
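In practice, such a disclosure can be as simple as prepending a fixed notice to the bot's greeting. The following is a minimal, hypothetical sketch; the function and message names are illustrative and not part of any Serviceware product or API.

```python
# Hypothetical sketch: prepending a machine-interaction disclosure to a
# chatbot greeting, in the spirit of the transparency rules for low-risk AI.
# All names and wording here are illustrative assumptions.

DISCLOSURE = "Please note: you are chatting with an automated assistant."

def build_greeting(bot_name: str, welcome_text: str) -> str:
    """Return a chatbot greeting that always starts with the disclosure."""
    return f"{DISCLOSURE}\n{bot_name}: {welcome_text}"

print(build_greeting("Solution Bot", "How can I help you today?"))
```

Keeping the disclosure in one place (rather than in each dialogue script) makes it easy to adapt the wording once official guidance on labeling becomes more concrete.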

How exactly such classifications are to be made in the future and which precedents, criteria or assessments that may not yet be obvious today could play a role in this cannot yet be clearly predicted. 

6. What should be done now? 

Serviceware customers should closely monitor the implementation of the EU AI Act in their respective countries and make their own assessments and classifications at an early stage. Documentation and measures to increase transparency should also be addressed at an early stage.
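One practical first step toward such documentation is an internal inventory of AI systems with each system's own risk assessment. The sketch below assumes the four risk levels described above; the record structure and field names are illustrative, not a prescribed format, and an actual register should follow the documentation requirements as they are finalized.

```python
# Hypothetical sketch of an internal AI-system inventory entry, assuming the
# four risk levels of the EU AI Act. Field names are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date

RISK_LEVELS = ("unacceptable", "high", "low", "minimal")

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: str  # one of RISK_LEVELS (company's own assessment)
    transparency_measures: list[str] = field(default_factory=list)
    assessed_on: date = field(default_factory=date.today)

    def __post_init__(self) -> None:
        # Reject entries with a risk level outside the four defined stages.
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

record = AISystemRecord(
    name="Service chatbot",
    purpose="Answer routine IT service requests",
    risk_level="low",
    transparency_measures=["Users are informed they interact with a machine"],
)
print(record)
```

Even a simple register like this documents that a conscious classification took place and when, which is useful groundwork for whatever formal documentation duties the member-state implementations ultimately require.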

There are certainly still borderline areas and room for different interpretations in the assessment:

In the table above, for example, the EU Commission lists “AI systems for recruitment” as a high-risk area of application. This presumably refers to the risk of applicants being categorized or evaluated by AI in any automated and potentially discriminatory way. On the other hand, a semi-automated process supported by AI that makes the application, interview or onboarding phase of a new employee as direct and convenient as possible (e.g. through targeted information and answers available at all times) can be viewed less critically from a current perspective.

The Serviceware subsidiary Strategic Service Consulting (SSC) offers a comprehensive and modular consulting framework for the development and deployment of AI strategies. In the area of “Legal aspects and compliance”, the focus is on examining the extent to which the planned or current integration of AI complies with applicable regulations and how potential legal risks - including those arising from the EU AI Act - can be minimized.

SSC Managing Director Dr. Florian Meister: “The EU AI Act will now definitely require AI governance. This will ultimately increase the quality of and trust in AI solutions. This is how we move from 'experimental status' to mature, productive environments.”

Read more from SSC regarding the topic: Develop a sustainable AI strategy.

Conclusion

Looking at the potential penalties, the EU AI Act is reminiscent of the introduction of the EU GDPR in 2016, which threatened similarly draconian sanctions. The EU AI Act will certainly also bring additional burdens for AI users and AI providers. However, it is not yet clear exactly how the aim of reducing the administrative burden on organizations is to be achieved. Unlike the GDPR, which primarily gave individuals more control over their own data and essentially imposed new obligations on companies and organizations, the EU AI Act also offers new opportunities:

In addition to the operational benefits and potential savings that arise through the use of AI, companies can not only strengthen their competitiveness through a documented and demonstrably responsible approach to AI, but also underline a positive awareness of the challenges of artificial intelligence. 

Working with and implementing the EU AI Act will be a learning process, similar to the GDPR in 2016. Every CIO who currently uses AI or wants to expand its use should put the topic on the agenda at an early stage. At the same time, transparency and the idea of protection should be recognized as competitive advantages and actively used.


Authors

Werner Lütkemeier / Dr. Florian Meister

Sources

Press release of the EU Commission: AI Act enters into force - European Commission

AI Act details from the EU Commission: AI Act | Shaping Europe’s digital future 

Note from the authors: Please note that this blog is based on the author's knowledge and interpretation as of November 25, 2024. It is not intended and suitable as a legally binding recommendation for action by Serviceware SE.
