
The rise of AI – but at what cost?

‘Artificial intelligence (AI) is increasingly affecting our lives. It offers opportunities, but also poses risks – particularly for security, democracy, businesses and jobs.’

With these words, the European Parliament (EuroParl) introduces its article on the AI Act.

Whether in our private or professional lives, AI has become indispensable. Its applications range from GPT-style models, which can retrieve, modify or generate a wealth of content with ease, to AI agents that carry out specific tasks autonomously. A wide variety of AI applications can protect and help us, both in business and in society as a whole.

Beyond the opportunities, the European Parliament also names explicit risks, which can be read as reasons for adopting the European AI Regulation (AI Act). On the one hand, if we as a society do not engage sufficiently with AI that could make us safer or more efficient, we miss out on opportunities. On the other hand, excessive or indiscriminate use of AI can waste resources or lead to AI being applied to tasks for which it is not suited. It is our collective responsibility to maintain a balance here – as citizens and as organisations.

However, the EuroParl also attaches particular importance to the threats to fundamental rights and democracy. AI can have a profound impact on our society – both positive and negative. Decisions made by AI systems are always based on design choices and data. If structural biases or blind spots are not identified, AI systems can reproduce or even reinforce discrimination – for example, in job recruitment, lending or law enforcement.

One widely reported case in the US media concerns the launch of Apple's own credit card (Apple Card) in cooperation with the US bank Goldman Sachs. Public figures compared the credit limits they had been granted: Steve Wozniak, co-founder of Apple Inc., for example, received a far higher credit limit than his wife, even though the couple share their finances. In its investigation, the responsible regulatory authority, the New York State Department of Financial Services (NYSDFS), found that the underlying AI system was not sufficiently transparent – which led to unintended discrimination in the granting of credit.

Privacy protection is also a relevant issue in this context, as AI is increasingly being used for facial recognition, tracking and profiling. At the same time, however, personalised misinformation and realistic-looking deepfakes threaten public discourse and democratic opinion-forming. In authoritarian contexts, such technologies can undermine freedom of assembly and hamper civil society engagement.

This is where AI impact assessment comes in. As a central instrument of the AI Act and the ISO/IEC 42001:2023 standard, it serves to identify risks at an early stage, assess them systematically and limit them in a targeted manner. In addition to technical robustness, responsibility, transparency and human-centred design are of particular importance here.

What is an AI impact assessment anyway?

As already mentioned, AI impact assessment is a specific requirement of the AI Act and also of the ISO/IEC 42001:2023 standard. The AI impact assessment, also known as the fundamental rights impact assessment, is intended to evaluate the impact of an AI system on the fundamental rights, health and safety of individuals or society. It applies in particular to certain high-risk AI systems.

The exact procedure is still being worked out by the European Commission's AI Office, including a template questionnaire. In practice, however, the assessment can be expected to follow the elements set out in Art. 27(1) AI Act: a description of the processes in which the system will be used, the period and frequency of use, the categories of persons and groups likely to be affected, the specific risks of harm to them, the human oversight measures in place and the measures to be taken if those risks materialise.

On closer inspection, the elements of an AI impact assessment are similar in structure and purpose to a data protection impact assessment (DPIA) under the GDPR.

As in other areas, it makes sense in practical implementation to build on existing processes.
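In practical terms, such an assessment can be captured as a structured record from the outset. The following Python sketch is purely illustrative: the class and field names are our own assumptions, loosely derived from the elements of Art. 27(1) AI Act, and do not represent an official template.

```python
# Illustrative sketch only: a minimal record structure for documenting an
# AI (fundamental rights) impact assessment. Field names are hypothetical
# and loosely follow the elements of Art. 27(1) AI Act.
from dataclasses import dataclass, field


@dataclass
class AIImpactAssessment:
    system_name: str                      # the high-risk AI system under review
    intended_purpose: str                 # processes in which the system will be used
    period_and_frequency_of_use: str      # how long and how often it is intended to be used
    affected_groups: list[str]            # categories of persons likely to be affected
    risks_of_harm: list[str]              # specific risks to health, safety, fundamental rights
    human_oversight_measures: list[str]   # oversight measures per the instructions for use
    mitigation_and_governance: list[str]  # measures if risks materialise, incl. complaint channels
    reviewed_by: list[str] = field(default_factory=list)  # e.g. legal, DPO, security, ethics


# Example: a sparse record for a hypothetical applicant-screening system.
assessment = AIImpactAssessment(
    system_name="cv-ranking-service",
    intended_purpose="Pre-sorting incoming applications for recruiters",
    period_and_frequency_of_use="Continuously during recruiting campaigns",
    affected_groups=["job applicants"],
    risks_of_harm=["indirect discrimination against protected groups"],
    human_oversight_measures=["a recruiter reviews every ranking before rejection"],
    mitigation_and_governance=["bias monitoring", "internal complaint mechanism"],
    reviewed_by=["legal", "data protection officer"],
)
```

Structuring the documentation in this way also makes it easier to reuse existing DPIA tooling and review workflows rather than starting from scratch.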

Why should AI impact assessments be carried out now?

The data protection experts among us know that, under Art. 35(1) GDPR, a data protection impact assessment must be carried out before processing operations that are likely to result in a high risk to the rights and freedoms of natural persons may begin. The AI Act contains an analogous requirement: under Art. 27 AI Act, a so-called ‘fundamental rights impact assessment’ must be carried out before a high-risk AI system is put into use. This applies in particular to certain deployers of high-risk AI systems.
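The timing logic is the same in both cases: the assessment must exist before go-live. The sketch below shows how an organisation might enforce this as a simple release gate; the registry, the function and all identifiers are hypothetical assumptions and not prescribed by the AI Act or the GDPR.

```python
# Illustrative sketch only: a "go-live gate" that blocks deployment until the
# required assessments are on file. The registry and all names are hypothetical.

completed_assessments = {
    # system identifier -> assessments on record
    "cv-ranking-service": {"fundamental_rights_impact_assessment", "dpia"},
}


def may_go_live(system_id: str, is_high_risk: bool, needs_dpia: bool) -> bool:
    """Return True only if the required assessments exist for this system."""
    required = set()
    if is_high_risk:
        required.add("fundamental_rights_impact_assessment")  # Art. 27 AI Act
    if needs_dpia:
        required.add("dpia")  # Art. 35 GDPR, where processing is likely to be high-risk
    return required <= completed_assessments.get(system_id, set())


print(may_go_live("cv-ranking-service", is_high_risk=True, needs_dpia=True))   # True
print(may_go_live("chat-assistant", is_high_risk=True, needs_dpia=False))      # False: no assessment on file
```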


In which areas must AI impact assessments be carried out in particular?

Annex III of the AI Act lists the use cases in which AI systems are classified as ‘high-risk’. This applies in particular to areas where decisions made by AI have a direct impact on people's lives, safety or rights.

  • This includes systems for biometric identification, such as remote identification of individuals or the recognition of sensitive characteristics and emotions.
  • In critical infrastructure, such as energy supply control, digital networks or road traffic, the use of AI is associated with high risks.
  • In the education sector, AI systems fall into this category if they determine access, performance assessment or examination behaviour.
  • In the employment context, this includes AI systems for selecting applicants, evaluating employees or automating task allocation.
  • Access to essential services is also covered, including social benefits, creditworthiness assessments, insurance and emergency call systems.
  • The use of AI requires particular sensitivity in law enforcement, for example to assess the risk of reoffending, to evaluate evidence or to act as a lie detector.
  • Caution is also required in migration and border management, for example in risk assessment or support for asylum decisions.

The judiciary and democratic processes may also be affected: AI systems that support judicial decisions or are designed to influence voting behaviour are likewise subject to the strict requirements of the AI Act.
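For a first orientation, the areas summarised above can be turned into a rough screening step in the intake process for new AI use cases. The sketch below is purely illustrative and assumes a simple keyword match; the category names and keywords are simplified assumptions, and a hit should only ever trigger a proper legal review, never replace one.

```python
# Illustrative sketch only: a coarse first-pass screening of an intended use
# against the Annex III areas summarised above. Keyword matching is far too
# crude for a real classification; a hit merely flags the case for legal review.

ANNEX_III_AREAS = {
    "biometric identification": ["biometric", "emotion recognition"],
    "critical infrastructure": ["energy supply", "road traffic", "digital network"],
    "education": ["admission", "exam", "performance assessment"],
    "employment": ["applicant", "recruiting", "employee evaluation"],
    "essential services": ["credit scoring", "insurance", "social benefits", "emergency call"],
    "law enforcement": ["reoffending", "evidence evaluation", "lie detector"],
    "migration and border management": ["asylum", "border control", "visa"],
    "justice and democratic processes": ["judicial decision", "voting"],
}


def screen_use_case(description: str) -> list[str]:
    """Return the Annex III areas whose keywords appear in the description."""
    text = description.lower()
    return [area for area, keywords in ANNEX_III_AREAS.items()
            if any(keyword in text for keyword in keywords)]


hits = screen_use_case("AI model to rank applicant CVs for recruiting")
print(hits)  # ['employment'] -> escalate to a full AI impact assessment
```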

Looking at these use cases and areas of application, it is striking that AI systems are already in use in some of them. Under the AI Act, this means that AI impact assessments must be carried out for these systems at least retrospectively and, in certain cases, that their use must be suspended until the assessment has been completed.

Side note:

In June 2025, the Federal Network Agency imposed a fine of around two million euros on a medical AI provider for failing to carry out a risk assessment. Although the sanctions of the AI Act will not officially apply until August 2025, the Market Surveillance Act already in force in Germany enables the Federal Network Agency to take early action. This national law supplements the AI Act and creates a legal basis for effectively punishing violations even before the European penalty framework comes into force.

Conclusion: AI impact assessment as a strategic tool

AI impact assessment, however, is more than just a regulatory obligation. It is a strategic tool for building trust in the use of AI, systematically managing risks and fulfilling social and corporate responsibilities.

Its added value is evident not only in legal protection, but also in the targeted development of robust, fair and transparent AI systems that promote both corporate success and the common good.

To meet this requirement, a proactive and interdisciplinary approach is necessary. The specialist departments, IT, the legal department, data protection, ethics and information security must work together to establish effective assessment procedures. This makes it possible to identify critical areas of application at an early stage, avoid undesirable developments and safely exploit potential.

The implementation of AI impact assessment as a standard practice should therefore be the goal of every organisation that wants to use AI responsibly.

We support you!

adesso provides organisations with comprehensive support in establishing a sustainable AI governance framework. We assist you:

  • in identifying relevant AI use cases,
  • in conducting and documenting AI impact assessments (based on ISO/IEC 42001 and the AI Act),
  • in setting up integrated AI management system processes alongside existing data protection, IT risk and IT compliance structures,
  • in setting up an AI management system in accordance with ISO/IEC 42001:2023, and
  • in training internal stakeholders to ensure consistent implementation across the organisation. (Here you will find training courses on implementing an AIMS, offered in cooperation with our partner qSkills.)

This ensures that AI impact assessment does not become a stumbling block, but rather an enabler for trustworthy innovation.

What now?

We support companies in setting up an AIMS in a targeted and practical manner – from the initial maturity analysis to full integration into existing management systems. Our interdisciplinary team of experts in IT governance, IT risk and IT compliance management, and AI technology provides you with comprehensive support on your journey towards secure, compliant and trustworthy AI use.

Contact us now with no obligation


Author Kaan Güllü

Kaan Güllü is deeply involved in IT risk management, information security and IT compliance, including data protection and the regulatory requirements of the AI Act. His core responsibilities include setting up and operating information security, data protection and AI management systems (ISMS, DSMS, AIMS). He has extensive experience in dealing with international standards such as ISO 27001, ISO 42001, BSI IT-Grundschutz, the EU General Data Protection Regulation (EU GDPR) and the German Federal Data Protection Act (BDSG). This combination enables him to take a holistic approach to security and compliance issues in modern IT environments.


