On August 1, 2024, the European AI Act came into force, possibly a critical turning point in the development of artificial intelligence: while this EU regulation, the first comprehensive AI law worldwide, sets new standards for safe and ethical AI, many companies fear it will nip Europe's innovative strength in the bud.
At the same time, the Microsoft Digital Defense Report 2024 warns of a dramatic increase in AI-based cyber attacks, with Germany as a major target. The threats range from deceptively realistic deepfakes to automated hacker attacks that outsmart traditional security systems.
Meanwhile, researchers at the Max Planck Institute for Human Development continue to raise alarming questions: how can we ensure that AI systems do not manipulate human behaviour? And how do we prevent automated decision-making systems from reinforcing existing societal biases?
All these challenges call for new job profiles and, above all, specialisations at the interface of technology, ethics and security. Experts who understand both the technical intricacies and the ethical implications of AI are becoming key figures, because in the end it is about feasibility, risks and economic viability.
AI Security in development
The EU AI Act establishes strict safety requirements for the development and use of AI systems. At its heart is a risk-based approach that classifies AI applications into four risk levels: unacceptable risk, high risk, limited risk and minimal risk (Fig. 1). Especially for high-risk applications, such as in critical infrastructure or personnel administration, companies must implement extensive technical and organisational measures (a simplified classification sketch follows the list below):
- Documentation of technical robustness and reliability
- Implementation of quality management systems
- Ensuring human supervision
- Proof of technical accuracy
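To make the risk-based approach concrete, here is a minimal Python sketch of how such a classification and compliance checklist might be modelled internally. The use-case names and their mappings are purely illustrative assumptions; a real classification requires a legal assessment against the AI Act itself.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable risk"  # banned outright, e.g. social scoring
    HIGH = "high risk"                  # strict obligations, e.g. critical infrastructure
    LIMITED = "limited risk"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal risk"            # no specific obligations, e.g. spam filters

# Hypothetical mapping of use cases to risk levels (illustrative only).
USE_CASE_RISK = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "critical_infrastructure": RiskLevel.HIGH,
    "hr_screening": RiskLevel.HIGH,
    "customer_chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def required_measures(use_case: str) -> list[str]:
    """Return a coarse compliance checklist for a given use case."""
    level = USE_CASE_RISK.get(use_case, RiskLevel.MINIMAL)
    if level is RiskLevel.UNACCEPTABLE:
        return ["do not deploy"]
    if level is RiskLevel.HIGH:
        return [
            "document technical robustness and reliability",
            "implement a quality management system",
            "ensure human oversight",
            "demonstrate technical accuracy",
        ]
    if level is RiskLevel.LIMITED:
        return ["disclose AI use to end users"]
    return []

print(required_measures("hr_screening"))
```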
While the basic idea of AI regulation is welcomed, the German AI Association points out critical weaknesses that particularly affect European start-ups and SMEs.
Central points of criticism from the German AI Association
One major concern relates to the overly broad definition of AI systems. The current definition would include almost any software that uses statistical methods or search and optimisation methods – even without any actual connection to AI. This creates considerable legal uncertainty for companies.
The definition of high-risk applications is also problematic and, in the association's view, drawn too broadly: even applications with low risk potential are subject to strict obligations, which leads to disproportionate compliance effort.
Competitiveness at risk
The AI Association warns of a possible competitive disadvantage compared to American and Chinese competitors. The high complexity of the regulation and the associated bureaucratic requirements could slow down or even block the development of European AI innovations.
Seizing opportunities through compliance
Despite the criticism, the AI Act also offers opportunities. Companies that adapt to the requirements at an early stage can position themselves as pioneers in the field of secure and trustworthy AI. Well-thought-out, legally compliant development not only avoids costly rework later, but also strengthens customers' trust.
While the EU AI Act regulates the security of AI system development and is the subject of heated debate, IT security faces a completely different challenge: AI-supported cyber attacks are increasing dramatically and require new defence strategies.
IT Security: The growing threat of AI-supported cyber attacks
Like every tool in human history, the development and use of artificial intelligence bring with them not only opportunities but also significant security risks. As mentioned at the beginning, the Microsoft Digital Defense Report 2024 shows a dramatic increase in AI-based cyberattacks, with the number of blocked attacks rising from 35.7 billion in 2021 to 156 billion in 2023. Alongside their sheer number, the complexity of attacks has also grown, and AI is unfortunately taking them to a new level.
The current threat landscape
Cybercriminals are increasingly using AI technologies for advanced attack methods. Of particular concern is the use of AI to automate and scale cyberattacks: criminals use advanced AI models to create convincing phishing emails, optimise social engineering attacks and automatically identify security vulnerabilities in systems. Especially critical is deceptively realistic social engineering content that can mislead even experienced users.
This situation is further exacerbated by the increasing collaboration between cybercriminal groups and state actors, who exchange tools and techniques. Germany is a particular target for attackers, with an above-average number of targeted attacks on corporate infrastructures.
Challenges for the defence
The high speed and dynamics with which cybercriminals adapt and develop their attack methods pose a particular challenge. This requires companies to take a new, more agile approach to cyber security – they need to fundamentally rethink their defence strategies and implement AI-supported security systems for early detection.
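One building block of such AI-supported early detection is anomaly detection on activity logs. The following sketch uses scikit-learn's IsolationForest on synthetic event features; the feature choices and parameters are assumptions for illustration, not a production setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: one row per event; columns could represent
# request rate, bytes transferred and failed-login count (synthetic here).
rng = np.random.default_rng(42)
normal = rng.normal(loc=[10, 500, 0.2], scale=[2, 100, 0.5], size=(500, 3))
attacks = rng.normal(loc=[80, 5000, 9.0], scale=[10, 800, 2.0], size=(5, 3))

# Train on traffic assumed to be benign, then score new events.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# -1 marks events the model considers anomalous.
print(model.predict(attacks))
```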
Necessary countermeasures
An integrated approach is needed to defend effectively against these new threats. This approach is not new; it has long been standard practice in IT security. Companies must not only invest in modern security technologies, but also regularly train their employees on current threats and social engineering attempts. It is equally important to continuously monitor suspicious activity and to strengthen international cooperation in cyber security. Only through coordinated efforts can the increasing professionalisation of attackers be countered.
Practical security measures
Companies can take concrete steps (a minimal input-validation sketch follows the list):
- Implementation of security-by-design principles
- Regular security audits and penetration tests
- Development of an AI risk management system
- Training employees in IT security
- Hiring security experts and developing IT security teams
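As a small illustration of security-by-design, the sketch below validates incoming requests to a hypothetical AI endpoint before they ever reach the model. The length limit and the naive denylist are assumptions; real defences against prompt injection require far more than pattern matching.

```python
from dataclasses import dataclass

MAX_PROMPT_LENGTH = 2000
# Naive, illustrative denylist; not a real prompt-injection defence.
BLOCKED_PATTERNS = ("ignore previous instructions", "system prompt")

@dataclass
class ValidationResult:
    ok: bool
    reason: str = ""

def validate_request(prompt: str) -> ValidationResult:
    """Reject obviously malformed or suspicious input before inference."""
    if not prompt.strip():
        return ValidationResult(False, "empty input")
    if len(prompt) > MAX_PROMPT_LENGTH:
        return ValidationResult(False, "input too long")
    lowered = prompt.lower()
    if any(p in lowered for p in BLOCKED_PATTERNS):
        return ValidationResult(False, "possible prompt injection")
    return ValidationResult(True)

print(validate_request("Summarise this quarterly report."))
```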
While the technical security of AI systems is a fundamental requirement for their use, the ethical aspects of AI raise even more complex questions. In the following section, we look at how companies can develop and implement ethically sound AI solutions. This involves fundamental questions of fairness, transparency and accountability in the use of AI technologies.
AI Ethics: The moral dimension of artificial intelligence
Let us now turn to the ethical challenges that AI brings with it. They raise profound questions that affect our society as a whole.
Basic ethical principles
Bernd Irlenbusch, an expert in this field, identifies five main areas of ethical challenges related to AI:
- Discrimination and fairness
- Transparency and explainability
- Accountability and control
- Privacy and security
- Reliability and protection
These areas often overlap and form the basis for many ethical discussions around AI.
Transparency is key
Transparency is a central theme in AI ethics. Valère Rames, a deep tech and algorithms expert, emphasises the importance of transparency in AI systems. He argues that the decision-making processes of AI algorithms should be as comprehensible and explainable as possible in order to strengthen public trust and promote more responsible use of the technology.
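One practical way to make model decisions more comprehensible is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. A minimal scikit-learn sketch, using a public demo dataset rather than any real application:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling one feature hurt accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```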
Diversity and Inclusion
Another important aspect is the consideration of diversity and inclusion when developing AI systems. Rames points out that AI systems trained with insufficient or one-sided data can reinforce cultural and social prejudices. To counteract this problem, it is essential to promote diverse development teams and apply inclusive practices throughout the development process.
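A first quantitative check for such bias is to compare positive-outcome rates across groups (demographic parity). The sketch below computes this gap on synthetic data; the group labels and selection rates are invented for illustration.

```python
import numpy as np

# Synthetic predictions with a binary protected attribute (group A/B).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)  # 0 = group A, 1 = group B
# Group A is deliberately given a higher positive rate to simulate bias.
pred = (rng.random(1000) < np.where(group == 0, 0.60, 0.45)).astype(int)

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()

# Demographic parity difference: gap in positive-outcome rates between groups.
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}, "
      f"gap: {abs(rate_a - rate_b):.2f}")
```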
The boom and bust of AI ethics
Interestingly, AI ethics has seen something of a rollercoaster ride in recent years. As reported by the Kalaidos University of Applied Sciences, there was initially a boom in AI ethics, triggered by cautionary voices such as that of Nick Bostrom and amplified by the introduction of ChatGPT. Prominent figures such as Elon Musk and Stephen Hawking called for strict regulations.
However, this peak was followed by a rapid shift towards ethical pragmatism. Practical benefits and economic interests came to the fore, often at the expense of ethical principles. One example of this is the use of AI in military applications, which, despite initial ethical concerns, has become increasingly accepted.
The future of AI ethics
Despite these challenges, there is hope for the future of AI ethics. Experts emphasise the need for stricter regulatory frameworks, the integration of ethical principles into the development process and the importance of public education. Independent audits and certifications for ethical AI, as well as international cooperation, are seen as important steps forward.
New and specialised AI job profiles
The growing demands for security and ethical responsibility in AI development are shaping the job market of the future: while established roles such as machine learning engineer and data scientist increasingly require additional expertise in cybersecurity and ethical AI development, completely new job profiles are emerging in parallel. In demand are specialists who not only develop and implement AI systems, but can also ensure their security and set ethical guidelines.
AI Consultant
AI consultants act as a bridge between technical AI experts and decision-makers in companies. They conduct AI readiness assessments, develop tailored AI strategies and identify business processes that can be optimised through AI. In doing so, they must keep ethical implications and compliance requirements in mind. Successful AI consultants combine in-depth technical knowledge with excellent communication skills to make complex AI concepts understandable for all stakeholders.
Main responsibilities
- Development of tailored AI strategies taking ethical and legal aspects into account
- Training managers and employees on AI fundamentals and potential
AI Strategist
An AI strategist is responsible for the long-term vision and planning of AI deployment within the company. They develop a comprehensive AI roadmap that is aligned with the company's overall strategy and prioritises AI initiatives based on business value and technical feasibility. They also ensure that all AI initiatives are ethically sound and develop KPIs to measure success. Successful AI strategists combine technical understanding with strategic thinking and strong business acumen.
Main responsibilities
- Development of an AI roadmap that takes ethical aspects into account
- Continuous market observation and trend analysis in the field of AI
AI Manager / AI Project Manager
AI managers coordinate the implementation of AI projects. They plan and monitor the entire project life cycle, lead interdisciplinary teams and ensure compliance with data protection and security requirements. Professional AI managers combine technical expertise with strong leadership and communication skills to ensure smooth collaboration with all stakeholders.
Main responsibilities
- Project management and problem solving: Planning, monitoring and risk management for technical and ethical challenges.
- Compliance and data protection: Ensuring compliance with data protection and security guidelines.
AI Product Manager
AI product managers are responsible for the development and market launch of AI-based products or services. They analyse the market, define product visions and work closely with development teams. They always keep an eye on the user-friendliness and accessibility of AI products and take ethical aspects and potential biases (systematic distortions in data or algorithms that can lead to unfair or discriminatory results) into account. Successful AI product managers combine a deep understanding of AI technologies with knowledge of market dynamics and user behaviour.
Main responsibilities
- User-friendliness and ethical justifiability: Ensuring that AI products are intuitive and fair, without discriminatory biases.
- Continuous improvement: Optimising products based on user feedback and ethical considerations.
Data Governance Specialist
Data governance specialists play a critical role in the responsible management of data for AI systems. They develop and implement data management guidelines, ensure compliance with data protection laws, and monitor data quality and integrity for AI applications (a minimal data-quality check is sketched after the list below). They work closely with AI ethicists to ensure ethical data use. Good data governance specialists have a deep understanding of data management, compliance and AI technologies.
Main responsibilities
- Implementation of data security measures to protect sensitive information
- Training of employees in data protection and responsible data use
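As a minimal illustration of data-quality monitoring, the sketch below runs simple validation checks over a hypothetical training-data snapshot with pandas; the column names and plausibility ranges are assumptions.

```python
import pandas as pd

# Hypothetical training-data snapshot; column names are illustrative.
df = pd.DataFrame({
    "age": [34, 51, None, 29, 240],
    "salary": [52000, 61000, 48000, None, 55000],
    "consent_given": [True, True, True, False, True],
})

# Each check expresses one data-governance rule as a boolean assertion.
checks = {
    "no_missing_age": df["age"].notna().all(),
    "age_in_plausible_range": df["age"].dropna().between(16, 100).all(),
    "no_missing_salary": df["salary"].notna().all(),
    "all_records_consented": df["consent_given"].all(),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```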
AI Security Specialist
AI security specialists are the gatekeepers of AI systems, combining technical AI expertise with in-depth knowledge of cybersecurity. They develop and implement security frameworks for AI and machine learning systems, conduct risk analyses, and protect AI applications from manipulation such as adversarial attacks or data poisoning. At a time when AI-based cyber attacks are increasing dramatically, they are critical to protecting sensitive data and the integrity of AI systems. Successful AI security specialists anticipate new threat scenarios and adapt security strategies accordingly (a minimal adversarial-attack sketch follows the responsibilities below).
Main responsibilities
- Development and implementation of AI-specific security protocols and protective measures
- Conducting security audits and penetration tests for AI systems, taking into account ethical guidelines
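To show what an adversarial attack looks like in code, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch: the input is nudged in the direction that most increases the loss. The model here is an untrained stand-in, so the effect is purely illustrative; against a trained classifier, such perturbations can flip predictions while remaining nearly invisible.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # a single "image"
label = torch.tensor([3])

# FGSM: perturb the input in the direction that maximally increases the loss.
loss = nn.functional.cross_entropy(model(x), label)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(x).argmax(dim=1).item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())
```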
Deep Learning Engineer
Deep learning engineers develop and optimise complex neural networks for various application areas. They design and implement deep learning architectures, optimise models in terms of accuracy and efficiency, and work closely with domain experts. They must always take ethical aspects such as fairness and interpretability into account when developing models. Successful deep learning engineers have in-depth knowledge of mathematics, statistics and programming, as well as a solid understanding of neural network architectures. Please also read our article: The evolution of machine learning and its roles
Main responsibilities
- Optimisation of models taking into account ethical aspects such as fairness and interpretability
- Implementation of transfer learning and few-shot learning methods (transfer learning reuses knowledge from models pretrained on large datasets, while few-shot learning lets models adapt from only a handful of examples; see the sketch below)
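A minimal transfer-learning sketch with PyTorch/torchvision: a pretrained backbone is frozen and only a new final layer is trained, so relatively few labelled examples suffice. The number of target classes is an assumption for illustration.

```python
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone and freeze its weights.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer for a new task with, say, 5 classes;
# only this layer is trained, so few labelled examples suffice.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

trainable = [n for n, p in backbone.named_parameters() if p.requires_grad]
print(trainable)  # only ['fc.weight', 'fc.bias']
```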
AI Trainer
AI trainers (often also called machine learning trainers) specialise in effectively training and fine-tuning AI models. They select suitable training data, develop strategies to improve model accuracy and implement techniques such as data augmentation (sketched after the list below). They work closely with ethicists to identify and minimise bias in training data. Professional AI trainers need a deep understanding of machine learning algorithms, data analysis and the ethical aspects of AI training.
Main responsibilities
- Detection and minimisation of bias in training data
- Documentation of training processes for transparency and reproducibility
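A typical data augmentation pipeline for image data might look like the following torchvision sketch; the specific transforms and parameters are illustrative choices, not a recommendation.

```python
from PIL import Image
from torchvision import transforms

# Each epoch, the model sees randomly varied versions of the same examples,
# which effectively enlarges the training set.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

image = Image.new("RGB", (224, 224), color="gray")  # placeholder image
tensor = augment(image)
print(tensor.shape)  # torch.Size([3, 224, 224])
```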
AI Ethicist
AI ethicists ensure that AI is developed and used responsibly. They develop ethical guidelines, conduct ethical impact assessments and advise development teams on ethical aspects throughout the development process. They also promote public dialogue on ethical aspects of AI. To do this, they need an interdisciplinary understanding of philosophy, technology and social sciences.
Main responsibilities
- Conducting ethical impact assessments for AI applications
- Training employees on ethical issues related to AI
(AI) Prompt Engineer
Prompt engineers specialise in optimising the interaction between humans and AI systems. They develop and refine prompts for various applications, analyse and improve the output quality of AI models and work closely with UX designers. They take ethical aspects into account when designing prompts and interactions, and they need extensive knowledge of natural language processing, cognitive science and human-machine interaction (a minimal prompt-template sketch follows the list below).
Main responsibilities
- Optimisation of prompts taking ethical aspects into account
- Continuous evaluation and adaptation based on user feedback
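A minimal sketch of template-based prompt construction with a crude automatic output check. `call_model` is a hypothetical placeholder for whatever LLM client is in use; the template wording and the evaluation heuristic are assumptions.

```python
# Template-based prompt construction with a simple quality check.
TEMPLATE = (
    "You are a careful assistant. Answer factually and say 'I don't know' "
    "when unsure.\n\nQuestion: {question}\nAnswer:"
)

def build_prompt(question: str) -> str:
    return TEMPLATE.format(question=question.strip())

def evaluate_output(answer: str, required_terms: list[str]) -> bool:
    """Crude automatic check: does the answer mention all required terms?"""
    lowered = answer.lower()
    return all(term.lower() in lowered for term in required_terms)

prompt = build_prompt("What does the EU AI Act's risk-based approach mean?")
print(prompt)
# answer = call_model(prompt)              # hypothetical client call
# print(evaluate_output(answer, ["risk"]))
```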
Conclusion
The growing demands for security and ethical responsibility in AI development will have a significant impact on the labour market of the future. New job profiles such as AI ethicist, AI security specialist or data governance specialist are emerging, while existing roles such as machine learning engineer or data scientist are expanding to include skills in cybersecurity and ethical AI development.
This development is being driven by two parallel trends: on the one hand, the EU AI Act creates a binding framework for the safe and ethical development of AI systems; on the other, the threat of AI-based cyber attacks is increasing dramatically. The AI experts of the future will need a deep understanding of both ethical implications and security aspects.
For companies, this means that they need to invest not only in technology, but above all in people who can develop and use AI responsibly and securely. Only in this way can they exploit the opportunities of AI while at the same time meeting the growing challenges and regulatory requirements in the areas of security and ethics.
Read more on the topic of AI:
Does AI create more jobs than it destroys?
Specialisation in the field of artificial intelligence – generalist vs. specialist
Learning and development in cybersecurity
Learning and development meets AI and machine learning
How to find cloud experts when it comes to the AI Era and cybersecurity