On February 2, 2025, the new provisions of the AI Act, the European Artificial Intelligence Regulation, entered into application. These provisions aim to ensure the ethical and safe use of AI in the European market: they introduce specific prohibitions on practices considered high-risk, along with an important AI literacy requirement for organizations and companies, with the goal of promoting the skills needed to use artificial intelligence consciously and responsibly.

Prohibitions on high-risk AI practices

Article 5 of the AI Act identifies several prohibited AI practices, including the use of systems that:

  • Employ subliminal, manipulative, or deceptive techniques to influence people’s behavior, impairing their ability to make informed decisions and causing, or threatening to cause, significant harm.
  • Exploit vulnerabilities related to age, disability, or socioeconomic conditions to alter the behavior of individuals or groups, causing or risking significant harm.
  • Evaluate or classify people or groups on the basis of social behavior or personal characteristics through social scoring systems, where this leads to discrimination or unfair treatment.
  • Predict the risk of a person committing a crime based solely on profiling or personal traits, except when such systems support a human assessment based on objective, verifiable facts directly related to criminal activity.
  • Use “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to a few strictly regulated exceptions.

These prohibitions reflect the AI Act’s commitment to ensuring that the use of artificial intelligence technologies complies with the principles of safety, fairness, and respect for fundamental rights.

AI literacy requirement

Article 4 of the AI Act introduces a requirement for providers and deployers of AI systems to take measures to ensure an appropriate level of AI literacy. This applies not only to internal staff, but also to all persons who work with the organization and are involved in the operation and use of AI systems.

Training measures must be calibrated to the technical knowledge, experience, education level, and work context of the people involved. In addition, it is crucial to take into account the characteristics of the people or groups of people on whom AI systems will be used. This approach aims to ensure that staff are prepared to use AI responsibly and inclusively, respecting the needs of all social groups and promoting equity.

Organizations must implement measures such as workshops, training courses, and other educational initiatives to raise awareness of how AI systems work and the risks they may pose. This important obligation also lays the groundwork for more inclusive AI, reducing disparities and ensuring that systems are designed and used in a way that is accessible to all.


A step toward a more equitable and conscious future

The introduction of mandatory training makes the AI Act not only a regulatory framework, but also a tool to promote cultural change. Companies are required to invest in the skills of their staff, contributing to a more responsible and inclusive use of artificial intelligence. This ensures that AI does not become a factor of exclusion, but an opportunity to improve the lives and work of all people.

Umbria Business School has already taken up this challenge, offering the course “The ethics of decision-making: when philosophy, management and AI meet,” an initiative that brings together different disciplines to give professionals the tools they need to make ethical and informed decisions in the age of artificial intelligence.

editor

21/02/2025