Why focus on AI?
Artificial Intelligence (AI) is rapidly progressing and transforming nearly every sector, demonstrating that technological innovation transcends traditional disciplinary boundaries. Its versatility is evident in diverse applications, from enhancing public health initiatives to empowering women in leadership. For instance, C-NAPSE collaborated with the Lisbon City Council and Lisboa Robotics to address COVID-19’s multifaceted impacts, providing critical insights into how the pandemic prompted new investments and projects. AI has also proven to be an invaluable asset in promoting gender equality. For example, with the Equal Leadership project, C-NAPSE developed an interactive catalogue featuring a GPT-powered AI chatbot, offering best-practice resources for researchers and stakeholders dedicated to empowering women in politics and leadership. C-NAPSE’s extensive experience and deep interest in AI underpin this article, which aims to inform SMEs and entrepreneurs about the upcoming EU AI Act.
Continue reading to gain a comprehensive understanding of this pivotal regulation. For further exploration into the Act’s potential influence on social inclusion, urban innovation, talent development, and entrepreneurship, please navigate to the next article.
What is the AI Act?
AI Act: “Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence”
The AI Act is the world’s first comprehensive legislation on AI, establishing risk-based rules for certain uses of AI, aimed specifically at developers and deployers of AI. Its purpose is to cultivate trustworthy AI in Europe and to position Europe as a global leader in this arena. The AI Act is one piece in a wider puzzle of policy measures for trustworthy AI, which also includes the AI Pact, AI Factories, the AI Innovation Package, and a Coordinated Plan.
There are four levels of risk outlined in the AI Act:
- Unacceptable risk: banned practices (listed below)
- High risk (e.g. transport, exam scoring, recruitment software, robot-assisted surgery)
  - Must pass strict assessments before being placed on the market
- Limited risk (e.g. chatbots)
  - Subject to transparency requirements
- Minimal or no risk (e.g. AI-enabled video games, spam filters)
  - No specific rules apply
These eight practices are strictly prohibited by the AI Act:
- harmful AI-based manipulation and deception,
- harmful AI-based exploitation of vulnerabilities,
- social scoring,
- individual criminal offence risk assessment or prediction,
- untargeted scraping of the internet or CCTV material to create or expand facial recognition databases,
- emotion recognition in workplaces and education institutions,
- biometric categorisation to deduce certain protected characteristics,
- real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.
High-risk AI systems are assessed before being placed on the market through the following steps:
1. A high-risk AI system is developed.
2. It undergoes a conformity assessment and must comply with the AI requirements. For some systems, a notified body is involved as well.
3. Stand-alone AI systems are registered in an EU database.
4. A declaration of conformity is signed and the AI system receives the CE marking. The system can then be placed on the market. If substantial changes occur during the AI system’s life cycle, the system must return to step 2 and repeat the process.
Why is the AI Act necessary?
The Act is necessary to ensure that European users of AI can trust its legitimacy and safety. Most AI systems pose little or no risk to users, but the possibility of unwanted outcomes, and unanswered questions about how particular AI decisions are made, mean these concerns must be addressed.
For example, AI-generated content should be clearly and visibly labelled, particularly deepfakes and text published to inform the public on matters of common interest. For startups and SMEs, the AI Innovation Package was developed to support companies in building trustworthy AI that complies with the rules and values of the EU.
The AI Act’s general-purpose AI rules take effect in August 2025, and the AI Office is developing a Code of Practice to clarify these regulations. This Code will serve as a key resource for providers to demonstrate their adherence to the AI Act by incorporating cutting-edge practices. Currently, the third draft of the Code of Practice is publicly available and the final version will be completed by May 2025.
How will it be implemented, governed, and enforced?
The AI Act entered into force in August 2024 and will become fully applicable across the European Union on 2 August 2026. Some exceptions apply: prohibitions and AI literacy obligations have applied since 2 February 2025; governance rules and obligations for general-purpose AI models apply from 2 August 2025; and rules for high-risk AI systems embedded in regulated products have a longer transition period, until 2 August 2027.
The European AI Office and Member States’ authorities will be responsible for implementing, supervising and enforcing the AI Act. There are three advisory bodies which will steer and advise the AI Act’s governance:
- the European Artificial Intelligence Board with EU Member State representatives,
- the Scientific Panel with independent AI experts,
- the Advisory Forum with stakeholders.
Every EU Member State must identify its relevant market surveillance authorities in an officially published list by 2 August 2025. These authorities will be empowered to investigate and enforce compliance with the AI Act. The consolidated list of all identified authorities is available here, with Portugal’s relevant authorities listed below.
| Authority (English) | Authority (Portuguese) | Website |
| --- | --- | --- |
| National Regulatory Authority for Communications | Autoridade Nacional de Comunicações (ANACOM) | https://www.anacom.pt/ |
| General Inspectorate of Finance (IGF) | Inspeção-Geral das Finanças (IGF) | https://igf.gov.pt/ |
| National Security Office (GNS) | Gabinete Nacional de Segurança (GNS) | https://www.gns.gov.pt/ |
| Regulatory Authority for the Media (ERC) | Entidade Reguladora da Comunicação Social (ERC) | https://www.erc.pt/pt/ |
| Inspectorate General for National Defence (IGDN) | Inspeção-Geral da Defesa Nacional (IGDN) | https://www.defesa.gov.pt/pt/defesa/organizacao/sc/IGDN |
| Inspectorate General of Justice Services (IGSJ) | Inspeção-Geral dos Serviços de Justiça (IGSJ) | https://igsj.justica.gov.pt/ |
| Criminal Investigation Police (PJ) | Polícia Judiciária (PJ) | https://www.policiajudiciaria.pt/ |
| Inspectorate General of Home Affairs (IGAI) | Inspeção-Geral da Administração Interna (IGAI) | https://www.igai.pt/pt/Pages/default.aspx |
| Inspectorate General of Education and Science (IGEC) | Inspeção-Geral da Educação e Ciência (IGEC) | https://www.igec.mec.pt/ |
| Health Regulatory Authority (ERS) | Entidade Reguladora da Saúde (ERS) | https://www.ers.pt/pt/ |
| Economic and Food Safety Authority (ASAE) | Autoridade de Segurança Alimentar e Económica (ASAE) | https://www.asae.gov.pt/ |
| General Inspection of the Ministry of Labour, Solidarity and Social Security (IGMTSSS) | Inspeção-Geral do Ministério do Trabalho, Solidariedade e Segurança Social (IGMTSSS) | https://www.ig.mtsss.gov.pt/inicio |
| Authority for Working Conditions (ACT) | Autoridade para as Condições do Trabalho (ACT) | https://portal.act.gov.pt/Pages/Home.aspx |
| Energy Services Regulatory Authority (ERSE) | Entidade Reguladora dos Serviços Energéticos (ERSE) | https://www.erse.pt/inicio/ |
Discover Portugal’s Public Administration digital regulations regarding the AI Act. Follow developments of the AI Act here from the Future of Life Institute, and subscribe to their newsletter for bi-weekly updates on the AI Act. Refer to the FAQs about the AI Act for your specific questions, and use the compliance checker to see your potential obligations.
Check our next article on the impacts of the application of AI to C-NAPSE’s specializations: social inclusion, entrepreneurship and talent, and innovation in cities.