AI Act: Deadlines, Digital Omnibus Risks, and Urgent Actions
The AI Act is already law in Europe: some provisions are operational, while others depend on decisions on the Digital Omnibus. This poses a question for companies that can no longer be postponed: should they wait for clarification or immediately begin building a structured compliance strategy?
The context is paradoxical: the EU has the world's first comprehensive regulation on artificial intelligence, but the core of the regulation—high-risk systems—is still in a regulatory gray area. The Digital Omnibus discusses a postponement of up to 16 months for certain obligations, while the EDPB and EDPS warn against any shortcuts that could turn simplification into deregulation.
For companies developing, integrating, or using AI systems, regulatory uncertainty isn't a justification for inaction. It's a growing cost, resulting in stalled projects, postponed investments, and increased exposure to sanctions and reputational risks.
AI Act: existing obligations and operational bans
To understand the AI Act's impact on corporate strategies, it is crucial to start with what is already fully operational. EU Regulation 2024/1689 entered into force on August 1, 2024, and has begun to have a tangible impact on AI governance in Europe.
Since February 2, 2025, the prohibitions on certain AI practices have applied, including:
- social scoring of natural persons;
- subliminal manipulation of vulnerable users;
- biometric categorization based on race or other sensitive attributes.
On the same date, the AI literacy obligation came into force for anyone operating an AI system, requiring organizations to implement targeted training programs for operators, developers, and decision-makers. The European Commission has published specific guidelines on the prohibitions and on the definition of an AI system, offering for the first time operational interpretative criteria to distinguish the systems covered by the regulation from traditional software solutions.
Another key step in the AI Act was August 2, 2025, when the following came into effect:
- the rules on GPAI models (General Purpose AI);
- the European institutional governance architecture for AI;
- the sanctions regime, with fines of up to 35 million euros or 7% of the company's annual global turnover, whichever is greater.
These elements make it clear that, even while awaiting decisions on the Digital Omnibus, a significant part of the AI Act's regulatory framework is already binding and requires concrete adjustments.
AI Act and high-risk systems: deadlines, Digital Omnibus, and uncertainties
The most delicate point of the AI Act concerns high-risk systems, which cover areas such as credit scoring, personnel selection, educational assessments, biometric identification, and justice. This is where the legislation is at its most ambitious, but also most exposed to the risk of structural uncertainty.
On 19 November 2025, the Commission proposed, as part of the so-called Digital Omnibus, a conditional deferral mechanism: the obligations for high-risk systems would no longer be triggered on a fixed date, but 6 or 12 months after confirmation of the existence of "appropriate measures to support compliance", such as:
- European harmonized standards;
- common technical specifications;
- detailed interpretative guidelines.
The proposal, however, includes strict deadlines:
- December 2, 2027 for systems listed in Annex III of the AI Act;
- August 2, 2028 for systems under Annex I.
The Commission's official motivation is pragmatic. The CEN-CENELEC JTC21 standards are running late: the first standard, prEN 18286 (quality management systems for AI), closed its enquiry phase only in January 2026, while the complete package of standards is expected in the fourth quarter of 2026. Furthermore, many Member States have not yet designated their competent authorities for AI supervision.
The political dimension is equally relevant. On 20 January 2026, Joint Opinion EDPB-EDPS 1/2026 sent a clear message: easing procedural rules cannot translate into an effective reduction of protections. In particular, the supervisory authorities oppose removing the registration obligation for systems self-excluded under Article 6(3), ask to maintain the "strict necessity" standard for the use of sensitive data in bias detection, and underline the need to involve data protection authorities (DPAs) permanently in the new European AI sandboxes. The dossier is entrusted to MEP Michael McNamara, and the trilogues are scheduled for spring 2026, with the aim of concluding by mid-year.
If the Digital Omnibus isn't adopted before August 2026, the original AI Act timeline for high-risk systems will automatically revert. Basing business plans on the assumption of a postponement therefore becomes a gamble and not a true compliance strategy.
High-risk compliance: project timelines, Article 50, and Law 132/2025
Regardless of the final outcome of the Digital Omnibus, companies must consider the real constraint: the project times required for high-risk compliance with the AI Act. A complete compliance process, including analysis, technical implementation, governance, and audit, typically takes between 8 and 14 months. Notified bodies are already reporting full schedules heading into the second quarter of 2026, further reducing operating margins for latecomers.
Compliance for high-risk systems isn't just a paperwork requirement. It requires:
- risk management systems specific to AI risks;
- data governance in accordance with Article 10 of the AI Act;
- structured human oversight mechanisms for the systems;
- a complete and verifiable audit trail;
- extensive technical documentation;
- internal and external conformity assessment processes.
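As a concrete illustration of the audit-trail requirement above, a deployer might log every AI-assisted decision together with the model version and the human reviewer. This is a minimal sketch under assumed field names; the AI Act does not prescribe a specific record format, so the schema and helper below are purely illustrative.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in an AI audit trail (illustrative fields, not a mandated schema)."""
    timestamp: str        # when the decision was produced (UTC, ISO 8601)
    model_version: str    # which model/version generated the output
    input_hash: str       # hash of the input, verifiable without storing raw data
    output: str           # the system's output or recommendation
    human_reviewer: str   # who exercised human oversight over this decision

def log_decision(model_version: str, payload: dict, output: str, reviewer: str) -> DecisionRecord:
    """Create an audit entry for a single AI-assisted decision."""
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=digest,
        output=output,
        human_reviewer=reviewer,
    )

record = log_decision("credit-scorer-1.4", {"income": 42000}, "approve", "j.doe")
print(asdict(record)["model_version"])  # prints "credit-scorer-1.4"
```

Hashing the input rather than storing it keeps the trail verifiable while limiting the personal data retained, which also helps align the record with GDPR data-minimization principles.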
Then there are deadlines that a possible postponement would not affect. Article 50 of the AI Act, dedicated to transparency, applies from 2 August 2026: platforms and providers will have to ensure clear information about interactions with AI, including the marking of AI-generated content (Article 50(2)). For systems already on the market, a short grace period is foreseen, running until February 2027, but the direction is irreversible. The obligations for public administrations are not deferred.

In the Italian context, the system remains fully operational. Law 132/2025 transposed the AI Act into the national legal system, introducing, among other things, the crime of deepfake and strengthening the link with Model 231. The framework, therefore, does not move as a single block: it is a mosaic of obligations, each with its own effective date.
In parallel, companies must manage the intersection of the AI Act with other regulations such as the GDPR, the NIS2 Directive, the Cyber Resilience Act, and the Machinery Regulation. From 2026 onward, at least six major regulations will converge in the implementation phase, making siloed compliance management unsustainable.
Non-deferrable actions: How to prepare for the AI Act in layers
In a context where the AI Act's timeline is shifting but its direction is set, the most effective response for companies is to build a layered adaptation path. The first layer is the inventory: mapping all AI systems in use, in development, or being acquired, and classifying them by role (provider, deployer, importer) and risk level. This exercise takes between 4 and 8 weeks, but it generates crucial organizational awareness.
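The inventory layer can be represented with a very simple structure. The role and risk categories below follow the AI Act's own vocabulary, but the schema, field names, and example entries are hypothetical, a sketch of the exercise rather than a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations, e.g. Article 50
    MINIMAL = "minimal"

@dataclass
class AISystemEntry:
    name: str
    role: Role
    risk: RiskLevel
    status: str  # "in use", "in development", or "being acquired"

def high_risk(inventory: list[AISystemEntry]) -> list[AISystemEntry]:
    """Systems that will carry the heaviest compliance burden."""
    return [s for s in inventory if s.risk is RiskLevel.HIGH]

# Hypothetical inventory entries for illustration only
inventory = [
    AISystemEntry("cv-screening", Role.DEPLOYER, RiskLevel.HIGH, "in use"),
    AISystemEntry("marketing-chatbot", Role.DEPLOYER, RiskLevel.LIMITED, "in use"),
    AISystemEntry("credit-scoring", Role.PROVIDER, RiskLevel.HIGH, "in development"),
]
print([s.name for s in high_risk(inventory)])  # prints ['cv-screening', 'credit-scoring']
```

Even a spreadsheet-level inventory like this makes the later layers tractable: governance frameworks and conformity assessments can then be scoped per risk class instead of system by system.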
The second layer is integrated governance. A framework must be established that aligns the AI Act, GDPR, NIS2, the Cyber Resilience Act, the Machinery Regulation, and industry regulations. Managing each of these compliance tracks separately increases costs, redundancies, and the risk of inconsistencies between policies, processes, and technical controls.
The third layer is design for compliance. Every new AI project should be born aligned with the high-risk requirements of the AI Act, even if it does not formally fall under that classification today. Integrating traceability, explainability, and human oversight from the start costs a fraction of what retrofitting systems already in production would. Those who design today without considering the AI Act are creating a technical-regulatory debt that will come due tomorrow, with high interest.
Finally, there is the competitive dimension. The Commission's voluntary AI Pact has attracted over 230 signatories among major global organizations, including Allianz, Lenovo, Reply, and Kaspersky, which have chosen to move ahead of the deadlines. Those building a robust AI management system today are not only reducing the risk of sanctions, but also gaining a competitive advantage based on trust, reliability, and the ability to scale AI use sustainably.
AI Act: Impact on Marketing and Business
The AI Act is not just a technical standard for lawyers and data scientists, but a strategic lever for marketing, sales, and customer experience. The European regulation mandates transparency, oversight, and accountability in the use of artificial intelligence, elements that directly impact the quality of relationships with customers and prospects.
In digital marketing, the AI Act impacts key activities such as segmentation, advanced profiling, lead scoring, campaign automation, and predictive analytics. AI systems used to assess customer reliability, prioritize requests, or personalize offers may fall into the high-risk category or otherwise be subject to transparency and human oversight requirements.
For companies working on direct communication channels such as email, chatbots, social media, and instant messaging, the new regulation pushes for more responsible design of automated interactions. The requirement to disclose when a user is interacting with an AI system and to label artificially generated content is changing the way digital funnels and journeys are built.
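The disclosure duty just described can be sketched as a thin wrapper around an automated reply. Article 50 mandates the disclosure but not a specific wording or format, so the label text and metadata keys below are illustrative assumptions only.

```python
def mark_ai_message(text: str, model: str) -> dict:
    """Wrap an automated reply with a user-facing AI disclosure and
    machine-readable metadata. The wording and keys are illustrative,
    not prescribed by the AI Act."""
    return {
        "body": text,
        "disclosure": "You are chatting with an automated assistant.",
        "metadata": {"ai_generated": True, "model": model},
    }

msg = mark_ai_message("Your order has shipped.", "support-bot-2")
print(msg["disclosure"])  # prints "You are chatting with an automated assistant."
```

Keeping the disclosure at a single choke point like this means a funnel redesign or a change in the required wording touches one function, not every chatbot flow.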
However, this scenario also opens up important opportunities:
- strengthen customer trust through clear communication on the use of AI;
- improve the quality of the data used in campaigns, thanks to more robust data governance processes;
- combine human oversight and automation, transforming marketing and sales teams into intelligent orchestrators of AI-driven flows.
Companies that anticipate the requirements of the AI Act in their marketing automation systems can present themselves to the market as reliable players, ready for a future where regulatory compliance becomes a prerequisite for innovation, not a limit to it.
How SendApp Can Help with the AI Act
In this complex regulatory scenario, structured communication tools such as SendApp's can support companies in combining automation, AI, and the compliance requirements of the AI Act. Business messaging – and WhatsApp Business in particular – is one of the most sensitive touchpoints of the customer experience, where transparency and AI control become essential.
With SendApp Official, companies can access the official WhatsApp APIs and integrate chatbots, virtual agents, and automated workflows within a reliable, traceable, and scalable infrastructure. This makes it possible to design AI-driven conversations that comply with the transparency, logging, and oversight principles required by the AI Act, while preserving the quality of customer service.
For managing conversations and teams, SendApp Agent lets you orchestrate the interaction between human operators and automations, ensuring that human oversight remains possible even when the first response is entrusted to an artificial intelligence system. This is crucial for high-risk requirements and for the proper handling of complaints, disputes, and sensitive decisions.
SendApp thus offers a solid technological foundation for building conversational journeys that comply with the new regulatory framework, without sacrificing campaign speed, personalization, and scalability. Companies wishing to anticipate the AI Act deadlines can use the platform as a hub for integrating AI, automation, and human interaction in a controlled manner into their marketing and customer care processes.
To assess your company's readiness for the AI Act and understand how to structure compliant communication flows on WhatsApp Business, you can request a dedicated consultation and test SendApp solutions. It's time to transform compliance from a constraint to a competitive advantage, starting with the channels your customers use every day.