Italy Antitrust Forces AI Hallucination Warnings

Roberto Rustichelli, the President of the Italian Competition Authority (AGCM), pictured in his official capacity

ROME - The Italian Antitrust Authority is shining a spotlight on artificial intelligence and the sensitive issue of “hallucinations”, that is, incorrect answers generated by AI systems. After a series of investigations, major companies in the sector have made concrete commitments to ensure greater transparency for users. From new warnings in chats to clearer pre-contractual information, the rules of the game are changing for those who use these tools. But what does this really mean for consumers, and what limits still remain? The intervention marks a decisive step in the relationship between AI and market protection.

Antitrust Italy: AI

Thanks to the Authority’s action, DeepSeek, Mistral and NOVA AI will provide transparent information on the risk of “hallucinations”. The Italian Competition and Market Authority (AGCM) has closed three investigations by accepting commitments aimed at strengthening the informational transparency of the systems offered, intervening in the service delivery channels (websites and apps) and in the various stages of the decision-making process preceding purchase or registration. In recent months, the AGCM has extended its enforcement action on unfair commercial practices to generative artificial intelligence systems. In particular, it has addressed the risk of so-called “hallucinations”, i.e. the production of inaccurate or misleading content.

The investigations and the companies involved

In this regard, the Antitrust Authority conducted three investigations into the companies Hangzhou DeepSeek Artificial Intelligence Co. Ltd and Beijing DeepSeek Artificial Intelligence Co. Ltd (“DeepSeek”), Mistral AI SAS (“Mistral”) and Scaleup Yazilim Hizmetleri Anonim Şirketi, which offers a cross-platform chatbot service called NOVA AI. The three investigations were concluded with the acceptance of commitments, without any finding of infringement, pursuant to Article 27, paragraph 7, of the Consumer Code. The companies have undertaken specific obligations to strengthen informational transparency regarding the risk of “hallucinations” in the systems offered.

The measures adopted and the commitments

Permanent disclaimers have therefore been introduced in the user interfaces, below the chats, warning users in Italian of the possibility of hallucinations and including dedicated hyperlinks. In addition, the pre-contractual information has been supplemented and expanded with explicit warnings about the limited reliability of generated content and the need to verify it. In the case concerning DeepSeek, the company has also planned a technological investment to mitigate the phenomenon of “hallucinations”, while acknowledging that, at the current state of the technology, it is not possible to eliminate them completely. In the commitments relating to NOVA AI, it has also been clarified to consumers that the service only provides access, via a single interface, to certain chatbots (for which further details have been provided), without offering any aggregation or response-processing service.

Text of the DeepSeek decision

Text of the DeepSeek commitments

Text of the Mistral AI decision

Text of the Mistral AI commitments

Text of the NOVA AI decision

Text of the NOVA AI commitments
