What does ChatAIAct say about itself?
You know you are knee-deep in the metaverse once you start asking AI about AI. But it’s a relevant question: the EU AI Act governs AI systems placed on the market or put into service in the EU, including ChatAIAct, ChatMDR, ChatIVDR and ChatFDA.
We decided to ask the AI to assess itself. This was the prompt we gave to ChatAIAct:
ChatMDR, ChatIVDR and ChatFDA are chatbots that help regulatory affairs professionals navigate the medical device regulations of the EU and the USA. They are not medical devices themselves, but tools to help those working for manufacturers or notified bodies to understand and apply the regulation. Similarly, ChatAIAct is a chatbot that does the same for the EU AI Act.
Write a short blog post on how these tools are classified under the AI Act and which requirements need to be met. Avoid words like navigate, complexity, delve into and other obscure words that only an AI would use. Also, no emojis please.
And here is the post it wrote:
How Are Regulatory Chatbots Like ChatMDR, ChatIVDR, ChatFDA, and ChatAIAct Classified Under the EU AI Act?
Regulatory chatbots such as ChatMDR, ChatIVDR, ChatFDA, and ChatAIAct are designed to help professionals working with medical device and AI regulations. They are not medical devices themselves, nor do they make decisions that directly affect health, safety, or fundamental rights. Here’s how these tools are classified under the EU AI Act and what requirements apply:
Classification Under the AI Act
According to Article 6(1) of the AI Act, an AI system is considered “high-risk” if:
- it is a safety component of a product covered by the EU harmonisation legislation listed in Annex I (such as medical devices), or is itself such a product, and
- that product is required to undergo a third-party conformity assessment.
Chatbots like ChatMDR, ChatIVDR, ChatFDA, and ChatAIAct are not safety components of a regulated product, nor are they products themselves under the Union harmonisation legislation listed in Annex I. They are support tools for regulatory professionals. Therefore, they do not fall under the high-risk category by default.
Article 6(2) also classifies the use cases listed in Annex III as high-risk. However, these chatbots do not perform tasks such as biometric identification, critical infrastructure management, or decision-making in areas like law enforcement or migration. They simply provide information and guidance.
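To make the Article 6 test concrete, here is a minimal sketch in Python of the classification logic described above. The class, field names, and function are hypothetical, chosen for illustration only, and the sketch ignores the Article 6(3) derogations; it is not a substitute for a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical description of an AI system, for illustration only."""
    safety_component_of_annex_i_product: bool  # Article 6(1)(a)
    is_itself_annex_i_product: bool            # Article 6(1)(a)
    needs_third_party_assessment: bool         # Article 6(1)(b)
    matches_annex_iii_use_case: bool           # Article 6(2) / Annex III

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the Article 6 high-risk test (ignores Article 6(3))."""
    # Article 6(1): a safety component of (or itself) an Annex I product
    # that must undergo third-party conformity assessment.
    article_6_1 = (
        (system.safety_component_of_annex_i_product
         or system.is_itself_annex_i_product)
        and system.needs_third_party_assessment
    )
    # Article 6(2): one of the Annex III use cases (biometrics,
    # critical infrastructure, law enforcement, migration, ...).
    return article_6_1 or system.matches_annex_iii_use_case

# A regulatory chatbot like ChatMDR: none of the conditions apply,
# so it is not high-risk, matching the conclusion above.
chatmdr = AISystem(False, False, False, False)
assert is_high_risk(chatmdr) is False
```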
Requirements to Meet
Since these chatbots are not high-risk AI systems, the strict requirements of Chapter III (such as conformity assessment, a quality management system, and CE marking) do not apply.
However, some general transparency obligations from Article 50 may still be relevant:
- If the chatbot interacts directly with users, it should be clear that the user is interacting with an AI system, unless this is already obvious.
- If the chatbot generates synthetic text that is published to inform the public on matters of public interest, the text should be disclosed as AI-generated, unless it has undergone human review and someone holds editorial responsibility for it.
In practice, for these regulatory chatbots, the simplest way to comply, sketched in code below, is to:
- Clearly inform users that they are interacting with an AI system.
- Ensure that any AI-generated content is marked as such if it is published for public information without human editorial review.
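As an illustration of those two points, here is a minimal sketch in Python; the notice texts, function names, and the human_editorial_review flag are hypothetical placeholders, and the actual wording and scope should be checked against Article 50.

```python
AI_DISCLOSURE = "You are chatting with an AI system, not a human."
GENERATED_LABEL = "This text was generated by an AI system."

def wrap_chat_response(answer: str, first_message: bool) -> str:
    """Prepend an Article 50(1)-style notice to the first reply,
    so the user knows they are interacting with an AI system."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer

def mark_for_publication(text: str, human_editorial_review: bool) -> str:
    """Label AI-generated text that is published to inform the public,
    unless a human has taken editorial responsibility (Article 50(4))."""
    if human_editorial_review:
        return text
    return f"{text}\n\n[{GENERATED_LABEL}]"

print(wrap_chat_response("Article 6 covers classification.", first_message=True))
```

In this sketch the disclosure is simply prepended to the first reply; a real deployment might instead show a persistent banner in the user interface, which satisfies the same obligation.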
Summary
Regulatory chatbots like ChatMDR, ChatIVDR, ChatFDA, and ChatAIAct are not classified as high-risk AI systems under the EU AI Act. They do not need to undergo conformity assessment or meet the strict requirements for high-risk systems. The main obligation is to be transparent with users about the AI nature of the tool, which can be fulfilled with a simple notice.
References:
- EU AI Act, Article 6 (Classification rules for high-risk AI systems)
- EU AI Act, Article 50 (Transparency obligations for providers and deployers of certain AI systems)