Let’s start this blog by explaining what the EU AI Act actually is. Europe’s AI regulation has a twofold intention: to ensure that AI systems placed on the European market are safe and respect fundamental rights, and to foster investment and innovation in trustworthy AI.
It does this by classifying AI systems based on their risk level, ranging from minimal risk, such as a spam filter, to unacceptable risk, such as manipulative algorithms in online games aimed at children. The Act also establishes a governance framework to oversee the implementation and enforcement of its rules.
AI assistant tools in content management are currently – in 2024 – based on fine-tuned, commercialized versions of Large Language Models. These can be augmented with specialized, up-to-date domain information, or built on a more general foundation model.
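To make that augmentation concrete: it often comes down to retrieving relevant snippets from the organization’s own documents and feeding them into the prompt. Below is a minimal sketch of that pattern; `llm_complete` is a hypothetical stand-in for whatever commercial model API is actually used, and the keyword matching is deliberately naive.

```python
# Minimal sketch of retrieval-augmented prompting for an ECM assistant.
# `llm_complete` is a hypothetical stand-in for a commercial LLM API call.

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Naive keyword retrieval: rank documents by query-term overlap."""
    terms = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def answer_with_context(query: str, documents: dict[str, str]) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)  # hypothetical model call
```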
What’s the risk there? How could generative AI of this kind fail to comply with the European regulation? In several ways, it seems. On the other hand, the AI Act also clearly offers a basis for building trust in current and future digital solutions.
Let’s have a look at the different aspects of Enterprise Content Management (ECM) and the role AI will play in them in the future.
Future applications of user authentication for digital applications might implement AI for biometric identification. Emotion recognition systems could become an essential part of advanced content creation and management and could be harnessed to facilitate efficient case workflow management. These two examples alone, implemented in the ECM of the future, catapult this software into what the AI regulation considers “high-risk AI systems”.
But there are far less futuristic AI implementations that could be labelled as potentially high-risk, and that have already proven to need regulation, such as automated decision making.
If your ECM includes automation of decisions that impact customers’ employment, creditworthiness, or access to essential services, your AI must adhere to stringent requirements: among others, risk management, data governance, record-keeping, human oversight, and accuracy and robustness obligations.
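As a minimal sketch of what the record-keeping side could look like in practice: every automated decision leaves a traceable record of its inputs, the model version, the outcome, and whether a human reviewed it. The field names and the model identifier below are illustrative choices, not a format prescribed by the Act.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Illustrative audit record for one automated decision."""
    model_version: str
    inputs: dict
    outcome: str
    human_reviewed: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, logfile: str = "decisions.jsonl") -> None:
    # Append-only JSON Lines log, so decisions can be audited later.
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_version="credit-scorer-1.4",  # hypothetical model identifier
    inputs={"income": 42000, "tenure_months": 18},
    outcome="declined",
    human_reviewed=False,
))
```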
Generative AI as part of service platforms – like advanced chatbots – is considered limited-risk AI, but it still entails transparency obligations, such as the requirement to inform customers or employees that they are interacting with an AI system and the requirement to disclose whether content has been artificially generated or manipulated.
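Here is a sketch of how those two transparency obligations might surface in code, assuming a simple content object of our own design; none of the wording or naming below is mandated by the Act.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant."  # wording is illustrative

@dataclass
class Content:
    body: str
    ai_generated: bool  # provenance flag stored with the content itself

def render_chat_reply(reply: Content) -> str:
    # Obligation 1: tell users they are interacting with an AI system.
    return f"[{AI_DISCLOSURE}]\n{reply.body}"

def render_document(doc: Content) -> str:
    # Obligation 2: disclose artificially generated or manipulated content.
    label = "AI-generated content" if doc.ai_generated else "Human-authored content"
    return f"({label})\n{doc.body}"

print(render_chat_reply(Content("Your invoice was approved.", ai_generated=True)))
```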
One of the most urgent problems concerning AI in general is its default lack of transparency by design, as these systems are probability machines learned from training data rather than old-school rule-based programs. Hence the rise of a new subdomain within computer science: explainable AI, XAI for short. This could speed up AI adoption even more, as a chain of AI “reasoning” will make AI more transparent and will also increase the opportunity to improve on suboptimal results.
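To make the XAI idea concrete: one common model-agnostic technique is permutation importance, which measures how much a model’s accuracy drops when each input feature is shuffled. Below is a minimal sketch using scikit-learn on synthetic data; the feature names are made up for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a decision model; real ECM data would replace this.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: a larger drop
# means the model leans more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

feature_names = ["income", "tenure", "region", "age"]  # illustrative labels
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```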
In all cases, regardless of the risk level, the ECM provider must ensure that the AI system complies with existing EU laws, in particular those related to data protection and non-discrimination. If an ECM, for instance, makes black-box distinctions between people that impact their lives and cannot be justified or explained, the system is not compliant.
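A very rough first screen for that kind of black-box discrimination is to compare outcome rates across groups, as in the sketch below. The four-fifths threshold used here is a heuristic borrowed from US hiring practice, purely for illustration; it is not an EU AI Act rule.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, approved?) pairs from an automated system."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparity_flag(rates: dict[str, float], threshold: float = 0.8) -> bool:
    # Flag if any group's approval rate falls below `threshold` times the
    # highest group's rate (the illustrative "four-fifths" heuristic).
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

rates = approval_rates([("A", True), ("A", True), ("A", False),
                        ("B", False), ("B", False), ("B", True)])
print(rates, "-> review needed:", disparity_flag(rates))
```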
The upside, looking at the longer-term consequences of the AI Act, is that enforced regulation can foster trust in the safety and transparency of digital solutions.
The boring part is getting to know the EU regulation 😊, in detail. The hardest part is figuring out how enterprise digital solutions should and can comply.
If you could use some help with that, reach out to AmeXio. We have been extending and implementing ECM systems for over 20 years and are well versed in AI services as well.