From AI deployment to AI trust: A conversation about governance, risks, and responsibility

AI has arrived in everyday business life, often faster than strategies, roles, and rules can keep up. While classic software works according to clear if-then logic, generative AI systems in particular bring a new reality: results are context-dependent, probabilistic, and not always reproducible.

What does this mean for governance, compliance, and risk management? And how can companies manage to act in a way that is not only "sufficient" in regulatory terms, but also measurably trustworthy?

In conversation with Christine Wohlwend (Elleta AG), we shed light on the most important aspects of modern AI governance. As a partner of Elleta AG, we see time and again in projects that responsible AI is not an "add-on"; it is a prerequisite for AI to really work in business.

Christine Wohlwend

Managing Partner Consultant at Elleta AG

AI is not only changing processes and tools, but also the way companies need to organize responsibility, risk, and transparency. In the following interview, Christine Wohlwend shares her perspective from a governance and compliance standpoint and highlights what is important in practice right now.

 

Interview with Christine Wohlwend (Elleta AG)

1. RWAI: "You come from a background in governance and compliance in the IT environment. In your opinion, what makes AI, compared to traditional software, a field that poses particular challenges for governance and risk management?"

Elleta: "Generative AI systems in particular do not deliver strictly reproducible 'if-then' results that are hard-coded into software. They work with probabilities and plausibilities, which means that results are not necessarily correct and accurate. The quality depends on the specific context and the task at hand. In addition, the behavior of an AI can change, known as model drift, for example due to updates, new data, different input patterns, or changed processes. This distinguishes governance in the AI environment from classic application development, because the latter does not change its behavior without our intervention. As a result, AI governance becomes less of a one-time "acceptance before go-live" and more of a lifecycle issue: clear responsibilities, risk classification, transparency about system boundaries, and ongoing monitoring become central.

2. RWAI: " With its 'Blockchain Act,' Liechtenstein has created a technology-neutral framework that aims to combine trust and user protection with innovation. Do you see any parallels or differences when it comes to AI?"

Elleta: "A law regulates the framework within which an activity takes place. That is probably the main parallel, because the key difference is the subject of regulation: while the Blockchain Act addresses a niche, AI is effective in almost all industries and can be found in companies of all sizes. This is where the EU AI Act, which also applies to Liechtenstein, comes into play: it has not been developed for a specific industry, but forms an AI governance framework for all companies in the European area."

3. RWAI: "How should AI governance be designed so that it is not only 'adequate' in regulatory terms, but also measurably builds trust—both internally and externally?"

Elleta: "Trust is created when a company can clearly demonstrate and show: We know what we are using AI for, we understand the risks, and we know what controls and measures we have implemented and how effective they are. This includes knowing the purpose for which AI is used, the limitations of AI, the categories of data processed, the labeling of AI content, and rules governing where and when AI may be used. Trustworthy AI, as enshrined in the EU Ethics Guidelines, reflects this: we define the requirements for transparency, robustness, security, and accountability, as well as human control in this process. Companies that address this issue early on not only reduce regulatory risks, but also increase acceptance among employees, customers, and partners, thereby gaining momentum because decisions in the project can be made more quickly and sustainably."

4. RWAI: "Which industries are currently furthest ahead in establishing AI governance – and which specific practices can be transferred to SMEs and IT projects?"

Elleta: "When I assess pioneers from a risk and compliance perspective, I focus less on who uses AI the most and more on who already manages AI in such a way that it is auditable, accountable, and operationally stable. In this sense, I see three groups at the forefront: financial service providers and insurance companies, as well as product-regulated industries such as medical technology and critical infrastructure. These industries are used to providing evidence and being supervised. In most cases, industry associations have already issued relevant guidelines and specifications, or the supervisory authorities have taken action. For SMEs or IT projects in general, this can certainly be used to derive a procedure for defining roles and responsibilities, or to model the approach for developing an AI use case inventory. Participants benefit from publicly available sources and the know-how of specialists in the industry, so there is no need to reinvent the wheel."

5. RWAI: "What governance gaps arise most frequently when employees use AI functions from cloud services before there is a clear strategy, roles, and rules in place?"

Elleta: "Without clear roles and rules, shadow usage arises and no one controls risk, quality, and evidence. Often, there is a lack of simple guardrails for permission regarding which tools may be used for which data categories. It is not enough to simply activate a tool; employees need training and AI skills development so that they can judge for themselves which processing procedures with a particular AI tool are critical and which are permissible. The typical governance gaps are therefore: lack of inventory and lack of usage overview, lack of technical managers, no data rules, and, as a result, lack of incident management in the event of incidents. Cloud AI is quick to implement, but without clear rules, the process gets out of control.

 

6. RWAI: "How does structured communication with supervisors and regulators affect AI risk management, and what needs to be in place internally to ensure that this communication accelerates rather than slows down?"

Elleta: "If you are operating in a regulated area, it is definitely advisable to start the dialogue early on. But not too early—you should have a specific use case to evaluate, not just an abstract idea. A regulatory authority only becomes a hindrance if the product or project is already complete and you need approval at short notice. In cross-border constellations, for example, in the case of a Swiss provider with EU/EEA use, it is also worthwhile to make contact at an early stage if there are any uncertainties in order to avoid duplicate documentation. But again, it is pointless to ask the question "are we allowed to use AI?" Instead, ask specifically "is this use case appropriate in your view, given the identified risks and the controls in place?" If there are no queries or it is not a risky area of application, only a notification to the supervisory authority is usually necessary anyway.

 

7. RWAI: "What are the most common problems you encounter in practice in connection with a lack of AI governance?"

Elleta: "The classic scenario is the use of AI tools such as Copilot, GPT, or Perplexity without a policy that has been trained and is also adhered to. Companies shy away from the cost of licenses they would have to pay for business accounts and only give selected employees access to trusted working environments. However, other employees still use AI and are often unaware that they are not allowed to feed AI with customer data when others are doing so. Once the data is out, there is no way back. My opinion is therefore clear: AI is not a luxury work tool for individuals, but has become a standard work tool. And that is exactly how it must be used, monitored, and trained in the company."

8. RWAI: "If you had to define a responsible AI standard suitable for businesses, which three elements would be essential—and how could you tell in everyday life that they are actually being implemented?"


Elleta: "Firstly, responsibilities and competencies must be clearly defined.
Secondly, a risk-based assessment approach is needed for every use case in which AI is to be deployed. Classification is mandatory, and any measures must be proportionate to the risk.
Thirdly, and this is also how you can tell in everyday life whether it is actually being implemented, there must be evidence of transparency and documentation in the company. Ideally, there are monitoring processes for human oversight where this is necessary, or at the very least the handling of incidents is described and implemented."
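As one way of making the second and third elements concrete, the sketch below maps an assumed risk class to a set of proportionate measures and checks which evidence is still missing for a use case. The class names and required measures are illustrative assumptions, not a regulatory catalogue.

    # Illustrative sketch: tie the risk classification to proportionate, documentable measures.
    # The risk classes and required measures are assumptions, not a standard.
    REQUIRED_MEASURES = {
        "minimal": ["use case recorded in the AI inventory"],
        "limited": ["use case recorded in the AI inventory",
                    "AI-generated content labeled",
                    "human review of outputs"],
        "high":    ["use case recorded in the AI inventory",
                    "AI-generated content labeled",
                    "human oversight with documented sign-off",
                    "incident and drift monitoring with evidence retained"],
    }

    def missing_measures(risk_class: str, implemented: set[str]) -> list[str]:
        """Return the measures that are still missing for the given risk class."""
        return [m for m in REQUIRED_MEASURES[risk_class] if m not in implemented]

    # Example: a high-risk use case where monitoring is not yet in place.
    print(missing_measures("high", {
        "use case recorded in the AI inventory",
        "AI-generated content labeled",
        "human oversight with documented sign-off",
    }))
    # -> ['incident and drift monitoring with evidence retained']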

The interview clearly shows that AI governance is not a one-time go-live check, but rather an ongoing lifecycle issue. Trust is built when companies are transparent about what AI is used for, what risks exist, and what controls are effective in operation. Clear responsibilities, a risk-based assessment of use cases, and comprehensible documentation and monitoring are the central building blocks for responsible AI in practice.

AI is now a standard tool, which is precisely why it needs standards that work in everyday life. Starting early with a structured approach not only reduces regulatory risks, but also creates acceptance and confidence within the company. Together with Elleta AG, we support organizations in managing AI responsibly, pragmatically, and sustainably.
