Pascal Mages
CTO, Open Circle AG

As helpful and time-saving as AI tools such as ChatGPT, Copilot and DeepL are for everyday business, little is said about the risks involved. When an employee uploads a data table, where does this information end up? And could companies even be liable to prosecution in the worst-case scenario?
We highlight the five biggest risk areas when using AI tools and show what measures are necessary to ensure that efficiency does not come at the expense of confidentiality, data protection, accuracy and legal certainty.
AI tools only work if they are fed with data. This is precisely where the greatest risk lies: everything that is entered leaves the company and ends up on the servers of external providers. It is often unclear whether this data is stored, passed on or even used to train the models. It becomes particularly problematic when internal documents containing confidential information or personal data are processed. This may even constitute a violation of data protection laws.
Practical example:
An employee uploads an Excel list of customer data to ChatGPT to create a quick summary. The sensitive information now sits on US servers – without the consent of the data subjects. The result: a clear violation of data protection law and a possible six-figure fine.
However, the risk goes beyond data protection. Trade secrets such as price calculations, project plans or market analyses can also be disclosed unintentionally. In the hands of third parties, this information becomes a strategic disadvantage for the company.
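One practical safeguard against this kind of leak is to strip obvious personal data before any text leaves the company. The following Python sketch is purely illustrative – the regex patterns are simplistic, and a real deployment would rely on a dedicated PII-detection tool:

```python
import re

# Illustrative patterns only. Production systems should use a dedicated
# PII-detection library or service instead of hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace recognisable personal data with placeholders before
    the text is allowed to leave the company."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise: Anna Muster, anna.muster@example.ch, +41 79 123 45 67"
print(redact(prompt))
# Summarise: Anna Muster, [EMAIL], [PHONE]
# Note: names still slip through; detecting them needs NER, not regexes.
```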

AI-generated content is not automatically free to use. In many cases, it remains unclear whether parts of existing works have been reproduced. Some providers even reserve rights to translated texts in their terms of use.
Practical example:
A company has an English technical article translated with a free translation tool and publishes the result as a white paper. It later emerges that the terms of the free version allow the service to store translations and use them for training. The company has thus lost control over its content and opened the door to legal disputes.
The term ‘AI hallucination’ sounds harmless, but it describes a serious problem: language models are trained to produce convincing text – even when the facts are not entirely accurate. They ‘invent’ information and present it as if it were reliable.
Practical example:
An employee asks ChatGPT to produce a market analysis. The result contains figures, diagrams and references. A review, however, reveals that several of the sources were fabricated or misinterpreted. If such an analysis is presented to the management meeting unchecked, decision-makers act on false information – and both financial losses and strategic missteps can follow.
Translation tools do not always deliver reliable results either. Legal terminology, irony or linguistic nuances are quickly distorted. In the worst case, an incorrect translation can change the meaning of a contract clause. This can result in legal disputes that could have been avoided.

Many AI tools are black boxes. They deliver results without revealing how they were generated. Decisions are therefore made on the basis of information whose origin and weighting cannot be verified.
Practical example:
A legal department has contract clauses summarised by an AI. The text seems plausible, but it remains unclear why certain aspects were emphasised and others omitted. Should a legal dispute arise later, it is impossible to trace how the summary was arrived at (and thus also impossible to justify why decisions were made on its basis).
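One partial remedy is to keep an internal audit trail of every AI interaction, so that a summary can at least be traced back to the exact prompt and model version it came from. A minimal sketch, assuming a simple JSON-lines log file (the function and file names are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(model: str, prompt: str, output: str,
                       logfile: str = "ai_audit.jsonl") -> None:
    """Append one audit record per AI call, so that any AI-generated
    text can later be traced back to its prompt and model version."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        # Storing a hash instead of the full text keeps the log small
        # but still lets you verify which output a decision relied on.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```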
Even if an AI caused the error, the company remains liable. OpenAI and other providers make this clear: their terms of use state that the tools may produce errors and that results must always be checked by humans. The providers thereby absolve themselves of responsibility.
Practical example:
A developer integrates code suggested by Copilot into a product. It later turns out that the code contains a security vulnerability that is exploited by hackers. The financial damage runs into millions, and there are also potential claims for damages from customers. The responsibility remains entirely with the company.
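A classic instance of this pattern is an AI-suggested database query built with string formatting. The following deliberately simplified Python example shows the kind of suggestion a human review should catch, next to the corrected version:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Typical unreviewed suggestion: string formatting makes the query
    # vulnerable to SQL injection (e.g. name = "x' OR '1'='1").
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Reviewed version: a parameterised query lets the database driver
    # escape the input, which closes the injection hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```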

Define binding rules on which AI tools are permitted and which information may be processed with them. Base these internal guidelines on the sensitivity of the information: a classification scheme (e.g. public, internal, confidential, strictly confidential) is always a solid foundation.
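Such a classification scheme can even be enforced technically, for example with a small gateway check before text is forwarded to an external tool. A minimal sketch based on the four levels named above (the tool names and the policy itself are hypothetical):

```python
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    STRICTLY_CONFIDENTIAL = 3

# Hypothetical policy: the highest classification each tool may receive.
TOOL_POLICY = {
    "chatgpt_free": Classification.PUBLIC,
    "deepl_pro": Classification.INTERNAL,
    "internal_llm": Classification.CONFIDENTIAL,
}

def may_process(tool: str, level: Classification) -> bool:
    """Allow a request only if the tool is approved for data of this
    classification; unknown tools are rejected outright."""
    allowed = TOOL_POLICY.get(tool)
    return allowed is not None and level <= allowed

assert may_process("deepl_pro", Classification.INTERNAL)
assert not may_process("chatgpt_free", Classification.CONFIDENTIAL)
```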
Compare providers, check the contractual terms and conditions, and make an informed decision. Not every free service is secure; often, you ‘pay’ for free services with your data. Pay particular attention to where the data is stored, whether it may be used for training, and when it is deleted.
Free tools are handy, but often unsuitable for sensitive company data. Paid subscription versions usually come with clear data protection and deletion rules: DeepL Pro, for example, deletes texts immediately after translation and does not reuse them.
Without awareness, even the best policy is of little use. Your team must understand why certain information is off limits and must never leave the company. Training sessions, brief guidelines or internal Q&As help to embed the rules in everyday work.
No matter what AI creates, the final check should always be carried out by people who can assess the result; AI should complement, not replace, human intelligence and competence. Establish clear control steps before content is published, passed on to customers or used in any other way.
This ensures that AI continues to function as a productivity booster and does not become a security risk.
AI tools are here to stay. So the sooner we address the risks and establish binding rules for their use, the more confidently we can work with them.
It is important that companies set the framework themselves: which data may be used, which tools are approved, and who checks the results. This creates a clear process that enables efficiency gains while safeguarding data protection and legal certainty.