Model AI Directive for the enterprise (2024)
We have developed an AI Directive template that is used by many Swiss companies. Having trained over 95 institutions, we have seen first-hand what a good AI charter can do: it reassures teams and makes implementation easier.
How to use this AI Directive model in business
Below you'll find all the essentials for an AI Directive. You can adapt them to suit your needs. If you would like to receive a Word version of this AI Charter, or would like specific advice, please do not hesitate to contact us.
Objectives of the AI Directive
This directive establishes guidelines for the use of generative artificial intelligence (AI) technologies within the company. Its aim is to ensure responsible, ethical use that is aligned with the organization's values, while promoting innovation and efficiency.
General principles
Background and definition of generative AI
Generative AI refers to machine learning-based systems capable of creating various types of content, such as text, images, video or sound. These systems, such as advanced language models (e.g., ChatGPT), generate content in response to prompts, drawing on the large datasets they were trained on. The use of generative AI must be accompanied by an understanding of its potential risks, such as bias and inaccuracy.
The role of human intelligence
Although generative AI can be a valuable tool for facilitating and optimizing content creation, human intelligence remains at the heart of every decision and every piece of work produced. AI must be seen as an assistance tool, not as a substitute for human thought and creativity.
User responsibility
Users must ensure the accuracy and relevance of AI-generated content. They must remain alert to possible errors and biases in the results, and ensure that every final piece of content complies with the organization's quality and ethical standards. Responsibility for any publication or decision based on AI-generated content lies with the user.
Good usage practices
Data verification
It is essential to check the veracity and accuracy of any information generated by AI before using it. Users should compare the results obtained with reliable sources and ensure that the content is free of false information or inaccuracies.
Data confidentiality and security
Users must never enter personal, confidential or protected data into AI tools. Anonymization of data is a requirement, and any input that could make a natural person identifiable is prohibited.
If there is any doubt about the sensitivity of certain information, it is preferable not to submit it. Caution is advised for any content containing specific details that could compromise confidentiality or security.
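As a purely illustrative sketch (not part of the directive itself), the example below shows one simple way to redact obvious personal identifiers from a prompt before it is sent to an external AI tool. The patterns and the redact_prompt helper are assumptions for the sake of the example; a real implementation would rely on a reviewed, organization-specific list of identifiers.

```python
import re

# Illustrative patterns only: a real deployment would use a reviewed,
# organization-specific list of identifiers (names, client IDs, addresses, etc.).
PATTERNS = {
    "AVS_NUMBER": re.compile(r"756\.\d{4}\.\d{4}\.\d{2}"),  # Swiss social insurance number format
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d .()/-]{7,}\d"),
}

def redact_prompt(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before the
    text is submitted to an external generative AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the complaint from jane.doe@example.com, phone +41 79 123 45 67."
    print(redact_prompt(prompt))
    # Summarize the complaint from [EMAIL REDACTED], phone [PHONE REDACTED].
```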
User training
The proper and safe use of AI tools requires prior training. The company is committed to providing tailored training sessions and resources to enable users to understand best practices and precautions when using AI technologies.
Authorized uses
The use of generative AI tools is permitted and encouraged for certain tasks, such as:
- Assisted copywriting and content enhancement (articles, emails, non-sensitive reports).
- Translation of non-confidential documents.
- Generating creative ideas and finding inspiration.
- Text correction and linguistic revision.
- Analysis of public or open data (e.g. Open Government Data).
Approved AI tools
It is recommended to use tools and platforms validated by the company for each type of application. A list of approved software and tools, together with their objectives and fields of application, is provided to ensure optimal and secure use:
- Text generation: Microsoft Copilot (Microsoft 365 license)
- Translation: DeepL
- Image generation: Adobe Firefly
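Purely as an illustration of how such an allow-list might be kept in machine-readable form so that internal scripts or intranet pages stay consistent with the directive (the mapping and helper below are assumptions; only the three tools named above come from the directive):

```python
# Illustrative only: a machine-readable version of the approved-tools list above.
APPROVED_TOOLS = {
    "text_generation": "Microsoft Copilot (Microsoft 365 license)",
    "translation": "DeepL",
    "image_generation": "Adobe Firefly",
}

def approved_tool_for(use_case: str) -> str:
    """Return the approved tool for a given use case, or raise if none is listed."""
    if use_case not in APPROVED_TOOLS:
        raise ValueError(f"No approved tool is listed for '{use_case}'.")
    return APPROVED_TOOLS[use_case]

print(approved_tool_for("translation"))  # DeepL
```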
Revisions and updates
The AI Usage Charter is a living document that must be updated regularly to reflect technological advances and legislative changes. The organization is committed to periodically reviewing this charter to incorporate new practices, tools or regulatory requirements for the responsible use of AI.
Get the Word version of the AI Directive template
Please contact us so that we can send you a Word version of this AI Directive template. We can also provide specific advice on adapting it to your organization.