Make your free AI Policy
What is an AI Policy?
An AI Policy tells an organisation’s staff members which artificial intelligence (AI) tools and models they may use in their work and how these may be used. AI Policies set limits and provide information to help ensure AI is used productively, safely, ethically, and compliantly.
When should I use an AI Policy?
Use this AI Policy:
- if your organisation employs staff members
- to set out which AI tools and models can be used within an organisation and how
- for organisations in various industries that could benefit from the use of AI in the workplace
- for organisations in England, Wales, or Scotland
About AI Policies
Learn more about making your AI Policy
How to make an AI Policy
Making your AI Policy online is simple. Just answer a few questions and Rocket Lawyer will build your document for you. When you have all the information about how AI can be used in your workplace prepared in advance, creating your document is a quick and easy process.
You’ll need the following information:
The organisation
- What is the organisation’s (ie the employer’s) name?
Permitted tools and uses
- Which AI tools and models may be used within the organisation?
- How can AI tools and models be used within the organisation?
- Which tools and models does each permitted use apply to?
Rules on AI use
- Who is allowed to use AI within your organisation?
- Do staff members need to obtain approval before:
  - communicating AI output outside of the organisation (eg to clients or via publication)? If so, who must give approval?
  - using AI output in a way that could impact the organisation’s products, platforms, or technical foundations (eg by adding AI-generated code into existing source code)? If so, who must give approval?
- Will you specify which licences and agreements apply to the use of your permitted AI tools and models (eg user agreements)? If so, you’ll need to provide a list and you may provide URLs.
- Do any other specific rules apply to the use of AI tools and models within the organisation? If so, what are they?
Key contacts and decision-makers
- Who is the key contact for matters related to the use of AI within the organisation? What are their phone number and email address?
- Who decides which AI tools and models can be used within your organisation and how?
- Who can approve the use of personal data (ie information about an individual from which they may be identified) in relation to AI tools and models?
Data protection and privacy policies
- Does your organisation have any data protection and privacy policies in place (eg a Data protection and data security policy)?
Monitoring AI use
- Will the use of AI within your organisation be monitored and evaluated (eg for the presence and perpetuation of biases or misinformation)? If so, who is responsible for monitoring?
Common terms in an AI Policy
AI Policies explain which AI models and tools can be used within an organisation and how. To do so, this AI Policy template includes the following terms and sections:
Purpose of the AI Policy
The AI Policy starts by explaining why the employer is implementing this Policy: to set out how staff members can use AI at work to enable their employer organisation to leverage AI’s capabilities in a safe, compliant, and ethical way.
This section also identifies the key contact within the organisation to whom questions about the AI Policy and AI use should be addressed.
The organisation’s use of AI
This section encourages staff members to use AI within the organisation, but only within the limits set out in the AI Policy. It also identifies which AI tools and models may be used and tells staff members how they may suggest new tools or models that they believe will benefit the organisation.
Permitted uses of the organisation’s permitted tools and models
This section identifies the uses that can be made of AI tools and models within the organisation and which tools or models each use applies to. It also lets staff members know how they can gain permission to use a tool or model for a use not included in this list.
How permitted uses should be carried out
Next, the Policy imposes various rules on how the permitted uses of AI tools and models may be carried out. These include the requirements:
- that only specified people may use the AI tools and models (eg certain individuals, roles, and/or departments)
- to check AI output’s accuracy
- to consider biases that may be present in a model’s training data, how these may be included in a tool’s output and perpetuated, and whether, in light of these, an intended use is safe and ethical
- to abide by licences or agreements that govern how an AI tool or model may be used by a given individual
- that AI use abides by various areas of law (eg defamation, marketing, and anti-discrimination laws)
- that AI use is always in accordance with relevant governmental and other industry-standard regulations, guidance, and codes of practice
- if you choose to include it, that AI users obtain permission before communicating AI output outside of the organisation and/or using it in products, platforms, or other technical foundations
Any requirements you choose to impose in addition to these will also be included in this section.
Staff members’ obligations
This section outlines a staff member’s obligation to abide by the terms of this Policy and to actively participate in any training on AI use. Note, however, that the Policy is not part of staff members’ contracts of employment, which restricts how these obligations may be enforced.
The organisation’s obligations
This section states the organisation’s commitment to ensuring that AI is used in an ethical, compliant, and productive way.
If the organisation will monitor how AI is used within its activities, this is also set out here and responsibility for monitoring is assigned.
Training
This section contains the organisation’s commitment to providing any training that’s necessary to ensure staff members can abide by the AI Policy.
A list of information that may be appropriate for training to cover is also provided.
Intellectual property
This section starts by identifying how intellectual property created by individuals in the course of their employment is generally owned (ie usually by the employer, unless there are specific provisions to the contrary).
It then identifies factors affecting ownership of intellectual property created by AI. It highlights ways that AI use may infringe on others’ intellectual property rights and how this can be avoided.
Data protection and privacy
Next, the AI Policy highlights the requirement that all AI use within the organisation complies with data protection laws (eg the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR)). For example, it prohibits personal data from being entered into AI tools or models without express permission from a specific person or department.
AI users are also told to abide by the organisation’s policies related to data protection and privacy, and key policies are highlighted if the organisation has them in place (eg a Data protection and data security policy).
Protection of confidential information
This section restricts how confidential information (eg trade secrets, inventions, data, or strategies) may be entered into AI tools and models and how any AI output containing such can be used.
If you want your AI Policy to include further or more detailed provisions, you can edit your document. However, if you do this, you may want a lawyer to review the document for you (or to make the changes for you) to make sure that your modified AI Policy complies with all relevant laws and meets your specific needs. Use Rocket Lawyer’s Ask a lawyer service for assistance.
Legal tips for organisations
Get excited about using AI in your workplace; don’t get carried away
AI offers an organisation many opportunities. From making administrative tasks more efficient to helping create inspiring or informative content, it can help a business to gain a competitive edge over businesses that are less willing to explore AI’s capabilities.
That being said, be wary of using AI simply for the sake of it. Compliance aside, AI models and tools still have various limitations. For example, they can generate completely made-up output (ie they can ‘hallucinate’), which can lead to faulty work if output is not checked and adapted adequately. Over-reliance on AI tools can also lead to staff members producing work that’s uninspired or written in an inappropriate style.
To ensure you use AI in the best possible way, use your AI Policy as a starting point and support it with open discussions about AI use in the workplace, strategic plans for its use that are overseen by management, and in-house guidelines for specific uses of AI.
Know what intellectual property you may use
Intellectual property (IP) is an area of law that’s being particularly challenged by AI. Questions of inventorship, ownership, and permitted use have all arisen and not been fully resolved in relation to AI and IP.
Familiarise yourself and your team with rules around who owns and/or is licensed to use the IP that’s been used to train the AI tools and models you use and the IP created by the tools. To get started, read:
- the terms on IP within any user agreements and software licences governing your organisation’s use of AI tools and models
Understand when to seek advice from a lawyer
In some circumstances, it’s good practice to Ask a lawyer for advice to ensure that you’re complying with the law and that you are well protected from risks. You should consider asking for advice if:
- you need help obtaining licences to use an AI tool or model or others’ intellectual property
- you need help establishing whether a given use of AI output is legal
- this AI Policy doesn’t cover everything you want or doesn’t meet your needs
- you want documents governing how others can use an AI tool or model that you’ve created
AI Policy FAQs
What is included in an AI Policy?
This AI Policy template covers:
- which AI tools and models may be used
- the purposes for which these tools and models may be used
- rules about how they may be used (eg who may use them and whether any approvals are required)
- how staff members should pitch new tools, models, or uses of AI
- staff members’ and the organisation’s obligations relevant to AI use (eg training obligations)
- whether the employer monitors how AI is used within the organisation
- training relevant to AI use
- intellectual property rights (IPRs) considerations
- data protection and privacy considerations
- how confidential information should be protected when using AI
Why do I need an AI Policy?
Generative AI models (eg ChatGPT) that create text, images, and other output based on users’ prompts are becoming increasingly ubiquitous. They offer efficiency, inspiration, and intelligence that can help many businesses to optimise their activities and improve their productivity.
However, this rapidly developing area of technology does bring risks. For example, the risk of breaching data protection, intellectual property (IP), advertising and marketing, or discrimination laws. Having an AI Policy in place helps you to mitigate the risk of litigation, ethical issues, and commercial disadvantages arising due to breaches of these areas of law or of other AI best practice requirements.
What is AI?
‘Artificial intelligence’ is a broad term that covers various types of computer programs designed to mimic human cognition and intelligence. These range vastly in complexity. The AIs currently in vogue are generally ‘machine learning’ models (and particularly large language models (LLMs)), which use algorithms to learn from many examples of a specific type of content (eg written information or images). This learning process is referred to as ‘training’ an AI. Trained AIs can then make predictions and extrapolations based on their learning to, for example, generate new content in response to a user’s prompt (ie ‘generative’ AI).
How can AI be used in workplaces?
AI models and the user-friendly tools created based on them have many potential uses in the workplace. These include (but are by no means limited to):
- drafting text (eg for internal reports, creative writing, advertising copy, or advice)
- creating images or videos (eg for marketing or packaging materials, creative content, or to support presentations)
- organising and condensing information (eg research notes or meeting notes)
- translating text from one language to another
- writing computer code
- analysing data (eg by pulling out trends or by creating graphs and tables)
- conducting research and answering questions
- providing ideas and inspiration
- working out the best way to schedule various commitments, meetings, or events
- providing customer service via chatbots
- making decisions as to how service provision should be carried out (eg which customer should receive which services)
AI can always be used in new, inventive ways. You should think open-mindedly about ways it can be used within your organisation and include these in your AI Policy when prompted. The Policy also invites your staff members to suggest new uses of AI and new tools and models, which you can consider for implementation. This helps your organisation to make the most of your staff members’ unique ideas to leverage AI in your activities.
What are the risks of using AI in the workplace?
Growth in the presence, size, and technical complexity of AI models, and in their interactions with various environments (both digital and physical), is constantly accelerating. This offers excellent opportunities but brings with it various complex risks, some of which are still poorly regulated and/or understood.
Risks include:
- breaching laws by putting data (eg people’s personal information or businesses’ confidential information) into AI tools
- breaching laws (eg intellectual property, advertising, or data protection laws) by using the output of AI tools in certain ways (eg by publishing it)
- damaging business resources (eg damaging a technical platform by adding problematic AI-generated code to it)
- suggesting incorrect information (AI output is not always accurate)
- perpetuating systemic biases (if an AI model is trained on data containing a particular bias, this bias may inform and be reproduced in its output, perpetuating the original bias when this output is used (eg for decision-making))
- breaching contracts with AI service providers (eg by using their models or the models’ outputs in ways prohibited by the contracts)
How should an organisation decide which AI tools and models to use in its workplace?
Many different AI models, and user-facing tools utilising these models, are available for use. Some are better known than others, but all carry risks.
Keep an open mind when deciding which models and tools to use to help your workplace leverage AI in the most innovative and efficient ways possible.
You should ensure that someone who understands the rules and implications of a model’s or tool’s use approves its use and the specific ways that it is to be used within your organisation. For example:
- someone with sufficient technical expertise to know how the tool works and the associated risks
- someone with sufficient legal expertise to understand licences and/or user agreements that govern how a model or tool may be used
This AI Policy template asks you to assign somebody responsibility for deciding which AI tools and models can be used and how. This person could consult with other people with different areas of expertise (eg legal, technical, commercial, or creative) before approving a model or use, if necessary.
Should organisations train staff members on how to use AI?
Providing comprehensive, accessible, and engaging training is an excellent way of ensuring that staff members follow procedures and understand the opportunities and risks associated with aspects of their work. This includes work with AI tools and models.
This AI Policy template will always state that the employer will provide staff members with any training necessary to enable the staff members to comply with the requirements of the AI Policy. Exactly what this includes and how it is delivered depends on the nature of your organisation. However, it’s often prudent for training to cover:
- what AI is and how it works
- the areas of law that impact how AI may be safely and compliantly used
- how AI can be used in a safe and ethical manner (eg the need to consider whether any biases are present in output)
- the processes that should be used when using AI in the workplace (eg to check whether output is accurate and legally compliant)
- which specific licences (eg software licences) and agreements (eg user agreements) apply to staff members’ use of AI tools and models, including the requirements imposed by these, how these should be complied with, and provisions on ownership (eg ownership of intellectual property generated by a tool)
Our quality guarantee
We guarantee our service is safe and secure, and that properly signed Rocket Lawyer documents are legally enforceable under UK laws.
Need help? No problem!
Ask a question for free or get affordable legal advice from our lawyers.