MAKE YOUR FREE AI Policy

What we'll cover
What is an AI Policy?
An AI Policy tells an organisation’s staff members which artificial intelligence (AI) tools and models they may use in their work and how these may be used. AI Policies set limits and provide information to help ensure AI is used productively, safely, ethically, and compliantly.
When should I use an AI Policy?
Use this AI Policy:
- if your organisation employs staff members
- to set out which AI tools and models can be used within an organisation and how
- for organisations in various industries that could benefit from the use of AI in the workplace
- for organisations in England, Wales, or Scotland
Sample AI Policy
The terms in your document will update based on the information you provide
AI USE POLICY
Purpose of the AI Policy
- (we, our or us) is excited by the opportunities for innovation and efficiency offered by the proliferation of artificial intelligence (AI) models and tools. We intend to incorporate these tools into our organisation’s operations in a safe, ethical, and legally compliant manner, to enable us, our Staff Members, and other stakeholders (e.g. our clients) to obtain maximum benefit from new and established AI technologies.
- has implemented this AI Policy to help us to achieve the above. The Policy sets out which AI tools and models may be used within our organisation, how they may be used, and by whom.
- Any questions in relation to this Policy should be referred to in the first instance.
’s Use of AI
- We allow and encourage the use of AI tools and models by our Staff Members (including officers, employees, consultants, trainees, homeworkers, part-time workers, fixed-term workers, casual workers, agency workers, volunteers, and interns), but only for Permitted Uses of Permitted Tools and Models by Permitted Users (as defined below).
- Our Permitted Tools and Models are the particular AI tools or models that has considered and approved as safe and legally compliant for use within our business when used in accordance with this Policy. They are:
- .
- If any Staff Member believes that approving a certain additional AI tool or model (i.e. one that is not included in the list above) for use within our organisation would be beneficial for the business, and that it could be used in a safe and legally compliant manner, they should contact to communicate their suggestion. Their suggestion will be considered by and, if agrees with the proposal, the tool or model will be added to the list of Permitted Tools and Models above (via an update or an addendum to this Policy).
Permitted Uses of ’s Permitted Tools and Models
- The Permitted Tools and Models may only be used for the following Permitted Uses:
- may be used .
- If a Permitted User wants to use a Permitted Tool or Model for a use other than one of the Permitted Uses set out above, they should propose this use to , who will evaluate the proposal and, if they approve it, grant permission for the suggested use via a written confirmation.
How Permitted Uses should be Carried Out
- Permitted Uses of Permitted Tools and Models may only be carried out by Permitted Users. Our Permitted Users are:
- .
- Whenever a Permitted User carries out a Permitted Use, they must be aware that any output generated by AI tools or models may be inaccurate. This includes information purported to be factual, e.g. legal, medical, or technical advice. This applies regardless of media, i.e. whether the output is textual, graphic, audio, or of any other form. AI-generated output should never be taken to be true or accurate and Permitted Users should always check the accuracy of any purported statements, facts, or representations before using these to inform or contribute to their work in any way.
- Whenever a Permitted User is planning to carry out a Permitted Use, they must consider whether the AI tool or model used may have been trained on, reflect, and/or via its output perpetuate any biases (e.g. systemic biases that discriminate against particular groups of people). If any potential for the incorporation of such bias exists, a Permitted User must consider:
- Which biases may be present;
- The potential impacts of their using the relevant AI tool or model in the planned way (e.g. whether decisions may be made or advertisements targeted in a manner that reinforces an existing privilege held by a social group);
- How such biases and/or their effects may be mitigated by the Permitted User; and
- Based on the points above, whether the planned Permitted Use can be carried out in a way that does not risk harm to anybody and which will not constitute discrimination under the Equality Act 2010. If they cannot be confident that it can, they should not carry out the planned Permitted Use or should first seek advice from .
- Permitted Uses should always be carried out in accordance with the terms of any particular licences (e.g. software licences) or agreements (e.g. user agreements) that allow and/or govern the use of Permitted Tools or Models when such licences or agreements are held by or apply to either or individual Staff Members. If a Permitted User is in any doubt as to what any such licences or agreements require, they should contact and request further information. A list of the licences and agreements applicable to our Permitted Tools and Models, for Permitted Users’ reference, can be.
- Whenever a Permitted User uses an AI tool or model, they must do so in accordance with all laws relevant to the specific use. These may include, but are not limited to:
- Advertising and marketing laws and regulations;
- Laws dealing with defamation, libel, and slander;
- Anti-discrimination laws;
- Privacy and data protection laws;
- Laws restricting the disclosure of confidential information; and
- Intellectual property laws.
- Permitted Uses should always be carried out in accordance with relevant governmental and other industry-standard regulations, sets of guidance, and codes of practice. These include, but are not limited to:
- The Information Commissioner’s Office’s (ICO’s) guidance on AI and Data Protection (https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/).
- If, at any point while carrying out or planning to carry out a Permitted Use, a Permitted User is uncertain as to how to do so in a risk-averse manner and in compliance with this Policy and with the law, they should not hesitate to contact to discuss their questions or concerns.
Staff Members’ Obligations
- Staff Members must only use AI within our organisation in accordance with this Policy. They must comply with all provisions and must seek assistance via the identified contacts if uncertain on any point or, if identified contacts are unavailable, via another appropriate party (e.g. a line manager, Legal Department representative, or IT Department representative). The relevant use of AI should be paused until any such uncertainties are resolved.
- Staff Members should actively participate in any and all training provided or organised by that is relevant to AI and the use of AI tools and models. If any aspects of any training are unclear to a Staff Member, it is the Staff Member’s responsibility to raise these concerns with .
’s Obligations
- We are committed to implementing and facilitating the productive, ethical, and compliant use of AI technologies within our organisation. As such, we are committed to upholding this Policy and to supporting Staff Members to ensure that they can adhere to its provisions.
Training
- will provide any training on the use of AI in the workplace that’s necessary to make sure that Staff Members can comply with the requirements set out in this Policy.
- Training will cover (but is not limited to), where appropriate:
- Defining AI and explaining how different types of models (e.g. large language models) work;
- The various areas of law that impact how AI may be safely and compliantly used, for example, data protection, intellectual property, privacy, defamation, and advertising and marketing law;
- How AI can be used in a safe and ethical manner. For example, Staff Members should be aware of how AI may inadvertently be used to discriminate against individuals as prohibited by the Equality Act 2010; and
- The specific licences and agreements that and its Staff Members are subject to in relation to their use of AI within our organisation, including the requirements imposed by these, how these should be complied with, and provisions on ownership (e.g. ownership of intellectual property).
Intellectual Property
- Generally, any intellectual property rights created by or arising in works created by any of ’s Staff Members in the course of their employment will be the property of , unless alternative provisions are made in law or in individual Staff Members’ contractual arrangements with us (e.g. employment contracts or consultancy agreements). This includes any intellectual property rights that a Staff Member holds in any output created by an AI tool or model that the Staff Member was responsible for creating (e.g. which was created by an AI model in response to parameters entered by the Staff Member) in the course of their employment.
- Staff Members should be aware that they (and/or ) may not always hold all intellectual property rights existing in AI output that they were responsible for creating. Ownership of any intellectual property may depend on:
- The relevant AI tool’s or model’s user agreements or licences that apply to the user in relation to the use made of the tool or model. For example, the provisions within such agreements dealing with intellectual property ownership;
- Any pre-existing intellectual property rights in the output created, whether the works in which the rights exist were input into the tool or model as training data, as user input, or not at all;
- The terms of any licences or agreements governing the use of the model’s or tool’s training data; and
- Any other factors impacting intellectual property ownership, for example, existing licence agreements, previous disputes, or rules on different types of intellectual property rights.
- Bearing in mind the above, Staff Members must take care not to infringe the intellectual property rights of any other individual or organisation when:
- Sourcing or using an AI tool or model;
- Contributing to training any AI tools or models;
- Inputting data of any kind into any AI tools or models; or
- Receiving and using the output of any AI tools or models.
- In particular, Staff Members should be careful not to use the output of any AI tool or model in a way that infringes any other party’s intellectual property rights. They should be aware that:
- AI tools and models may have been trained on content containing intellectual property rights belonging to others and such data may have been used without a valid licence for this use; and
- Even if training was carried out in accordance with a licence, publication of the tool’s or model’s output or of content containing such may not be covered by the provisions of the licence.
- Therefore, Staff Members should always:
- Comply with the terms of any relevant licences, including intellectual property licences granted to them, to , or to another party but which via further licences or agreements they are covered by (e.g. user agreements for relevant AI tools or models);
- Use AI tools and models in accordance with their terms of use and similar and in accordance with this Policy; and
- Contact for assistance if they are unsure whether a particular use of given output is likely to constitute intellectual property right infringement.
Data Protection and Privacy
- All uses of AI within our organisation must be carried out in accordance with the UK’s data protection laws, including the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR).
- No personal data (i.e. information about an individual from which they may be identified) belonging to anybody, including customers, Staff Members, and members of the public, should be input into any Permitted Tool or Model unless express approval to do so in the manner and for the purposes in question has been obtained beforehand from . Such approval will only be granted when the proposed use is in reliance on a legitimate basis for processing (e.g. it is with data subjects’ consent) and in accordance with other data protection principles (e.g. this processing is necessary and appropriate for the relevant purpose).
- Permitted Users must consider whether any AI-generated output they receive and use contains (or could contain) any personal data belonging to anybody. This applies regardless of whether or not a Permitted User input any personal data into the relevant AI tool or model themselves to generate this output. If output contains personal data or it is unclear whether it does or not, this output should not be used any further by the Permitted User who was responsible for generating it unless and until approval for such use is granted by .
- Additionally, Permitted Users should always comply with ’s other policies and procedures relevant to data protection and privacy, including our:
Protection of Confidential Information
- Staff Members must take care when using any of ’s confidential information as input into or to inform input into any AI tool or model. If any restrictions on the AI tools or models into which confidential information may be inserted are imposed in this Policy or otherwise (e.g. by line managers), these should be observed.
- If any AI-generated output contains ’s confidential information or if our confidential information could be extrapolated from the output, this output should not be communicated outside of our organisation (e.g. via publication or communication to a client) without prior approval from .
- If a Staff Member has access to any confidential information belonging to a partner, collaborator, subsidiary, employee, or similar of , the rules set out within this section, above, also apply to this information. Further, such information must only be used in accordance with any agreements governing the exchange and use of such information (e.g. any collaboration agreements, purchase or investment agreements, or non-disclosure agreements).
Attribution
- This AI Policy was created using a document from Rocket Lawyer (https://www.rocketlawyer.com/gb/en).
About AI Policies
Learn more about making your AI Policy
How to make an AI Policy
Making your AI Policy online is simple. Just answer a few questions and Rocket Lawyer will build your document for you. When you have all the information about how AI can be used in your workplace prepared in advance, creating your document is a quick and easy process.
You’ll need the following information:
The organisation
- What is the organisation’s (ie the employer’s) name?
Permitted tools and uses
- Which AI tools and models may be used within the organisation?
- How can AI tools and models be used within the organisation?
- Which tools and models does each permitted use apply to?
Rules on AI use
- Who is allowed to use AI within your organisation?
- Do staff members need to obtain approval before:
  - Communicating AI output outside of the organisation (eg to clients or via publication)? If so, who must give approval?
  - Using AI output in a way that could impact the organisation's products, platforms, or technical foundations (eg by adding AI-generated code into existing source code)? If so, who must give approval?
- Will you specify which licences and agreements apply to the use of your permitted AI tools and models (eg user agreements)?
  - If so, you’ll need to provide a list and you may provide URLs.
- Do any other specific rules apply to the use of AI tools and models within the organisation?
  - If so, what are they?
Key contacts and decision-makers
- Who is the key contact for matters related to the use of AI within the organisation? What are their phone number and email address?
- Who decides which AI tools and models can be used within your organisation and how?
- Who can approve the use of personal data (ie information about an individual from which they may be identified) in relation to AI tools and models?
Data protection and privacy policies
- Does your organisation have in place:
Monitoring AI use
- Will the use of AI within your organisation be monitored and evaluated (eg for the presence and perpetuation of biases or misinformation)?
  - If so, who is responsible for monitoring?
Common terms in an AI Policy
AI Policies explain which AI models and tools can be used within an organisation and how. To do this, this AI Policy template includes the following terms and sections:
Purpose of the AI Policy
The AI Policy starts by explaining why the employer is implementing this Policy: to set out how staff members can use AI at work to enable their employer organisation to leverage AI’s capabilities in a safe, compliant, and ethical way.
This section also identifies the key contact within the organisation to whom questions about the AI Policy and AI use should be addressed.
The organisation’s use of AI
This section encourages staff members to use AI within the organisation, but only within the limits set out in the AI Policy. It also identifies which AI tools and models may be used and tells staff members how they may suggest new tools or models that they believe will benefit the organisation.
Permitted uses of the organisation’s permitted tools and models
This section identifies the uses that can be made of AI tools and models within the organisation and which tools or models each use applies to. It also lets staff members know how they can gain permission to use a tool or model for a use not included in this list.
How permitted uses should be carried out
Next, the Policy imposes various rules on how the permitted uses of AI tools and models may be carried out. These include the requirements:
- that only specified people may use the AI tools and models (eg certain individuals, roles, and/or departments)
- to check AI output’s accuracy
- to consider biases that may be present in a model’s training data and how these may be included in a tool’s output and perpetuated, and to evaluate whether, in light of these, an intended use is safe and ethical
- to abide by licences or agreements that govern how an AI tool or model may be used by a given individual
- that AI use abides by various areas of law (eg defamation, marketing, and anti-discrimination laws)
- that AI use is always in accordance with relevant governmental and other industry-standard regulations, sets of guidance, and codes of practice
- if you choose to include it, that AI users obtain permission before communicating AI output outside of the organisation and/or using it in products, platforms, or other technical foundations
Any requirements you choose to impose in addition to these will also be included in this section.
Staff members’ obligations
This section outlines a staff member’s obligation to abide by the terms of this Policy and to actively participate in any training on AI use. Note, however, that the Policy does not form part of staff members’ contracts of employment, which restricts how these obligations may be enforced.
The organisation’s obligations
This section states the organisation’s commitment to ensuring that AI is used in an ethical, compliant, and productive way.
If the organisation will monitor how AI is used within its activities, this is also set out here and responsibility for monitoring is assigned.
Training
This section contains the organisation’s commitment to providing any training that’s necessary to ensure staff members can abide by the AI Policy.
A list of information that may be appropriate for training to cover is also provided.
Intellectual property
This section starts by identifying how intellectual property created by individuals in the course of their employment is generally owned (ie usually by the employer, unless there are specific provisions to the contrary).
It then identifies factors affecting ownership of intellectual property created by AI. It highlights ways that AI use may infringe on others’ intellectual property rights and how this can be avoided.
Data protection and privacy
Next, the AI Policy highlights the requirement that all AI use within the organisation complies with data protection laws (eg the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR)). For example, it prohibits personal data from being entered into AI tools or models without express permission from a specific person or department.
AI users are also told to abide by the organisation’s policies related to data protection and privacy, and key policies are highlighted if the organisation has them in place (eg a Data protection and data security policy).
Protection of confidential information
This section restricts how confidential information (eg trade secrets, inventions, data, or strategies) may be entered into AI tools and models and how any AI output containing such can be used.
If you want your AI Policy to include further or more detailed provisions, you can edit your document. However, if you do this, you may want a lawyer to review the document for you (or to make the changes for you) to make sure that your modified AI Policy complies with all relevant laws and meets your specific needs. Use Rocket Lawyer’s Ask a lawyer service for assistance.
Legal tips for organisations
Get excited about using AI in your workplace; don’t get carried away
AI offers an organisation many opportunities. From making administrative tasks more efficient to helping create inspiring or informative content, it can help a business to gain a competitive edge over businesses that are less willing to explore AI’s capabilities.
That being said, be wary of using AI simply for the sake of it. Compliance aside, AI models and tools still have various limitations. For example, they can generate completely made-up output (ie they can ‘hallucinate’), which could lead to faulty work if the output is not adequately checked and adapted. Over-reliance on AI tools can also lead to staff members producing work that’s uninspired or not written in an appropriate manner.
To ensure you use AI in the best possible way, use your AI Policy as a starting point and support it with open discussions about AI use in the workplace, strategic plans for its use that are overseen by management, and in-house guidelines for specific uses of AI.
Know what intellectual property you may use
Intellectual property (IP) is an area of law that’s being particularly challenged by AI. Questions of inventorship, ownership, and permitted use have all arisen and not been fully resolved in relation to AI and IP.
Familiarise yourself and your team with rules around who owns and/or is licensed to use the IP that’s been used to train the AI tools and models you use and the IP created by the tools. To get started, read:
- the terms on IP within any user agreements and software licences governing your organisation’s use of AI tools and models
Understand when to seek advice from a lawyer
In some circumstances, it’s good practice to Ask a lawyer for advice to ensure that you’re complying with the law and that you are well protected from risks. You should consider asking for advice if:
- you need help obtaining licences to use an AI tool or model or others’ intellectual property
- you need help establishing whether a given use of AI output is legal
- this AI Policy doesn’t cover everything you want or doesn’t meet your needs
- you want documents governing how others can use an AI tool or model that you’ve created
AI Policy FAQs
What is included in an AI Policy?
This AI Policy template covers:
- which AI tools and models may be used
- the purposes for which these tools and models may be used
- rules about how they may be used (eg who may use them and whether any approvals are required)
- how staff members should pitch new tools or models or uses of AI
- staff members’ and the organisation’s obligations relevant to AI use (eg training obligations)
- whether the employer monitors how AI is used within the organisation
- training relevant to AI use
- intellectual property rights (IPRs) considerations
- data protection and privacy considerations
- how confidential information should be protected when using AI
Why do I need an AI Policy?
Generative AI models (eg ChatGPT) that create text, images, and other output based on users’ prompts are becoming increasingly ubiquitous. They offer efficiency, inspiration, and intelligence that can help many businesses to optimise their activities and improve their productivity.
However, this rapidly developing area of technology does bring risks. For example, the risk of breaching data protection, intellectual property (IP), advertising and marketing, or discrimination laws. Having an AI Policy in place helps you to mitigate the risk of litigation, ethical issues, and commercial disadvantages arising due to breaches of these areas of law or of other AI best practice requirements.
What is AI?
‘Artificial intelligence’ is a broad term that covers various types of computer programmes designed to mimic human cognition and intelligence. These range vastly in complexity. The AIs currently in vogue are generally ‘machine learning’ models (and particularly large language models (LLMs)), which use algorithms to learn from many examples of a specific type of content (eg written information or images). This learning process is referred to as ‘training’ an AI. These trained AIs can then make predictions and extrapolations based on their learning to, for example, generate new content in response to a user’s prompt (ie ‘generative’ AI).
How can AI be used in workplaces?
AI models and the user-friendly tools created based on them have many potential uses in the workplace. These include (but are by no means limited to):
- drafting text (eg for internal reports, creative writing, advertising copy, or advice)
- creating images or videos (eg for marketing or packaging materials, creative content, or to support presentations)
- organising and condensing information (eg research notes or meeting notes)
- translating text from one language to another
- writing computer code
- analysing data (eg by pulling out trends or by creating graphs and tables)
- conducting research and answering questions
- providing ideas and inspiration
- working out the best way to schedule various commitments, meetings, or events
- providing customer service via chatbots
- making decisions as to how service provision should be carried out (eg which customer should receive which services)
AI can always be used in new, inventive ways. You should think open-mindedly about ways it can be used within your organisation and include these in your AI Policy when prompted. The Policy also invites your staff members to suggest new uses of AI and new tools and models, which you can consider for implementation. This helps your organisation to make the most of your staff members’ unique ideas to leverage AI in your activities.
What are the risks of using AI in the workplace?
The presence, size, and technical complexity of AI models, and their interactions with various environments (both digital and physical), are growing at a constantly accelerating pace. This offers excellent opportunities but brings with it various complex risks, some of which are still poorly regulated and/or understood.
Risks include:
- breaching laws by putting data (eg people’s personal information or businesses’ confidential information) into AI tools
- breaching laws (eg intellectual property, advertising, or data protection laws) by using the output of AI tools in certain ways (eg by publishing it)
- damaging business resources (eg damaging a technical platform by adding problematic AI-generated code to it)
- suggesting incorrect information (AI output is not always accurate)
- perpetuating systemic biases (if an AI model is trained on data containing a particular bias, this bias may inform and be reproduced in its output, perpetuating the original bias when this output is used (eg for decision-making))
- breaching contracts with AI service providers (eg by using their models or the models’ outputs in ways prohibited by the contracts)
How should an organisation decide which AI tools and models to use in its workplace?
Many different AI models, and user-facing tools utilising these models, are available for use. Some are more well-known than others, but all carry risks.
Keep an open mind when deciding which models and tools to use to help your workplace leverage AI in the most innovative and efficient ways possible.
You should ensure that someone who understands the rules and implications of a model’s or tool’s use approves its use and the specific ways that it is to be used within your organisation. For example:
- someone with sufficient technical expertise to know how the tool works and the associated risks
- someone with sufficient legal expertise to understand licences and/or user agreements that govern how a model or tool may be used
This AI Policy template asks you to assign somebody responsibility for deciding which AI tools and models can be used and how. This person could consult with other people with different areas of expertise (eg legal, technical, commercial, or creative) before approving a model or use, if necessary.
Should organisations train staff members on how to use AI?
Providing comprehensive, accessible, and engaging training is an excellent way of ensuring that staff members follow procedures and understand the opportunities and risks associated with aspects of their work. This includes work with AI tools and models.
This AI Policy template will always state that the employer will provide staff members with any training necessary to enable the staff members to comply with the requirements of the AI Policy. Exactly what this includes and how it is delivered depends on the nature of your organisation. However, it’s often prudent for training to cover:
- what AI is and how it works
- the areas of law that impact how AI may be safely and compliantly used
- how AI can be used in a safe and ethical manner (eg the need to consider whether any biases are present in output)
- the processes that should be used when using AI in the workplace (eg to check whether output is accurate and legally compliant)
- which specific licences (eg software licences) and agreements (eg user agreements) apply to staff members’ use of AI tools and models, including the requirements imposed by these, how these should be complied with, and provisions on ownership (eg ownership of intellectual property generated by a tool)

Our quality guarantee
We guarantee our service is safe and secure, and that properly signed Rocket Lawyer documents are legally enforceable under UK laws.
Need help? No problem!
Ask a question for free or get affordable legal advice from our lawyers.