Writing Your AI Use Policy — Part 2
A complete, ready-to-use AI Use Policy that balances innovation with protection
In Part 1, I went through why every organisation needs an AI Use Policy that encourages experimentation within safe boundaries. I described how a few companies learned hard lessons about ungoverned AI experimentation, and some of the principles that could guide an effective policy.
Now, in Part 2, we’ll get practical and detailed. I'll show how to craft a policy that gives your teams both the green lights they need to innovate and the clear red lines that keep everyone safe. Read to the end and you'll have a complete template you can adapt for your organisation, one I’ve tested with a few clients and am confident will help you push forward with AI adoption without a risky free-for-all.
Drawing Clear Lines: Green Lights and Red Lines
I think the most effective AI policies articulate both what employees are encouraged to do (green lights) and what's absolutely forbidden (red lines). And where you’re asking staff to make judgement calls, err on the side of giving more background guidance and information rather than less. Clear boundaries remove ambiguity: people know where they have freedom and where they need to stop, and, importantly, they also know why.
Green Lights: Encouraged AI Uses
✅ Productivity and Efficiency Boosts. Using AI to automate mundane tasks is typically welcomed. We want people to summarise documents, sort data, draft routine reports, and generate first-pass code. The caveat: no confidential information and humans must review AI outputs. You get productivity gains without blind trust.
✅ Creative Brainstorming and Content Generation. AI excels as a creative partner for brainstorming ideas, writing drafts, creating prototypes. Marketing teams generating ad copy ideas, designers exploring variations, engineers outlining code logic. These can all be encouraged with clear expectations that AI provides starting points, not ending points.
✅ Data Analysis and Decision Support. Another generally permitted category is using AI for insights and decision support. This could mean running an AI-powered data analysis to find patterns, using a prediction model to forecast trends, or employing a diagnostic AI to flag anomalies. I think it’s sensible for the policy to encourage these uses because they can augment human intelligence and lead to better decisions. However, it must stress (a) the confidentiality requirements; and (b) that AI predictions or analytics need human validation. We’re basically saying "Feel free to use AI to analyse sales data for trends, but ensure you understand the analysis and verify any surprising conclusions with additional checks." This maintains that human-in-the-loop principle: AI informs decisions, humans make decisions.
✅ Customer Service Assistance (with Oversight). Lots of organisations are deploying AI chatbots or assistants to handle basic customer queries (password resets, common FAQs, etc.). Your usage policy can explicitly list this as permitted provided certain safeguards are in place. It might say AI can be used to interact with customers for low-risk, simple issues to improve response time. But it should also state conditions: the AI must identify itself as a bot, stick to scripted or approved knowledge bases, and hand off to a human for anything complex, sensitive, or involving a complaint. This way, employees know they can use AI to scale customer support, but they also know the boundaries (e.g., don't let the AI handle a furious customer's complaint entirely on its own, and certainly don't let it give medical or legal advice to customers without human review). A minimal sketch of this kind of handoff rule follows this list.
✅ Learning and Skill Development. A more inward-facing permitted use is allowing employees to use AI tools to learn and grow in their roles. For example, a junior programmer might use GitHub Copilot to learn how to structure a function, or an analyst might use an AI tutor to grasp a new statistical method. Your policy can explicitly encourage using AI as a learning aid. The benefit is twofold: the employee upgrades their skills, and they become more familiar with AI tools in a low-stakes way. The caveat here is similar to the others: if they need to practise or experiment, they should use non-sensitive data. The policy might say, "feel free to practise coding with an AI assistant using sample data, but don't feed actual client data into it while you're learning."
✅ Pilot Projects in Controlled Environments. Explicitly permit small-scale experiments in safe environments using test data, with time limits and scope boundaries. Require approval before pilots move to real data or production. All permitted uses share a pattern: AI assists, humans remain accountable. We’re saying: "AI can help you, but you're the ultimate decision-maker responsible for outcomes."
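To make the customer service safeguards above more concrete, here is a minimal sketch of the kind of triage rule a team might implement. It assumes a hypothetical helpdesk bot; the trigger words, topic names, and function names are invented for illustration and aren't drawn from any particular product.

```python
# Illustrative sketch only: a hypothetical triage step for a customer-facing
# assistant, showing the kind of handoff rule the policy describes. The
# trigger words and topics below are made up for the example.

ESCALATION_TRIGGERS = {"complaint", "refund", "legal", "medical", "angry", "cancel"}

BOT_DISCLOSURE = (
    "Hi, I'm an automated assistant. I can help with simple questions, "
    "and I'll hand you over to a colleague for anything complex."
)


def route_query(message: str, approved_topics: set[str], topic: str) -> str:
    """Decide whether the bot may answer or must hand off to a human."""
    text = message.lower()

    # Hand off anything that looks sensitive, complex, or like a complaint.
    if any(trigger in text for trigger in ESCALATION_TRIGGERS):
        return "handoff_to_human"

    # Only answer from the approved, scripted knowledge base.
    if topic not in approved_topics:
        return "handoff_to_human"

    return "bot_may_answer"


if __name__ == "__main__":
    approved = {"password_reset", "opening_hours", "delivery_status"}
    print(BOT_DISCLOSURE)
    print(route_query("I want to file a complaint about my order", approved, "delivery_status"))
    # -> handoff_to_human
```

The point isn't the specific keywords; it's that the disclosure and handoff conditions are explicit and auditable rather than left to the model's judgement.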
Red Lines: Absolute Prohibitions
Now to the tougher part, the red lines. These are the uses of AI that you explicitly ban because they carry too much risk to the business, employees, customers, or society. It's crucial to be as clear and specific as possible here. If the line is fuzzy, people may inadvertently cross it. Here are some common "must not do" categories that I think would be appropriate for most organisations, but tailor them of course to your context and risk appetite:
❌ Mishandling of Sensitive Data. This doesn’t need much repeating: never input or expose confidential or sensitive data, including personal data, to unsanctioned AI systems. For example: "Do NOT paste confidential company text, customer personal information, source code, financial data, or any restricted information into public AI tools like the free version of ChatGPT (or any external AI service that hasn't been approved for that exact purpose)." The policy can elaborate that unless an AI tool has been vetted and contractually bound to protect our data, you should assume anything you give it could become public. This addresses the cautionary scenarios from Part 1 directly. It's the difference between using an internal AI tool (or an enterprise-secure one) and some random app. By making this a bright red line, you significantly reduce the risk of leaks. The policy can also mention that any AI outputs that look like they might contain sensitive info should be treated with caution: if an employee sees internal-looking data coming from an AI, they should stop and report it (because it means some data may have leaked previously). (A crude pre-flight check illustrating this rule is sketched at the end of this list.)
❌ Bypassing Security or Privacy Controls. Make it clear that employees must not use AI in ways that undermine the company's security, privacy, or compliance measures. This includes obvious things like "don't use AI to try to hack or decrypt anything" (no using an AI tool to guess passwords or scrape data you're not supposed to have). But it also includes more subtle scenarios: for instance, sending a document to a personal email and using an AI tool from a home computer to work on it. Or if a website blocks scraping, you shouldn't unleash an AI to scrape it anyway. Also, no using AI tools that themselves violate laws (like using an AI service that unlawfully provides you someone else's personal data). Essentially, state explicitly that employees must not use loopholes to bypass controls, or use AI itself as a loophole to get around the rules.
❌ Automated Decisions Without Human Oversight (in High-Stakes Areas). If your organisation uses AI to make or recommend decisions, draw a line at decisions that significantly affect people's lives or rights being made by AI alone. For example, "AI must not be the final decision-maker for hiring, firing, promotions, legal judgments, medical diagnoses, financial approvals (like loans or insurance), or any disciplinary actions, unless humans have reviewed and approved the decision." Even if you don't currently use AI in these ways, include this as a principle so that if someone tries to, they know it's against policy. High-consequence decisions require a human in the loop by default. If one day you choose to automate some of these, it should be an exception that goes through a rigorous approval and compliance process; the policy can include that caveat, allowing leadership to explicitly approve an exception if regulation permits and robust controls are in place. The point is to prevent a scenario where, say, a well-meaning data scientist deploys a new ML model that automatically rejects job applicants or closes customer accounts without oversight. That could lead to serious fairness and accountability issues. Your red line allows AI assistance, but it does not permit unilateral AI decision-making when the stakes are high.
❌ Generating Harmful or Misleading Content. This is a red line focused on ethical standards of business conduct and integrity. Your policy should forbid using AI to produce content that violates your company's code of conduct or basic standards of decency. That means no using AI to generate hate speech, discriminatory remarks, sexually explicit material, or content that harasses or bullies. It should also ban using AI for any form of deception or fraud: e.g., no deepfakes to impersonate someone, no AI-written fake news or fake customer reviews, no AI-generated phishing emails, etc. If your employees wouldn't be allowed to do it manually, they shouldn't be allowed to do it with AI either. This might seem obvious, but putting it in writing covers edge cases. For instance, a marketer might think it's harmless to use AI to create a hundred fake positive reviews for your product; your policy should clearly outlaw that: "do not use AI to deceive or manipulate audiences". Additionally, if your company communicates with customers or the public, consider a rule that AI-generated content should be disclosed or identifiable as such where appropriate. (Some jurisdictions might even require this.) So an employee can't pretend a chatbot is a human if it isn't.
❌ Intellectual Property Violations. AI complicates IP, but your policy should state that employees must not use AI in ways that infringe copyrights, patents, or other IP rights. For example, they shouldn't prompt an AI image generator to create something that is basically a knock-off of a copyrighted image. Or they shouldn't feed proprietary text into an AI and ask for it back verbatim (that's essentially copying). And if they use AI-generated code, they need to ensure it's not just plagiarising someone's open-source code without attribution. A concrete rule might be: "Don't use AI to generate content by copying or mimicking protected material, and don't take AI outputs that contain large snippets of others' work and use them as if they're free to use. Always respect licences and authorship." This not only protects your company from legal trouble, but also instils an ethical use mindset.
❌ Ignoring Regulatory Requirements. This is a bit of a catch-all. In regulated industries, compliance comes first. Any AI use that would violate laws or regulations is prohibited. If you know of specific laws in the places you operate, then you might want to list them, along with the specific requirements they create. When in doubt, tell your people to consult compliance teams.
❌ Concealing AI Limitations or Mistakes. Don't cover up AI problems. If an AI tool produces suspicious results, stop and report the issue rather than quietly editing and continuing. This encourages a safety mindset over convenience.

Across all of these, make prohibited uses as unambiguous as possible. Use concrete examples: "Don't paste customer addresses into ChatGPT, that's confidential personal data" or "Don't use AI to generate performance reviews without approval, that requires human judgment." Include a general principle: "When in doubt, assume it's not allowed and seek guidance." Provide clear contact points and emphasise that violating red lines could lead to disciplinary action, while asking for guidance first will never be penalised.
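As promised under the sensitive data rule above, here is a crude sketch of a pre-flight check that an approved workflow might run before any text leaves for an external AI tool. The regular expressions are deliberately simplistic placeholders; real data loss prevention needs proper classification and tooling, so treat this as an illustration of the principle only.

```python
# Illustrative sketch only: a crude pre-flight check before text is sent to an
# external AI tool. The patterns are simplistic placeholders; a real control
# would use proper data classification and DLP tooling.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone-like number": re.compile(r"\b0\d{9,10}\b"),
}


def check_before_sending(text: str) -> list[str]:
    """Return reasons why this text should not go to a public AI tool."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(f"possible {label} detected")
    return findings


if __name__ == "__main__":
    draft = "Summarise this: customer jane.doe@example.com called about invoice 4411."
    issues = check_before_sending(draft)
    if issues:
        print("Blocked:", "; ".join(issues))  # redact, or use an approved tool instead
    else:
        print("OK to use with an approved tool")
```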
Key Components of Your Policy
Ok, so with that guidance in mind, let’s talk through actually creating the policy. I give you a full policy you might want to use below, but try not to jump ahead; I’ll explain why each part is in there. Think of this as a checklist to ensure your policy is comprehensive, yet straightforward and usable:
☑ Purpose and Scope. Begin the policy with a brief statement of why it exists and to whom it applies. For example: "This policy establishes guidelines to encourage safe, responsible AI innovation across [Org Name]. It applies to all employees, contractors, and business units, covering all AI tools and systems used in our operations or products." Setting the scope clarifies that whether someone is building an AI system or just using a chatbot for brainstorming, the policy is relevant. It also positions the policy as enabling innovation and ensuring safety, setting out the dual objective from the outset.
☑ Roles and Responsibilities. It helps to outline who is responsible for what in implementing the policy. This doesn't have to be long. You can specify that employees are responsible for following the guidelines and asking when unsure; managers are responsible for enforcing and educating their teams (promoting that culture of responsible use); IT/Security is responsible for vetting and providing safe AI tools; Legal/Compliance is there to advise on regulatory matters; and an AI Governance Committee or similar body will maintain the policy and handle escalations or approvals for high-risk uses. By naming these roles, you make it clear that support exists: it's not just a list of rules tossed over the fence. People know who to turn to for help (e.g., "contact IT to request a new AI tool approval" or "report incidents to the AI Committee"). It also signals leadership buy-in: if you explicitly mention that, say, the CTO or a Governance Committee oversees AI, employees sense that this policy matters up the chain.
☑ Definitions (if needed). Depending on your audience, you might briefly define key terms like "AI system," "AI tool," or "sensitive data." Keep it concise and non-academic. The goal is just to ensure readers know what falls under this policy. For example, clarify that by "AI" you mean not just sci-fi robots but any software using machine learning or automated decision logic, including things like chatbots, image generators, predictive analytics models, etc. Sometimes people ask, "Does this include that Excel macro with a bit of AI in it?" Your definitions can make scope clear. Remember, you want everyone to read and understand it, so write it for your most junior employee who doesn’t speak legalese!
☑ Guiding Principles (Optional). Maybe you would include a short section on principles, like fairness, transparency, accountability, basically echoing high-level AI ethics commitments but in plain terms. This can be a nice way to align with corporate values, existing ethical frameworks or connect it with the all-up AI Governance Policy, but don't let it get too abstract. For instance: "Our use of AI should be fair (not discriminate unlawfully), respectful of privacy, transparent to those affected when appropriate, and ultimately subject to human judgment." This sets a tone and can guide interpretation of the later rules. If you have an AI governance or ethics statement already, you can reference it here.
✅ Permitted Uses. As discussed, outline the encouraged, positive uses of AI in your context. List them as bullet points or sub-sections (automation, analytics, content generation, learning, etc.), with any caveats. This section essentially says, "Here's how you can use AI to make your work better, go for it (responsibly)!" Make sure to include the catch-all that even in permitted cases, employees must adhere to the principles and remain vigilant (AI outputs need review, etc.). By reading this section, an employee should get ideas and also know the limits of those ideas.
❌ Prohibited Uses. Clearly list the unacceptable uses, the red lines. Use bold or "Do NOT" phrasing to make them stand out. Each should be a distinct category: misuse of data, unauthorised decision automation, malicious use, etc., as we went through. If any are especially critical in your industry, put them early or emphasise them. For example, if you're a healthcare company, the rule about patient data and compliance would be paramount; if you're a software company, leaking source code or using AI to write code that violates open-source licences might be big. Also include a statement about consequences: that violations could lead to disciplinary action. This isn't meant to scare people, but to make it real: these aren't casual suggestions, they are firm rules.
⚠️ Use with Caution / Conditional Use. Some policies have a middle ground: things that aren't outright forbidden, but where employees need to be extra careful and perhaps follow additional steps. If you have such cases, call them out. For example: "You may use AI translation tools for internal documents, but exercise caution with any confidential text" or "You can experiment with AI-generated content for social media, but any post must be reviewed by Corporate Comms before publishing." This is the "yellow light" category; the idea is to acknowledge grey areas. Another example: using AI for personal tasks at work. Maybe you permit a little (an AI scheduling assistant for your meetings is fine) but not if it interferes with work (writing a novel on company time with AI is not okay). Clarifying these prevents misunderstandings.
☑ Data Protection and Privacy Guidelines. Given how central data handling is to AI use, it's worth explicitly instructing how to treat different classes of data. This ties into permitted/prohibited, but you might summarise: "Only use public or non-sensitive data with public AI tools. For anything confidential or regulated, use only approved tools or none at all. When in doubt, anonymise data before using it in any AI." Emphasise any specific regulations (GDPR, HIPAA, etc.) that mandate certain things, e.g., "no personal data shall be transferred to any system outside our control without clearance from Legal." These guidelines back up the rules with rationale.
☑ Process for Introduction of New AI Tools or Uses. Since innovation is a key theme, outline how employees can propose new uses or tools. For instance: "If you have an idea for a novel use of AI or want to use a new AI service, here's the process: talk to your manager and fill out the AI Tool proposal form; IT/AI Committee will review for risks; small-scale testing can then be approved." This doesn't have to be lengthy; even a few sentences and a point of contact (such as an email address for the AI governance team) suffice. The idea is to empower bottom-up innovation by giving it a channel. We want the adventurous employees on our side, not going rogue.
☑ Incident Reporting and Support. No policy can prevent 100% of issues, so put in instructions for what employees should do if something goes wrong or if they have questions. For example: "If an AI system behaves in a way that violates these guidelines or causes a potential incident (e.g., it exposes sensitive data or generates inappropriate output), stop using it and report the incident immediately to … ." Also encourage reporting if they see a colleague misuse AI (remembering the playground rule that snitches get stitches, frame it less as telling on colleagues and more as "flag it to your manager or compliance if you suspect an AI-related risk"). And provide contact for advice: "For any questions about this policy or to seek guidance on a specific AI use case, reach out to … ." This reinforces that the company is actively supporting the policy's adoption. You might also mention that the policy will be updated periodically and where to find the latest version (e.g., on the intranet).
☑ Alignment with Other Policies. Since we noted the difference earlier, you can add a short note that "This AI Use Policy complements our broader AI Governance and Risk Management policies. In case of overlap, those detailed policies govern technical requirements, while this policy focuses on general usage principles for all staff." This just helps position it in the policy hierarchy and can appease auditors or compliance folks that it's part of a coherent system.
☑ Approval and Revision Info. At the end, it's good practice to note who approved the policy and when, and when the next review is due (e.g., "Approved by CIO on such and such date. Next review: 1 year from approval."). This shows it's a living document. Given how fast AI evolves, I'd suggest reviewing such a policy at least semi-annually, if not quarterly, especially in the first couple of years.
That might seem like a lot of sections, but in reality an AI use policy can usually fit in a few pages by being concise in each area. I usually don’t like any policy over 2 pages, but there’s a tradeoff in this particular one. You want your staff to make well-informed judgement calls, so you may have to provide more detail in explanatory notes. The policy I’ll give you below is over 10 pages, but the core is in just two pages - most of the document is an explanatory addendum (I’m a big fan of providing these kinds of explanatory addendums to policies - they’re a great way to sustain memory and inform line calls). The key is readability and clarity. Bullet points and clear language (as opposed to legalese) are your friends here. The policy should be written in the same tone you want employees to adopt when applying it: professional but approachable. If it reads like a dry contract, busy staff will skim it once and forget it. If it reads like a helpful guide from an experienced colleague ("hey, here's how to make the most of AI at work without stepping on any landmines"), they're more likely to remember and use it.
Evolve Your Policy Over Time
An AI Use Policy is not a one-and-done deal. As your organisation grows in its AI capabilities and as the external landscape (technology and regulation) evolves, your policy should scale and adapt. A small startup's approach to AI use will look very different from a large enterprise's, and rightly so. Here are some final thoughts on ensuring your policy keeps up and remains effective:
Start Simple, Then Expand. If you're early in the AI journey or a smaller company, you might begin with a very simple policy focusing on the most immediate risks, for example, a short list of "don'ts" (like the confidential data rule) and basic encouragement of experimentation in low-risk ways. That's fine. It's better to have a simple, readable policy that people follow than a comprehensive tome that people ignore. As you encounter new scenarios, you can add to it. Perhaps initially you didn't cover use of AI in customer service, but as your product team starts adding chatbots, you'll incorporate some guidelines about that. Plan to iterate. You can version-number your policy (e.g. "AI Use Policy v1.0") to signal that it will be updated. Make sure to communicate updates when they happen, highlighting what changed and why. Over time, you may go from one page to a few pages, but each iteration will be informed by real experience in your org.
Align with Organisational Maturity. As your company's governance framework matures, the policy can shift from being a standalone guiding document to a more integrated part of the overall AI management system. For instance, once you have an AI risk assessment process running, the policy might incorporate references like "any project moving from pilot to production must go through risk assessment per our AI Risk Policy." Initially, you might not have had that, so it might have just said "get approval." The more processes and controls you build organisationally, the more specific your usage policy can get about interfacing with them. Also, if your workforce becomes more AI-savvy over time, you might not need to spell out certain basics, and instead focus on more advanced issues. Conversely, as you hire many new people, you might add an FAQ section or more examples to educate newcomers. Essentially, treat the policy as a living document that grows with you. This prevents it from becoming outdated or a blocker.
Respond to External Changes. Keep an eye on the regulatory environment and industry best practices. AI regulation is a hot topic worldwide, especially due to the EU AI Act and various guidelines from professional bodies. If laws change (say, new rules about AI transparency or data usage), you'll need to update your policy to ensure compliance. Similarly, if a major incident happens in industry (imagine a high-profile case of AI misuse in your sector), use that as a learning moment to review your policy. Maybe you'll add a new rule or an example spurred by that event. The policy should also evolve with technology, e.g., maybe today deepfakes aren't a concern for your business, but in a year or two you realise you need a stance on synthetic media usage. The more proactive you are in updating the policy, the less likely you'll be caught flat-footed. A good practice is to schedule a regular review (say every 6 or 12 months) where a small group (including someone from AI team, legal, IT, etc.) evaluates if the policy is still adequate. Even if no changes are needed, that check-in is valuable. And when changes are needed, involve stakeholders across the organisation, perhaps your enthusiastic AI users, to get input on what's working or not in the current policy.
Scale the Enforcement with Maturity. Early on, your enforcement of the policy might be very light touch: you're mostly educating and trusting. As you mature, you might implement more technical enforcement. For example, IT could start monitoring for usage of certain AI websites or integrate data loss prevention systems that alert if someone is pasting large chunks of code into a browser (some companies have done this). Some organisations even implement enterprise AI usage dashboards to see which departments are using what AI tools (sometimes called an AI asset inventory). Your policy can gradually incorporate these capabilities: e.g., initially, "please self-report any new AI tool usage," later evolving to "IT will actively monitor and block disallowed tools." It's a journey. The principle is, as your ability to enforce grows, the policy can lean on those abilities. But always pair enforcement with continuous education. You don't want employees to feel it's a surveillance regime; you want them to see that everyone (including the company) is jointly accountable for using AI wisely.
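For illustration, here is a toy version of the allow-list idea mentioned above, the kind of check an IT team might wire into a proxy or browser extension as enforcement matures. The domains and descriptions are invented for the example and aren't recommendations of any particular tool.

```python
# Illustrative sketch only: a toy allow-list check for AI tool destinations.
# Domains and tool descriptions are invented for the example.
from urllib.parse import urlparse

APPROVED_AI_TOOLS = {
    "copilot.internal.example.com": "Enterprise coding assistant (approved; no client data)",
    "chat.enterprise-ai.example.com": "Enterprise chat tool (approved for internal data)",
}


def classify_ai_destination(url: str) -> str:
    """Flag traffic to AI tools that are not on the approved list."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_TOOLS:
        return f"allowed: {APPROVED_AI_TOOLS[host]}"
    return "not on the approved list: log, alert, or block per policy"


if __name__ == "__main__":
    print(classify_ai_destination("https://chat.enterprise-ai.example.com/session"))
    print(classify_ai_destination("https://some-random-ai-app.example.org/upload"))
```

Whether you log, alert, or block is a maturity decision; the policy just needs to say which of those the organisation currently does, so monitoring never feels like a surprise.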
Keep It Practical and User-Friendly. As you update the policy, check that it doesn't become bloated or overly complex for the average reader. You might spin off technical details into appendices or separate guidelines if needed, keeping the main policy readable. One idea is to include a one-page Quick Reference or summary at the top or bottom of the policy, highlighting the top 5 things to remember. Busy professionals will appreciate that. You could even create an internal infographic or checklist. The easier you make it for people to grasp the rules, the more they'll internalise them. In training sessions or internal communications, reinforce the policy with real examples (anonymised if needed) from within the company: "Last quarter, one of our teams tried using AI for [insert use case], it was a great idea and here's how they did it within our policy guidelines... On the other hand, we had a near miss when someone almost pasted a customer list into an AI tool, thankfully they recalled the policy and stopped." These stories keep the policy alive and relevant.
Finally, celebrate and reinforce success. When your organisation navigates AI innovation safely because of the guidelines you set, acknowledge that. If an employee's proactive check or question prevents a potential issue, give them a shout-out (they exemplified the culture!). Over time, these positive feedback loops make the policy something people are proud of, rather than seeing it as just a set of rules.
Make this Template Your Own
To help you get started, I've created a comprehensive AI Use Policy that you can download and adapt for your organisation. Fair warning: it's quite long, but that's intentional. Since this is entirely new policy territory for many organisations, and because we're asking employees to make nuanced judgment calls about AI use, I've included a lot more explanation and guidance throughout than normal. You absolutely should trim out the explanatory content wherever appropriate to create a more concise version, but I've found that in this policy area, the additional context helps employees understand not just the "what" but the "why" behind each guideline.
The template also includes specific guidance on using AI in customer interactions, which may or may not be relevant for your organisation. Feel free to adapt or remove these sections based on your needs.
The entire template is provided under a Creative Commons CC BY-NC-SA licence, which means you're free to use, adapt, and share it non-commercially with attribution, as long as any adaptations are shared under the same terms. I hope it helps to encourage safe, responsible AI practices across more organisations. It’s just not for commercial sale. Think of it as a starting point for your own policy development, one that you can mould to fit your organisation's unique culture and risk profile.
But remember that a policy is only as good as the commitment to enforce and live by it. Link it back to the culture: encourage leaders to talk about it, include AI usage do's and don'ts in onboarding for new hires, maybe even incorporate a section on safe, responsible AI in your regular training. Make it a living part of your organisational knowledge.
And keep listening to your employees, they will tell you (directly or through their actions) if something in the policy isn't working for them. Maybe a rule is too restrictive and hampers a legitimate use; be willing to revisit it and find a safe way to achieve the same goal. The process of creating a policy is not a one-time project, but the beginning of an ongoing dialogue between the risk guardians and the innovators in your company. An AI Use Policy is about finding the sweet spot where innovation thrives under watchful, responsible guidance.
The technology will keep advancing, and your people will keep experimenting. By putting a solid AI Use Policy in place, you ensure that experimentation happens in the light of day, with safety nets ready. I believe that’s how we should unlock AI's true value: move fast, but don’t break things.
Thank you for reading this guide. I'm grateful to the clients and fellow AI practitioners who've shared their experiences and insights with me over the past several weeks as I've developed these ideas into the full policy. Your real-world perspectives have made this article and the template policy so much better.
I hope you find this framework useful in your own AI governance journey. If you have feedback, suggestions, or ideas on how this guide could be improved, I'd treasure hearing from you. And if you're interested in more practical guidance on implementing AI responsibly in your organisation, please do subscribe to ethos-ai.org for future updates and resources.