Green Lights & Guardrails: Crafting Your AI Use Policy — Part 1
How to encourage experimentation within practical guardrails that support safe, responsible use of AI.
It's early 2023, and Amazon's developers, just like developers in every company at the time, are buzzing with excitement about ChatGPT. They're using it to debug code, draft documentation, and solve problems faster than ever. No bad intent, just smart people trying to be more productive. Then someone notices something unsettling. ChatGPT's responses are starting to look suspiciously familiar. Too familiar. Responses that seem to contain Amazon's internal data.
In a company Slack channel, panicked employees ask: "Is there any official guidance on using ChatGPT?" An Amazon lawyer jumps in, warning staff to stop pasting confidential code into external AI tools. The damage assessment reveals what everyone feared: ChatGPT's outputs had "aligned with internal data," meaning proprietary code had likely been leaked to an external system. Amazon moved quickly. They put up internal warnings when employees accessed ChatGPT and started building their own internal AI assistant called "Cedric."
This incident, as reported by Gizmodo and Business Insider1 2 3, illustrates how quickly well-intentioned AI use can become problematic without proper guardrails. Disclaimer: I was an Amazon employee during this period - and soon after became accountable for AWS Responsible AI Assurance.
Halfway around the world in Perth, Australia (and a lot closer to me!), another AI mishap was unfolding. Concerns emerged that hospital staff across five facilities had discovered ChatGPT could help them draft patient medical notes and discharge summaries. Again, the intention was reasonable: spend less time on paperwork, more time with patients. But these AI-generated notes were potentially being pasted directly into official patient records without any oversight or approval. When executives discovered this, they sent an urgent all-staff email: there was "no assurance of patient confidentiality when using technology such as ChatGPT." All use of ChatGPT with patient information was banned immediately. The investigation revealed at least one doctor had been feeding real patient details into the chatbot. Fortunately, no privacy breach was reported; it was a near miss. But it sparked national headlines and calls for better AI regulation in healthcare.4
And again around the same time, a similar experience was unfolding in Samsung’s Korean operations. Well‑intentioned staff reached for a publicly hosted chatbot to accelerate their work. Engineers in the company’s semiconductor division pasted snippets of chip‑test code, diagnostic logs and an internal meeting transcript into ChatGPT while troubleshooting a production fault. When security compliance teams later reviewed outbound traffic, they identified three separate uploads that contained highly sensitive intellectual property, material now stored on external servers Samsung did not control.5
Management responded by blocking public generative-AI services across the corporate network, limiting prompts to 1KB, and fast-tracking an internal large language model (later launched as "Gauss") so employees could keep using conversational AI without sending data off-site.6 The incident reinforced the shared lesson of all three stories: without clear governance, even routine experimentation with AI can expose organisations to data-security and compliance risks they never intended to run.
All three stories share the same plot: well-meaning professionals pushing ahead with AI innovation without realising the hazards. These aren't tales of rogue actors, and they're not at all uncommon. Very likely, they are happening in your organisation right now. These just happen to be public examples of what happens when organisations don't proactively set boundaries around experimentation. The truth is, if you don't create guidelines for AI use, you'll eventually stumble into your own "Oh no!" moment. It might be a strategy memo appearing in a chatbot's responses, client data accidentally leaked, sensitive customer personal data disclosed unlawfully, or an AI agent making unauthorised decisions.
When Innovation Outpaces Governance
This brings us to an interesting tension in today's business environment. Many companies are telling their employees to embrace AI wholeheartedly. Fiverr famously announced they were "going all-in on AI" and encouraged every employee to integrate AI tools into their daily work - or let AI take their job!7 Shopify has implemented a new policy where employees must demonstrate that a task cannot be completed by AI before requesting new hires or additional resources, making use of AI a "fundamental expectation" for all employees.8 Amazon CEO Andy Jassy’s memo telling employees to embrace generative AI or risk being replaced by an agentic AI workforce has the potential to drive a rush‑first, govern‑later culture that could leave Amazon exposed to the very same shadow‑AI mishaps9.
The enthusiasm is understandable: AI can deliver remarkable productivity gains and competitive advantage. But asking employees to "go all-in" without providing clear guardrails isn't just unfair to your people; it's potentially dangerous for your company. We absolutely want to move fast with AI innovation; we just don't want to break things (especially the important things) in doing so. We need guardrails that enable, and even accelerate, safe innovation.
So this article is about those guardrails. I wanted to write about how to encourage experimentation and innovation, but to do so safely through an AI Use Policy. Drawing on work we've completed and validated with a number of clients, and reviewed by a number of experts, I'll walk you through how to create such a policy, what to include, and how to make it supportive of experimentation rather than a rule book of nasties and gotchas. I even provide a complete end-to-end policy that you can adapt and apply in your own organisation.
What Makes an AI Use Policy Different
An AI Use Policy, sometimes called an AI acceptable use policy or responsible AI use guideline, is your organisation's handbook for safe adoption of artificial intelligence. It lays out in plain language what employees should do to take advantage of AI, as well as what they must not do so as to protect the company and its stakeholders. This type of policy is meant for everyone in the organisation, from engineers and data scientists building AI models, to marketers using ChatGPT to draft copy, to doctors or lawyers experimenting with AI assistants. It's a document that translates high-level principles into practical do's and don'ts for day-to-day AI use.
It's important to understand how an AI Use Policy differs from broader AI governance frameworks or risk management policies. Many organisations embarking on AI governance will develop multiple layers of documentation and controls. In previous articles, I went through how to create an AI Governance Policy10, an AI Risk Management Policy11, and so on. Think of those as the structural blueprints and detailed process requirements for managing AI in the organisation. For example, an AI Governance Policy typically defines the overall governance structure: roles like an AI oversight committee, requirements for when to conduct risk assessments, alignment with regulations, etc. It's aimed at leadership and teams building AI, ensuring there's an overarching system to oversee AI development and deployment. It's supported by an AI Risk Policy that lays out how to identify and mitigate AI-related risks through formal processes.
In contrast, an AI Use Policy is more direct and operational. It speaks to every employee and spells out the boundaries for using AI in their work. One way to think of it is this: the governance framework sets the organisational structure and high-level commitments (the "what" and "why" of AI oversight), while the usage policy is the on-the-ground guidance for staff (the "how" of daily usage). The usage policy is less about satisfying regulators or aligning with ISO standards, and more about guiding and shaping employee behaviour in a practical way. It often focuses on things like: what kinds of data you can or cannot feed into AI tools, which AI tools are approved for use, how to safely experiment with new AI ideas, and when to seek approval or help.
To illustrate the difference, consider the earlier Amazon scenario. You would expect that Amazon's governance framework already had oversight and risk assessment processes. But what the engineers really needed in the moment was clear usage guidance: a simple rule like "don't paste confidential code into external AI services." In fact, employees explicitly asked for guidance on "acceptable usage of generative AI tools." That's precisely what an AI Use Policy provides: specific guardrails that any employee can understand and apply. It complements the broader governance policies by ensuring that everyday innovation doesn't inadvertently break those higher-level rules or laws.
Another key distinction is that a usage policy tends to be more dynamic and immediately actionable. It might mention current popular tools by name (e.g. ChatGPT, Copilot, DALL-E) and current risks, and it can be updated frequently as new tools emerge. A governance framework, by contrast, is commonly more static and abstracted to a high-level (e.g. "we commit to fairness, transparency..."). Both are important, but the usage policy is the one that directly influences how Bob in Marketing or Jane in Engineering actually uses AI today.
Governance frameworks set the direction, and usage policies set the guardrails. If you already have an AI governance or risk policy, you should ensure an AI Use Policy flows consistently from it: the principles in the big picture should translate into concrete rules in daily use. And if you don't yet have those bigger governance structures, starting with a straightforward AI Use Policy is actually a smart first step to impose some order and safety; you can then work backwards (using some of my other articles as reference) to build out the big-picture governance framework. Think of the AI Use Policy as the "street-level" rules that make the lofty ideals of safe, responsible AI real in your organisation's daily operations.
A Word about Culture
Having a written policy isn't enough. The most dangerous failures often originate not in code, but in culture. You can hand employees guidelines and rules, but whether they follow them (or find clever ways to ignore them) depends on the values, incentives, and habits that shape everyday behaviour at work. So, what would it look like to have a culture that supports safe and responsible AI innovation? I believe it's about creating an environment where doing the right thing is the path of least resistance. To borrow from Just Culture concepts12, it's an environment where safe behaviours become the default, risky behaviours are coached, and reckless behaviours can be sanctioned. In practice, several cultural elements are critical:
Transparency Wins Over Secrecy. Employees have to feel comfortable bringing up ideas and concerns. The culture should encourage "Hey team, I found this cool tool, is it OK to use?" rather than quiet experimentation under the radar. In Amazon's case, it was good that employees openly asked for guidance. Imagine if they'd been afraid to ask: they might have kept experimenting in secret, and Amazon might not have discovered the data leakage until real damage occurred.
Leadership Sets the Tone. Managers need to consistently reinforce that how we achieve results matters as much as the results themselves. If leadership implicitly rewards "getting it done at all costs," employees will cut corners and hide mistakes. When management "walks the talk" by regularly discussing safe, responsible AI, asking "Have we thought about the risks?" in meetings, it signals that following policy isn't a nuisance, it's the norm.
Psychological Safety Lets You Learn. Employees shouldn't fear punishment for admitting errors or uncertainty. If someone realises they used ChatGPT inappropriately, do they feel safe disclosing that to fix things? The culture should treat near misses as learning opportunities, not blame opportunities. In the Perth hospital case, someone raised the flag about ChatGPT use with patient data. If that had been swept under the rug out of fear, it could have continued until a more serious breach occurred.
Continuous Learning Over Perfect Compliance. AI technology evolves rapidly. A strong culture treats policies as part of a learning process, not static edicts. Encourage teams to share lessons from AI experiments: what worked, what didn't, what pitfalls they discovered. In a toxic or fear-driven environment, even a well-written policy will be ignored or worked around. By cultivating transparency, accountability, and openness to scrutiny, you make the policy an empowering tool rather than a bureaucratic burden.
Preventing Shadow AI Without Killing Innovation
One of the biggest challenges in business use of AI is "shadow AI". By analogy to "shadow IT," which refers to employees using unsanctioned apps or devices without IT's knowledge, shadow AI means employees adopting AI tools, or building little AI solutions, outside official channels or oversight. It happens when an organisation's formal processes can't keep up with the pace of AI innovation on the ground. If getting a new tool approved is slow, or if people fear a "no," they might just go ahead and use a free online AI service to get their job done, quietly, without telling anyone. We saw this dynamic play out at Amazon, at Samsung and in the Perth hospital: in each case, staff used a publicly available AI service (ChatGPT) because it was readily accessible and solved an immediate problem, and they did so before explicit guidance or approval was in place.
Why is shadow AI a problem? First, it can introduce significant risks without anyone realising. Employees could be unknowingly uploading sensitive data to external servers, as we discussed, or relying on AI outputs that haven't been vetted for accuracy, fairness, or security. Surveys reveal the scope: over one-third (38%) of employees admit to sharing sensitive work information with AI tools without permission13. Another study found 4.7% of employees had pasted confidential company data into ChatGPT in the first half of 202314, and many more were using it for work tasks in general. These numbers are eye-opening: they suggest that if you have 1,000 employees, dozens might already be quietly using AI in ways that could spill secrets or create compliance issues. Indeed, some companies like JPMorgan, Verizon and even Microsoft15 responded by temporarily blocking access to ChatGPT at work to stop the data leakage risk.
That data is two years old, though, and while ChatGPT has since introduced features for non-retention, there has been an enormous proliferation of other AI services that don't provide similar protections. One recent survey illustrates just how far shadow AI has spread. In July 2025, a ManageEngine poll of 700 North American workers found that 70% of employees now use unapproved AI tools while on the job16. Complementing that user-side view, network telemetry analysed by Prompt Security and reported by Axios shows the typical enterprise is already running around 67 generative-AI apps, about nine in ten of them without formal licensing or IT sign-off17. (One caveat to these reports: they're both from companies offering solutions to the problem.)
But simply banning AI tools isn't sustainable if you want innovation. Sure, you can technically prevent shadow AI by locking everything down, but you'll also likely stifle legitimate experimentation and frustrate your talent. Engineers will find ways to write code with AI, especially when competitors are adopting these tools and when they see their future career growth depends on their understanding of AI. If they can't do it with your blessing, they'll do it in secret or on personal devices, which is worse from a governance perspective.
So how do we prevent the risks of shadow AI without smothering the creative use of AI? A balanced AI Use Policy should aim to bring those informal experiments into the light where they can be done safely, rather than drive them underground. Here are some strategies to consider:
Create Easy Pathways for Approval and Sandboxing. One effective approach is to establish lightweight approval or notification processes for trying new AI tools. For example, your policy can say that if an employee discovers a new AI SaaS app that could help in their work, they are encouraged to seek a quick review from the IT or security team (or an AI governance committee) rather than use it unvetted. The key is that this review process must be fast and not overly burdensome: think a couple of hours, not days or weeks. You might set up a simple online form or a "request an AI tool review" email alias.
Sandbox environments can be game-changers. Perhaps allow employees to conduct pilot projects in a controlled sandbox with test data. For instance, if an employee has an idea to use generative AI to automate email responses, you could let them experiment with it using fictitious or obfuscated data in a non-production setting (see the sketch below). The policy should state that any such pilot must use either dummy data or very limited real data, and that moving beyond the sandbox (using it with actual customer data or integrating it into production) requires formal approval. This way, you're saying "Yes, try new things, but do it safely and involve us early." People are far less likely to go rogue if there's a clear, reasonable path to do it the right way.
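To make the dummy-data idea concrete, here is a minimal sketch of how a pilot team might generate entirely fictitious customer tickets instead of touching real records. It assumes Python and the open-source Faker library; the make_dummy_ticket helper and the commented-out draft_reply call are hypothetical placeholders for whatever AI tool is actually being trialled.

```python
# Minimal sketch: fictitious customer emails for a sandbox pilot.
# Assumes Python with the open-source "faker" package (pip install faker).
from faker import Faker

fake = Faker()

def make_dummy_ticket() -> dict:
    """Build a realistic-looking but entirely synthetic support email."""
    return {
        "customer_name": fake.name(),     # synthetic, not a real customer
        "customer_email": fake.email(),
        "subject": f"Question about my {fake.word()} order",
        "body": fake.paragraph(nb_sentences=4),
    }

if __name__ == "__main__":
    # Generate a handful of dummy tickets to exercise the pilot workflow.
    for _ in range(3):
        ticket = make_dummy_ticket()
        print(ticket["subject"], "-", ticket["customer_email"])
        # draft_reply(ticket)  # hypothetical call into the AI tool under evaluation
```

The point of the sketch is the policy principle, not the code: nothing leaving the sandbox resembles a real customer, so the experiment can proceed without waiting on a full data-protection review.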
Maintain a Whitelist of Approved AI Tools (and Update It Frequently). Publish a list of sanctioned AI services that have been vetted for security, privacy, and legal compliance. This list should be communicated in the policy or an accompanying resource. For example, maybe your company has approved Microsoft's Azure OpenAI service or a specific internal large language model for use with sensitive data. If employees know "these are the tools we can safely use, and these ones are off-limits," it removes a lot of guesswork. Importantly, keep this list fresh. New AI tools pop up every month, so have a process to evaluate and add (or reject) them relatively quickly. The policy can mandate that using any AI service not on the approved list is prohibited for work purposes until it's reviewed. This turns potential shadow AI into a form of collaborative evaluation: an employee can nominate a cool new AI app for approval rather than sneaking it in. At Amazon, for instance, the eventual solution was to develop an internal AI assistant called Cedric that was at least claimed to be "safer than ChatGPT"18, guiding employees to naturally prefer the sanctioned tool. Not every company will build its own AI, but even using an enterprise-licensed version of a tool (with a proper data agreement in place) can funnel employees toward safer options.
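To illustrate what such a register might look like in practice, here is a minimal sketch in Python of an approved-tools list and a lookup helper. The tool entries, data classifications and the check_tool function are hypothetical examples, not a real register or a recommendation of specific vendors.

```python
# Minimal sketch of an approved-AI-tools register and a lookup helper.
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    approved_data: set[str]  # data classifications the tool is cleared for
    notes: str

APPROVED_TOOLS = {
    "azure-openai": ApprovedTool(
        name="Azure OpenAI (enterprise tenant)",
        approved_data={"public", "internal"},
        notes="Enterprise data agreement in place; prompts not used for training.",
    ),
    "internal-llm": ApprovedTool(
        name="Internal assistant",
        approved_data={"public", "internal", "confidential"},
        notes="Hosted in-house; cleared for confidential material.",
    ),
}

def check_tool(tool_id: str, data_class: str) -> str:
    """Return plain-language guidance for a proposed tool/data combination."""
    tool = APPROVED_TOOLS.get(tool_id)
    if tool is None:
        return "Not on the approved list - submit it for a quick review before use."
    if data_class not in tool.approved_data:
        return f"{tool.name} is approved, but not for '{data_class}' data."
    return f"{tool.name} is approved for '{data_class}' data. Go ahead."

print(check_tool("chatgpt-free", "confidential"))   # not listed -> request a review
print(check_tool("azure-openai", "confidential"))   # listed, but not for this data class
```

Even if the real register lives in a wiki page or a service catalogue rather than code, the useful design choice is the same: every entry pairs a tool with the data classifications it is cleared for, so "is this allowed?" has a one-line answer.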
Set Clear Data Handling Rules. A major cause of shadow AI risk is employees feeding sensitive data into whatever AI tool is handy. Your policy should draw a hard line on this. You're going to need to state explicitly that certain categories of data are never to be shared with external AI tools that aren't approved. For example: "Don't input confidential business plans, source code, customer personal data, or any similarly sensitive information into ChatGPT, Claude or other public AI services" until and unless those services have been officially cleared and secured. Even if this is common sense to your IT team, don't assume all staff know it: spell it out. Many employees falsely assume that what they type into ChatGPT is private; they need to be educated that feeding information to an external AI carries inherent risk. By emphasising this in your policy (and training), you'll deter a lot of risky experimentation without forbidding experimentation entirely. Employees can still play with AI, but they'll think twice and perhaps use dummy data if they want to see how an AI would handle a scenario.
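As an illustration of how such data-handling rules can be reinforced technically, here is a minimal sketch of a pre-submission check that flags obviously sensitive text before it is sent to an external service. It assumes Python; the patterns and the flag_sensitive helper are illustrative only, and a real deployment would rely on proper DLP tooling and data classification rather than a handful of regular expressions.

```python
# Minimal sketch: flag obviously sensitive text before it leaves the company.
# Illustrative patterns only; real deployments should use dedicated DLP tooling.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\b(confidential|internal only|do not distribute)\b", re.I),
}

def flag_sensitive(text: str) -> list[str]:
    """Return human-readable reasons the text should not go to an external AI tool."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Summarise this: CONFIDENTIAL roadmap, contact jane.doe@example.com"
issues = flag_sensitive(prompt)
if issues:
    print("Blocked - remove or anonymise:", ", ".join(issues))
else:
    print("No obvious sensitive content detected; proceed with an approved tool.")
```

A check like this will never catch everything, which is exactly why the policy rule and the training come first; the tooling just makes the safe path a little harder to wander off by accident.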
Encourage Consultation Over Concealment. Make it clear the organisation wants employees to explore AI; they just need to involve the right support. The tone should be "We're excited for you to innovate, and we're here to help you do it safely" rather than "Any unapproved AI use will be punished." Include language like: "If you're unsure whether a particular AI use is allowed, ask your manager or the AI committee first; you'll get guidance, not judgment."
The goal is to shed light on AI experimentation. You want to harness creativity while avoiding ugly surprises from unsanctioned use. It's a delicate balance between anarchy and stagnation; your policy should find the sweet spot that says "Yes, you can innovate, here's how" and "No, you can't cross these lines, here's why."
From Near‑Misses to a Mandate: Time for an AI Use Policy
Taken together, the Amazon, Samsung and Perth hospital “near‑misses”, and the data showing shadow‑AI usage rocketing to more than two‑thirds of the workforce, underscore a simple truth: if you haven’t spelt out how AI should and should not be used, your people will improvise. Sooner or later sensitive information or high‑stakes decisions will slip beyond your control. The remedy is not a blanket ban, but a clear, practical AI Use Policy that channels the same creative energy into safe, transparent innovation. That is exactly where we turn next.
In Part 2, I’ll move from “why” to “how”. You’ll see a fully worked, implementation‑ready and proven AI Use Policy that you can adapt for your own organisation, complete with concrete examples of encouraged uses, absolute prohibitions, and a lightweight approval pathway for everything in between. If you’d like to be notified the moment it’s available, make sure you’re subscribed to ethos‑ai.org (it’s free and we never spam). See you there!
https://www.entrepreneur.com/business-news/amazon-asks-its-employees-to-use-cedric-instead-of-chatgpt/480647
https://www.businessinsider.com/amazon-cedric-safer-ai-chatbot-employees-2024-9
https://gizmodo.com/amazon-chatgpt-ai-software-job-coding-1850034383
https://www.abc.net.au/news/2023-05-28/ama-calls-for-national-regulations-for-ai-in-health/102381314
https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak
https://developer.samsung.com/sdp/news/en/2023/11/28/samsung-developer-november-newsletter-samsung-electronics-reveals-the-samsung-gauss-generative-ai-model-and-other-latest-news
https://www.entrepreneur.com/business-news/fiverr-ceo-says-ai-will-take-your-job-heres-what-to-do/491198
https://x.com/tobi/status/1909251946235437514
https://www.cbsnews.com/news/amazon-ceo-generative-ai-corporate-workforce/
https://www.infosecurity-magazine.com/news/third-employees-sharing-work-info/
https://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt
https://www.zdnet.com/article/why-microsoft-temporarily-blocked-chatgpt-from-employees-on-thursday/
https://www.manageengine.com/survey/shadow-ai-surge-enterprises/
https://www.axios.com/2025/02/04/shadow-ai-cybersecurity-enterprise-software-deepseek
https://www.entrepreneur.com/business-news/amazon-asks-its-employees-to-use-cedric-instead-of-chatgpt/480647