Your AI Policy: A Practical Guide for Scottish Small Businesses
AI Policy: The Foundation for AI Governance
An AI policy is your business's formal commitment to using artificial intelligence responsibly. It establishes the principles, boundaries, and procedures that guide how your team uses AI tools, from chatbots like ChatGPT or Microsoft Copilot to any AI-powered software you adopt.
For small and micro enterprises, an AI policy doesn't need to be complex, but it does need to exist. It's the foundation that:
Sets clear expectations for staff about acceptable AI use
Protects your business from legal and reputational risks
Demonstrates to customers and partners that you take data protection seriously
Provides a framework for making decisions as AI tools evolve
Think of it as similar to your data protection policy or social media policy - a practical document that guides day-to-day decisions and protects your business interests.
Importantly, AI policy isn't just the domain of IT or technical staff. Because AI touches every part of your business, from customer service to marketing to HR, your policy should be developed with input from across your organisation. The people actually using AI tools daily often have the best insights into what guidance is needed and what will work in practice.
Why you need a policy (even as a small business)
Many small business owners think: "AI policies are for big corporations with legal departments and compliance teams. We only have a handful of people, surely we don't need formal policies?"
This thinking is understandable but mistaken, because:
Size doesn't protect you from risk. A micro business with 3 employees faces the same UK GDPR requirements as a company with 300. A data breach is just as serious whether it affects 50 customers or 50,000.
Small businesses are actually more vulnerable. You likely don't have dedicated IT support, legal counsel, or compliance officers. A single mistake, like an employee accidentally sharing customer data by uploading it to a large language model like ChatGPT, can have devastating consequences you're less equipped to handle.
Your customers expect it. Whether you're a sole trader or a 40-person business, customers trust you with their data. That trust doesn't scale with company size.
Without clear AI guidance, your business faces major risks, including:
Legal and regulatory risks include UK GDPR violations when staff input personal data into AI tools, potentially leading to ICO investigations and fines of up to £17.5 million or 4% of annual global turnover, whichever is higher.
Accuracy and liability risks arise because AI generates content by predicting patterns without verifying facts - it can fabricate information, provide outdated legal guidance, or create faulty calculations, exposing you to negligence claims and flawed business decisions.
Data security and confidentiality risks occur when employees input customer details, employee information, business strategy, or trade secrets into AI platforms, breaching confidentiality agreements and UK GDPR obligations.
Reputational and discrimination risks emerge when AI produces biased recommendations or inappropriate content, particularly in hiring and customer service, potentially leading to discrimination complaints and damage to your brand.
What can happen without a policy: real consequences
Consider the following scenarios:
Scenario 1: The HR Decision, 8-person professional services firm, Stirling
An office manager used ChatGPT to help evaluate redundancy decisions during a downsizing. The AI's suggestions reflected patterns from its training data that could be seen as age discrimination. Without a policy prohibiting AI use in HR decisions, the firm had no defence. Result: Employment tribunal claim, settlement costs, and reputational damage.
Scenario 2: The Client Data Breach, Solo practitioner accountant, Aberdeen
An accountant used Microsoft Copilot to draft emails without understanding that client data was being processed by the AI. A client asked, under UK GDPR, for details about how their data was being used. The accountant couldn't answer and had no policy demonstrating appropriate safeguards. Result: ICO complaint, investigation, and loss of several clients.
Scenario 3: The Wrong Advice, 12-person financial advisory, Edinburgh
An advisor used AI to research a technical tax question and shared the response with a client without verification. The AI's answer was outdated following recent legislation changes. The client followed the advice and faced a tax penalty. Result: Professional negligence claim, insurance excess payment, and damaged reputation.
Scenario 4: The Accountability Gap, 25-person software company, Dundee
Multiple employees used various AI tools in different ways with no guidance. When a client questioned how AI was being used with their data, the business couldn't provide a clear answer - different people gave different responses. Result: Lost client, broader client confidence issues, scramble to implement controls.
A clear AI policy protects your business by preventing common mistakes before they happen, demonstrating legal due diligence to regulators, maintaining customer trust, and empowering staff to use AI confidently within safe boundaries, turning a potential liability into a managed business tool.
An AI policy, therefore, isn't optional bureaucracy but essential protection for businesses of any size.
Part 1: Getting Started
Chapter 1: Understanding AI in Your Business
Before you write a policy, you need to understand what AI actually is and where it might already be in your business.
What Counts as AI?
For the purposes of your policy, AI is any technology that uses data to make inferences and generate outputs, such as predictions, recommendations, content, or decisions, with some degree of autonomy.
In practical terms, this includes:
Generative AI Tools:
ChatGPT, Microsoft Copilot, Google Gemini, Claude, Grammarly
Tools that write text, generate images, or create content based on prompts
Machine Learning Systems:
Predictive analytics tools that forecast sales, customer behaviour, or demand
Recommendation systems that suggest products or services
AI-Powered Chatbots:
Customer service bots that generate their own responses (not scripted)
Virtual assistants that answer questions autonomously
Embedded AI in Software:
AI features within your existing tools (Microsoft 365 Copilot, Google Workspace AI, Canva's AI design features)
Social media scheduling tools that suggest optimal posting times
Accounting software that flags unusual transactions
What AI Is NOT (and therefore not covered by this policy):
Standard spreadsheet formulas (SUM, AVERAGE, IF statements)
Rule-based automations ("if X happens, then do Y")
Traditional business intelligence dashboards that display data
Basic search functions on your website
Before writing your policy, you need to know:
What AI tools your team is actually using (not just what's officially approved)
What they're using them for (some uses are higher risk than others)
What data they're inputting (personal data, confidential information, or public information)
Who's using them (different roles have different needs and risk levels)
Take action: AI Audit
Conduct an audit of how AI is currently being used in your organisation:
Ask your team: "What AI tools do you use for work, even occasionally?"
Check your software subscriptions for AI features you might not realise exist
Review any recent content, analysis, or communications that might have used AI
Document what you find; this becomes the foundation for your policy scope. A simple way to record it is sketched below.
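If you'd like a consistent format for recording what the audit turns up, here is a minimal sketch of an audit register as a short Python script. The column names, tools, entries, and file name are illustrative assumptions, not part of any standard; a plain spreadsheet with the same columns works just as well.

```python
import csv

# Columns for a simple AI audit register: which tool, who uses it,
# what they use it for, what data goes in, and whether it's approved.
FIELDS = ["tool", "used_by", "purpose", "data_inputted", "approved"]

# Illustrative entries only - replace these with your own findings.
findings = [
    {"tool": "ChatGPT", "used_by": "Marketing",
     "purpose": "Drafting social posts",
     "data_inputted": "Public information only", "approved": "No"},
    {"tool": "Microsoft 365 Copilot", "used_by": "Office manager",
     "purpose": "Summarising emails",
     "data_inputted": "Customer names and contact details", "approved": "Yes"},
]

# Write the register to a CSV file you can keep alongside the policy.
with open("ai_audit_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(findings)
```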
Chapter 2: Assessing Your AI Needs and Risks
Not all businesses use AI the same way, and your policy should reflect your specific situation.
Your Business Context
Consider these factors:
What industry are you in?
Professional services (legal, accounting, consulting) face higher risks around client confidentiality
Retail and hospitality might use AI primarily for marketing and customer service
Creative businesses might use AI for content generation
Manufacturing might use AI for forecasting and quality control
What data do you handle?
Personal data (customer names, addresses, contact details)
Special category data (health information, financial data)
Business confidential information (pricing, strategy, client lists)
Publicly available information only
Who are your customers?
General public
Other businesses
Vulnerable groups (children, elderly, people with disabilities)
Regulated sectors (healthcare, finance, government)
What's your risk tolerance?
Conservative: Strict controls, limited AI use
Moderate: Clear boundaries with room for experimentation
Progressive: Encourage innovation with safety nets
Take action: Simple Risk Assessment
For each AI use case you identified in Chapter 1, ask:
What could go wrong?
Data breach or privacy violation
Inaccurate information affecting business decisions
Discriminatory or biased outputs
Reputational damage
Legal or regulatory breach
How likely is it?
High: Without proper controls, this will probably happen
Medium: Could happen if people aren't careful
Low: Unlikely but still possible
How serious would the impact be?
Critical: Business-threatening (major legal issues, significant financial loss)
High: Serious but recoverable (customer complaints, moderate financial impact)
Medium: Inconvenient (time wasted, minor reputation issues)
Low: Minimal impact (easily corrected)
Priority Assessment (see the sketch after this list):
High Priority to Address: High likelihood + Critical/High impact
Medium Priority: Medium likelihood or Medium/High impact
Lower Priority: Low likelihood + Low/Medium impact
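If it helps to see these rules spelled out mechanically, here is a minimal sketch of the mapping as a small Python function. The labels and pairings simply restate the list above, under the assumption that anything not clearly high or lower priority defaults to medium; adjust the rules to match your own risk tolerance.

```python
def priority(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to a priority, restating the rules above."""
    likelihood, impact = likelihood.lower(), impact.lower()
    if likelihood == "high" and impact in ("critical", "high"):
        return "High priority"
    if likelihood == "low" and impact in ("low", "medium"):
        return "Lower priority"
    # Everything in between (medium likelihood, or mixed pairings)
    # defaults to medium priority.
    return "Medium priority"

# Examples:
print(priority("High", "Critical"))  # -> High priority
print(priority("Medium", "High"))    # -> Medium priority
```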
What This Means for Your Policy
Your risk assessment helps you determine:
Scope: Which AI tools and uses need the most detailed guidance
Rules: Which activities should be prohibited, restricted, or freely permitted
Controls: What verification, approval, or oversight processes you need
Training: Who needs what level of education about AI risks
Resources: How much time and effort your policy implementation requires
Part 2: Building Your AI Policy
Chapter 3: Elements of Your Policy
Your AI policy needs several key components as listed below.
3.1 Purpose and scope
This section answers: Why does this policy exist? Who does it apply to? What AI tools are covered?
Defining your purpose
Your policy purpose should be a short statement (2-3 sentences) explaining what the policy achieves.
Example: This AI Policy establishes clear guidelines for the responsible use of artificial intelligence tools at [Your Business Name]. It aims to protect customer and business data, ensure legal compliance, maintain the quality of our work, and empower staff to use AI safely and effectively.
Who does this policy apply to?
Be specific about who must follow this policy.
Example: This policy applies to:
- All employees (full-time, part-time, and temporary)
- Contractors and freelancers working on behalf of [Your Business Name]
- Volunteers (if applicable)
- Anyone using [Your Business Name] systems or representing the business

For micro businesses (fewer than 10 people), you can simplify this to: "This policy applies to everyone working for or representing [Your Business Name], whether as employees, contractors, or volunteers."
What AI tools are covered?
Define which technologies this policy governs. Use the definition you developed in Part 1, Chapter 1.
Example: This policy covers any AI system or tool that uses data to generate outputs such as text, images, predictions, recommendations, or decisions with a degree of autonomy.
This includes:
- Generative AI tools (ChatGPT, Microsoft Copilot, Google Gemini, Claude, etc.)
- AI features within existing software (Microsoft 365 Copilot, Google Workspace AI, etc.)
- AI-powered chatbots and customer service tools
- Predictive analytics and forecasting tools
- AI-assisted design and content creation tools
- Any other AI technology used for business purposes

This does NOT include:
- Standard spreadsheet formulas
- Basic automation rules ("if-then" statements)
- Traditional business reporting dashboards

If you're unsure whether a tool is covered, ask [Policy Owner/Manager] before using it for work purposes.
Practical tip: List specific tools your team currently uses by name to avoid ambiguity. For example: "This includes ChatGPT, Microsoft Copilot, Canva's AI features, and any AI tools embedded in our Microsoft 365 subscription."
3.2 Your AI principles
Principles are the core values that guide how your business uses AI. They set the tone and help people make decisions when they encounter situations not explicitly covered in your policy.
Here are some key AI principles and what they mean in practice:
1. Human oversight and accountability
The principle: AI assists people - it doesn't replace human judgment, responsibility, or decision-making.
What this means:
All AI outputs must be reviewed by a person before being used
Final decisions always rest with people, not algorithms
Someone is always accountable for outcomes, even when AI is involved
Critical decisions (hiring, contracts, financial advice) require human expertise
2. Data protection and confidentiality
The principle: Personal data, confidential information, and trade secrets are protected and never inputted into AI tools without proper safeguards.
What this means:
Customer and employee personal data stays out of AI tools
Business confidential information is protected
Compliance with UK GDPR is non-negotiable
When in doubt, don't input it
3. Accuracy and verification
The principle: AI-generated content is always verified for accuracy before being relied upon or shared with others.
What this means:
Never assume AI outputs are correct
Check facts, figures, legal information, and technical details
Use multiple sources for important information
Document when AI has been used to generate content
4. Transparency and honesty
The principle: We're open about when and how we use AI, and we don't present AI-generated content as if it were entirely human-created.
What this means:
Don't mislead customers about AI use
Disclose AI involvement when it materially affects the service or product
Be honest internally about AI use
Keep appropriate records
5. Fairness and non-discrimination
The principle: AI is never used in ways that could discriminate against people or perpetuate bias.
What this means:
Extra caution in HR decisions, customer service, and any people-focused use
Awareness that AI can reflect biases from training data
Prohibition of AI use for sensitive decisions about individuals
Commitment to treating all customers and employees fairly
Making principles actionable
Principles are only useful if people can apply them. For each principle, staff should be able to ask:
"Does this use of AI align with this principle?"
"If not, what should I do instead?"
"Who can I ask if I'm unsure?"
You can test this by asking your team: "If you were about to use AI for [specific task], which principles would you consider and how would they guide you?"
3.3 What’s allowed, what’s not
This section provides the specific rules your team needs. It moves from principles to practice: Here's what you can do with AI, here's what you can't, and here's what needs special approval.
Three categories of AI use
Organise your rules into three clear categories:
Permitted uses (go ahead, these are fine)
These are AI applications that pose low risk when used responsibly. Staff can use AI for these purposes without special approval, provided they follow the principles in Section 3.2.

Prohibited uses (never do these)
These are activities where AI creates unacceptable risks. These prohibitions are absolute - no exceptions without senior leadership approval and a documented risk assessment.

Restricted uses (requires approval or special safeguards)
These activities aren't prohibited but require additional safeguards, approval, or expertise before proceeding.
See our AI policy template for examples.
3.4 Roles and responsibilities
Clear accountability is essential for policy success. Someone must own the policy, answer questions, approve exceptions, and ensure compliance.
Large organisations have dedicated AI governance teams, chief AI officers, and compliance departments. As an SME, you don't need any of that; what you do need is clarity about who's responsible for what within your existing structure.
Key principle for small businesses: One person or a small team holds overall accountability, but everyone has day-to-day responsibilities.
Essential roles for small businesses
1. Policy owner
Who this should be: A senior person with authority to make decisions and allocate resources. Typically, the business owner, managing director, or senior partner.
Responsibilities:
Overall accountability for the AI policy and its implementation
Final decision-maker on policy interpretation and exceptions
Champions responsible AI use across the business
Ensures policy is reviewed and updated
Allocates resources for training or tools if needed
2. Day-to-day contact person (optional but recommended)
Who this should be: Someone accessible who understands both the policy and the day-to-day work. Could be an office manager, operations lead, or senior team member.
Responsibilities:
Answers routine questions about AI use
Provides guidance when staff are unsure
Escalates complex issues to the policy owner
Monitors compliance informally
Coordinates any training or communications
3. Everyone in the business
Responsibilities:
Follow the AI policy in all work activities
Complete any required training
Ask questions when unsure rather than guessing
Report concerns or incidents
Take personal accountability for AI outputs they create
This applies equally to:
Full-time and part-time employees
Contractors and freelancers
Temporary staff
Anyone representing the business
3.5 Compliance and enforcement
This section explains how your business monitors compliance, handles policy breaches, and ensures accountability. Clear consequences and fair processes protect everyone. Staff need to know what happens if mistakes occur, and the business needs procedures for addressing violations.
What to consider before writing
Link to existing procedures
You probably already have disciplinary procedures or a staff handbook. Your AI policy enforcement should align with these, not create a separate system. Reference your existing processes rather than duplicating them.
Scale to your business size
A 3-person business doesn't need investigation protocols and formal hearing procedures. A 30-person business might. Write what fits your context.
Categorise breaches
Think through the types of violations you might encounter and group them:
Low-level breaches (learning opportunities)
Medium-level breaches (requiring formal response)
Serious breaches (potential gross misconduct)
Define your thresholds
Where's the line between a conversation and formal disciplinary action? This depends on your business culture, but common thresholds include:
Intent: Accidental vs. deliberate
Severity: Could this cause real harm?
Repetition: First time vs. ongoing pattern
Response: Did they own up or try to hide it?
Impact: Did harm actually occur?
Consider special circumstances
What if someone reports their own mistake? Many policies encourage self-reporting by treating it more leniently than being caught. Consider whether you want to explicitly reward honesty.
What if the breach was based on confusion about the policy? If multiple people make the same "mistake," your policy probably needs clarifying, not enforcement.
What if the policy owner is unavailable? Who handles enforcement in their absence? This matters for time-sensitive issues like data breaches.
Structure of this section
A well-structured compliance and enforcement section typically includes:
1. Monitoring approach: How will compliance be overseen? Who's responsible? What will they actually do?
2. Reporting mechanism: How should staff report concerns? Who do they tell? What information should they provide? What protection exists for reporting?
3. Breach categories and responses: What types of violations exist? How will each be handled? What are the potential consequences?
4. Investigation process (if needed): For businesses of more than about 15-20 people, or those handling very sensitive data, you might need a basic investigation process; smaller businesses can keep it very simple.
5. Learning and improvement: How will incidents inform policy improvements? This shifts the focus from punishment to prevention.
Chapter 4: Making It Work
You've written your policy. Now you need to implement it so it becomes part of how your business operates, rather than another document in a drawer.
4.1 Implementation steps
Finalise and approve
Complete your policy document and share it with the staff members who provided input during development to ensure their feedback has been properly reflected. Get approval from leadership or business owners, then set an effective date, typically 2-3 weeks out, to allow time for communication and training.
Communicate and train
Send the policy to all staff with a clear message explaining why it exists, what the key points are (what's allowed, what's not, who to ask), and when training will occur. Hold a training session covering why the policy matters (to protect customers and the business), what's covered (which AI tools it applies to), your core principles, what staff can and can't do, the data protection rule (no personal or confidential data), the requirement to verify all AI outputs, who to ask when unsure, and how to report concerns.
Make it accessible
Store the policy where everyone can find it, such as a shared drive or intranet. Add the policy to your employee handbook if you have one, and include it in onboarding materials for new starters.
4.2 Monitoring compliance
Stay aware through normal work activities by including AI use in project reviews, having occasional conversations about how people are using AI, and observing work outputs for signs of AI use. Make it easy for staff to ask questions by responding quickly and helpfully to queries, treating questions as a positive sign rather than a problem, and building a culture where people feel comfortable discussing AI.
Watch for warning signs
Be alert to work that seems unusually polished but contains errors, content that doesn't match someone's usual style, AI-typical phrases or formatting, or staff being evasive about how they produced something.
When policy breaches occur
Follow the approach in your Compliance and Enforcement section. Remember that honest mistakes should be addressed through conversation and clarification, serious breaches require disciplinary procedures, and the focus should be on learning and prevention rather than blame.
4.3 Keeping the policy current
Review schedule
After the first 3 months, assess how the policy is working in practice, what questions keep arising, whether there are gaps or unclear areas, and whether the permitted and prohibited lists need adjusting.
Every 6-12 months thereafter, check whether you're using new AI tools that need addressing, whether legal requirements have changed, and whether the policy is too restrictive or too loose.
Conduct ad-hoc reviews when triggered by adopting significant new AI tools, changes in UK regulations, a serious policy breach, or major business changes.
How to review
Gather feedback from staff about what's working and what's confusing. Review any incidents or recurring questions that have arisen. Check for new AI tools being used in the business. Make necessary updates to clarify confusing sections, add new tools or use cases, or adjust the permitted and prohibited lists. Communicate changes clearly by highlighting what's different, providing refresher training if changes are significant, and updating the version number.
4.4 Building the right culture
Encourage responsible AI use
Respond quickly to questions and acknowledge when people check before proceeding. Share examples of good AI use and make the policy easy to access and understand.
Make it safe to admit mistakes
Treat honest errors as learning opportunities rather than failures. Don't punish people for asking "Was this okay?" after the fact. Thank people who report concerns and focus on fixing problems rather than blaming individuals.
Celebrate success
Highlight cases where AI helped achieve something efficiently whilst respecting the policy. Share clever approaches that followed the guidelines. Recognise people who ask thoughtful questions about AI use.