
Building an AI Acceptable Use Policy (With Template)

An AI acceptable use policy is the foundation of AI governance. Here's how to create one that actually gets followed — with a customizable template.

Tenlines Team · 10 min read

Why Policies Fail

Most AI policies fail not because they're poorly written, but because they're disconnected from how people actually work.

A policy that says "employees shall not use unauthorized AI tools with company data" doesn't change behavior if:

  • It's not clear which tools are authorized
  • There's no easy way to use authorized tools
  • There's no enforcement mechanism
  • The productivity benefits of violating the policy outweigh the perceived risk

Effective policies work because they're clear, actionable, supported by technical controls, and designed around legitimate use cases rather than blanket prohibitions.

Core Components of an AI Policy

Scope and Applicability

Define clearly who's covered (all employees, contractors, specific departments), what AI tools are addressed (generative AI, AI assistants, automated decision systems), and what data is subject to the policy.

Classification of AI Tools

Create clear categories:

  • Approved for general use: Tools vetted for broad deployment
  • Approved with restrictions: Tools permitted for specific purposes or data types
  • Prohibited: Tools that don't meet security/compliance requirements
  • Requires approval: New tools that need evaluation
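
These categories can be encoded directly in tooling, so that a gateway or browser extension checks a tool's classification before a request goes out. A minimal sketch in Python — the tool names and the `classify` helper are illustrative, not part of any specific product:

```python
from enum import Enum

class ToolStatus(Enum):
    APPROVED = "approved for general use"
    RESTRICTED = "approved with restrictions"
    PROHIBITED = "prohibited"
    NEEDS_REVIEW = "requires approval"

# Illustrative registry; a real one would mirror IT/Security's published list.
TOOL_REGISTRY = {
    "enterprise-chatgpt": ToolStatus.APPROVED,
    "code-assistant": ToolStatus.RESTRICTED,
    "consumer-chatbot": ToolStatus.PROHIBITED,
}

def classify(tool_name: str) -> ToolStatus:
    # Unknown tools default to "requires approval", never to silently allowed.
    return TOOL_REGISTRY.get(tool_name, ToolStatus.NEEDS_REVIEW)
```

The specific entries matter less than the fail-safe default: anything not on the list falls into the "requires approval" bucket rather than slipping through.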

Data Handling Rules

Specify what data can and cannot be used with AI:

  • Customer PII: [Permitted/Prohibited/Conditional]
  • Employee data: [Permitted/Prohibited/Conditional]
  • Financial data: [Permitted/Prohibited/Conditional]
  • Source code: [Permitted/Prohibited/Conditional]
  • Confidential business information: [Permitted/Prohibited/Conditional]

Acceptable Use Cases

Provide concrete examples of permitted uses:

  • Drafting and editing (with appropriate data restrictions)
  • Code assistance (with repository/data limitations)
  • Research and summarization (with public information)
  • Analysis (with approved tools and data types)

Prohibited Activities

Be specific about what's not allowed:

  • Submitting customer PII to unapproved tools
  • Using AI for decisions affecting employment without human review
  • Uploading confidential documents to consumer AI services
  • Circumventing technical controls

Responsibilities

Define who's responsible for what:

  • Employees: Follow policy, report concerns, complete training
  • Managers: Ensure team compliance, approve exceptions
  • IT/Security: Maintain approved tools list, enforce controls
  • Compliance: Monitor adherence, update policy

Incident Reporting

How should violations or concerns be reported? What's the response process?

Consequences

What happens when the policy is violated? Consequences should scale with severity, ranging from retraining to termination.

Template: AI Acceptable Use Policy

[ORGANIZATION NAME] AI ACCEPTABLE USE POLICY

Effective Date: [Date]
Version: [Number]
Owner: [Department/Role]

1. Purpose

This policy establishes guidelines for the acceptable use of artificial intelligence (AI) tools and systems by [Organization] employees, contractors, and authorized third parties. The policy aims to enable productive AI usage while protecting sensitive data, ensuring regulatory compliance, and managing associated risks.

2. Scope

This policy applies to:

  • All employees, contractors, and temporary workers
  • All AI tools and systems, including but not limited to: generative AI (ChatGPT, Claude, Gemini, etc.), AI coding assistants, AI-powered productivity tools, automated decision-making systems
  • All company data, customer data, and data processed on behalf of clients

3. AI Tool Classification

3.1 Approved Tools (General Use)

The following AI tools are approved for general business use with standard data handling restrictions:

  • [List approved tools, e.g., "Enterprise ChatGPT via company SSO"]
  • [Tool name and access method]

3.2 Approved Tools (Restricted Use)

The following AI tools are approved for specific use cases only:

  • [Tool name]: Approved for [specific purpose] by [specific roles]

3.3 Prohibited Tools

The following AI tools are not approved for business use:

  • Consumer versions of AI tools (personal ChatGPT accounts, etc.) with company data
  • AI tools that do not meet [Organization]'s security requirements
  • AI tools from providers that retain data for training without appropriate agreements

3.4 Tools Requiring Approval

AI tools not listed above require approval from [IT Security/designated approver] before use with any company or customer data.

4. Data Handling Requirements

4.1 Prohibited Data

The following data types must NOT be submitted to any external AI tool:

  • Social Security numbers, government IDs
  • Payment card data (credit card numbers, CVVs)
  • Authentication credentials (passwords, API keys, tokens)
  • Protected health information (PHI)
  • Data subject to specific contractual confidentiality obligations
  • [Additional prohibited categories specific to your organization]
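
A technical control backing this section usually starts with pattern matching before data leaves the environment. A rough sketch — the patterns below are deliberately simplified and illustrative only; real detectors combine regexes with checksums (e.g. Luhn validation for card numbers) and contextual rules to cut false positives:

```python
import re

# Simplified, illustrative patterns — not production-grade detection.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|api)[_-][A-Za-z0-9]{16,}\b"),
}

def find_prohibited(text: str) -> list[str]:
    """Return the categories of prohibited data detected in the text."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]
```

A scan like this can run in a proxy, a browser extension, or a pre-commit hook; the policy defines *what* is prohibited, and the detector makes the rule enforceable at the point of use.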

4.2 Restricted Data

The following data types may only be used with Approved Tools (General Use) that have appropriate data protection controls:

  • Customer names and contact information
  • Employee names and work-related information
  • Non-public financial information
  • Proprietary source code
  • Confidential business strategies and plans

4.3 Permitted Data

The following data types may be used with any Approved Tool:

  • Publicly available information
  • General business correspondence (without sensitive details)
  • Generic, non-proprietary code examples
  • Anonymized or synthetic data

5. Acceptable Use

5.1 Permitted Uses

AI tools may be used for:

  • Drafting and editing communications (subject to data restrictions)
  • Generating and reviewing code (subject to data restrictions)
  • Research using public information
  • Summarizing and analyzing documents (subject to data restrictions)
  • Brainstorming and ideation
  • Learning and professional development

5.2 Prohibited Uses

AI tools must NOT be used for:

  • Making or substantially influencing employment decisions without human review
  • Processing customer data through unapproved tools
  • Circumventing security controls or access restrictions
  • Generating content that violates company policies or applicable law
  • Submitting data that would violate contractual obligations to clients
  • Any purpose prohibited by applicable AI regulations

6. Human Oversight Requirements

When AI tools are used in connection with decisions affecting individuals (employment, credit, services, etc.):

  • AI output must be reviewed by a qualified human before action
  • The human reviewer must have authority to override AI recommendations
  • Decisions must be documented, including the AI's role and human review

7. Responsibilities

7.1 All Users

  • Complete required AI training before using AI tools
  • Follow this policy and related data handling requirements
  • Report suspected violations or security concerns to [reporting channel]
  • Maintain awareness of which tools are approved and restricted

7.2 Managers

  • Ensure team members understand and follow this policy
  • Approve appropriate AI use within their teams
  • Escalate requests for tools or uses outside policy scope

7.3 Information Security

  • Maintain and publish the list of approved AI tools
  • Evaluate new AI tools for security and compliance
  • Implement technical controls supporting this policy
  • Investigate reported violations

7.4 Compliance/Legal

  • Ensure policy alignment with regulatory requirements
  • Provide guidance on AI use in regulated activities
  • Update policy as regulations evolve

8. Technical Controls

[Organization] implements technical controls to support this policy, including:

  • Data inspection and protection at the point of AI interaction
  • Logging of AI usage for compliance and security purposes
  • Policy enforcement mechanisms for prohibited data and tools

These controls supplement but do not replace individual responsibility to follow this policy.
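Putting the three controls together, the enforcement point is a pre-flight check that runs before a prompt is forwarded and logs every decision. A minimal sketch, assuming a tool registry and a data inspector as inputs (both hypothetical interfaces, not a specific product's API):

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-policy")

@dataclass
class Decision:
    allowed: bool
    reason: str

def preflight(tool_approved: bool, prohibited_hits: list[str]) -> Decision:
    """Policy check run before a prompt reaches an AI provider.

    `tool_approved` and `prohibited_hits` would come from the tool
    registry and data inspection controls; here they are assumed inputs.
    """
    if not tool_approved:
        decision = Decision(False, "tool not on the approved list")
    elif prohibited_hits:
        decision = Decision(False, f"prohibited data detected: {prohibited_hits}")
    else:
        decision = Decision(True, "allowed")
    # Log every decision, allow or deny, for compliance review (logging control).
    log.info("preflight: allowed=%s reason=%s", decision.allowed, decision.reason)
    return decision
```

Logging the allowed requests as well as the blocked ones is what makes the compliance record useful: it shows usage patterns, not just violations.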

9. Incident Reporting

Report suspected policy violations, data exposure incidents, or security concerns immediately to [reporting channel/contact]. Prompt reporting enables faster response and may mitigate potential harm.

10. Compliance and Enforcement

Violations of this policy may result in disciplinary action up to and including termination. Violations may also result in legal liability for the individual and [Organization].

11. Training

All employees must complete AI acceptable use training within [timeframe] of hire and annually thereafter. Additional training may be required for specific roles or AI tools.

12. Policy Review

This policy will be reviewed [annually/semi-annually] and updated as needed to reflect changes in technology, regulations, and business requirements.

13. Questions and Exceptions

Questions about this policy should be directed to [contact]. Exception requests must be submitted to [approver] and will be evaluated based on business need, risk, and available mitigations.

Acknowledgment

I acknowledge that I have read, understand, and agree to comply with [Organization]'s AI Acceptable Use Policy.

Name: _______________________
Date: _______________________
Signature: ___________________

Making the Policy Work

A policy document alone doesn't create governance. Complement the policy with:

Technical enforcement: Controls that implement policy rules automatically — protecting sensitive data, logging usage, blocking prohibited tools.

Training: Ensure everyone understands not just the rules, but the reasoning behind them.

Easy paths to compliance: If approved tools are harder to use than consumer alternatives, people will route around the policy.

Regular review: Update the approved tools list, refine data classifications, and adjust as the AI landscape evolves.

Visible leadership support: When executives visibly follow the policy themselves, it signals that the rules apply to everyone.

The goal isn't a perfect document — it's governance that enables safe, productive AI use.

Stop data leakage before it starts

Tenlines sits between your team and AI providers, scrubbing sensitive data before it leaves your environment. No workflow changes required.
