Every company using AI needs an acceptable use policy. Without clear guidelines, employees either avoid AI entirely or use it recklessly.
Below is a template you can adapt for your organization. It's written in plain language because policies nobody reads are useless.
[COMPANY NAME] AI Acceptable Use Policy
Version 1.0 | Effective Date: [DATE]
Purpose
This policy establishes guidelines for using artificial intelligence tools at [Company Name]. Our goal is to enable productive AI use while protecting company data, maintaining quality standards, and complying with regulations.
Scope
This policy applies to all employees, contractors, and third parties who use AI tools for company work, whether on company devices or personal devices used for work purposes.
Approved AI Tools
The following AI tools are approved for work use:
- [Tool 1, e.g., ChatGPT Enterprise]: General writing, research, analysis
- [Tool 2, e.g., Claude Pro]: Document review, complex writing
- [Tool 3, e.g., GitHub Copilot]: Code assistance (engineering team only)
Using unapproved AI tools for work is prohibited. To request approval for a new tool, contact [IT/Security Team].
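An approved-tools list like the one above can also be kept in machine-readable form, for onboarding checks or IT tooling. A minimal sketch in Python, reusing the example tools from the template (tool names, team names, and fields are placeholders, not a recommended schema):

```python
# Hypothetical machine-readable version of the approved-tools list.
# Tool names, use descriptions, and team names are placeholders.
APPROVED_TOOLS = {
    "ChatGPT Enterprise": {"uses": ["writing", "research", "analysis"], "teams": "all"},
    "Claude Pro": {"uses": ["document review", "complex writing"], "teams": "all"},
    "GitHub Copilot": {"uses": ["code assistance"], "teams": ["engineering"]},
}

def is_approved(tool: str, team: str) -> bool:
    """True if `tool` is approved for members of `team`."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return False
    return entry["teams"] == "all" or team in entry["teams"]

print(is_approved("GitHub Copilot", "marketing"))  # -> False
```

Keeping the list in one place means the policy document, the request workflow, and any enforcement tooling stay in sync.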
Data Classification
You MAY use AI with:
- Public information (published content, public websites)
- Generic business communications (templates, standard procedures)
- Anonymized or sample data with no identifying information
- Your own original writing and ideas
You MAY NOT use AI with:
- Customer personal data (names, addresses, account numbers)
- Employee personal data (performance reviews, salary information)
- Financial data (unpublished results, forecasts, pricing)
- Legal documents and contracts
- Trade secrets and proprietary information
- Healthcare information (PHI)
- Any data marked "Confidential" or "Internal Only"
If you're unsure which category data falls into, ask your manager or contact [Data Privacy Team]. Until you have an answer, don't input it.
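Parts of the MAY NOT list can be backstopped automatically. Below is a minimal sketch, in Python, of a pre-submission check that flags common identifier patterns before text is pasted into an AI tool. The pattern names and regexes are illustrative only; a real data loss prevention (DLP) tool covers far more cases and should not be replaced by a script like this:

```python
import re

# Hypothetical pre-submission scan: flags a few common identifier
# patterns before text is sent to an AI tool. Illustrative, not
# exhaustive -- a real DLP tool handles many more data types.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of identifier patterns found in `text`."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com or 555-867-5309."
    print(flag_sensitive(sample))  # -> ['email', 'phone']
```

A check like this catches obvious slips; it does not catch trade secrets, confidential strategy, or anything else that lacks a recognizable pattern, which is why the human judgment rule above still applies.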
Quality and Accuracy
AI outputs require human review before use. You are responsible for:
- Verifying factual claims (AI can make things up)
- Checking calculations and data (AI makes math errors)
- Reviewing for tone and appropriateness
- Ensuring compliance with company standards
Never submit AI output without review. You are accountable for work you submit, regardless of how it was created.
Disclosure
You do not need to disclose AI assistance for internal documents and routine communications.
You should disclose AI assistance for:
- External publications (articles, white papers)
- Client deliverables (when required by contract)
- Regulatory submissions
Check with your manager if unsure about disclosure requirements.
Prohibited Uses
The following uses of AI are prohibited:
- Generating content that violates law or company policies
- Creating misleading or deceptive content
- Impersonating individuals (deepfakes, fake communications)
- Circumventing security controls or company policies
- Making automated decisions about hiring, firing, or compensation
- Accessing or attempting to access systems without authorization
Security Requirements
- Use company-provided AI accounts, not personal accounts
- Do not share account credentials
- Log out of AI tools when finished
- Report security concerns to [Security Team]
- Do not install AI browser extensions without IT approval
Training Requirements
All employees must complete AI training before using AI tools for work. Training covers:
- Tool capabilities and limitations
- Data handling requirements
- Security best practices
- This policy
Refresher training is required annually.
Violations
Policy violations may result in:
- Revocation of AI tool access
- Disciplinary action up to and including termination
- Legal action if laws are violated
If you become aware of a policy violation, report it to [Manager/HR/Ethics Hotline].
Questions
For questions about this policy:
- General questions: [Contact]
- Data classification: [Data Privacy Team]
- Security concerns: [Security Team]
- Tool requests: [IT Team]
Policy Updates
This policy will be reviewed quarterly. AI capabilities evolve rapidly, and our policies will adapt accordingly. Employees will be notified of significant changes.
How to Use This Template
- Customize the brackets: Replace [bracketed items] with your company's specific information
- Review with legal: Have your legal team review before distribution
- Adapt to your industry: Add industry-specific requirements (HIPAA, FINRA, etc.)
- Train employees: Don't just distribute it; make sure people understand it
- Update regularly: Review at least quarterly as AI tools evolve
Common Customizations
For Healthcare (HIPAA)
Add an explicit prohibition on PHI in any AI tool, regardless of the vendor's enterprise-tier privacy claims. Require a signed Business Associate Agreement (BAA) before any AI vendor can process patient data.
For Finance (SOX, FINRA)
Add audit trail requirements. Prohibit AI-drafted financial statements unless a human verifies every figure. Consider prohibiting AI for client communications entirely.
For Legal
Add client confidentiality requirements. Prohibit AI for privileged communications. Disclose AI use in court filings where the jurisdiction requires it.
Need Help Creating Your AI Policy?
Laibyrinth helps companies create comprehensive AI governance frameworks. We'll customize policies for your industry, integrate with existing compliance requirements, and train your team.
Get Expert Help