UM AI Data Protection Standard

Issued Under Authority of: UM Information Security Policy
Responsible Office: UM Information Security Office
Category: Data Protection and Lifecycle

IN PLAIN LANGUAGE

AI tools can be powerful for teaching, research, and administrative work — but they come with real data risks. This standard draws a clear line: sensitive or confidential University Data must never be entered into public AI tools like free chatbots or consumer AI services that aren't covered by a University contract. This includes student records, health information, personal data, and unpublished research. If an AI tool has been reviewed and approved by the University, it may be used in accordance with normal data handling rules. You are responsible for knowing how sensitive your data is before using any AI tool with it, and AI-generated outputs should always have appropriate human review before being acted on.


1. Purpose

The purpose of this Standard is to establish data protection requirements for the use of Artificial Intelligence (AI) tools in connection with University of Montana business, research, instructional, and administrative activities.

AI technologies can provide significant benefit to the University community. However, use of AI tools must not expose digital University Data to unauthorized disclosure, violate contractual or regulatory obligations, or create unmanaged institutional risk.

This Standard defines mandatory data protection boundaries for AI use and supports the University's Information Security Program.


2. Scope

This Standard applies to:

  • Faculty, staff, student employees, affiliates, contractors, and third parties acting on behalf of the University
  • Students when handling Restricted or Confidential University Data
  • AI tools used in connection with University business, research, instruction, or administrative functions

This Standard applies to digital University Data as defined in the UM Data Governance Policy. It does not govern non-digital records or general instructional guidance unrelated to University Data.


3. Definitions

Artificial Intelligence (AI) — Software systems that perform tasks typically requiring human intelligence, including generative text, image creation, analytics, code generation, or automated decision support.

Public AI Tool — An AI tool or service that is publicly accessible and not governed under a University enterprise agreement, contract, or University-managed hosting environment.

UM-Approved AI Tool — An AI tool that is covered under a University enterprise agreement or contract, has undergone Vendor Risk Management review, or is locally hosted or University-managed.

University Data — As defined in the UM Data Governance Policy.


4. Core Requirements

4.1 Prohibited Data Use in Public AI Tools

Restricted (Moderate Risk) and Confidential (High Risk) University Data must not be entered into Public AI Tools that are not governed under a University contract, enterprise agreement, or University-managed environment.

This includes, but is not limited to:

  • Personally identifiable information (PII)
  • Student education records
  • Protected health information (PHI)
  • Controlled Unclassified Information (CUI)
  • Non-public financial or administrative data
  • Proprietary research data

Users are responsible for understanding that information submitted to Public AI Tools may be retained, processed, or used for model training outside University control.

4.2 Use of UM-Approved AI Tools

AI tools covered under a University enterprise agreement, contract, or University-managed hosting environment may be used in accordance with:

  • IT Data Security Standard
  • Cloud Computing Security Standard (if applicable)
  • Vendor Risk Management Standard
  • IT Asset Management Standard (if locally hosted)

Use of Restricted or Confidential University Data within UM-Approved AI Tools must comply with applicable data classification and handling requirements.

4.3 User Accountability

Users are responsible for:

  • Determining the classification of University Data prior to entering it into any AI system
  • Ensuring AI tool use complies with University policies and standards
  • Exercising appropriate professional judgment when relying on AI-generated outputs

AI-generated outputs must not replace required human oversight in high-risk decision-making processes.


5. Relationship to UM AI Guidance

The University maintains guidance on AI privacy and security practices at umontana.ai/guidelines/privacy-security.

That guidance provides recommended best practices. This Standard establishes mandatory data protection requirements.


6. Exceptions

Exceptions to this Standard must:

  • Be documented with justification
  • Be approved by the CISO or designee


7. Enforcement

Failure to comply with this Standard may result in:

  • Restriction of access to AI tools or University systems
  • Corrective action consistent with University policy
  • Additional review or monitoring where appropriate


8. Review and Maintenance

This Standard must be reviewed at least annually and updated as necessary to reflect evolving AI technologies, regulatory developments, and institutional needs.

Details

Article ID: 171025
Created: Thu 3/19/26 5:05 PM
Modified: Thu 4/9/26 11:24 AM