Overview
Artificial Intelligence (AI) tools are widely used in teaching, research, and administrative work across the University of Montana.
AI technologies can provide significant benefits, but their use must not expose University Data to unauthorized disclosure, violate contractual or regulatory obligations, or create unmanaged institutional risk.
This article outlines mandatory data protection requirements when using AI tools in connection with University activities.
These requirements are established in the UM Artificial Intelligence Data Protection Standard.
For broader AI privacy and security guidance, visit:
https://umontana.ai/guidelines/privacy-security/
The Core Rule
Restricted or Confidential University Data must NOT be entered into Public AI tools.
Any AI tool not covered under a University contract, enterprise agreement, or University-managed hosting environment is considered a Public AI tool.
Public AI tools may retain, process, or use submitted information outside University control.
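For staff and developers who script against AI services, the core rule can be expressed as a simple pre-submission check. The Python sketch below is illustrative only: the DataClassification enum and may_submit_to_public_ai helper are hypothetical names that mirror the classification levels used in this article, not part of any UM system.

```python
from enum import Enum

class DataClassification(Enum):
    PUBLIC = "public"
    RESTRICTED = "restricted"      # Moderate Risk
    CONFIDENTIAL = "confidential"  # High Risk

def may_submit_to_public_ai(classification: DataClassification) -> bool:
    """Core rule: only Public data may be entered into a Public AI tool."""
    return classification is DataClassification.PUBLIC

# Example: a Restricted draft is blocked before it reaches a public chatbot.
if not may_submit_to_public_ai(DataClassification.RESTRICTED):
    print("Blocked: Restricted data must not be entered into a Public AI tool.")
```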
What Data Is Prohibited in Public AI Tools?
The following types of data must not be entered into Public AI tools:
- Personally identifiable information (PII)
- Student education records (FERPA-protected)
- Protected health information (PHI)
- Controlled Unclassified Information (CUI)
- Non-public financial or administrative data
- Proprietary or unpublished research data
- Restricted (Moderate Risk) University Data
- Confidential (High Risk) University Data
If you would not post the information publicly, do not enter it into a Public AI tool.
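As a supplementary safeguard, scripts can screen text for a few obvious PII formats before anything leaves University control. The sketch below is a hypothetical last-line check, not a classifier: most of the categories above (FERPA records, PHI, CUI, research data) cannot be reliably detected by pattern matching, so determining the data's classification remains the user's responsibility.

```python
import re

# Illustrative patterns for a few obvious PII formats (US SSNs, email
# addresses, phone numbers). These catch only the easy cases.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_obvious_pii(text: str) -> list[str]:
    """Return the names of any patterns that match, for human review."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

hits = flag_obvious_pii("Reach jdoe@umontana.edu; SSN 123-45-6789.")
if hits:
    print("Do not submit -- possible PII detected:", ", ".join(hits))
```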
When Can AI Be Used With University Data?
AI tools may be used with non-public University Data only if the tool is UM-Approved.
A UM-Approved AI tool is one that:
- Is covered under a University contract or enterprise agreement
- Has completed Vendor Risk Management review
- Is locally hosted or University-managed
Use of Restricted or Confidential data within UM-Approved AI tools must still comply with:
- Data classification requirements
- IT Data Security Standard
- Cloud Computing Security Standard (if applicable)
Approval of the tool does not remove user responsibility for appropriate data handling.
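Extending the earlier sketch, the relationship between tool approval and data classification can be summarized in one hypothetical check. Even a permissive result here leaves the standards listed above fully in force.

```python
from enum import Enum

class DataClassification(Enum):  # same enum as the earlier sketch
    PUBLIC = "public"
    RESTRICTED = "restricted"
    CONFIDENTIAL = "confidential"

def may_use_with_data(tool_is_um_approved: bool,
                      classification: DataClassification) -> bool:
    """Non-public data may be used only with a UM-Approved AI tool.

    Even a True result does not exhaust the user's obligations: data
    classification requirements, the IT Data Security Standard, and
    (if applicable) the Cloud Computing Security Standard still apply.
    """
    if classification is DataClassification.PUBLIC:
        return True
    return tool_is_um_approved

# Example: Confidential data with a non-approved (Public) tool is blocked.
print(may_use_with_data(tool_is_um_approved=False,
                        classification=DataClassification.CONFIDENTIAL))  # False
```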
User Responsibilities
All users are responsible for:
- Determining the classification of University Data before entering it into any AI system
- Ensuring AI use complies with University policy and standards
- Applying professional judgment when reviewing AI-generated outputs
AI-generated outputs must not replace required human oversight in high-risk decisions.
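One common way to honor this requirement in automated workflows is an explicit human-in-the-loop gate. The sketch below is a generic pattern with hypothetical names, not a UM-prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    summary: str
    high_risk: bool

def act_on(rec: AIRecommendation, human_approved: bool) -> str:
    """High-risk AI output requires an explicit human decision first."""
    if rec.high_risk and not human_approved:
        return "Held for human review"
    return "Proceed"

# Example: an AI-drafted high-risk decision is held until a person signs off.
print(act_on(AIRecommendation("Deny appeal", high_risk=True),
             human_approved=False))  # Held for human review
```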
Additional AI Privacy & Security Guidance
The University maintains additional AI privacy and security best practices at:
https://umontana.ai/guidelines/privacy-security/
When in Doubt
Before entering information into an AI tool, ask:
- What is the classification of this data?
- Is this tool covered under a University agreement?
- Could this information be retained or reused outside UM control?
- Would disclosure create legal, contractual, or reputational risk?
If you are unsure, contact:
- Your Data Steward
- The UM IT Helpdesk
- The Information Security Office