- Purpose and Use
  ChatGPT should be used for non-sensitive tasks such as answering general public inquiries, drafting reports, and providing summaries. It should not be used for generating or interpreting content that requires legal, medical, or sensitive policy-related expertise unless approved by appropriate legal or policy offices.
- Transparency
  All users interacting with ChatGPT should be made aware that they are engaging with an AI system. For instance, responses generated by ChatGPT must be clearly labeled as AI-generated content in both internal and public-facing communications.
- Accountability
  The content generated by ChatGPT should be reviewed and approved by a human before it is published or used in official government communications. ChatGPT should not be relied upon for final decision-making or official interpretations.
- Data Privacy and Security
  Personal data, sensitive government data, or confidential information should never be entered into ChatGPT. Agencies should ensure that any data provided to ChatGPT is anonymized to comply with applicable data protection and privacy requirements (e.g., PII safeguards, HIPAA); an illustrative sketch following this list shows this alongside the AI-generated labeling required under Transparency.
- Bias and Fairness
  ChatGPT is susceptible to generating biased outputs based on its training data. Agencies should actively monitor the outputs for bias and ensure they align with the state’s commitment to equity, inclusiveness, and fairness in services.
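The transparency and data-privacy items above state requirements rather than an implementation, but a minimal sketch can make them concrete. The Python below is purely illustrative: it redacts a few common PII patterns before a prompt leaves the agency and prepends an AI-generated label to the response. The submit_prompt function is a hypothetical stand-in for whatever approved ChatGPT integration an agency actually uses, and the regular expressions are examples, not an approved anonymization standard.

```python
import re

# Illustrative patterns only -- not an exhaustive or approved anonymization standard.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

AI_LABEL = "[AI-generated content - requires human review before use]"


def redact_pii(text: str) -> str:
    """Replace common PII patterns with placeholders before the text leaves the agency."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name}]", text)
    return text


def submit_prompt(prompt: str) -> str:
    """Hypothetical stand-in for an agency's approved ChatGPT integration."""
    raise NotImplementedError("Replace with the agency's approved integration.")


def draft_with_chatgpt(prompt: str) -> str:
    """Redact the prompt, submit it, and label the response as AI-generated."""
    response = submit_prompt(redact_pii(prompt))
    return f"{AI_LABEL}\n{response}"
```

The redaction step illustrates anonymization before any data is shared with the tool, and the label illustrates the disclosure expectation; neither replaces the human review required before publication.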
ChatGPT (OpenAI) Guidelines for Use
GTA recognizes ChatGPT as a natural language processing tool used to generate human-like text based on prompts. It can assist with ideation, drafting content, answering questions, and more.
Guidelines
Prohibited Uses
- Legal or Policy Advice
  ChatGPT cannot be used to provide official legal interpretations, advice, or policy recommendations. AI-generated content is not a substitute for expert human analysis in these sensitive areas.
- Handling Confidential or Sensitive Information
  ChatGPT cannot be used to process or analyze sensitive government data, personally identifiable information (PII), or classified information. Data security risks and privacy concerns make this type of use unacceptable.
- Making Autonomous Decisions
  ChatGPT cannot be used to make decisions that impact citizens' rights, benefits, or access to government services without human review. AI should not be the final authority in decision-making processes.
- Bypassing Human Review
  No content generated by ChatGPT may be published or used in official communications without prior human review and approval; unreviewed AI outputs may contain inaccuracies or biases. A brief illustrative sketch of such a review gate follows this list.
- Impersonating Government Officials
  ChatGPT cannot be used to generate statements that falsely appear to come directly from a government official. All AI-generated content must be clearly labeled as such.
- Generating Content with Malicious Intent
  ChatGPT cannot be used to generate harmful, misleading, or biased content. This includes the generation of discriminatory language, misinformation, or content designed to deceive or manipulate.
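As a companion to the human-review items above, the hypothetical sketch below shows one way a publishing step could refuse to release AI-generated drafts until a named human reviewer has approved them. The class and function names are assumptions made for the example, not part of these guidelines.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AiDraft:
    """An AI-generated draft and its review status."""
    body: str
    reviewer: Optional[str] = None  # name of the human reviewer, once one signs off
    approved: bool = False


def approve(draft: AiDraft, reviewer: str) -> None:
    """Record that a named human has reviewed and approved the draft."""
    draft.reviewer = reviewer
    draft.approved = True


def publish(draft: AiDraft) -> str:
    """Release the draft only if a human has approved it; otherwise refuse."""
    if not (draft.approved and draft.reviewer):
        raise PermissionError("AI-generated content requires human review and approval before publication.")
    return f"[AI-generated content, reviewed by {draft.reviewer}]\n{draft.body}"
```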