GenAI Assistants

GTA recognizes ChatGPT as a natural language processing tool used to generate human-like text in response to prompts; similar tools include Claude, MS Copilot Chat, Perplexity, Gemini, and many others. These tools can assist with ideation, drafting content, answering questions, and more.

Guidance

  • Purpose and Use

GenAI Assistants (such as ChatGPT) should be used for non-sensitive tasks such as answering general public inquiries, drafting reports, and providing summaries. They should not be used to generate or interpret content that requires legal, medical, or sensitive policy-related expertise unless approved by the appropriate legal or policy offices.

  • Transparency

    All users interacting with GenAI Assistants should be made aware that they are engaging with an AI system. For instance, responses generated by ChatGPT or a similar tool must be clearly labeled as AI-generated content in both internal and public-facing communications.
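    One way to operationalize this labeling requirement is to attach a standard disclosure to every AI-generated response before it is stored or sent. The sketch below is illustrative only; the disclosure wording and the `label_ai_content` helper are assumptions, not an adopted standard.

    ```python
    # Hypothetical agency-defined disclosure text (an assumption, not official wording).
    AI_DISCLOSURE = "This response was generated with the assistance of an AI tool."

    def label_ai_content(response: str) -> str:
        """Append the AI-generated-content disclosure if it is not already present."""
        if AI_DISCLOSURE not in response:
            response = f"{response}\n\n[AI-generated content] {AI_DISCLOSURE}"
        return response
    ```

    Checking for an existing disclosure keeps the label from being duplicated when content passes through the pipeline more than once.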

  • Accountability

    The content generated by a GenAI Assistant should be reviewed and approved by a human before it is published or used in official government communications. The Assistant should not be relied upon for final decision-making or official interpretations.

  • Data Privacy and Security

    Personal data, sensitive government data, and confidential information should never be entered into ChatGPT or similar tools. Agencies should ensure that any data provided to a GenAI Assistant is anonymized so that its use complies with relevant data protection and privacy requirements (e.g., HIPAA and applicable PII protections).
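    As a minimal sketch of pre-prompt anonymization, a redaction step can replace recognizable identifiers with placeholder tokens before text ever leaves the agency. The patterns below are illustrative assumptions covering a few common U.S. formats; a production deployment would use a vetted redaction library and agency-approved rules rather than this simple list.

    ```python
    import re

    # Hypothetical patterns for a few common PII formats (assumptions, not a
    # complete or approved rule set).
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def redact_pii(text: str) -> str:
        """Replace recognizable PII with placeholder tokens before prompting."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Summarize the request from jane.doe@example.com, SSN 123-45-6789."
    print(redact_pii(prompt))
    ```

    Redacting before the prompt is sent, rather than after a response comes back, is the key design choice: the sensitive values never reach the external tool at all.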

  • Bias and Fairness

    All GenAI Assistants are susceptible to generating biased outputs based on their training data. Agencies should actively monitor the outputs for bias and ensure they align with the state’s commitment to equity, inclusiveness, and fairness in services.

Prohibited Uses

  • Legal or Policy Advice

    GenAI Assistants cannot be used to provide official legal interpretations, advice, or policy recommendations. AI-generated content is not a substitute for expert human analysis in these sensitive areas.

  • Handling Confidential or Sensitive Information

    GenAI Assistants cannot be used to process or analyze sensitive government data, personally identifiable information (PII), or classified information. Data security risks and privacy concerns make this type of use unacceptable.

  • Making Autonomous Decisions

    GenAI Assistants cannot be used to make decisions that impact citizens' rights, benefits, or access to government services without human review. AI should not be the final authority in decision-making processes.

  • Bypassing Human Review

    No content generated by a GenAI Assistant can be published or used in official communications without prior human review and approval. Unreviewed AI outputs may contain inaccuracies or biases.

  • Impersonating Government Officials

    GenAI Assistants cannot be used to generate statements that give the false impression they are coming directly from a government official. All AI-generated content must be clearly labeled as such.

  • Generating Content with Malicious Intent

    GenAI Assistants cannot be used to generate harmful, misleading, or biased content. This includes the generation of discriminatory language, misinformation, or content designed to deceive or manipulate.