Guidance for the Use of AI Tools

This guidance is intended to help Georgia state agencies use AI tools responsibly, securely, and effectively. AI tools can enhance productivity, improve service delivery, and support decision-making, but they must be used in a way that upholds public trust, protects sensitive information, and aligns with state policies and ethical standards.

Core Principles

  • Human Oversight

    AI should support, not replace, human judgment. All AI-generated content, insights, or recommendations must be reviewed and validated by a responsible individual before use, especially in official communications or decision-making.

  • Transparency

    Agencies must clearly disclose when AI tools are used. AI-generated content should be labeled appropriately, and staff should be aware when AI is involved in content creation, analysis, or communication.

  • Accountability

    Agencies are responsible for all outputs produced using AI tools. Processes should be in place to review, approve, and audit AI-assisted work. Audit logs and documentation should be maintained where applicable to support compliance and oversight.

  • Data Privacy and Security

    Sensitive, confidential, or regulated data, including personally identifiable information (PII), protected health information (PHI), or classified materials, must not be entered into AI tools unless explicitly authorized and secured within approved environments. Agencies must follow all applicable data protection standards, including encryption, access controls, and data classification requirements.

  • Fairness and Bias Mitigation

    AI tools can reflect or amplify bias. Agencies should actively monitor outputs to ensure they are fair, accurate, and inclusive, and take steps to mitigate any unintended bias, particularly in content or analysis that impacts public services.

  • Training and Awareness

    Staff must be properly trained on the appropriate use of AI tools, including their capabilities, limitations, and risks. Ongoing training is recommended to keep pace with evolving technologies and policies.

Acceptable Use

AI tools may be used to support tasks such as:

  • Drafting and editing content
  • Summarizing information or documents
  • Analyzing non-sensitive data and identifying trends
  • Supporting internal research, brainstorming, and knowledge management

These tools should be used to enhance efficiency and quality, not as a sole source of truth or final authority.

Prohibited Use

Agencies must not use AI tools to:

  • Process Sensitive or Regulated Data

    Do not input or analyze PII, PHI, or other confidential or classified information without explicit authorization and appropriate safeguards.

  • Make Autonomous or High-Stakes Decisions

    Do not rely on AI as the sole decision-maker for legal, policy, financial, or service-related outcomes that impact individuals or communities.

  • Provide Official Legal, Medical, or Policy Advice

    Do not use AI-generated content as a substitute for qualified professional judgment in sensitive or regulated areas.

  • Bypass Human Review

    Do not publish or act on AI-generated outputs without proper human validation.

  • Engage in Discriminatory or Harmful Practices

    Do not generate or distribute biased, misleading, or harmful content, or use AI in ways that unfairly exclude or disadvantage individuals or groups.

  • Misrepresent AI Capabilities

    Do not suggest that AI systems operate autonomously or replace human decision-making in official capacities.

  • Conduct Unauthorized Surveillance or Profiling

    Do not use AI to monitor, track, or profile individuals without legal authority, consent, and clear justification.

  • Train AI Models with Agency Data Without Approval

    Do not use internal data to train or fine-tune AI systems without explicit authorization and governance review.

Implementation Considerations

  • Use AI tools only within approved, secure environments and enterprise systems when available.
  • Limit access to authorized users and maintain appropriate access controls, such as multifactor authentication (MFA) and role-based permissions.
  • Ensure any AI-generated outputs used publicly meet accessibility standards, such as WCAG and Section 508.
  • Regularly audit AI usage and configurations to ensure compliance with state policies and security requirements.
  • Report any suspected data exposure, misuse, or security issues in accordance with agency and Georgia Technology Authority (GTA) protocols.
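As one illustrative sketch of the safeguards above, an agency workflow might screen text for obvious PII patterns before it ever reaches an AI tool. The patterns and function below are assumptions for demonstration only; real data-classification rules are defined by agency policy and state security standards, not by this example:

```python
import re

# Illustrative patterns only -- actual data-classification rules
# come from agency policy, not this sketch.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in `text`.

    An empty list means no obvious PII was detected; it does NOT
    guarantee the text is safe to submit to an AI tool.
    """
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

# Example: flag a prompt containing an SSN-like string for review
# instead of submitting it.
findings = screen_for_pii("Resident SSN is 123-45-6789.")
if findings:
    print(f"Blocked: possible PII detected ({', '.join(findings)})")
```

A screen like this supplements, but never replaces, the human review and approved-environment requirements described in this guidance.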

Find the Right AI Tool

Browse approved AI tools by use case, compare features, and review guidance to ensure secure and responsible use.