Guidelines for State Organizations

The Georgia Technology Authority (GTA), in collaboration with the AI Council, adopted these guidelines for state entities and employees to expand upon the principles and responsibilities outlined in the Enterprise AI Responsible Use Policy (PS-23-001) and the AI Responsible Use Standard (SS-23-002).

AI technologies offer significant benefits for state operations, from improving data analysis and automating routine tasks to supporting informed decision-making and streamlining processes for employees across departments. However, it is important to follow established AI guidelines to realize these benefits responsibly. Proper use helps prevent the leakage of sensitive information to external models, reducing the risk of data breaches, and helps maintain the accuracy and reliability of outputs, keeping generated content free of errors and bias. By following these guidelines, state employees can use AI responsibly, producing high-quality work and maintaining public trust in state-led initiatives.

These guidelines apply to all forms of generative AI, including but not limited to text, image, video, and audio generation.

When engaging with generative AI, all state employees must abide by the five Guiding Principles below to ensure the safety and welfare of all stakeholders with a vested interest in the data used by and created with generative AI tools.

  • Get Approval Before You Start

Prior authorization from GTA is required for any generative AI tool intended for regular organizational use, including but not limited to AI-driven transcription, summarization, note-taking, or decision-making assistance. Unauthorized use of such tools is strictly prohibited.

    GTA is working on specific guidance for commonly requested tools and will continue to add to this list of AI tools.

To request authorization, use the webform.

  • Use AI Tools Safely and Properly

Use only pre-vetted tools: State employees should use only AI tools that GTA has vetted, from pre-approved vendors. Note that approval of one version of an AI model does not necessarily imply that other versions of the same model are also approved, and an approved AI model can have its approval revoked at any time. Employees are encouraged to consult with GTA regularly for updates and to verify they are using compliant versions of AI tools.

    Record prompts: Employees should create a record of queries and responses outside of the generative AI software or platform for future reference. This practice allows for accurate tracking and auditing, aiding in tracing decision-making processes, and can also help address issues that may arise from AI tool use.
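As a minimal illustrative sketch of the record-keeping practice above (the file name, tool name, and record fields are hypothetical examples, not a GTA requirement), prompts and responses could be appended to a simple JSON Lines log kept outside the AI platform:

```python
import json
from datetime import datetime, timezone

def log_interaction(log_path, tool, prompt, response):
    """Append one AI interaction to a JSON Lines log for later auditing."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # which approved AI tool was used
        "prompt": prompt,      # the query as entered
        "response": response,  # the output received
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: one line per interaction, easy to search and audit.
log_interaction("ai_prompt_log.jsonl", "ExampleChat",
                "Summarize the Q3 report.", "The Q3 report shows ...")
```

An append-only, timestamped format like this makes it straightforward to trace which outputs fed into a given decision.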
    Review AI-generated content: To manage the risks associated with AI-generated content, employees should apply careful review methods. This includes cross-checking AI outputs with trusted sources, verifying the accuracy of information, and assessing for possible biases or inaccuracies. Refer to the strategies below for more details.

Keep a Human-in-the-Loop (HITL) approach: AI tools, including generative AI and other automated decision-making systems, should never be the sole agents in any decision-making. For instance, while an HR department may consult AI platforms when making hiring decisions, it may not use AI to scan and reject resumes automatically. Keeping a human in the loop ensures that AI systems are properly kept in check.

Cite properly: Anything produced with the assistance of AI should be cited correctly. See How to Properly Cite AI-Generated Content.

    Double-check AI-generated facts: One of AI’s greatest weaknesses is hallucinations, meaning that it sometimes generates inaccurate facts and presents them as truth. For any fact provided by an AI, find a reliable source that corroborates it.

Do not enter personally identifying or confidential information: Many generative AI models use the information users enter to further train the model. For this reason, do not provide them with confidential or private state information or personal records. If employees are uncertain whether specific data may be entered into a generative AI tool, they should first consult their department's data privacy officer or designated compliance authority. When in doubt, err on the side of caution and avoid inputting potentially sensitive information without approval. Additionally, avoid using personal accounts, such as personal email addresses, within any software, so that all content generated for government purposes remains secure. For applications requiring users' data, ensure that such data is used only with user consent.

Create a culture of transparency: All state employees should be open and honest about their AI use. Cite and acknowledge all AI-generated, AI-brainstormed, or AI-edited content so that others know where your information comes from. Teach and learn from other employees, and help one another ensure that all AI-assisted work is cited honestly. Employees should be able to disclose AI usage openly, without fear of severe repercussions. This transparency enables the early detection of improper AI use, preventing potential impacts on sensitive information.

Assess risk level: AI has the potential to greatly improve productivity, but this is not a guarantee. Employees should use common sense and exercise caution in deciding whether AI is right for a given task. See Mitigating the Risks of Generative AI.

  • Be Vigilant in Virtual Meetings

Use of third-party AI note-taking tools is prohibited in State of Georgia Microsoft Teams meetings and on any other conferencing platform. Meeting hosts may opt to use the recording and transcription services built into Teams.

Meeting hosts must be diligent when admitting participants to virtual meetings to ensure no AI note-taking bots join. One indicator is a “.ai” suffix in a participant's display name.
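A hedged sketch of how a host's staff might pre-screen a participant list before admitting attendees (the name patterns below are illustrative examples, not an official GTA list, and a flagged name still requires human judgment):

```python
# Substrings in display names that often suggest an AI note-taking bot.
# These patterns are illustrative only; review flagged names manually.
SUSPECT_PATTERNS = (".ai", "notetaker", "bot")

def flag_possible_bots(participants):
    """Return display names worth a second look before admitting them."""
    return [
        name for name in participants
        if any(pattern in name.lower() for pattern in SUSPECT_PATTERNS)
    ]

# Hypothetical waiting-room roster:
print(flag_possible_bots(["Jane Doe", "Meeting Notetaker", "Acme.AI Assistant"]))
# → ['Meeting Notetaker', 'Acme.AI Assistant']
```

A simple substring check like this catches obvious bot names, but hosts should still verify each unfamiliar participant individually.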

    While AI capabilities for recording and transcribing virtual meetings offer convenience, they should be used thoughtfully. Agencies must ensure that the use of AI tools in virtual meetings complies with all relevant laws and regulations, including those related to data protection, intellectual property, and labor laws. AI should not be used to manipulate or misrepresent the contributions of participants. Be mindful of the potential for AI to misinterpret or misrepresent human communication. AI outputs should be used only to supplement, not replace, human judgment.

    Once created, meeting recordings and transcriptions become public records and are subject to retention rules. Further, there are storage and cost considerations associated with maintaining recordings. Meeting hosts are responsible for determining the need for recordings and automated transcriptions and keeping any records created.

  • Do Not Put Private Data at Risk

The goal of these guidelines is to reduce the risk of employees using Shadow AI, meaning the unsanctioned or undisclosed use of AI tools. Using AI without proper citation and disclosure can expose sensitive state or personal information to generative AI models, communicate inaccurate information to the public, or log incorrect information into state records without accountability.

    All state employees using AI should understand that transparency builds trust, supports accountability, and encourages collaborative efforts, leading to increased productivity, reduced repetitive tasks, and more efficient research. However, these benefits are only possible if everyone knows what work is AI-generated. All AI-generated content – including text, images, videos, and audio – must be clearly labeled as AI-generated and double-checked to ensure it is free of inaccurate information, AI hallucinations, or bias.

    Strategies for Data Privacy and AI:

    • Keep personal and work materials separate, and create an account using your work email specifically for GenAI materials.
    • Opt out of data collection on any tool you use, particularly those involving pictures.
    • Protect all data used by AI systems from unauthorized access or breaches. This includes regularly changing passwords, minimizing data retention by regularly clearing chats, and conducting regular audits to ensure compliance.
    • Do not enter personal and/or sensitive data into a GenAI model.
    • The use of GenAI tools must be consistent with Georgia’s privacy laws, such as the Georgia Computer Data Privacy Act (GCDPA).

  • Beware of Bias

Generative AI models can inadvertently perpetuate biases based on race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Bias can occur at any stage, from data collection and labeling to model training and deployment. The most common types of bias in AI include, but are not limited to:

    • Algorithm Bias: This bias can result when users phrase a question improperly or when the feedback given after an AI's mistake is not specific enough. When querying an AI, word the question carefully and confirm that the AI is responding to the question actually asked.
    • Cognitive Bias: This bias results from implicit biases users possess. If a user’s query to an AI model contains bias, the AI may reproduce this bias in its output. When querying an AI, think about whether the question being asked presupposes things that are not necessarily true.
    • Confirmation Bias: This bias results from users being quick to accept output that matches their expectations. When querying an AI, carefully consider whether the answer is correct and consult a third party or non-AI source before proceeding, regardless of whether the answer is what was expected.

It is important to be aware of these types of bias in any work produced by AI and to screen the work for bias before putting it into use.