Ethics Framework

GTA’s Office of Artificial Intelligence and the AI Council have relied on the EthicsDNA Project from the University of North Georgia to help them ethically navigate the current shift in the information revolution.

Technology is moving from a period of substitutional change to a period of infrastructural change. In substitutional change, newer, more efficient ways of doing things replace an older technology, as when word processors replaced typewriters. Infrastructural change, by contrast, alters the way we produce goods and services, the nature of work, and the way we live together in society.

With the public launch of technologies that create content autonomously, people suddenly felt that emerging technologies had capabilities that were once science fiction. That realization raised questions about our ethics: our understanding of how to live and work with these technologies.

Before the launch of ChatGPT, Claude, and other large language models (LLMs), the ethics of human interaction with technology seemed stable. Now those ethical frameworks no longer seem to hold. As members of this community, we are thrust, willingly or not, into a process called ethics making. Together, we will determine how people, organizations, and communities will adapt our historical ethical criteria to the design, acquisition, and use of tools that use artificial intelligence.

During periods of ethical uncertainty, we become aware of values-in-tension. The foundational tensions between core ethical values raise existential questions, such as what it means to be human. The next set of tensions raises practical questions that require us to discern the most ethical way to accomplish a specific task: choosing behaviors that allow all in the community to thrive.

How to Choose the Most Ethical Action

Responsibly navigating change within government operations requires deep introspection into our core values – autonomy, community, reason, and experience – and identification of where tensions exist. Individual rights or goals may clash with the needs of the many (autonomy vs. community). Cold, impartial logic may clash with prior observations or the events of our lives (reason vs. experience).

Because we don’t know how best to interact with these emerging technologies, we must (1) ask core questions and research how best to use the tool; (2) reflect as we identify and prioritize the values-in-tension; and then (3) take action.

By asking ourselves a set of questions, we can explore the tensions and then harmonize the competing values in any use of an existing or emerging AI tool. We must then check for ethical blind spots and consider the risks and benefits of the technologies.

As you work through the list below, you may not know the answers to all of the questions until you complete your research. Your answers will help you notice where one path of action favors one value over another. You will then have to reflect on which value is most important for the particular use you’re considering. Then, as you discuss the choices with other stakeholders, you will clarify which action will not only create the most harmony but also allow individuals, the organization, and the community at large to thrive.
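
To make these steps concrete, here is a minimal sketch, in Python, of how a team might record its research answers and flag values-in-tension while working through the checklist below. The worksheet names (EthicsWorksheet, ChecklistItem) and the sample entry are illustrative assumptions, not artifacts of the EthicsDNA Project or any GTA standard.

```python
# Hypothetical worksheet for the framework's three steps: ask, reflect, act.
# All names and sample content are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """One framework question, with the team's research notes."""
    section: str                  # e.g., "Progress", "Trust", "Opportunity"
    question: str                 # the question as worded in the framework
    answer: str = ""              # filled in as research is completed
    values_in_tension: list = field(default_factory=list)

@dataclass
class EthicsWorksheet:
    """Answers for one proposed AI tool, surfacing open items and tensions."""
    tool_name: str
    items: list = field(default_factory=list)

    def unanswered(self):
        # Step 1 (ask): questions that still need research.
        return [item for item in self.items if not item.answer]

    def tensions(self):
        # Step 2 (reflect): every value flagged as in tension, to prioritize
        # in discussion with stakeholders before taking action.
        return {v for item in self.items for v in item.values_in_tension}

# Example: one "Progress" question for a hypothetical drafting assistant.
sheet = EthicsWorksheet(tool_name="Document drafting assistant")
sheet.items.append(ChecklistItem(
    section="Progress",
    question="Will this tool be a resource rather than a replacement "
             "for your own human creativity and voice?",
    answer="Pilot users keep final editorial control over all output.",
    values_in_tension=["autonomy", "innovation"],
))
print(sheet.unanswered())  # empty once every question has an answer
print(sheet.tensions())    # {'autonomy', 'innovation'}
```

Recording answers this way simply makes the research and reflection steps visible; the discernment itself remains a human conversation among stakeholders.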

  • Progress: Engaging With Change

    This section highlights the following values:

    • Adaptability, where one adjusts to changing circumstances. 
    • Commitment, where one shows dedication to a purpose or cause.
    • Curiosity, where one fulfills the human desire to learn and experience new things.
    • Growth, where one increases capacity, develops, or matures over time.
    • Innovation, where one creates and implements practical new ideas.

    Existential Question: What does it mean to be human?

    From self-driving cars to robots that can process orders and prepare products for shipping, to programs that create text and images in response to a query, inanimate technologies (computers and robots) are increasingly doing tasks we have historically defined as uniquely human.

    Many who seek a bright line between humans and machines assert that our technologies cannot create novel content, innovate by connecting unrelated concepts, or decide what is best to do in a particular situation. Others see technologies as able to perform those functions. Thus, as we explore the opportunities available through creativity and innovation, we need to start by paying attention.

    Practical Question: What do you want this tool to do?

    As you pay attention, you notice what is really going on in your world. You become mindful as you clarify what you want to accomplish. By asking specific questions, you can gather information and determine what a particular tool can do. The core question is: What desired goals would make acquiring and using this AI tool (or other emerging technology) a wise investment of time and resources?

    As you are attentive, the following questions help you determine how to engage mindfully with progress.

    1. User Requirements: Have you clearly stated what you hope the emerging technology can do?
    2. Creativity vs. Continuity: Will you use the tool for continuity of existing knowledge, gathering information or editing existing content? Or will you use it to support the creation of unique content?
    3. Autonomy: Will this tool be a resource, supporting you in your activities, rather than a replacement for your own human creativity and voice?
    4. Explainability: Can the vendor clearly state in non-technical language what the tool can currently do and what safeguards are in place to meet the ethical requirements?
    5. Source of Access: Should you use this tool from your home computer or are you able to access it from your work computer?
  • Trust: Shaping Our Agreements

    This section highlights the following values:

    • Accountability, where one accepts responsibility for their actions and commitments.
    • Caring, where one’s acts convey a sense of concern for the wellbeing of others.
    • Honesty, where one provides information free from deceit or fraud. 
    • Integrity, where one acts consistently in harmony with their morals. 
    • Respect, where one treats others as worthy of value and regard.

    Existential Question: Who and what can we trust?

    In order for a community to thrive, people need to trust each other. Because designers have programmed tools to engage us in conversation and to project confidence in the accuracy of their answers, we have to determine how much of what the tools create is, in fact, trustworthy.

    Practical Question: What agreements do you need with other stakeholders?

    As you consider how to build trust among people involved in your use of the technology, you use reason and wisdom to identify the boundaries or constraints that either are or should be in place in order to show respect to all involved. These boundaries may include legal limitations, policy guidelines, or the technical limits of existing technology. Your inquiry will help you define the edges of the moral space: the place where an action moves from acceptable to unacceptable as people work to harmonize the values-in-tension.

    As you seek to be responsible, the following questions help you identify core principles to help all involved with the use of the technology build trust.

    1. Clear Commitment: Can you find the provider’s commitments to you if you purchase or use the technology? Are those commitments understandable and forthright in delineating the provider’s ethical obligations?
    2. Vendor Honesty: What systems (design, testing, and maintenance) has the provider created to ensure the technology will perform according to their description and commitment?
    3. Meaningful Consent: Do you have control over how you will use the tool? If you give the vendor information, do you get to determine what the vendor will do with your data?
    4. Ownership: Does the agreement you have with the provider clearly state who owns the tool's source code, data, and output?
    5. Transparency: Will the vendor share with you the source of information used to create the data set used for your application?
    6. Accuracy: If you get text or images created by the tool, can you find out the source of the information so you can check for accuracy? Is the information accurate enough for you to be able to complete your project?
    7. Answerability: Can you clearly trace the actions of all parties back to those responsible? Do systems exist to hold the appropriate party accountable for any misconduct?
    8. Legality: Is the intended use of the tool legal in your community? 
  • Opportunity: Ensuring All Can Participate

    This section highlights the following values:

    • Fairness, where one treats all members of the community justly. 
    • Equity, where one minimizes bias and avoids favoritism. 
    • Inclusion, where one invites others to join, share, or be part of the community. 
    • Reliability, where one acts in a way others can count on to be consistent. 
    • Teamwork, where one promotes collaborative work to satisfy mutual interests.

    Existential Question: Will all people be able to access the technology?

    AI technologies are extremely expensive to develop, so a core question is whether some people and organizations will be unable to afford the tools, creating technological haves and have-nots. Another concern is whether people can connect to the tools at all. Even in the United States, we have technology deserts where people cannot connect to a network.

    Practical Question: What do you need so that all can use the technology?

    As you reasonably attend to human needs and expectations, you can imagine how you and others might use the tool. A core concern of this question is attending to those without power: those who are at the mercy of someone else’s decisions. Thus, you must notice the power you or others in your organization have and be mindful of how you use that power.

    The answers to the following questions help you explore the interior of the moral space — the space (within the boundaries set by the previous step) where any envisioned action is acceptable but some choices are better than others. As you seek to be reasonable, the following questions will help you attend to the human needs and expectations of those with whom you interact.

    1. Courteous Partnership: As you interact with the designer and provider of the tool, do they treat you and your clients with courtesy and civility?
    2. Human Assistance: While you are using the tool, can you tell when you are interacting with a bot created by predictive or generative AI? Can you or others who will use the tool easily access human assistance when needed?
    3. Accessibility: Is the tool appropriately accessible to those with disabilities or those with limited access to or understanding of technology?
    4. Affordability: Will your clients be able to afford using the tool or will your organization or agency be able to help with the cost?
    5. Elimination of Bias: Does everyone involved (provider, procurer, and users) strive to be fair and eliminate bias in the creation of underlying data sets and the deployment of the tool?
    6. Training: Can you and your users receive appropriate training, either from the procurer or the provider, to optimize use of the tool?
  • Protection: Keeping Our Communities Safe

    This section highlights the following values:

    • Authenticity, where one is true to their own self regardless of external pressures or influences.
    • Courage, where one acts in the face of fear or discouragement.
    • Ethical excellence, where one’s conduct surpasses ordinary standards.
    • Loyalty, where one has devotion and faithfulness to a person, group, or institution.
    • Service, where one takes actions to help or provide for others.

    Existential Question: How do we balance freedom and protection?

    From worry about the dark web, to monitoring hateful or threatening speech, to concerns about the use of data, questions about protection are pervasive. Even as people try to imagine every way someone could misuse the technology, the opportunities for harm seem to multiply exponentially.

    Practical Question: What do you need so users are safe?

    This practical question helps you consider how to use the chosen emerging technologies responsibly. With these questions, your focus moves toward setting good examples and putting safeguards in place to protect all involved with the tool.

    As you seek to act responsibly, the following questions help you find solutions that protect you and all those involved.

    1. Freedom of Expression: Do the vendors and users commit to balancing freedom of expression with monitoring or protecting content for accuracy and civility?
    2. Content Quality: Does the provider protect the quality of the input and data the tool uses to generate, refine, and correct its content and output?
    3. Data Privacy: Do the vendors disclose their privacy policies and maintain the privacy of stakeholders' personal information in accordance with those policies?
    4. Data Protection and Safety: Do procurers and users ensure they protect and keep the data submitted by stakeholders and the output created by users safe according to stated policies and practices?
    5. Data Retention: Do the vendors disclose how long they will hold users’ information and what (if anything) they will do with the data?
    6. Stewardship: Is the purchase of the AI tool a beneficial use of the money and other resources, such as time, that you have available?
  • Forecast: Knowing What We Don’t Know

    Existential Question: How do we anticipate a future with technologies that continue to evolve?

    Many exploring technology’s leading edge talk about AI with consciousness and robots that can make decisions and thus act as ethical agents. Others point to the enormous amounts of water and energy needed to run these sophisticated technologies. Still others worry about computers taking everyone’s jobs, and then what? We don’t know the answers to any of these questions, but continuing to ask them is important.

    Practical Question: What might happen in the future?

    During the final set of questions, we each look to the past and present to consider the future. As you reflect on what has already happened — both anticipated and unanticipated outcomes — you can notice where new opportunities and problems may arise within your sphere of influence and circle of concern. 

    As you work through this section, you learn to live in the tension of competing values and the unknown. To lead within a community, you continue building the capacity for living with ethical tension. This skill allows you to creatively work with the novel and the unknown to fashion elegant solutions to emerging problems.

    1. Risk Assessment: Have you and others involved in the decision process considered the risks involved in adopting the tool?
    2. Ethical Blind Spots: Have you considered personal and organizational ethical blind spots you or others might have prior to adopting and deploying the technology?
    3. Benefit: Have you determined that the use of the AI tool will benefit the users or your clients/customers without creating undue obstacles?
    4. Impact: Have you and those involved in the decision process, with the information you have and within your circle of influence, considered (and mitigated to the degree possible) the environmental, social, and economic impact on your stakeholders?
    5. Navigating the Unknowns: Are you — whether you are creating, selling, or using the tool and underlying technologies — open to considering future unknowns?
  • Summary

    As innovative technologies and uses become available, we all would do well to develop the attention, agility, and resilience needed to move with the changes and adjust to the unforeseen.

    Part of this work is ensuring the change is not so rapid that it disrupts employment, the broader economy, or critical power structures. Part of this work is also determining how many of a tool’s advertised features a user can realistically implement at the time of deployment and how many remain a promise for the future.

    As we go through the discernment process over and over, we become more skilled at decision making. As we gain knowledge and experience, we can answer more of the questions on the checklist. We also know that the decisions we make today about how to ethically use emerging technologies may not be the appropriate answers for tomorrow. As we engage together in the work of ethics making, we create the moral space for dreamers, adventurers, and the cautious to engage with these emerging technologies. Along the way, we can also enjoy the ride!