Participants learn why responses may vary, how prompts and settings influence model behavior, and how to evaluate outputs for accuracy, tone, and task alignment. The course also highlights built-in guardrails and best practices for handling and redacting sensitive data to support secure and compliant use.

What You’ll Learn

Training Objectives

  • Explain why ChatGPT responses may vary across sessions and configurations
  • Describe how prompts, system messages, and configurations can be intentionally shaped to affect model behavior
  • Identify the guardrails built in to prevent misuse, including native safety layers and user-side prompt and safety practices to consider
  • Describe output evaluation techniques, frameworks for determining whether a response meets a task need, and how to use follow-up prompts to redirect undesirable outputs
  • Walk through best practices for data handling, including how employees can identify and redact sensitive data before input

Learning Outcomes

  • Explain in plain language why ChatGPT responses may vary across sessions and configurations, and the ways the model’s behavior can be intentionally shaped
  • Apply basic evaluation techniques (fact check, tone check, goal check, reasoning check, chain-of-thought review)
  • Describe the guardrails available in ChatGPT Enterprise and additional techniques (like data masking) that employees can apply on their own to support responsible and compliant use in their state government setting

Meet the Presenter

Close-up photo of David Sperry in a dark blazer and blue shirt

David Sperry is an AI Adoption and Deployment Manager at OpenAI, where he helps large government organizations securely deploy ChatGPT Enterprise and scale workforce adoption. Previously at Amazon Web Services, he specialized in guiding federal financial agencies through AI adoption and deployment. A U.S. Army veteran, David brings a mission-driven approach to enabling public sector organizations to realize impact with AI.