May 01, 2026
Better AI Output Starts with Better AI Input
There’s a version of prompt engineering that looks like this: open a chat window, type a request, and trust the model to figure out the rest. Sometimes it does exactly that. Other times, it produces something technically correct yet completely wrong for the situation: the wrong tone, the wrong assumptions, or the wrong framing entirely.
Thankfully, context files and new features for repeatable workflows are a fix for that. But like most things in AI and agentic work, they’re only useful if you understand what you’re really doing with them. By mastering these features, we no longer have to iterate over and over with an AI assistant or wait for the next model update to sharpen our results; instead, we can take proactive steps right now to use these tools with confidence and integrate AI as the seamless assistant we’ve heard it can be.
What Goes Into a Context File
A context file can consist of instructions, examples, and other reference materials. It’s a file or instruction set that you provide to a model before or alongside a prompt. It could be a style guide, a policy document, a glossary, a set of examples, a transcript, or a redacted data file. The point is to give the model grounding or a point of view it doesn’t have by default. You can upload context files directly into individual chats within your approved AI Assistant, or, if you need to reference that context on a recurring basis, you can leverage the new repeatable workflow features such as “Projects” in Claude, “Custom GPTs” or “Skills” in ChatGPT, and “Gems” in Gemini.
Adding more context is a strong way to improve outputs, particularly when that context is clear and well-structured. AI Assistants not only use but interpret what they’re given, so more precise inputs tend to produce more reliable outputs. This aligns with a foundational concept in computing, sometimes described as “garbage in, garbage out” (GIGO), meaning outputs reflect the quality of the inputs. So while human review remains important, stronger inputs can help streamline the process and reduce back-and-forth refinement.
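If you’re curious about the mechanics underneath these tools, here is a minimal sketch of how a context file and a prompt travel to a model together, using the OpenAI Python SDK. The file name, model choice, and prompt text are illustrative assumptions; chat features like Projects, Gems, and Custom GPTs assemble something similar behind the scenes.

```python
from pathlib import Path

from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the context file once; it rides along with every request.
style_guide = Path("agency_style_guide.md").read_text()  # hypothetical file

system_prompt = (
    "You draft public-facing agency communications. "
    "Match the tone, terminology, and formatting in the style guide below.\n\n"
    "--- AGENCY STYLE GUIDE ---\n" + style_guide
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": system_prompt},  # standing rules + context
        {"role": "user", "content": "Draft a 150-word notice about the new appointment system."},
    ],
)
print(response.choices[0].message.content)
```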
How to Write Prompts That Actually Use Context
The most common mistake is attaching a context file and writing a prompt as if the file doesn’t exist. The model will reference it, but it won’t weight* it correctly if your prompt doesn’t signal what to do with it.
A few principles that hold up in real work:
- Be explicit about the relationship between the context and the task. Don’t just attach a style guide and ask for a draft. Say: “Use the attached style guide to match the tone and sentence structure. Flag any place where you had to make a judgement call.” That instruction changes how the model uses the file.
- Scope the context to the task. A 40-page policy document is not the same as a well-scoped two-page summary of the relevant section. Models can work with large context windows, but they don’t always weight* information evenly across them.
- 💡 Tip: This type of AI workflow performs best when the core details in your context file are clear and easy to locate. Important information that’s unlabeled or buried in a long file may be used less effectively.
- ⚠️ Before you upload: Remove PII and sensitive information from your files, and confirm the tool you’re using isn’t training on your inputs. If your agency is working with sensitive content at scale, retrieval-augmented generation (RAG) can be a safer architecture for context retrieval; a rough sketch of that pattern appears after this list.
- Use examples as context. “Write in a warm but authoritative tone” is harder to operationalize than three examples of writing that actually hits that register. Examples give the model something to pattern-match against, whereas guidelines give it something to interpret. See more about increasing precision in the table below.
Remember to also define what you don’t want. It can be just as useful to say “this document represents the voice we’re moving away from” as it is to show positive examples. Clear boundaries and constraints help shape outputs, so don’t forget about them.
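For teams handling sensitive content at scale, here is a rough sketch of the RAG pattern mentioned in the warning above: instead of pasting a whole document into the prompt, you retrieve only the chunks most relevant to the question. It assumes the OpenAI embeddings API and NumPy; the chunk data and function names are illustrative.

```python
import numpy as np
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts with OpenAI's embeddings endpoint."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Hypothetical: a long policy document, pre-split into labeled chunks.
chunks = [
    "Section 1: Eligibility requirements for benefit programs ...",
    "Section 2: Recertification procedures and deadlines ...",
    "Section 3: Appeals and hearings ...",
]
chunk_vectors = embed(chunks)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embed([question])[0]
    scores = (chunk_vectors @ q) / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# Only the relevant slices go into the prompt, not the whole document.
context = "\n\n".join(retrieve("How do residents recertify their benefits?"))
```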
What Good Looks Like
A good context file isn’t a dump of every single thing you know about a topic or initiative; it’s precise, giving the model exactly what it needs to do the job. It tells the model what to use, how to use it, and what matters most. It defines success in terms the model can actually act on, and it flags the places where human judgement still needs the final word.
That last part is something we focus on a lot at GTA’s Office of AI. Even though context files make AI assistants more useful, the workflows that perform best are the ones where someone with enough subject matter expertise reviews the output and has the standing to say: that’s not right, and here’s why. This review can feel cumbersome at times, but it’s what makes the entire workflow reliable and keeps the system trustworthy.
The example near the end of this post shows how to set up a repeatable workflow using context files, a system prompt, and a day-to-day prompt. Each plays a distinct role: the context files supply background and reference material, the system prompt sets your standing expectations, and the day-to-day prompt defines that day’s specific task.
Examples of Making Repeatable Workflows More Precise

| Before | After |
|---|---|
| “Write in a professional tone” | “Write in a professional tone suitable for [Department of X]; no slang, no contractions.” |
| “Keep it short” | “Keep it under 150 words. This will appear on [Department of X’s website].” |
| “Make it engaging” | “Open with what the reader can do or get, not what the agency does. Use plain action verbs. No bullet points, as this will be read aloud by a screen reader.” |
| “Fill in the missing info” | “Some fields in this document are incomplete. Do not guess or infer. Check the source documents provided to fill in the missing info. If the answer is not there, leave the field blank and insert: [Source not found in provided materials].” |
| “Follow the standard format” | “Use the [standard agency memo] format: to/from/date/subject line/body paragraph. No headers, no bullets, no bold text.” |
| “This is for an external audience” | “This is for [audience] who [description of audience].” |
| “Refer to the previous version” | “Here is the approved language from the last policy update: [pasted]. Do not change any language that appears in quotation marks, as those phrases have been through legal review.” |
| “Be concise but thorough” | “This is a [public-facing FAQ]. Cover all required compliance points, but use no more than two plain-language sentences per answer. If a topic requires more detail, end with [for more information, contact agency].” |
| “Don’t use jargon” | “Avoid agency acronyms and technical terms unless they appear in the glossary I’ve attached.” |
| “Write formally, but keep it conversational” | “Write in plain language that a Georgia resident without a college degree can follow on the first read. Avoid phrases that sound bureaucratic, like ‘utilize,’ ‘facilitate,’ or ‘in accordance with,’ but do not use casual language like ‘hey’ or ‘just.’” |
A Practical Checklist and Framework
Does your setup include all of the core elements that AI needs to perform well?
Before building a repeatable workflow, check that all of the essentials are in place. Is the context there so the model has the right background? Is the goal clear so it knows what you are trying to achieve? Is the action defined so it knows what to do? Are the specifics included so it understands constraints and boundaries? And is the expected format clear so it knows what the final result should look like? If any of these pieces are missing, the output may be harder to trust or reuse.
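To make those five checks concrete, here is a small, hypothetical Python sketch that assembles the elements into a single day-to-day prompt and refuses to run if one is missing. The element names and wording are illustrative; in practice, you would paste the assembled text into your approved AI Assistant.

```python
def build_prompt(context: str, goal: str, action: str, specifics: str, output_format: str) -> str:
    """Assemble the five core elements into a single day-to-day prompt."""
    parts = {
        "Context": context,
        "Goal": goal,
        "Action": action,
        "Specifics": specifics,
        "Format": output_format,
    }
    missing = [name for name, value in parts.items() if not value.strip()]
    if missing:  # fail fast rather than sending an underspecified prompt
        raise ValueError(f"Missing element(s): {', '.join(missing)}")
    return "\n".join(f"{name}: {value}" for name, value in parts.items())

print(build_prompt(
    context="Use the attached style guide and service change summary.",
    goal="Help residents understand the move to appointment-based recertification.",
    action="Draft a public notice.",
    specifics="Under 500 words, plain language, flag anything needing legal review.",
    output_format="A short headline followed by two or three paragraphs.",
))
```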
Does your context file reflect how you actually evaluate the work?
Not how it’s supposed to be evaluated, but how it really is. If your style guide says one thing and your gut says another when you see the output, the file needs updating. Spend ten minutes on this before you hand the task back to the AI; it will save you many iteration cycles.
Can you describe a bad output before you see one?
You don’t need an exhaustive rubric; you need just enough to know when to stop and fix something. Whether it’s the wrong tone, the wrong assumptions, or a missing piece of important information, pick the two or three things that would actually matter and write them down. If you can’t name them upfront, you’ll be making judgement calls under pressure later.
Is there someone at your agency who can catch drift, not just errors?
Drift can be harder to catch than an objective error; it’s not always as obvious as a misspelled or misused word. “AI drift” is what happens when outputs are consistently plausible but quietly wrong. This doesn’t have to be a formal review role. It just has to be someone who knows the work well enough to notice when something is technically fine but subtly off, and that person should ideally see the output before it goes anywhere.
Below is a simple example of how these pieces can come together in a repeatable workflow in your approved AI Assistant.

Example Context Files to Upload to Your Repeatable Workflow (Projects, Gems, Skills, etc.)

Keep each file focused, clearly structured, and limited to only the information the model needs.

Example System Prompt for Your Repeatable Workflow

The system prompt is typically found in the setup or configuration section of your AI tool (often labeled “Instructions,” “Custom Instructions,” or “System Prompt”), where the default behavior and rules for the model are defined.

This GPT helps draft and visualize public-facing agency communications, including service change notices, marketing content, and outreach materials. It uses attached reference files to produce clear, accurate content that follows agency tone, approved terminology, branding standards, and plain language requirements.

Use the attached files as follows:
- Agency style guide: match tone, structure, and formatting preferences.
- Service change summary: use as the primary source for facts about the change.
- Approved terms and phrases: use required terminology and avoid unapproved wording.
- Past public notices: use as examples of format, level of detail, and audience expectations.
- Plain language requirements: keep the writing clear, concise, and easy for the public to understand.

Constraints:
- Before drafting, confirm the task scope.
- Flag any missing, conflicting, or outdated information.
- Escalate legal, policy, or other high-risk content for human review.
- Mark factual claims for subject matter expert review.
- Treat every output as a draft that requires final human approval before publication or action.

Example Day-to-Day Prompt in This Repeatable Workflow

The day-to-day prompt is the task-specific request entered each time, using the repeatable workflow’s instructions and context to produce an output for the situation at hand.

Use the attached context files to draft a public notice about the following service change in under 500 words: “Starting July 1, regional offices will move from walk-in benefit recertification to an appointment-based system. Residents may still complete recertification online, by mail, or during a scheduled in-person visit. This update is designed to shorten lobby wait times and make services more predictable for both staff and the public.”
Bottom Line
A good context file narrows the gap between what you want and what the model produces, but it doesn’t close it—that last bit is yours.
For the latest approved AI tools, use cases, and guidance, visit ai.georgia.gov/guidance/guidance-use-ai-tools.
* The word “weight” in this context refers to the priority or significance the AI assigns to specific information when generating a response.