System Prompts Explained

A deep dive into the structure, components, and purpose of system prompts that control the HFIM chatbot's behavior.

What Is the System Prompt?

Definition

The system prompt is the master set of instructions given to the AI before it answers any user question. It defines:

  • Identity: Who the AI is (HFIM assistant, UGA resource, etc.)
  • Purpose: What it's designed to do (answer questions, provide guidance)
  • Knowledge boundaries: What it knows (HFIM program info) and doesn't know (personal advice)
  • Behavior rules: How it should respond (cite sources, admit uncertainty)
  • Tone and style: How it should communicate (professional, helpful, concise)

Analogy: Like a job description + employee handbook for the AI


Where to Find It

Location: Prompts Configuration → System Prompt tab

Access:

  1. Log in to admin panel
  2. Click "⚙️ Prompt Configuration" in sidebar
  3. Click "System Prompt" tab
  4. View current prompt in text editor

System Prompt Structure

Typical Components

A well-structured system prompt contains:

  1. Identity Section - Who the AI is
  2. Knowledge Scope - What topics it covers
  3. Core Rules - Fundamental guidelines
  4. Response Format - How to structure answers
  5. Edge Cases - Handling uncertainty and errors
  6. Examples - Sample responses (optional)
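As a rough illustration, these six components can be thought of as named sections joined in a fixed order. A minimal sketch (the section names, variables, and content below are illustrative assumptions, not the HFIM admin panel's actual internals):

```python
# Hypothetical sketch: assembling a system prompt from named components.
# Section names and content are illustrative, not the panel's internals.
SECTIONS = {
    "Identity": "You are Sage, the official virtual assistant for the UGA HFIM program.",
    "Knowledge Scope": "You cover courses, requirements, faculty, and careers.",
    "Core Rules": "Cite sources. Never guess. Admit uncertainty.",
    "Response Format": "Start with a direct answer; use bullets for lists.",
    "Edge Cases": "Ask for clarification when a question is ambiguous.",
}

def build_system_prompt(sections: dict[str, str]) -> str:
    """Join the components into one prompt, preserving the order above."""
    return "\n\n".join(f"**{name}**:\n{text}" for name, text in sections.items())

prompt = build_system_prompt(SECTIONS)
```

Keeping the components separate like this makes each one easy to review or edit on its own before the combined prompt is saved.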

Component 1: Identity Section

Purpose: Establishes the AI's role and persona

Example:

You are Sage, the official virtual assistant for the University of Georgia's
Hospitality and Food Industry Management (HFIM) program. Your purpose is to
help students, prospective students, and stakeholders learn about the HFIM
program, courses, requirements, and career opportunities.

What it does:

  • Sets the AI's name ("Sage")
  • Defines the domain (UGA HFIM)
  • Clarifies the target audience (students, prospective students)
  • Establishes authority (official assistant)

Why it matters: Gives the AI context for interpreting questions and framing responses


Component 2: Knowledge Scope

Purpose: Defines what the AI knows and what it doesn't

Example:

**Your Knowledge Base**:
You have access to current HFIM program information including:
• Course descriptions and prerequisites
• Degree requirements and program structure
• Faculty information and research areas
• Internship and career path opportunities
• Application processes and admission requirements

**Out of Scope**:
You do NOT have access to:
• Individual student records or grades
• Personal financial information or billing details
• Real-time class availability or schedules
• Non-HFIM UGA programs (refer to other departments)

What it does:

  • Lists topics the AI can confidently discuss
  • Clarifies what's beyond the AI's knowledge
  • Sets expectations for users

Why it matters: Prevents the AI from overstepping boundaries or making up information


Component 3: Core Rules

Purpose: Fundamental guidelines that govern all responses

Example:

**Core Response Rules**:
1. **Accuracy First**: Only provide information you can verify from your
knowledge base. Never make up information or guess.

2. **Always Cite Sources**: When providing factual information, cite the
source document (e.g., "According to the HFIM Handbook 2026, page 12...").

3. **Admit Uncertainty**: If you're unsure or don't have enough information,
say so clearly: "I don't have enough information to answer that question
accurately. I recommend contacting [relevant department]."

4. **Be Helpful**: Provide actionable information. Don't just say "contact
the department" without explaining why or who to contact.

5. **Stay On Topic**: Focus on HFIM-related questions. Politely redirect
off-topic questions.

What it does:

  • Prevents hallucinations (made-up info)
  • Enforces transparency (source citation)
  • Guides behavior (helpfulness, focus)

Why it matters: These are the "laws" the AI must follow; they are the most critical part of the prompt


Component 4: Response Format

Purpose: Controls how information is presented

Example:

**Response Format Guidelines**:
• Keep responses **concise but complete** (typically 50-200 words)
• Use **bullet points** for lists and multi-part answers
• Use **numbered steps** for processes or sequences
• **Break up long text** with paragraph breaks
• **Use bold** for emphasis on key terms or requirements
• **Start with a direct answer**, then provide supporting details

**Example Response**:
User: "What are the prerequisites for HFIM 3000?"

Response:
"To enroll in HFIM 3000 (Hospitality Operations Management), you need:

• **HFIM 2100** - Introduction to Hospitality Management
• **HFIM 2200** - Food Service Operations
• **Junior standing** (60+ credit hours)
• **Minimum 2.5 GPA** in major courses

Both prerequisite courses must be completed with a grade of C or better.

Source: HFIM Course Catalog 2026, page 45"

What it does:

  • Ensures consistent formatting across all responses
  • Improves readability (bullets, bold, spacing)
  • Provides a template for the AI to follow

Why it matters: Makes responses professional, scannable, and user-friendly


Component 5: Edge Cases

Purpose: Handles unusual situations and boundary conditions

Example:

**Handling Edge Cases**:

**Unclear Questions**:
If a question is vague or ambiguous, ask for clarification:
"Could you provide more details about [specific aspect]? For example, are
you asking about [option A] or [option B]?"

**Off-Topic Questions**:
If asked about non-HFIM topics:
"I specialize in HFIM program information. For questions about [other topic],
I recommend visiting [relevant UGA department or website]."

**Outdated Information**:
If you notice information in your knowledge base may be outdated:
"According to my knowledge base [source, date], the answer is [information].
However, program details may change. For the most current information, please
verify with [department contact]."

**Personal Advice Requests**:
If asked for personal advice ("Should I major in HFIM?"):
"I can provide information about the HFIM program to help you make an informed
decision, but the choice should be based on your interests, goals, and
circumstances. Would you like to know about [specific aspects like curriculum,
career paths, etc.]?"

What it does:

  • Provides scripts for handling tricky situations
  • Prevents awkward or unhelpful responses
  • Maintains professionalism even in edge cases

Why it matters: Real users ask all kinds of questions, so the AI needs guidance for non-standard cases


Component 6: Examples (Optional)

Purpose: Show the AI exactly how you want it to respond

Example:

**Response Examples**:

**Example 1: Course Prerequisites**
User: "what do i need before taking 3000"

Good Response:
"To enroll in HFIM 3000, you need to complete:
• HFIM 2100 - Introduction to Hospitality Management
• HFIM 2200 - Food Service Operations

Both courses must be completed with a C or better. You'll also need junior
standing (60+ credit hours).

Source: HFIM Handbook 2026, page 34"

**Example 2: Uncertain Question**
User: "what's the best career path"

Good Response:
"The 'best' career path depends on your interests and goals. HFIM graduates
pursue diverse careers including:
• Hotel and resort management
• Restaurant operations
• Event planning and management
• Tourism and travel industry
• Food service consulting

Would you like to learn more about a specific career area?"

What it does:

  • Shows concrete examples of good responses
  • Helps AI understand the expected tone and style
  • Provides templates for common question types

Why it matters: Examples are powerful because they show, not just tell, what you want


Anatomy of a Strong System Prompt

Checklist

A strong system prompt should have:

✅ Clear Identity: AI knows who it is and who it serves
✅ Explicit Rules: Core guidelines are specific and unambiguous
✅ Knowledge Boundaries: What it knows and doesn't know
✅ Formatting Guidelines: How to structure responses
✅ Edge Case Handling: Scripts for unusual situations
✅ Tone and Style: Professional, helpful, appropriate for audience
✅ Examples: At least 2-3 sample Q&A pairs
✅ Source Citation Rules: How to reference documents
✅ Uncertainty Handling: What to say when unsure
✅ Concise: Long enough to be comprehensive, short enough to be clear


Red Flags

Signs of a weak system prompt:

❌ Vague Instructions: "Be helpful" (too general)
❌ Missing Rules: No guidance on citing sources or admitting uncertainty
❌ Contradictions: "Be concise" + "Provide detailed explanations"
❌ No Examples: AI has to guess what you want
❌ Too Long: Prompt is 5+ pages (AI may lose focus)
❌ Too Short: Prompt is 2-3 sentences (not enough guidance)
❌ Outdated Info: References old programs or policies
❌ Unclear Audience: Doesn't specify who it's talking to


How System Prompts Interact with Other Features

System Prompt + Cache

Relationship:

  • System prompt applies to all responses (cached and RAG)
  • Cache entries should follow the same tone/style as system prompt
  • If you change prompt tone, update cache entries to match

Example:

  • System prompt says: "Always use professional, formal language"
  • Cache entry says: "hey! HFIM is super cool, you'll love it!"
  • Problem: Inconsistency between cached and non-cached responses

Solution: Keep cache entries aligned with system prompt guidelines


System Prompt + Document Retrieval (RAG)

Relationship:

  • System prompt tells AI how to use retrieved documents
  • Doesn't control which documents are retrieved (search algorithm does that)
  • Guides how to synthesize information from multiple sources

Example:

  • Documents retrieved: 5 PDFs about HFIM 3000
  • System prompt says: "Synthesize information from multiple sources into a coherent response"
  • AI combines info from all 5 docs into one unified answer
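In a typical chat-completion API, this division of labor comes down to where each piece lands in the request. A minimal sketch, using the common OpenAI-style message format (the variable names and content are illustrative):

```python
# Sketch of how a system prompt and retrieved documents are typically
# combined into one chat request (OpenAI-style messages; names illustrative).
system_prompt = "You are Sage, the HFIM assistant. Synthesize retrieved sources."
retrieved_docs = [
    "HFIM 3000 requires HFIM 2100 and HFIM 2200. (Catalog 2026, p. 45)",
    "HFIM 3000 also requires junior standing. (Handbook 2026, p. 34)",
]

def build_request(question: str) -> list[dict]:
    context = "\n\n".join(retrieved_docs)  # chosen earlier by the search step
    return [
        {"role": "system", "content": system_prompt},  # governs behavior
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

messages = build_request("What are the prerequisites for HFIM 3000?")
```

Note that the system prompt only governs how the sources are synthesized; the retrieval step that selected `retrieved_docs` happens before this point and is not controlled by the prompt.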

System Prompt + Conversation History

Relationship:

  • System prompt applies to the entire conversation, not just one question
  • Guides how AI maintains context across multiple questions
  • Defines when to reference previous questions

Example:

User: "What is HFIM?"
AI: [explains HFIM program]

User: "How do I apply?"

With conversation context (system prompt enables this):
AI: "To apply to the HFIM program, follow these steps..."

Without conversation context:
AI: "Apply to what? Could you clarify?"
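Concretely, the exchange above is usually sent as a single message list, with the system prompt first and the prior turns preserved (again an OpenAI-style sketch with illustrative content):

```python
# Sketch: the system prompt rides along with the full conversation history,
# so "How do I apply?" is interpreted in the context of the HFIM question.
messages = [
    {"role": "system", "content": "You are Sage, the HFIM assistant."},
    {"role": "user", "content": "What is HFIM?"},
    {"role": "assistant", "content": "HFIM is UGA's Hospitality and Food "
                                     "Industry Management program..."},
    {"role": "user", "content": "How do I apply?"},  # resolved via history
]
```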


Common System Prompt Patterns

Pattern 1: Rule-Based Prompt

Structure: Focuses on explicit rules and constraints

Example:

1. Always cite sources
2. Never make up information
3. Use bullet points for lists
4. Keep responses under 200 words
5. Admit when uncertain

Pros: Clear, easy to follow, consistent results
Cons: Can feel rigid or robotic if overused

Best for: Factual domains where accuracy is critical


Pattern 2: Persona-Based Prompt

Structure: Emphasizes the AI's character and voice

Example:

You are Sage, a friendly and knowledgeable HFIM advisor. You're enthusiastic
about hospitality careers and love helping students discover their path. You
speak warmly but professionally, like a favorite teacher or mentor.

Pros: Creates consistent personality, engaging tone
Cons: Can be tricky to balance friendliness with professionalism

Best for: Domains where user engagement and rapport matter


Pattern 3: Example-Heavy Prompt

Structure: Provides many concrete examples of good responses

Example:

When asked about prerequisites, respond like this: [example]
When asked about careers, respond like this: [example]
When uncertain, respond like this: [example]

Pros: Very clear expectations, reduces ambiguity
Cons: Can become very long if it includes many examples

Best for: Complex domains where nuance matters


System Prompt Versions

Version Management

How versioning works:

  1. Every time you save changes, a new version is created
  2. Versions are stored with timestamp and author
  3. You can view and revert to previous versions
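The scheme above can be pictured as an append-only list of snapshots, where even a revert is recorded as a new version. A hypothetical sketch (the HFIM panel's actual storage may differ):

```python
# Hypothetical sketch of append-only prompt versioning; the real admin
# panel's storage scheme may differ.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    text: str
    author: str
    saved_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

history: list[PromptVersion] = []

def save(text: str, author: str) -> None:
    """Every save appends a new version; nothing is overwritten."""
    history.append(PromptVersion(text, author))

def revert(index: int, author: str) -> None:
    """Reverting is itself a new save, preserving the full audit trail."""
    save(history[index].text, author)

save("v1: You are Sage...", "admin")
save("v2: You are Sage, the HFIM assistant...", "admin")
revert(0, "admin")  # current prompt is v1's text again, stored as version 3
```

Because reverts append rather than delete, the history always shows who changed what and when, even across undo operations.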

Why versioning matters:

  • Allows you to undo changes if needed
  • Tracks who made what changes when
  • Provides a history of prompt evolution

Learn more: Change History


When to Create a New Version

Create a new version when:

  • Making significant rule changes
  • Adjusting tone or style
  • Adding or removing sections
  • Fixing errors or outdated info
  • Responding to user feedback patterns

Don't create versions for:

  • Minor typo fixes (still save, just don't announce it)
  • Whitespace formatting changes
  • Comment additions (if your system supports comments)

Testing System Prompts

Before Saving

Always test changes with:

  1. Common Questions: Questions you know users ask frequently
  2. Edge Cases: Unusual or tricky questions
  3. Tone Check: Verify responses match intended personality
  4. Format Check: Ensure bullet points, structure work correctly
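A pre-save check like this can be scripted as a small smoke-test harness. A sketch under the assumption that some `ask(question)` function queries the chatbot; that function and the expected phrases are illustrative, not part of the HFIM system:

```python
# Hypothetical smoke-test harness for a draft system prompt.
# `ask` stands in for whatever function queries the chatbot; it and the
# expected phrases below are illustrative assumptions.
TEST_CASES = [
    # common question: should cite a source and name a prerequisite
    ("What are the prerequisites for HFIM 3000?", ["HFIM 2100", "Source:"]),
    # edge case: off-topic question should be politely redirected
    ("What's the weather today?", ["I specialize in HFIM"]),
]

def run_smoke_tests(ask) -> list[str]:
    """Return one failure message per answer that misses an expected phrase."""
    failures = []
    for question, expected_phrases in TEST_CASES:
        answer = ask(question)
        for phrase in expected_phrases:
            if phrase not in answer:
                failures.append(f"{question!r}: missing {phrase!r}")
    return failures
```

Running this after every prompt edit, and only saving when the failure list is empty, turns the checklist above into a repeatable routine.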

Learn more: Testing Changes


After Saving

Monitor for 24-48 hours:

  1. Check user feedback (increase in negative feedback?)
  2. Spot-check 10-20 conversations
  3. Look for formatting issues or tone problems
  4. Watch for hallucinations or off-topic responses

Be ready to revert if major issues arise


Next Steps

Now that you understand system prompts:

  1. Learn to edit prompts safely - Step-by-step editing guide
  2. Understand change history - Track and revert changes
  3. Learn testing practices - Validate changes before saving
  4. Review hot reload - Understand immediate deployment
  5. Read best practices - Proven strategies for prompt management

Remember: The system prompt is the chatbot's constitution, and every response is governed by it. Take time to understand it thoroughly before making changes!