Guide to LLM System Prompt
Written by Edward Hu

Understanding how to craft prompts effectively for an LLM system can significantly enhance its performance and the relevance of its responses. Keep in mind that prompt formats vary across models, and each prompt should be tailored to your specific task or desired outcome.

Basics

To help you get started, here are some basic templates that we've found to work best for the following models:

OpenAI / Cohere / Anthropic / Gemini

You are a helpful assistant.

**Current UTC time:**
${current_date_time}

**Current conversation:**
User: ${payload}
You:
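
To make the template concrete, here is a minimal sketch of how these variables get filled in and sent to a model, written in Python against the OpenAI SDK. The platform substitutes ${current_date_time} and ${payload} for you automatically, so the manual substitution, the model name, and the sample question below are purely illustrative.

# Minimal sketch: filling in the template variables and calling OpenAI directly.
# In your project these substitutions happen automatically; this is only illustrative.
from datetime import datetime, timezone
from string import Template
from openai import OpenAI

SYSTEM_TEMPLATE = Template(
    "You are a helpful assistant.\n\n"
    "**Current UTC time:**\n${current_date_time}\n\n"
    "**Current conversation:**\nUser: ${payload}\nYou:"
)

def build_prompt(payload: str) -> str:
    # Fill the two template variables the same way the platform would.
    return SYSTEM_TEMPLATE.substitute(
        current_date_time=datetime.now(timezone.utc).isoformat(),
        payload=payload,
    )

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice; use whatever your project targets
    messages=[{"role": "system", "content": build_prompt("What time is it in UTC?")}],
)
print(response.choices[0].message.content)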

Mistral 7B / Mixtral 8x7B / Mistral Large

<s>You are a helpful assistant.

###
**Additional Information**:
- The current UTC time is ${current_date_time}
###

</s>[INST]${payload}[/INST]

Llama 3/3.1

<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant.

**Information you found could be helpful based on the user's question:**
- Current UTC time: ${current_date_time}

<|eot_id|><|start_header_id|>user<|end_header_id|>
${payload}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
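
If you'd rather not type these special tokens by hand, here is a sketch of how the same structure can be generated with Hugging Face transformers. It assumes you have access to the gated meta-llama/Meta-Llama-3-8B-Instruct repository; the same approach works with the Mistral instruct tokenizers.

# Sketch: producing the Llama 3 chat format programmatically instead of by hand.
# Assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct repository.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant.\n\n"
            "**Information you found could be helpful based on the user's question:**\n"
            "- Current UTC time: 2024-06-01T12:00:00Z"
        ),
    },
    {"role": "user", "content": "What time is it in UTC?"},
]

# tokenize=False returns the raw prompt string, with <|start_header_id|>,
# <|end_header_id|>, and <|eot_id|> inserted for you; add_generation_prompt
# appends the assistant header so the model knows it should respond next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)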

You'll notice that the structure of the system prompt greatly depends on the model you're using. Even slight variations in your prompt can significantly affect the model's behavior. We highly recommend experimenting with different templates and structures to discover the most effective prompt for your specific use case.

We also highly recommend you check out the Prompt Engineering Guide if you're new to prompt engineering.

Referring to a Vector Database Result

If you're building a RAG pipeline with a vector database + LLM (for example, a vector database search action whose output feeds into an LLM action), you can refer to the retrieval result as ${action_1_output} in your LLM system prompt. Assuming you're using any of the OpenAI, Cohere, Anthropic, or Gemini models, the prompt would look like this:

You are a helpful assistant.

**Current UTC time:**
${current_date_time}

**Information you found could be helpful based on the user's question:**
- ${action_1_output}.

**Current conversation:**
User: ${payload}
You:
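
For illustration, here is a sketch of how a retrieval result ends up inside that slot. The retrieval step is stubbed out with a hardcoded list standing in for the vector database action; in your project, ${action_1_output} is populated by that action automatically.

# Sketch: formatting a vector-database result into the ${action_1_output} slot.
# The retrieval step is stubbed with hardcoded chunks for illustration only.
from datetime import datetime, timezone
from string import Template

RAG_TEMPLATE = Template(
    "You are a helpful assistant.\n\n"
    "**Current UTC time:**\n${current_date_time}\n\n"
    "**Information you found could be helpful based on the user's question:**\n"
    "- ${action_1_output}\n\n"
    "**Current conversation:**\nUser: ${payload}\nYou:"
)

def fake_vector_search(query: str) -> list[str]:
    # Stand-in for the vector database action; returns the top matching chunks.
    return [
        "Our refund window is 30 days from the date of purchase.",
        "Refunds are issued to the original payment method within 5 business days.",
    ]

question = "How long do refunds take?"
retrieved = "\n- ".join(fake_vector_search(question))

prompt = RAG_TEMPLATE.substitute(
    current_date_time=datetime.now(timezone.utc).isoformat(),
    action_1_output=retrieved,
    payload=question,
)
print(prompt)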


Adding Conversation History (Memory)

If memory is enabled for your project, you can incorporate historical conversation details into the prompt. If you haven't enabled it yet, click the ellipsis icon next to the "Playground" button, then click "Memory" until you see "Memory: On".

Once it's activated, you can modify your prompt to look like this:

You are a helpful assistant.

**Current UTC time:**
${current_date_time}

**Chat history:**
${history}

**Current conversation:**
User: ${payload}
You:
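
For illustration, here is a sketch of one way ${history} might be rendered from prior turns. The platform fills ${history} automatically when memory is on, so the "User:"/"You:" turn labels below are an assumed format rather than the platform's exact one.

# Sketch: rendering prior turns into the ${history} slot.
# The platform fills ${history} for you when memory is on; the turn labels
# below are an assumed format used only for this illustration.
from datetime import datetime, timezone
from string import Template

MEMORY_TEMPLATE = Template(
    "You are a helpful assistant.\n\n"
    "**Current UTC time:**\n${current_date_time}\n\n"
    "**Chat history:**\n${history}\n\n"
    "**Current conversation:**\nUser: ${payload}\nYou:"
)

past_turns = [
    ("user", "My name is Ada."),
    ("assistant", "Nice to meet you, Ada! How can I help?"),
]
history = "\n".join(
    f"{'User' if role == 'user' else 'You'}: {text}" for role, text in past_turns
)

prompt = MEMORY_TEMPLATE.substitute(
    current_date_time=datetime.now(timezone.utc).isoformat(),
    history=history,
    payload="What's my name?",
)
print(prompt)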