Setting up Memory For An LLM Pipeline
Written by Edward Hu

Memory gives your AI the ability to remember previous conversations, making interactions more intuitive.

To turn on memory for your AI project, navigate to your project, click the ellipsis icon next to the "Playground" button, and toggle the Memory: On/Off button.

Memory via HTTP Request

The system uses the Channel Token in your SSE or HTTP Request endpoint call as the unique identifier for memory. For instance, if your SSE call is:

https://payload.vextapp.com/sse/${assigned_sse_token}/post/helloworld

Then helloworld is your unique identifier. The system will remember up to 5 exchanges (query + response) for up to 5 minutes for anything associated with the helloworld identifier.

We recommend using your user's "User ID" as the channel token here, as it:

  • Is unique, with little to no chance of overlapping/repeating

  • Is user-specific; memory is tied to a designated user
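The recommendation above can be sketched in code. This is a minimal illustration of using each user's ID as the channel token, so every user gets their own memory scope; the helper name and token value are assumptions, not part of the product's API:

```python
# Sketch: build a memory-scoped endpoint URL per user.
# The channel token (last path segment) is the memory identifier,
# so passing the user's ID gives each user their own memory.

def memory_endpoint(sse_token: str, user_id: str) -> str:
    """Return the SSE endpoint URL with the user's ID as the channel token."""
    return f"https://payload.vextapp.com/sse/{sse_token}/post/{user_id}"

# Each user gets an isolated memory channel:
url = memory_endpoint("YOUR_SSE_TOKEN", "user_8421")
# → https://payload.vextapp.com/sse/YOUR_SSE_TOKEN/post/user_8421
```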

Setting Up System Prompt

For memory to function properly, you must also include the "history" system variable in the system prompt of your LLM or agent, so that the conversation context is passed in.

Here's an example of how you can write your system prompt while including the "history" system variable:
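Below is a minimal sketch of such a prompt. The `{{history}}` placeholder syntax is an assumption for illustration only; insert the "history" system variable using your prompt editor's own variable picker:

```
You are a helpful customer-support assistant.

Previous conversation:
{{history}}

Use the conversation above as context when answering the user's latest question.
```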
