Working Memory ✅
While conversation history and semantic recall help agents remember past conversations, working memory lets them maintain persistent information about the user across interactions within a thread.
Think of it as the agent's active thoughts or scratchpad – the key information it keeps available about the user or task. It's similar to how a person naturally remembers someone's name, preferences, or important details during a conversation.
This is useful for ongoing state that is always relevant and should be available to the agent in every interaction.
Quick Start ✅
Here’s a minimal example of setting up an agent with working memory:
import ai.kastrax.core.agent.agent
import ai.kastrax.core.memory.memory
import ai.kastrax.integrations.deepseek.deepSeek
import ai.kastrax.integrations.deepseek.DeepSeekModel
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Create an agent with working memory enabled
    val myAgent = agent {
        name("AgentWithWorkingMemory")
        description("An agent that uses working memory")

        // Configure the LLM
        model = deepSeek {
            apiKey("your-deepseek-api-key")
            model(DeepSeekModel.DEEPSEEK_CHAT)
            temperature(0.7)
        }

        // Configure memory with working memory enabled
        memory = memory {
            workingMemory(true)
            workingMemoryContent("The user's name is Alice and she prefers concise responses.")
        }
    }

    // Use the agent
    val response = myAgent.generate("Tell me about quantum computing")
    println(response.text)
}
How Working Memory Works ✅
Working memory is included in every prompt sent to the model. It’s a way to provide persistent context that doesn’t change between messages.
The working memory content is typically added to the system prompt or as a separate message at the beginning of the conversation history. This ensures the model always has access to this information when generating responses.
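To make the mechanism concrete, the sketch below shows the general idea of prepending working memory to the system prompt. It is an illustration only, not the actual Kastrax implementation; buildSystemPrompt is a hypothetical helper introduced here for clarity.

// Illustration only: working memory is simply appended to the base instructions
// so the model sees it on every turn. Not the real Kastrax prompt-assembly code.
fun buildSystemPrompt(
    baseInstructions: String,
    workingMemory: String?
): String = buildString {
    append(baseInstructions)
    if (!workingMemory.isNullOrBlank()) {
        append("\n\nWorking memory:\n")
        append(workingMemory)
    }
}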
Adding Information to Working Memory ✅
You can add information to working memory in several ways:
1. During Agent Creation ✅
memory = memory {
    workingMemory(true)
    workingMemoryContent("User: Alice, Preferences: Technical explanations, concise responses")
}
2. Updating Working Memory Programmatically ✅
// Update working memory with new information
agent.memory.updateWorkingMemory("User: Alice, Preferences: Technical explanations, concise responses, Examples: Preferred")
3. Structured Working Memory ✅
For more complex scenarios, you can use structured working memory:
// Create a structured user profile
val userProfile = mapOf(
    "name" to "Alice",
    "preferences" to listOf("Technical explanations", "Concise responses"),
    "expertise" to "Beginner in quantum physics"
)

// Convert it to a string representation
val workingMemoryContent = "User Profile:\n" +
    "Name: ${userProfile["name"]}\n" +
    "Preferences: ${(userProfile["preferences"] as List<*>).joinToString(", ")}\n" +
    "Expertise: ${userProfile["expertise"]}"

// Update working memory
agent.memory.updateWorkingMemory(workingMemoryContent)
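If the profile grows beyond a few fields, a typed class serialized to JSON keeps the format stable and easy for the model to parse. The following is a minimal sketch assuming kotlinx-serialization-json is on the classpath; the only Kastrax call it relies on is updateWorkingMemory from the snippet above, and profileAsWorkingMemory is a hypothetical helper.

import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

// Typed user profile; kotlinx-serialization-json is assumed to be available.
@Serializable
data class UserProfile(
    val name: String,
    val preferences: List<String>,
    val expertise: String
)

private val json = Json { prettyPrint = true }

// Render the profile as JSON; pass the result to updateWorkingMemory, e.g.
// agent.memory.updateWorkingMemory(profileAsWorkingMemory(profile))
fun profileAsWorkingMemory(profile: UserProfile): String =
    "User Profile (JSON):\n" + json.encodeToString(profile)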
Best Practices ✅
- Keep It Concise: Working memory should contain only the most important information that needs to be available in every interaction.
- Structured Format: Use a clear, structured format for working memory to make it easy for the model to parse.
- Update Selectively: Only update working memory when you have new, important information that should persist across the entire conversation (a guarded-update sketch follows this list).
- Combine with Other Memory Types: Use working memory alongside conversation history and semantic recall for the best results.
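One way to apply the "update selectively" guideline is to guard the write so working memory is only rewritten when the content actually changes. This is a minimal sketch; WorkingMemoryUpdater is a hypothetical helper, and the only Kastrax call it assumes is updateWorkingMemory shown earlier.

// Hypothetical helper: remember the last value written and skip redundant updates.
class WorkingMemoryUpdater(private val update: (String) -> Unit) {
    private var lastContent: String? = null

    fun maybeUpdate(newContent: String) {
        if (newContent.isNotBlank() && newContent != lastContent) {
            update(newContent)
            lastContent = newContent
        }
    }
}

// Usage, wired to an agent built as in the examples above:
// val updater = WorkingMemoryUpdater { agent.memory.updateWorkingMemory(it) }
// updater.maybeUpdate("User: Alice, Preferences: Technical explanations, concise responses")
// updater.maybeUpdate("User: Alice, Preferences: Technical explanations, concise responses") // skipped, unchanged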
Example: User Preferences Agent ✅
Here’s a complete example of an agent that uses working memory to remember user preferences:
import ai.kastrax.core.agent.agent
import ai.kastrax.core.memory.memory
import ai.kastrax.integrations.deepseek.deepSeek
import ai.kastrax.integrations.deepseek.DeepSeekModel
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Create an agent with working memory
    val preferencesAgent = agent {
        name("PreferencesAgent")
        description("An agent that remembers user preferences")

        // Configure the LLM
        model = deepSeek {
            apiKey("your-deepseek-api-key")
            model(DeepSeekModel.DEEPSEEK_CHAT)
            temperature(0.7)
        }

        // Configure memory
        memory = memory {
            workingMemory(true)
            conversationHistory(10)
        }
    }

    // Initial interaction
    val response1 = preferencesAgent.generate("Hi, my name is Alice and I prefer technical explanations that are concise.")
    println("Agent: ${response1.text}")

    // Extract preferences and update working memory
    val preferences = "User: Alice, Preferences: Technical explanations, concise responses"
    preferencesAgent.memory.updateWorkingMemory(preferences)

    // The next interaction should reflect the stored preferences
    val response2 = preferencesAgent.generate("Can you explain quantum entanglement?")
    println("Agent: ${response2.text}")
}
Next Steps ✅
Now that you understand working memory, you can:
- Learn about semantic recall
- Explore memory processors
- Implement custom memory storage