Kastrax AI Agent Project Structure
This page provides a guide for organizing your Kastrax AI Agent projects. Kastrax is a powerful Kotlin-based AI Agent framework that combines the actor model with advanced AI capabilities, allowing you to build sophisticated, distributed agent systems.
Kastrax offers a modular architecture where you can use components separately or together to create intelligent agent systems that can reason, learn, and interact with their environment.
You can structure your project according to your specific needs, but we recommend following the patterns below to keep your AI agent systems maintainable and scalable.
Project Setup with Gradle
Kastrax AI Agent projects are typically set up using Gradle, the preferred build system for Kotlin projects. To create a new Kastrax project, you'll need to configure your build.gradle.kts file with the appropriate dependencies:
```kotlin
plugins {
    kotlin("jvm") version "1.9.0"
    kotlin("plugin.serialization") version "1.9.0"
}

dependencies {
    // Kastrax Core - provides the fundamental AI agent capabilities
    implementation("ai.kastrax:kastrax-core:1.0.0")

    // Kastrax Actor - provides the actor model implementation
    implementation("ai.kastrax:kastrax-actor:1.0.0")

    // Kastrax Memory - for agent memory and persistence
    implementation("ai.kastrax:kastrax-memory:1.0.0")

    // Kastrax Tools - standard tools for your agents
    implementation("ai.kastrax:kastrax-tools:1.0.0")

    // Kastrax LLM providers
    implementation("ai.kastrax:kastrax-llm-deepseek:1.0.0") // For DeepSeek integration
    // Other LLM providers are available as separate dependencies
}
```
You can customize these dependencies based on which components you need for your AI agent system.
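The snippet above assumes the Kastrax artifacts can be resolved from your configured repositories. A minimal repositories block might look like the following sketch; mavenCentral() is shown only as an assumption, so point it at whichever repository actually hosts the ai.kastrax artifacts:

```kotlin
// build.gradle.kts -- repository configuration (sketch; adjust to where the
// ai.kastrax artifacts are actually published)
repositories {
    mavenCentral()
    // Declare an additional repository here if the artifacts live elsewhere, e.g.:
    // maven("https://repo.example.com/releases") // hypothetical URL
}
```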
Recommended Project Structure
A typical Kastrax AI Agent project follows a structure that separates different components of your agent system. Here’s a recommended structure for a comprehensive Kastrax project:
- src/main/kotlin/
  - agents/
    - AssistantAgent.kt
    - ResearchAgent.kt
    - PlannerAgent.kt
  - actors/
    - AgentActor.kt
    - SupervisorActor.kt
  - tools/
    - SearchTool.kt
    - DatabaseTool.kt
    - CalculatorTool.kt
  - memory/
    - AgentMemory.kt
    - ConversationHistory.kt
  - config/
    - AppConfig.kt
    - LLMConfig.kt
  - Main.kt
  - Application.kt
- src/test/kotlin/
  - AgentTests.kt
  - ActorTests.kt
  - ToolTests.kt
- build.gradle.kts
- settings.gradle.kts
- gradle.properties
- application.conf
Key Project Components
Component | Description |
---|---|
agents/ | Contains agent definitions with different capabilities and purposes |
actors/ | Contains actor implementations that enable distributed, concurrent agent execution |
tools/ | Custom tools that agents can use to interact with external systems and perform specialized tasks (a sketch of one such tool follows this table) |
memory/ | Components for agent memory, including conversation history, knowledge bases, and persistent storage |
config/ | Configuration classes for the application, LLM providers, and other system settings |
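To make the tools/ directory concrete, here is a minimal sketch of what a file like CalculatorTool.kt could contain. The Tool interface and its execute signature are placeholders invented for this illustration; the actual contract defined by kastrax-tools may look different, so treat this as a shape to aim for rather than the real API.

```kotlin
// tools/CalculatorTool.kt -- illustrative sketch only. The Tool interface below
// is a hypothetical stand-in for whatever contract kastrax-tools actually defines.

interface Tool {
    val name: String
    val description: String
    suspend fun execute(input: String): String
}

class CalculatorTool : Tool {
    override val name = "calculator"
    override val description = "Evaluates a simple arithmetic expression of the form 'a <op> b'."

    override suspend fun execute(input: String): String {
        // Expect input like "3 + 4"; parsing is kept deliberately small for the sketch.
        val parts = input.trim().split(" ", limit = 3)
        if (parts.size != 3) return "Expected input like '3 + 4'"
        val (left, op, right) = parts
        val a = left.toDoubleOrNull() ?: return "Not a number: $left"
        val b = right.toDoubleOrNull() ?: return "Not a number: $right"
        return when (op) {
            "+" -> (a + b).toString()
            "-" -> (a - b).toString()
            "*" -> (a * b).toString()
            "/" -> if (b == 0.0) "Division by zero" else (a / b).toString()
            else -> "Unsupported operator: $op"
        }
    }
}
```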
Key Configuration Files
File | Description |
---|---|
build.gradle.kts | Gradle build configuration with dependencies and build settings |
application.conf | HOCON configuration file for application settings, API keys, and environment-specific configuration (see the example after this table) |
Main.kt | Entry point for the application that initializes the agent system |
Application.kt | Core application class that sets up the agent environment and coordinates components |
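For illustration, an application.conf for such a project might look like the sketch below. The key names under kastrax are invented for this example rather than taken from the framework; the pattern worth copying is pulling secrets from environment variables instead of hard-coding them.

```hocon
# application.conf -- illustrative settings only; the key names are hypothetical.
kastrax {
  llm {
    provider = "deepseek"
    model = "deepseek-chat"
    # Resolved from the environment so the key never lands in version control.
    api-key = ${?DEEPSEEK_API_KEY}
  }
  memory {
    persistence-enabled = true
  }
}
```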
Agent System Architecture
Kastrax AI Agent systems typically follow a layered architecture:
- Core Layer: Provides fundamental AI capabilities through the Kastrax Core module
- Actor Layer: Implements the actor model for distributed, concurrent agent execution
- Agent Layer: Defines specialized agents with different capabilities and responsibilities
- Tool Layer: Provides tools that agents can use to interact with external systems
- Memory Layer: Manages agent memory, conversation history, and knowledge bases
This architecture allows for building sophisticated AI agent systems that can scale from simple assistants to complex multi-agent networks capable of collaborative problem-solving.
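To make the layering concrete, here is a rough wiring sketch. Every type in it (LlmClient, ConversationHistory, AgentTool, AssistantAgent) is a hypothetical placeholder standing in for the corresponding Kastrax component, so the real APIs will differ; what it aims to show is how each layer is built from the ones below it and injected upward.

```kotlin
// Layering sketch with hypothetical placeholder types -- not the actual Kastrax API.

// Core/LLM layer: abstraction over the model provider.
interface LlmClient {
    suspend fun complete(prompt: String): String
}

// Memory layer: conversation history kept per agent.
class ConversationHistory {
    private val turns = mutableListOf<String>()
    fun append(turn: String) { turns += turn }
    fun asContext(): String = turns.joinToString("\n")
}

// Tool layer: capabilities the agent can call out to.
fun interface AgentTool {
    fun run(input: String): String
}

// Agent layer: a specialized agent composed from the layers below it.
class AssistantAgent(
    private val llm: LlmClient,
    private val history: ConversationHistory,
    private val tools: Map<String, AgentTool>,
) {
    suspend fun respond(userMessage: String): String {
        history.append("user: $userMessage")
        // A real agent would decide here whether to invoke a tool first;
        // this sketch only advertises the available tool names in the prompt.
        val prompt = "Available tools: ${tools.keys}\n${history.asContext()}"
        val answer = llm.complete(prompt)
        history.append("assistant: $answer")
        return answer
    }
}

// Wiring (roughly what Application.kt would do): build the lower layers, inject upward.
fun buildAssistant(llm: LlmClient): AssistantAgent =
    AssistantAgent(
        llm = llm,
        history = ConversationHistory(),
        tools = mapOf("calculator" to AgentTool { input -> "calculator result for $input" }),
    )
```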
Best Practices
- Separate Concerns: Keep agent definitions, tools, and configuration in separate modules
- Use Dependency Injection: Inject dependencies like LLM clients and tools into agents
- Implement Proper Error Handling: AI systems can fail in unexpected ways, so implement robust error handling
- Test Thoroughly: Write unit and integration tests for your agents and tools (see the sketch after this list)
- Monitor Performance: Implement logging and monitoring to track agent performance and behavior
- Version Your Models: Keep track of which LLM versions your agents are using
- Secure API Keys: Store API keys securely and never commit them to version control
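As a small illustration of the dependency-injection and testing points above, the sketch below injects a deterministic fake model into an agent so the test needs no API key and no network access. EchoAgent is a hypothetical stand-in rather than a real Kastrax type, and the test assumes kotlin-test and kotlinx-coroutines are on the test classpath.

```kotlin
// AgentTests.kt -- illustrative sketch; EchoAgent is a hypothetical stand-in,
// not the real Kastrax agent API.
import kotlinx.coroutines.runBlocking
import kotlin.test.Test
import kotlin.test.assertEquals

// Minimal agent that depends on an injected LLM function instead of constructing one itself.
class EchoAgent(private val llm: suspend (String) -> String) {
    suspend fun respond(message: String): String = llm(message)
}

class EchoAgentTest {
    @Test
    fun `agent returns what the injected model produces`() = runBlocking {
        // A deterministic fake replaces the real provider.
        val agent = EchoAgent { prompt -> "echo: $prompt" }
        assertEquals("echo: hello", agent.respond("hello"))
    }
}
```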