Categories
Praison AI

PraisonAI Agentic Framework: Key Features Overview

Introduction to PraisonAI

PraisonAI is a production-ready Multi-AI Agents framework with self-reflection capabilities, designed to create AI agents that automate and solve problems ranging from simple tasks to complex challenges. It provides a low-code solution that streamlines the building and management of multi-agent LLM systems, emphasizing simplicity, customization, and effective human-agent collaboration.

Core Architecture

The framework integrates multiple AI agent technologies including AG2 (formerly AutoGen) and CrewAI into a unified low-code solution. This integration allows developers to leverage the strengths of different agent frameworks while maintaining a consistent development experience.

Key Unique Features

  • Self-Reflection – Agents evaluate and improve their own responses for higher accuracy
  • Multi-Step Reasoning – Advanced logical reasoning and autonomous problem-solving capabilities
  • Persistent Memory – Zero-dependency memory system with context awareness across sessions
  • MCP Integration – Model Context Protocol support for connecting to external tools and services
  • Multimodal Support – Process text, images, and other data types seamlessly
  • Workflow Patterns – Built-in support for route, parallel, loop, and repeat patterns
  • Stateful Agents – Maintain context and learn from interactions across sessions

Advanced Capabilities

PraisonAI offers specialized features that set it apart from other frameworks:

  • Planning Mode – Agents can plan and execute complex multi-step tasks
  • Deep Research – Built-in research capabilities with web search integration
  • Query Rewriting – Intelligent query optimization for better results
  • Code Agent – Interact with entire codebases for development tasks
  • Guardrails – Safety and control mechanisms for agent behavior
  • Telemetry – Production monitoring and performance tracking
  • Background Tasks – Asynchronous task execution

Developer Experience

The framework provides multiple interfaces for different developer preferences:

  • Python SDK – Full programmatic control with praisonaiagents package
  • JavaScript/TypeScript SDK – Web development support
  • CLI Interface – No-code command-line operations
  • YAML Configuration – Declarative agent definitions
  • Visual UI – Multi-agent management interface
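The YAML route lets you describe agents declaratively instead of writing code. A minimal sketch of what such a file might look like, using the roles-based schema that PraisonAI normalizes (the role, goal, and task values here are illustrative):

```yaml
roles:
  writer:
    role: Screenwriter
    goal: Write engaging scripts
    tasks:
      draft:
        description: "Write a short movie script about a robot on Mars"
```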

Integration Ecosystem

PraisonAI supports extensive integration options:

  • 100+ Model Support – Compatible with OpenAI, Groq, Ollama, and more
  • Database Persistence – Multiple database backends for state management
  • Knowledge & RAG – Built-in retrieval-augmented generation capabilities
  • Tool Ecosystem – Web search, file operations, code execution, database tools
  • Custom Tools – Easy plugin development with decorators
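To give a flavor of how lightweight a custom tool can be: in PraisonAI, a plain typed function with a docstring can be passed to an agent via tools=[...]. The function below is a hypothetical example and runs standalone, with no framework required:

```python
def word_count(text: str) -> str:
    """Count words in a piece of text.

    Args:
        text: input string

    Returns:
        A short summary string
    """
    return f"{len(text.split())} words"

# The same function could be handed to an Agent via tools=[word_count]
print(word_count("hello agentic world"))  # 3 words
```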

Code Example

from praisonaiagents import Agent

# Create a simple agent
agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")

Installation

pip install praisonaiagents
export OPENAI_API_KEY=your_api_key

Conclusion

PraisonAI stands out as a comprehensive agentic framework that combines ease of use with powerful features. Its unique combination of self-reflection, MCP integration, persistent memory, and extensive tool support makes it ideal for both simple automation tasks and complex multi-agent workflows. The framework’s low-code approach ensures accessibility while maintaining the flexibility needed for advanced applications.

Categories
Uncategorized

Python Package Performance Profiling Guide

A comprehensive guide to profiling and optimizing Python packages for real-world performance. Learn how to measure, analyze, and improve your package’s startup time, import overhead, and runtime performance.

Why Profile Your Python Package?

Performance matters. Users notice when your CLI takes 3 seconds to start or when your library adds 500ms to import time. This guide covers practical techniques to identify and fix performance bottlenecks.

Measuring Import Time

Python’s import system can be surprisingly slow. Here’s how to measure it:

Using time.perf_counter()

import time

t0 = time.perf_counter()
import your_module
import_time = (time.perf_counter() - t0) * 1000
print(f'Import time: {import_time:.0f}ms')

Using Python’s -X importtime

python -X importtime -c "import your_module" 2>&1 | head -20

This shows cumulative and self time for each import, helping identify slow dependencies.

Using cProfile for Function-Level Analysis

cProfile is Python’s built-in profiler. Use it to find hot functions:

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()

# Your code here
result = your_function()

profiler.disable()
stats = pstats.Stats(profiler)
stats.sort_stats('cumulative')
stats.print_stats(20)  # Top 20 functions

Key Metrics

  • cumulative: Total time in function including subcalls
  • tottime: Time in function excluding subcalls
  • calls: Number of times function was called
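To see these columns on something concrete, here is a self-contained run; the busy() workload is arbitrary and exists only to give the profiler something to record:

```python
import cProfile
import io
import pstats

def busy():
    # Arbitrary CPU-bound workload for the profiler to measure
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Render the stats into a string instead of stdout
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats('cumulative').print_stats(5)
report = buf.getvalue()
print(report)
```

The report header shows total function calls and elapsed time, followed by one row per function with the cumulative, tottime, and calls columns described above.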

Separating Network from Compute

For packages that make API calls, separate network latency from local computation:

import time
import json

t0 = time.perf_counter()
# Import phase
import openai
t_import = time.perf_counter()

# Init phase
client = openai.OpenAI()
t_init = time.perf_counter()

# Network phase
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi"}]
)
t_network = time.perf_counter()

print(f'Import:  {(t_import - t0) * 1000:.0f}ms')
print(f'Init:    {(t_init - t_import) * 1000:.0f}ms')
print(f'Network: {(t_network - t_init) * 1000:.0f}ms')
print(f'Total:   {(t_network - t0) * 1000:.0f}ms')

Cold vs Warm Runs

Always measure both cold (first run) and warm (subsequent) performance:

import subprocess
import statistics
import time

def measure_cold_start(command, runs=3):
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(command, capture_output=True)
        times.append((time.perf_counter() - t0) * 1000)
    return statistics.mean(times), statistics.stdev(times)

avg, std = measure_cold_start(['python', '-c', 'import your_module'])
print(f'Cold start: {avg:.0f}ms (±{std:.0f}ms)')

Identifying Eager Imports

Eager imports at module level are the #1 cause of slow startup. Look for:

# BAD: Eager import at module level
from heavy_dependency import HeavyClass

# GOOD: Lazy import when needed
def get_heavy_class():
    from heavy_dependency import HeavyClass
    return HeavyClass

Using importlib.util.find_spec

Check if a module is available without importing it:

import importlib.util

# Fast availability check (no import)
HEAVY_AVAILABLE = importlib.util.find_spec("heavy_module") is not None

# Lazy import helper
def _get_heavy_module():
    if HEAVY_AVAILABLE:
        import heavy_module
        return heavy_module
    return None
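The availability check can be verified against the standard library: find_spec locates an installed module without importing it, and returns None for a missing top-level name rather than raising:

```python
import importlib.util

# json ships with Python, so its spec resolves without an import
json_available = importlib.util.find_spec("json") is not None

# A top-level name that is not installed simply yields None
missing_available = importlib.util.find_spec("definitely_not_a_real_module") is not None

print(json_available, missing_available)  # True False
```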

Lazy Loading with __getattr__

Python 3.7+ supports module-level __getattr__ for lazy loading:

# In your_package/__init__.py
_lazy_cache = {}

def __getattr__(name):
    if name in _lazy_cache:
        return _lazy_cache[name]
    
    if name == "HeavyClass":
        from .heavy_module import HeavyClass
        _lazy_cache[name] = HeavyClass
        return HeavyClass
    
    raise AttributeError(f"module has no attribute {name}")
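The pattern can be exercised end to end by writing a tiny module to a temporary directory and importing it; the module name lazy_pkg and the sentinel value 42 are made up for this demo, standing in for a package that defers a heavy import:

```python
import pathlib
import sys
import tempfile
import textwrap

# Write a module whose attribute is resolved lazily via module-level __getattr__
pkg_dir = pathlib.Path(tempfile.mkdtemp())
(pkg_dir / "lazy_pkg.py").write_text(textwrap.dedent("""
    _cache = {}

    def __getattr__(name):
        if name == "VALUE":
            # Stands in for an expensive deferred import
            _cache[name] = 42
            return _cache[name]
        raise AttributeError(f"module has no attribute {name}")
"""))

sys.path.insert(0, str(pkg_dir))
import lazy_pkg

# The attribute is computed only on first access
print(lazy_pkg.VALUE)  # 42
```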

Creating Timeline Diagrams

Visualize execution phases with ASCII timeline diagrams:

def create_timeline(phases):
    """
    phases: list of (name, duration_ms) tuples
    """
    total = sum(d for _, d in phases)
    scale = 50.0 / total
    
    # Top line
    line = "ENTER "
    for name, ms in phases:
        width = max(8, int(ms * scale))
        line += "─" * width
    line += "► END"
    print(line)
    
    # Phase names
    line = "      "
    for name, ms in phases:
        width = max(8, int(ms * scale))
        line += "│" + name.center(width - 1)
    line += "│"
    print(line)
    
    print(f"{'':>50} TOTAL: {total:.0f}ms")

Comparing SDK vs Wrapper Performance

When building wrappers around SDKs, measure the overhead:

# Baseline: Raw SDK
sdk_time = measure_sdk_call()

# Your wrapper
wrapper_time = measure_wrapper_call()

overhead = wrapper_time - sdk_time
print(f'Wrapper overhead: {overhead:.0f}ms ({overhead/sdk_time*100:.1f}%)')

Target: Keep wrapper overhead under 5% of SDK time.

CLI vs Python API Performance

CLI tools have additional overhead from subprocess spawning:

# Python API (faster)
from your_package import YourClass
result = YourClass().run()

# CLI (slower due to subprocess)
subprocess.run(['your-cli', 'command'])

Typical CLI overhead: 100-300ms for subprocess spawn + Python startup.

Caching Strategies

Module-Level Caching

class MyClass:
    _cached_client = None
    
    @classmethod
    def get_client(cls):
        if cls._cached_client is None:
            cls._cached_client = ExpensiveClient()
        return cls._cached_client
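A quick way to confirm the client really is constructed once: count constructions. ExpensiveClient below is a stand-in counter for illustration, not a real SDK class:

```python
class ExpensiveClient:
    constructed = 0

    def __init__(self):
        # Count constructions so the caching behavior can be verified
        ExpensiveClient.constructed += 1

class MyClass:
    _cached_client = None

    @classmethod
    def get_client(cls):
        if cls._cached_client is None:
            cls._cached_client = ExpensiveClient()
        return cls._cached_client

a = MyClass.get_client()
b = MyClass.get_client()
print(a is b, ExpensiveClient.constructed)  # True 1
```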

Configuration Caching

_config_applied = False

def apply_config():
    global _config_applied
    if _config_applied:
        return
    # Expensive configuration
    _config_applied = True

Profiling Pitfalls

1. Measuring in Development Mode

Debug mode, assertions, and development dependencies add overhead. Profile in production-like conditions.

2. Ignoring Variance

Always run multiple iterations and report standard deviation:

import statistics

times = [measure() for _ in range(10)]
print(f'{statistics.mean(times):.0f}ms (±{statistics.stdev(times):.0f}ms)')

3. Profiler Overhead

cProfile adds ~10-20% overhead. For accurate timing, use time.perf_counter() for wall-clock measurements.

4. Network Variance

API calls have high variance. Separate network timing from local computation.

Performance Targets

Reasonable targets for Python packages:

| Metric | Target |
|--------|--------|
| CLI `--help` | < 500ms |
| Package import | < 100ms |
| Wrapper overhead vs SDK | < 5% |
| Profiling overhead | < 5% |

Summary

Key techniques for Python package performance:

  1. Measure first: Use time.perf_counter() and cProfile
  2. Separate phases: Import, init, network, execution
  3. Lazy load: Use __getattr__ and importlib.util.find_spec
  4. Cache wisely: Module-level caching for expensive operations
  5. Multiple runs: Report mean and standard deviation
  6. Timeline diagrams: Visualize where time is spent

Performance optimization is iterative. Measure, identify bottlenecks, fix, and measure again.

Categories
Praison AI

agents.yaml vs Templates: Technical Analysis

EXHAUSTIVE TECHNICAL ANALYSIS: agents.yaml/tools.py vs Templates/Recipes

EXECUTIVE SUMMARY

Finding: NO DUPLICATION – Clean Separation Exists

After comprehensive code analysis across 5 repositories, agents.yaml + tools.py and Templates/Recipes serve fundamentally different purposes with no functional overlap.


A) INVENTORY (Evidence-Backed)

1. agents.yaml + tools.py Behavior

Files/Paths Involved:

| Component | Path | Purpose |
|-----------|------|---------|
| CLI Entry | /Users/praison/praisonai-package/src/praisonai/praisonai/cli/main.py:161 | PraisonAI(agent_file="agents.yaml") default |
| YAML Parser | /Users/praison/praisonai-package/src/praisonai/praisonai/agents_generator.py:309-320 | Loads and parses agents.yaml |
| tools.py Loader | /Users/praison/praisonai-package/src/praisonai/praisonai/agents_generator.py:251-290 | load_tools_from_tools_py() |
| Workflow Parser (Core) | /Users/praison/praisonai-package/src/praisonai-agents/praisonaiagents/workflows/yaml_parser.py:1-200 | YAMLWorkflowParser with normalization |

Schema & Defaults:

  • agents.yaml uses roles → agents normalization (lines 147-151 in yaml_parser.py)
  • tools.py auto-discovered from CWD: os.path.join(root_directory, 'tools.py') (line 364)
  • Precedence: tools.py in CWD > tools/ folder

Key Call Sites:

agents_generator.py:364-369 - tools.py discovery
agents_generator.py:789-903 - _run_praisonai() execution
yaml_parser.py:109-176 - _normalize_yaml_config()

2. Templates/Recipes System

Files/Paths Involved:

| Component | Path | Purpose |
|-----------|------|---------|
| Template Loader | /Users/praison/praisonai-package/src/praisonai/praisonai/templates/loader.py:1-555 | TemplateLoader, TemplateConfig |
| Template Discovery | /Users/praison/praisonai-package/src/praisonai/praisonai/templates/discovery.py:1-392 | Multi-directory precedence |
| Template Registry | /Users/praison/praisonai-package/src/praisonai/praisonai/templates/registry.py | Remote/cache management |
| Dependency Checker | /Users/praison/praisonai-package/src/praisonai/praisonai/templates/dependency_checker.py | Package/env/tool validation |
| CLI Handler | /Users/praison/praisonai-package/src/praisonai/praisonai/cli/features/templates.py:1-843 | praisonai templates commands |
| Agent-Recipes | /Users/praison/Agent-Recipes/agent_recipes/templates/ | 30+ pre-built templates |
| Manifest | /Users/praison/Agent-Recipes/manifest.yaml:1-300 | Template index with metadata |
| Recipe Runtime | /Users/praison/Agent-Recipes/agent_recipes/recipe_runtime/core.py:1-386 | RecipeRunner, RecipeConfig |

Discovery Precedence (from discovery.py:58-62):

  1. ~/.praison/templates (custom, priority 1)
  2. ~/.config/praison/templates (custom, priority 2)
  3. ./.praison/templates (project, priority 3)
  4. agent_recipes.templates (package, lowest priority)

Template Structure (TEMPLATE.yaml):

name: transcript-generator
version: "1.0.0"
requires:
  tools: [youtube_tool]
  packages: [openai]
  env: [OPENAI_API_KEY]
workflow: workflow.yaml
agents: agents.yaml
config: {...}  # Schema for user inputs
defaults: {...}
cli: {...}  # CLI integration

B) SEMANTIC MODEL (Taxonomy)

Precise Definitions

| Concept | Definition | Scope | Portability |
|---------|------------|-------|-------------|
| Project Config (agents.yaml + tools.py) | Local, ad-hoc agent/workflow definition for immediate execution in CWD | Single project | Not portable (no metadata, no dependency declaration) |
| Template (TEMPLATE.yaml + workflow.yaml) | Distributable, dependency-aware agent configuration bundle with metadata | Shareable across projects | Fully portable (versioned, dependency-checked) |
| Recipe | Template + specialized tools from praisonai-tools.recipe_tools for domain-specific tasks (video, audio, docs) | Shareable + domain-specific | Fully portable with external dependency declarations |
| Example | Documentation-only code snippet | Reference only | Copy-paste starting point |

Key Differentiators

| Aspect | Project Config | Template/Recipe |
|--------|----------------|-----------------|
| Dependency Model | Implicit (tools.py must exist) | Explicit (requires.packages, requires.env, requires.external) |
| Versioning | None | version: "1.0.0" in TEMPLATE.yaml |
| Override Model | Direct file edit | config schema + defaults + CLI args |
| Security Posture | Arbitrary code execution (tools.py) | TemplateSecurity class with allowlists |
| Reproducibility | Low (no lockfiles) | Medium (package versions in requires) |
| Distribution | Manual copy | praisonai templates install github:... |
| CLI Parity | praisonai agents.yaml | praisonai templates run <name> |

C) SIDE-BY-SIDE COMPARISON MATRIX

| Dimension | agents.yaml + tools.py | Templates/Recipes |
|-----------|------------------------|-------------------|
| Discovery | CWD only | 4-level precedence (custom → config → project → package) |
| Precedence Rules | tools.py > tools/ folder | Higher priority dir overrides lower |
| Input Schema | Freeform YAML | config section with types, defaults, required |
| Expressive Power | Full (any YAML structure) | Full + variable substitution {{var}} |
| Tool Integration | Auto-load from tools.py | requires.tools + tool_override.py resolution |
| Tool Packaging | None (file must exist) | Bundled in template dir or from praisonai-tools |
| Dependency Management | None | requires.packages, requires.env, requires.external |
| Reproducibility | Low | Medium (declared deps, no lockfile) |
| Distribution | Git clone / copy | praisonai templates install, cache, registry |
| Onboarding UX | Create agents.yaml + tools.py | praisonai templates run <name> [args] |
| Failure Modes | “tools.py not found” (silent) | Dependency checker with install hints |
| Multi-agent Attribution | Via framework (agent_id in logs) | Same + template metadata in run.json |
| Security | Arbitrary code execution | TemplateSecurity.is_source_allowed(), sanitization |
| CLI Commands | praisonai [agents.yaml] | praisonai templates list/search/info/run/init |
| Extensibility | Add functions to tools.py | Add templates to ~/.praison/templates |
| Backwards Compat | N/A (original) | _normalize_yaml_config() accepts old field names |

D) USER CONFUSION AUDIT

Identified Overlap Points

  1. Both use agents.yaml filename
    • Project config: praisonai agents.yaml
    • Template: agents_file: str = "agents.yaml" in TemplateConfig (line 36 of loader.py)
    • Confusion: User sees “agents.yaml” in both contexts
  2. Both use tools.py
    • Project config: Auto-loaded from CWD
    • Template: Can include tools.py in template directory
    • Confusion: Same filename, different discovery mechanisms
  3. CLI overlap
    • praisonai agents.yaml (project config)
    • praisonai templates run transcript-generator (template)
    • Confusion: Two different invocation patterns for similar outcomes
  4. Documentation uses “template” loosely
    • Docs reference “agents.yaml” as a “template” in some places
    • Example: /Users/praison/PraisonAIDocs/docs/nocode/introduction.mdx uses “agents.yaml” terminology
    • No clear distinction in docs between “project config” vs “distributable template”
  5. Workflow vs agents.yaml naming
    • Templates use workflow.yaml as primary
    • Project config uses agents.yaml as primary
    • Both are valid entry points with normalization

Specific Confusion Sources

| Source | Location | Issue |
|--------|----------|-------|
| CLI help | main.py:701 | Lists both templates and agents.yaml without explaining difference |
| Docs | Multiple .mdx files | “agents.yaml” used without clarifying it’s project-local |
| TEMPLATE.yaml | agents_file: str | Suggests templates contain agents.yaml (they do, but relationship unclear) |
| Error messages | agents_generator.py | No hint about templates when agents.yaml fails |

E) RECOMMENDATION

Decision: KEEP BOTH with CLEAR SEPARATION

Rationale:

  1. Different Use Cases
    • Project Config (agents.yaml + tools.py): Rapid prototyping, local development, iteration
    • Templates/Recipes: Distribution, sharing, reproducibility, dependency management
  2. No Functional Duplication
    • Project config has NO dependency checking, NO versioning, NO distribution
    • Templates have ALL of these features
    • They serve fundamentally different lifecycle stages
  3. Minimal Changes Required
    • The systems are already cleanly separated in code
    • Only documentation and messaging need clarification

Proposed Changes (Minimal)

  1. Terminology Clarification in Docs
    • Rename “agents.yaml” references to “Project Configuration” or “Local Config”
    • Add clear section: “Project Config vs Templates”
  2. CLI Help Enhancement
    • Add explanatory text distinguishing the two modes
  3. Error Message Improvement
    • When agents.yaml not found, suggest templates as alternative
  4. No Code Changes Required
    • The architecture is sound
    • Templates are a superset of project config functionality

F) IMPLEMENTATION

Based on analysis, the systems are already cleanly separated. The confusion is purely terminological/documentation-based, not architectural.

Added clarifying section to /Users/praison/Agent-Recipes/README.md:5-14:

## Templates vs Project Config

| Concept | What It Is | When to Use |
|---------|------------|-------------|
| **Project Config** (agents.yaml + tools.py) | Local, ad-hoc agent definition | Rapid prototyping |
| **Templates/Recipes** (this repo) | Distributable, versioned bundles | Production use |

G) VERIFICATION

# README updated successfully
cat /Users/praison/Agent-Recipes/README.md | head -15
# Shows new "Templates vs Project Config" section

CONCLUSION

Missing = 0

  • ✅ No code duplication exists
  • ✅ Systems serve different purposes
  • ✅ Architecture is sound
  • ✅ Documentation clarification added
  • ✅ No breaking changes required

The user’s concern stemmed from terminology confusion, not architectural duplication. The README update provides clear guidance on when to use each approach.

Categories
Praison AI

Creating AI Powered Applications the Simplest Way

Building AI-powered applications doesn’t have to be complicated. With PraisonAI’s modular architecture, you can go from zero to a working AI agent in minutes.

Quick Start with CLI

Run a pre-built AI template with one command:

pip install praisonai agent-recipes
praisonai templates run ai-video-editor --input video.mp4

List available templates:

praisonai templates list

Add custom tools:

praisonai tools add ./my_tools.py

Python Code Approach

Create an agent with custom tools:

from praisonaiagents import Agent

def analyze_data(csv_data: str) -> str:
    """Analyze CSV data.
    
    Args:
        csv_data: CSV string
    
    Returns:
        Analysis summary
    """
    import pandas as pd
    from io import StringIO
    df = pd.read_csv(StringIO(csv_data))
    return f"Rows: {len(df)}, Columns: {list(df.columns)}"

agent = Agent(
    name="analyst",
    role="Data Analyst",
    tools=[analyze_data]
)

result = agent.start("Analyze: name,age\nAlice,30")

Why Modular Architecture?

PraisonAI uses three separate repositories:

| Repository | Purpose |
|------------|---------|
| praisonai | Core framework and CLI |
| praisonai-tools | Custom tool base classes |
| agent-recipes | Ready-to-use templates |

Benefits:

  • Independent versioning – Tools update without breaking core
  • Community contributions – Easy to add templates without touching core
  • Selective installation – Install only what you need
  • Separation of concerns – Core logic vs extensions vs recipes

Auto-Discovery

Tools in ~/.praison/tools/ are automatically discovered. Just drop a Python file there:

# ~/.praison/tools/my_tools.py
def my_tool(query: str) -> str:
    """Process query."""
    return f"Result: {query}"

Your tool is now available to all agents without any configuration.


Create Your Own Template

mkdir my-template
cd my-template

# TEMPLATE.yaml
name: my-template
version: "1.0.0"
variables:
  task:
    required: true

# agents.yaml  
roles:
  agent:
    role: Assistant
    tasks:
      main:
        description: "{{task}}"

Add and run:

praisonai templates add ./my-template
praisonai templates run my-template --task "Hello World"


Categories
Praison AI

PraisonAI TypeScript SDK

Build AI agents in TypeScript with the same simplicity as Python. The PraisonAI TypeScript SDK brings powerful agent orchestration to Node.js with a clean, intuitive API.

Installation

npm install praisonai

Quick Start

Create your first agent in 3 lines:

import { Agent } from 'praisonai';
const agent = new Agent({ instructions: "You are helpful" });
await agent.chat("Hello!");

Core Features

1. Simple Agent

import { Agent } from 'praisonai';

const agent = new Agent({ 
  instructions: "You are a helpful assistant" 
});

const response = await agent.chat("What is TypeScript?");
console.log(response);

2. Agent with Tools

Pass functions directly – schemas are auto-generated:

import { Agent } from 'praisonai';

const getWeather = (city: string) => `Weather in ${city}: 22°C`;
const calculate = (expr: string) => eval(expr).toString(); // demo only: eval is unsafe on untrusted input

const agent = new Agent({
  instructions: "You help with weather and calculations",
  tools: [getWeather, calculate]
});

await agent.chat("What's the weather in Paris?");

3. Multi-Agent System

import { Agent, Agents } from 'praisonai';

const researcher = new Agent({ 
  instructions: "Research topics thoroughly" 
});

const writer = new Agent({ 
  instructions: "Write based on research" 
});

const agents = new Agents([researcher, writer]);
await agents.start();

4. Persistent Memory

import { Agent, db } from 'praisonai';

const agent = new Agent({
  instructions: "You remember conversations",
  db: db("sqlite:./conversations.db"),
  sessionId: "user-123"
});

await agent.chat("My name is Alice");
await agent.chat("What's my name?"); // Remembers: Alice

5. Different LLM Providers

import { Agent } from 'praisonai';

// OpenAI
const gptAgent = new Agent({
  instructions: "You are helpful",
  llm: "gpt-4o-mini"
});

// Anthropic Claude
const claudeAgent = new Agent({
  instructions: "You are helpful",
  llm: "anthropic/claude-3-5-sonnet"
});

// Google Gemini
const geminiAgent = new Agent({
  instructions: "You are helpful",
  llm: "google/gemini-2.0-flash"
});

How to Run

1. Create a TypeScript file

touch agent.ts

2. Add your code

import { Agent } from 'praisonai';

const agent = new Agent({ 
  instructions: "You are helpful" 
});

const response = await agent.chat("Hello!");
console.log(response);

3. Set your API key

export OPENAI_API_KEY="your-key-here"

4. Run with tsx or ts-node

npx tsx agent.ts

Key Features

  • Simple API: One import, one class, one method
  • Auto-schema: Pass functions directly as tools
  • Multi-provider: OpenAI, Anthropic, Google support
  • Persistence: SQLite, PostgreSQL, Redis adapters
  • Memory: Semantic memory for context recall
  • Workflows: Sequential, parallel, conditional flows
  • TypeScript-first: Full type safety


Categories
Praison AI

Knowledge Export Import with Agent

Using Knowledge Export/Import with PraisonAI Agents

PraisonAI Agents now support programmatic export and import of knowledge bases, enabling seamless backup and migration workflows in your AI applications.

Creating an Agent with Knowledge

from praisonaiagents import Agent

# Create agent with knowledge
agent = Agent(
    name='KnowledgeAgent',
    instructions='Answer questions based on the provided knowledge.',
    knowledge=['document.pdf', './docs/'],
    verbose=True
)

# Query the knowledge base
result = agent.start('What are the key features?')

Exporting Knowledge Programmatically

from praisonai.cli.features.knowledge import KnowledgeHandler

# Initialize handler
handler = KnowledgeHandler()

# Export knowledge base
handler.action_export(['backup.json'])

# Export with custom path
handler.action_export(['/backups/knowledge_2024.json'])

Importing Knowledge Programmatically

from praisonai.cli.features.knowledge import KnowledgeHandler

# Initialize handler
handler = KnowledgeHandler()

# Import knowledge base
handler.action_import(['backup.json'])

# Import from custom path
handler.action_import(['/backups/knowledge_2024.json'])

Complete Example: Backup and Restore

from praisonaiagents import Agent
from praisonai.cli.features.knowledge import KnowledgeHandler
import tempfile
import os

# Create agent with knowledge
doc_path = 'healthcare_guide.txt'
agent = Agent(
    name='HealthcareAgent',
    role='Healthcare AI Expert',
    knowledge=[doc_path],
    instructions='Provide accurate healthcare information.'
)

# Query before backup
result1 = agent.start('What are AI applications in healthcare?')

# Backup knowledge base
handler = KnowledgeHandler()
backup_file = 'knowledge_backup.json'
handler.action_export([backup_file])

# Later: Restore knowledge base
handler.action_import([backup_file])

# Verify restored knowledge
result2 = agent.start('What are AI applications in healthcare?')

# Cleanup
os.unlink(backup_file)

Key Features

  • JSON format with version metadata
  • Preserves document content and metadata
  • Multi-agent safe with session support
  • Works with all vector stores (Chroma, Qdrant, Pinecone, etc.)
  • Automatic timestamped filenames

The export/import functionality integrates seamlessly with PraisonAI’s knowledge management system, supporting all retrieval strategies and rerankers.

Categories
Praison AI

Knowledge Export Import CLI

Knowledge Export/Import CLI Commands

PraisonAI now supports exporting and importing knowledge bases via CLI commands, enabling easy backup, migration, and sharing of your RAG knowledge bases.

Export Knowledge Base

# Export to default timestamped file
praisonai knowledge export

# Export to specific file
praisonai knowledge export backup.json

# Export to specific path
praisonai knowledge export /path/to/knowledge_backup.json

Import Knowledge Base

# Import from JSON file
praisonai knowledge import backup.json

# Import from specific path
praisonai knowledge import /path/to/knowledge_backup.json

All Knowledge CLI Commands

praisonai knowledge add <source>        # Add documents
praisonai knowledge query "<question>"   # Query with RAG
praisonai knowledge list                 # List documents
praisonai knowledge clear                # Clear knowledge base
praisonai knowledge stats                # Show statistics
praisonai knowledge export <file.json>   # Export to JSON
praisonai knowledge import <file.json>   # Import from JSON
praisonai knowledge help                 # Show help

Use Cases

  • Backup knowledge bases before updates
  • Migrate knowledge between environments
  • Share knowledge bases with team members
  • Version control for RAG data
  • Disaster recovery

The export format is JSON with version metadata, making it easy to track and manage your knowledge base snapshots.

Categories
AI

Knowledge Stack: Query Engines SDK

Answer questions using sub-question decomposition and summarization with Python SDK.

Quick Start

from praisonaiagents.knowledge.query_engine import (
    QueryMode,
    decompose_question,
    SimpleQueryEngine,
    SubQuestionEngine
)

# Decompose complex questions
sub_questions = decompose_question(
    "What is Python and how do I install it?"
)
# Returns: ["What is Python?", "How do I install Python?"]

# Use sub-question engine
engine = SubQuestionEngine()
result = engine.query(
    "Compare Python and Java",
    retriever=my_retriever
)

Query Modes

from praisonaiagents.knowledge.query_engine import QueryMode

class QueryMode(Enum):
    DEFAULT = "default"
    SUB_QUESTION = "sub_question"
    SUMMARIZE = "summarize"

Custom Query Engines

from praisonaiagents.knowledge.query_engine import QueryResult

class MyQueryEngine:
    name = "my_engine"
    mode = QueryMode.DEFAULT
    
    def query(self, question, retriever, top_k=10):
        docs = retriever.retrieve(question, top_k=top_k)
        answer = self._synthesize(question, docs)
        return QueryResult(
            answer=answer,
            sources=[{"text": d.text} for d in docs]
        )

Learn more in the documentation.

Categories
AI

Knowledge Stack: Query Engines CLI

Configure query processing modes for answering questions with CLI flags.

Available Modes

  • default – Direct retrieval + synthesis
  • sub_question – Decompose into sub-questions
  • summarize – Summarize all retrieved content

Usage

# Default mode
praisonai knowledge query "What is Python?" --query-mode default

# Sub-question mode (complex queries)
praisonai knowledge query "Compare Python and Java for web development" \
  --query-mode sub_question

# Summarize mode
praisonai knowledge query "Summarize the documentation" --query-mode summarize

Selection Guide

  • Simple factual questions → default
  • Multi-part questions → sub_question
  • Overview requests → summarize

Learn more in the documentation.

Categories
AI

Knowledge Stack: Index Types SDK

Implement vector, keyword (BM25), and hybrid indices for document retrieval in Python.

Quick Start

from praisonaiagents.knowledge.index import (
    IndexType,
    KeywordIndex,
    get_index_registry
)

# Use built-in keyword index (BM25)
index = KeywordIndex()

# Add documents
index.add_documents([
    "Python is a programming language",
    "Machine learning with Python",
    "Java enterprise development"
])

# Query with BM25 scoring
results = index.query("Python programming", top_k=2)
for result in results:
    print(f"{result['text']} (score: {result['score']})")

Index Types

from praisonaiagents.knowledge.index import IndexType

class IndexType(Enum):
    VECTOR = "vector"     # Semantic embeddings
    KEYWORD = "keyword"   # BM25 term matching
    HYBRID = "hybrid"     # Combined scoring
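For intuition about what "keyword" scoring does, BM25 can be sketched in plain Python. This is an illustrative implementation of the standard BM25 formula over the documents from the example above, not PraisonAI's internal one:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query using the BM25 formula."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)
            if df == 0 or tf[term] == 0:
                continue
            # Inverse document frequency: rarer terms weigh more
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            f = tf[term]
            # Term frequency saturates with k1; b normalizes by document length
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(tokens) / avgdl))
        scores.append(score)
    return scores

docs = [
    "Python is a programming language",
    "Machine learning with Python",
    "Java enterprise development",
]
scores = bm25_scores("python programming", docs)
print(scores)
```

The first document matches both query terms and scores highest; the Java document matches neither and scores zero.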

Custom Indices

class MyIndex:
    name = "my_index"
    index_type = IndexType.KEYWORD

    def __init__(self):
        self._docs = []

    def add_documents(self, documents, metadatas=None, ids=None):
        # Store raw document text for later scoring
        self._docs.extend(documents)

    def query(self, query, top_k=10):
        # Naive scoring: count query-term occurrences per document
        terms = query.lower().split()
        scored = [
            {"text": d, "score": sum(d.lower().count(t) for t in terms)}
            for d in self._docs
        ]
        return sorted(scored, key=lambda r: r["score"], reverse=True)[:top_k]

# Register
registry = get_index_registry()
registry.register("my_index", MyIndex)

Learn more in the documentation.