Q&A Bot (lagbot_qa)

LangChain-based conversational agent for knowledge retrieval from the Lagrange era
Status: Archived (concepts active) | Era: 2022-2023 | Repository: lagbot_qa-main
The lagbot_qa was a LangChain-based conversational agent designed for knowledge retrieval and user assistance, representing our first foray into AI-powered interfaces.
Overview
Purpose
Create an intelligent assistant capable of:
- Answering questions about the fXYZ ecosystem
- Retrieving relevant documentation
- Providing context-aware responses
- Integrating with messaging platforms
Key Features
| Feature | Description |
|---|---|
| Natural Language Processing | LangChain agent for query understanding |
| Vector Embeddings | Towhee ML pipelines for semantic search |
| Knowledge Retrieval | RAG (Retrieval Augmented Generation) pattern |
| Telegram Integration | Direct user interaction via bot |
| Conversational Memory | Context preservation across messages |
Architecture
System Design

```
┌─────────────────────────────────────────────────────────────┐
│ User Interfaces │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │
│ │ Telegram │ │ Discord │ │ Web Widget │ │
│ └──────┬──────┘ └──────┬──────┘ └────────┬────────┘ │
│ │ │ │ │
└─────────┼─────────────────┼────────────────────┼────────────┘
│ │ │
└─────────────────┼────────────────────┘
│
┌───────▼───────┐
│ LangChain │
│ Agent Core │
│ │
│ - Query Parse │
│ - Tool Select │
│ - Response │
└───────┬───────┘
│
┌─────────────────┼─────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Vector │ │ Knowledge │ │ Tool │
│ Store │ │ Graph │ │ Actions │
│ (Towhee) │ │ Query │ │ │
└─────────────┘  └─────────────┘  └─────────────┘
```

RAG Pipeline

```
User Query: "What is the Florin token?"
│
▼
┌─────────────────────────────────────┐
│ 1. Query Embedding │
│ - Convert to vector (Towhee) │
│    - Dimension: 384                 │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ 2. Similarity Search │
│ - Search vector store │
│ - Top-k nearest neighbors (k=5) │
│ - Filter by relevance threshold │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ 3. Context Assembly │
│ - Retrieve source documents │
│ - Rank by relevance score │
│ - Assemble context window │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ 4. Response Generation │
│ - Prompt with context + query │
│ - Generate response (GPT-3.5) │
│ - Format for platform │
└─────────────────────────────────────┘
```
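Condensed into code, the four stages look roughly like this. This is a minimal sketch, not the archived implementation: it assumes a LangChain-compatible `vector_store` and chat `llm` already exist, and the relevance threshold is illustrative.

```python
# Minimal sketch of the four RAG stages above; illustrative, not archived code.
# Assumes a LangChain-compatible `vector_store` and a chat `llm` already exist.

RELEVANCE_THRESHOLD = 0.7  # illustrative cutoff, not recovered from the archive

def answer(query: str, vector_store, llm) -> str:
    # Stages 1-2: embed the query and run a top-k similarity search.
    # (similarity_search_with_score embeds internally, so stage 1 is implicit.)
    hits = vector_store.similarity_search_with_score(query, k=5)

    # Filter by relevance; note that some stores return a distance rather than
    # a similarity, in which case the comparison flips to <=.
    docs = [doc for doc, score in hits if score >= RELEVANCE_THRESHOLD]

    # Stage 3: assemble the context window from the retrieved documents.
    context = "\n\n".join(doc.page_content for doc in docs)

    # Stage 4: prompt the model with context + query.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
    return llm.invoke(prompt).content
```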
Implementation

LangChain Agent

```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
class LagbotQA:
def __init__(self, vector_store, tools):
self.llm = ChatOpenAI(
model="gpt-3.5-turbo",
temperature=0.7
)
self.memory = ConversationBufferWindowMemory(
memory_key="chat_history",
k=10, # Keep last 10 exchanges
return_messages=True
)
self.tools = tools + [
self._create_retrieval_tool(vector_store)
]
self.agent = create_react_agent(
llm=self.llm,
tools=self.tools,
prompt=self._create_prompt()
)
self.executor = AgentExecutor(
agent=self.agent,
tools=self.tools,
memory=self.memory,
verbose=True
)
async def query(self, user_input: str) -> str:
"""Process user query and return response"""
result = await self.executor.ainvoke({
"input": user_input
})
return result["output"]
def _create_retrieval_tool(self, vector_store):
"""Create tool for knowledge retrieval"""
from langchain.tools import Tool
def search_knowledge(query: str) -> str:
docs = vector_store.similarity_search(query, k=5)
return "\n\n".join([doc.page_content for doc in docs])
return Tool(
name="knowledge_search",
description="Search the fXYZ knowledge base for relevant information",
func=search_knowledge
        )
```
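`_create_prompt()` is referenced above but its body is not preserved in this excerpt. A ReAct prompt for `create_react_agent` must expose the `{tools}`, `{tool_names}`, and `{agent_scratchpad}` variables (plus `{chat_history}` here, to match the memory key); a plausible reconstruction, not the archived prompt, would be:

```python
from langchain.prompts import PromptTemplate

# Plausible reconstruction of the prompt built by _create_prompt(); the
# archived prompt text is not preserved here. create_react_agent requires
# the {tools}, {tool_names}, and {agent_scratchpad} variables.
REACT_TEMPLATE = """You are the fXYZ Q&A assistant. Answer using the tools below.

Available tools:
{tools}

Use this format:
Question: the input question
Thought: what to do next
Action: one of [{tool_names}]
Action Input: the input to the action
Observation: the action's result
... (Thought/Action/Observation can repeat)
Final Answer: the answer to the original question

Previous conversation:
{chat_history}

Question: {input}
{agent_scratchpad}"""

def _create_prompt() -> PromptTemplate:
    return PromptTemplate.from_template(REACT_TEMPLATE)
```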
Towhee Embeddings

```python
from towhee import pipe, ops
class TowheePipeline:
"""ML pipeline for document embeddings using Towhee"""
def __init__(self):
# Embedding pipeline
self.embed_pipe = (
pipe.input('text')
.map('text', 'vec', ops.sentence_embedding.sbert(
model_name='all-MiniLM-L6-v2'
))
.output('vec')
)
# Search pipeline
self.search_pipe = (
pipe.input('query', 'collection')
.map('query', 'query_vec', ops.sentence_embedding.sbert(
model_name='all-MiniLM-L6-v2'
))
.flat_map(('query_vec', 'collection'), 'result',
ops.ann_search.milvus_client(
host='localhost',
port='19530'
))
.output('result')
)
def embed_documents(self, documents: list) -> list:
"""Generate embeddings for documents"""
embeddings = []
for doc in documents:
result = self.embed_pipe(doc)
embeddings.append(result.get())
return embeddings
def search(self, query: str, k: int = 5) -> list:
"""Search for similar documents"""
results = self.search_pipe(query, 'knowledge_base')
        return results.get()[:k]
```
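Typical usage of the pipeline looked roughly like this. A sketch only: the sample documents are invented, and it assumes a local Milvus instance on port 19530 with a populated `knowledge_base` collection, as the pipeline definitions above imply.

```python
# Illustrative usage; assumes Milvus on localhost:19530 with a populated
# 'knowledge_base' collection, as the pipeline definitions imply.
pipeline = TowheePipeline()

# Sample documents (invented for illustration, not from the archived corpus).
docs = [
    "The Florin token is the native asset of the fXYZ ecosystem.",
    "lagbot_qa answers questions about the fXYZ ecosystem.",
]
vectors = pipeline.embed_documents(docs)
print(f"embedded {len(vectors)} documents")

# Semantic search against the knowledge base.
for hit in pipeline.search("What is the Florin token?", k=5):
    print(hit)
```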
Telegram Integration

```python
from telegram import Update
from telegram.ext import Application, CommandHandler, MessageHandler, filters
class TelegramBot:
def __init__(self, token: str, qa_agent: LagbotQA):
self.app = Application.builder().token(token).build()
self.qa = qa_agent
# Register handlers
self.app.add_handler(CommandHandler("start", self.start))
self.app.add_handler(CommandHandler("help", self.help))
self.app.add_handler(MessageHandler(
filters.TEXT & ~filters.COMMAND,
self.handle_message
))
async def start(self, update: Update, context):
await update.message.reply_text(
"Welcome to Lagrange Q&A Bot! "
"Ask me anything about the fXYZ ecosystem."
)
async def help(self, update: Update, context):
await update.message.reply_text(
"Commands:\n"
"/start - Start conversation\n"
"/help - Show this help\n\n"
"Just type your question and I'll do my best to answer!"
)
async def handle_message(self, update: Update, context):
user_query = update.message.text
# Show typing indicator
await update.message.chat.send_action("typing")
# Get response from QA agent
response = await self.qa.query(user_query)
await update.message.reply_text(response)
def run(self):
        self.app.run_polling()
```
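Wiring the pieces together took only a few lines. In this sketch, `build_vector_store()` and the `TELEGRAM_BOT_TOKEN` environment variable are hypothetical stand-ins for whatever the archived `main.py` actually did:

```python
import os

# Hypothetical entry point; the archived main.py is not reproduced here, and
# build_vector_store() stands in for however the knowledge base was loaded.
def main():
    vector_store = build_vector_store()           # hypothetical helper
    qa_agent = LagbotQA(vector_store, tools=[])   # extra tools are optional
    bot = TelegramBot(
        token=os.environ["TELEGRAM_BOT_TOKEN"],   # assumed variable name
        qa_agent=qa_agent,
    )
    bot.run()  # blocking; run_polling() manages its own event loop

if __name__ == "__main__":
    main()
```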
What Evolved

Current Fixie Architecture
The lagbot_qa concepts directly evolved into the Fixie AI agent system:
| lagbot_qa Component | Current Fixie Component |
|---|---|
| LangChain agent | Letta agent framework |
| Towhee embeddings | Letta archival memory |
| Vector store search | Graphiti temporal memory |
| Telegram bot | Multi-platform integration |
| Conversation memory | Letta core memory |
Improvements Made
- Agent Framework: Migrated from LangChain to Letta for better memory management
- Embedding Model: Upgraded from sentence-transformers to specialized embedding models
- Memory System: Added temporal knowledge graph (Graphiti) for relationship tracking
- Tool System: Custom tools for Neo4j queries, market data, network operations
- Multi-Agent: Support for member, project, and network-level agents
Current Integration
```tsx
// Modern Fixie chat using Vercel AI SDK
import { useChat } from '@ai-sdk/react';
function FixieChat({ agentId }: { agentId: string }) {
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: `/api/fixie/${agentId}/stream`,
});
return (
<div>
{messages.map(m => (
<div key={m.id}>
{m.role}: {m.content}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
</form>
</div>
);
}
```

Technical Details
Knowledge Sources
The bot's knowledge base was built from:
| Source | Documents | Tokens |
|---|---|---|
| Whitepaper versions | 19 | ~150k |
| Technical documentation | 45 | ~200k |
| Development notes | 271 | ~100k |
| Archived conversations | 4,948 | ~2M |
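Before any of this could be queried, the sources were chunked and embedded into the vector store. A minimal ingestion sketch reusing the Towhee pipeline above; the chunk size and overlap are illustrative assumptions, not values recovered from the archive:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Illustrative ingestion step; chunk_size and chunk_overlap are assumptions,
# not values recovered from the archived configuration.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

def ingest(raw_documents: list[str], pipeline: TowheePipeline) -> list:
    """Split raw documents into chunks and embed them with the Towhee pipeline."""
    chunks = []
    for doc in raw_documents:
        chunks.extend(splitter.split_text(doc))
    return pipeline.embed_documents(chunks)
```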
Performance Metrics
| Metric | Value | Notes |
|---|---|---|
| Response latency | 2-5s | Including embedding + generation |
| Relevance accuracy | ~85% | Based on user feedback |
| Context window | 4096 tokens | GPT-3.5 limitation |
| Concurrent users | 50+ | Telegram bot capacity |
Lessons Learned
What Worked
- RAG Pattern: Retrieval-augmented generation significantly improved accuracy
- Conversational Memory: Context preservation made multi-turn conversations natural
- Towhee Pipelines: Declarative ML pipelines simplified embedding management
- Platform Integration: Telegram provided easy user access
What Didn't Work
- Context Window Limits: GPT-3.5's 4k tokens often truncated important context
- Embedding Quality: General-purpose embeddings missed domain-specific nuances
- Memory Persistence: Session-only memory lost valuable conversation history
- Single Agent: One agent couldn't specialize for different use cases
Key Insights
These learnings directly informed the Fixie architecture:
- Use agent-specific memory (core + archival)
- Implement temporal knowledge graphs for relationship tracking
- Support multi-agent specialization (market analysis, query building, etc.)
- Persist all conversations for learning
Archive Location

```
archive/fxyz-knowledge-project/old-projects/lagrange-repos/lagbot_qa-main/
├── src/
│ ├── agent/
│ │ ├── langchain_agent.py
│ │ └── tools.py
│ ├── embeddings/
│ │ ├── towhee_pipeline.py
│ │ └── vector_store.py
│ ├── integrations/
│ │ ├── telegram_bot.py
│ │ └── discord_bot.py
│ └── main.py
├── data/
│ ├── knowledge_base/
│ └── embeddings/
├── config/
│ └── settings.yaml
└── requirements.txt
```