# Dev Council: A Multi-Agent LLM Framework for Collaborative Software Development
An open-source framework that implements a council of AI agents working together to analyze, plan, and design software projects. Through a structured, role-based workflow, the agents produce comprehensive software specifications, milestone plans, and system architecture diagrams, all before a single line of production code is written.
## Table of Contents

- Features
- How It Works
- Project Structure
- Tech Stack
- Installation
- Quick Start
- Architecture
- Contributing
- License
## Features

- Multi-Agent Collaboration - Multiple specialized LLM agents work together to analyze requirements and design solutions
- Automated SRS Generation - Creates IEEE 830-compliant Software Requirements Specification documents
- Milestone Planning - Automatically generates project milestones with task breakdowns and LLM assignments
- System Architecture Diagrams - Generates Mermaid-based flow diagrams for system design visualization
- Local LLM Support - Runs on local Ollama instances (Qwen, DeepSeek, Mistral, etc.); no API keys required
- End-to-End Workflow - From user request to comprehensive project documentation in minutes
## How It Works

Dev Council's workflow demonstrates how multiple AI agents can collaborate to solve complex planning problems:
```text
User Request
                     ↓
┌─────────────────────────────────────────┐
│       Manager Agent (Orchestrator)      │
└─────────────────────────────────────────┘
                     ↓
┌─────────────────────────────────────────┐
│           Project Lead Agent            │
│   └─ Analyzes requirements              │
│   └─ Creates SRS document               │
│   └─ Breaks down into subtasks          │
│   └─ Assigns each to specialized LLMs   │
└─────────────────────────────────────────┘
                     ↓
┌─────────────────────────────────────────┐
│             Milestone Agent             │
│   └─ Extracts milestones from SRS       │
│   └─ Creates planning timeline          │
│   └─ Assigns LLMs to milestones         │
└─────────────────────────────────────────┘
                     ↓
┌─────────────────────────────────────────┐
│           Flow Diagram Agent            │
│   └─ Visualizes system architecture     │
│   └─ Generates Mermaid diagrams         │
└─────────────────────────────────────────┘
                     ↓
     Comprehensive Project Documentation
```
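In code, this pipeline maps naturally onto a LangGraph state graph. The sketch below is illustrative only: the node functions are stubs standing in for the real agents in `backend/app/agents/`, and the state keys are assumed names.

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class CouncilState(TypedDict):
    request: str
    srs: str
    milestones: str
    diagram: str


def project_lead(state: CouncilState) -> dict:
    # Analyze the request and draft an SRS (stubbed for illustration).
    return {"srs": f"SRS for: {state['request']}"}


def milestone_agent(state: CouncilState) -> dict:
    # Extract milestones from the SRS and assign LLMs to each.
    return {"milestones": "M1: auth, M2: sync, M3: offline support"}


def flow_diagram_agent(state: CouncilState) -> dict:
    # Emit a Mermaid diagram of the proposed architecture.
    return {"diagram": "flowchart TD; Client --> API --> DB"}


workflow = StateGraph(CouncilState)
workflow.add_node("project_lead", project_lead)
workflow.add_node("milestone", milestone_agent)
workflow.add_node("flow_diagram", flow_diagram_agent)
workflow.add_edge(START, "project_lead")
workflow.add_edge("project_lead", "milestone")
workflow.add_edge("milestone", "flow_diagram")
workflow.add_edge("flow_diagram", END)

app = workflow.compile()
result = app.invoke({"request": "Build a collaborative document editor"})
print(result["diagram"])
```

Each node reads the shared state and returns only the keys it updates, which is what lets the agents stay decoupled while still building on each other's output.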
## Project Structure

```text
dev-council/
├── backend/                      # Python backend with LLM agents
│   ├── app/
│   │   ├── agents/               # Agent implementations
│   │   │   ├── manager.py        # Orchestration agent
│   │   │   ├── project_lead.py   # SRS generation
│   │   │   ├── milestone.py      # Milestone planning
│   │   │   └── flow_diagram.py   # Architecture diagrams
│   │   ├── core/
│   │   │   └── config.py         # Configuration settings
│   │   ├── structured_outputs/   # Output schemas
│   │   ├── tools/                # Utility tools
│   │   │   ├── llm_resources.py  # LLM discovery
│   │   │   ├── mermaid.py        # Diagram generation
│   │   │   └── save_file.py      # File operations
│   │   └── main.py               # Entry point
│   ├── outputs/                  # Generated documentation
│   ├── requirements.txt
│   └── pyproject.toml
│
├── frontend/                     # Next.js web interface
│   ├── app/                      # Next.js app directory
│   ├── package.json
│   └── tsconfig.json
│
└── README.md
```
## Tech Stack

### Backend

- Framework: LangChain, LangGraph
- LLM Engines: Ollama (local inference); supports Qwen, DeepSeek, and Mistral
- Language: Python 3.10+
- Key Libraries:
  - `langchain` - Agent creation and orchestration
  - `langgraph` - Agent workflow management
  - `mermaidian` - Diagram generation
  - `markdown-pdf` - Document conversion
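For a sense of how these pieces fit together, instantiating one of the local models through LangChain's Ollama integration looks roughly like this (a sketch assuming the `langchain-ollama` package; the model name and URL mirror the `.env` defaults shown under Installation):

```python
from langchain_ollama import ChatOllama

# Local model served by Ollama; no API key required.
llm = ChatOllama(
    model="qwen2.5:1.5b",
    base_url="http://localhost:11434",
    temperature=0,
)

reply = llm.invoke("List the main sections of an IEEE 830 SRS.")
print(reply.content)
```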
### Frontend

- Framework: Next.js 16
- UI: React 19, TypeScript
- Styling: Tailwind CSS 4
- Tooling: ESLint, PostCSS
## Installation

### Prerequisites

- Python 3.10 or higher
- Node.js 18 or higher
- Ollama installed and running (for LLM inference)
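Before installing anything, you can optionally verify that Ollama is reachable and the required models are pulled. This helper script is not part of the repo; it simply queries Ollama's standard `/api/tags` endpoint:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint
REQUIRED = {"qwen2.5:1.5b", "deepseek-r1:14b", "mistral-small:24b"}

# GET /api/tags returns every model available to the local Ollama server.
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    models = {m["name"] for m in json.load(resp).get("models", [])}

missing = REQUIRED - models
if missing:
    print("Missing models:", ", ".join(sorted(missing)))
else:
    print("All required models are available.")
```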
### Backend Setup

```bash
cd backend

# Create a virtual environment (optional but recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create a .env file with your configuration
cp .env.example .env  # or create .env manually
```

Configure `.env` (`backend/.env`):

```env
GPT_LLM=qwen2.5:1.5b
QWEN_LLM=qwen2.5:1.5b
DEEPSEEK_LLM=deepseek-r1:14b
MISTRAL_LLM=mistral-small:24b
OLLAMA_URL=http://localhost:11434
OLLAMA_TEMPERATURE=0
```
### Frontend Setup

```bash
cd frontend

# Install dependencies
npm install

# Build frontend (optional)
npm run build
```

## Quick Start

Make sure Ollama is running and loaded with the models specified in your `.env` file:
```bash
ollama serve
```

Then start the backend:

```bash
cd backend
python main.py
```

Enter your project request when prompted:
```text
User request: Create a real-time collaborative document editing application with user authentication, conflict resolution, and offline support
```
The system will:
- Analyze the request and generate an SRS document
- Break down requirements into milestones
- Create system architecture diagrams
- Save all outputs to the `outputs/` directory as `.md` and `.pdf` files
Check the `backend/outputs/` directory for:

- `project_plan.md` / `project_plan.pdf` - Complete SRS document
- `milestone.md` / `milestone.pdf` - Milestone planning table
- Flow diagrams (Mermaid format)
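The PDF copies come from the `markdown-pdf` library listed in the tech stack. If you want to regenerate one by hand, a minimal sketch (assuming `markdown-pdf`'s `MarkdownPdf`/`Section` API; the repo may drive it differently) looks like:

```python
from markdown_pdf import MarkdownPdf, Section

# Re-render the generated SRS markdown as a PDF, with a table of
# contents built from headings down to level 2.
with open("outputs/project_plan.md", encoding="utf-8") as f:
    pdf = MarkdownPdf(toc_level=2)
    pdf.add_section(Section(f.read()))
    pdf.save("outputs/project_plan.pdf")
```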
Start the frontend dev server:

```bash
cd frontend
npm run dev
```

Open http://localhost:3000 to view the web interface.
## Architecture

Each agent in the council specializes in a specific aspect of software planning:
| Agent | Role | Responsibility |
|---|---|---|
| Manager | Orchestrator | Coordinates workflow between specialized agents |
| Project Lead | Analyst & Planner | Creates IEEE 830-compliant SRS documents |
| Milestone Agent | Timeline Planner | Breaks down work into logical milestones |
| Flow Diagram Agent | Architect | Generates system architecture visualizations |
### Design Principles

- Separation of Concerns - Each agent focuses on its own domain
- Local-First - Uses Ollama for private, local LLM inference
- Structured Outputs - Generates standardized documentation formats
- Reusable Tools - Common utilities (file I/O, diagram generation, LLM discovery)
- Extensibility - Easy to add new agents or modify existing ones
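In practice, "Structured Outputs" usually means binding a schema to the model so agents exchange validated objects instead of free text. A minimal sketch (the `Milestone` schema here is hypothetical; the real schemas live in `backend/app/structured_outputs/`):

```python
from pydantic import BaseModel, Field
from langchain_ollama import ChatOllama

class Milestone(BaseModel):
    name: str = Field(description="Short milestone title")
    tasks: list[str] = Field(description="Tasks that make up the milestone")
    assigned_llm: str = Field(description="Model responsible for this milestone")

llm = ChatOllama(model="qwen2.5:1.5b", temperature=0)

# with_structured_output constrains generation to the schema, so the
# caller receives a validated Milestone object rather than raw text.
structured_llm = llm.with_structured_output(Milestone)
milestone = structured_llm.invoke("Plan the authentication milestone for a chat app.")
print(milestone.name, milestone.tasks)
```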
### Configuration

Edit `backend/app/core/config.py` to customize LLM models and Ollama settings:
```python
import os

class Settings:
    GPT_LLM = os.getenv("GPT_LLM", "qwen2.5:1.5b")
    QWEN_LLM = os.getenv("QWEN_LLM", "qwen2.5:1.5b")
    DEEPSEEK_LLM = os.getenv("DEEPSEEK_LLM", "deepseek-r1:14b")
    MISTRAL_LLM = os.getenv("MISTRAL_LLM", "mistral-small:24b")
    OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
    OLLAMA_TEMPERATURE = float(os.getenv("OLLAMA_TEMPERATURE", "0"))
```

## Contributing

We welcome contributions! Here's how you can help:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
### Development Setup

```bash
# Backend development
cd backend
pip install -r requirements.txt
# Make your changes and test

# Frontend development
cd frontend
npm install
npm run dev
# Make your changes and test the UI
```

## Roadmap

### Completed

- Manager Agent - Orchestration and workflow coordination
- Project Lead Agent - SRS generation and requirements analysis
- Milestone Agent - Project breakdown and timeline planning
- Flow Diagram Agent - System architecture visualization
### Planned

- Code Generation Agent - Generate implementation code from SRS
  - Language-specific code generators (Python, JavaScript, TypeScript, etc.)
  - Architecture implementation templates
  - Database schema generation
- Code Review Agent - Automated code analysis and best practices validation
- Test Case Generation Agent - Create unit and integration tests
- Bug detection and security vulnerability scanning
- Performance optimization recommendations
- Finalization Agent - Consolidate generated code and documentation
  - API documentation generation
  - Deployment configuration generation (Docker, K8s, etc.)
  - Project structure finalization and cleanup
- Web UI for project submissions and result visualization
- Support for additional LLM providers (OpenAI, Claude, etc.)
- Integration with Git for version control
- Docker containerization for easy deployment
- Multi-agent debate and consensus framework
- Custom agent creation framework
- Batch processing for multiple projects
- Result caching and optimization
## Acknowledgments

- Built with LangChain and LangGraph
- LLM inference powered by Ollama
- UI built with Next.js and Tailwind CSS
Made with ❤️ by the OpenLLM-Council
