A local-first AI trading platform that integrates live cTrader market data with LLM-powered analysis to generate structured trade decisions. A multi-agent workflow enables optional autonomous execution.
Includes a Strategy Studio for creating, backtesting, and saving strategies that auto-load into the app—supporting both manual experimentation and agent-driven deployment.
Runs 100% locally with Docker + Ollama. No OpenAI keys required.
- Highlights
- Demo Screens
- Architecture
- Repository Layout
- Quickstart
- Configuration
- How It Works
- Agents
- API (Selected Endpoints)
- Strategy Studio
- Natural‑Language Control (LLM‑first)
- UI Walkthrough
- Performance & Model Tuning
- Troubleshooting
- Stability & Reliability
- Roadmap
- License
- Disclaimer
- **Two modes**
  - Manual: Pick a symbol/timeframe and run AI analysis for a structured trade idea.
  - Autonomous Agent: Background agents monitor markets, emit signals, and can autotrade with your risk settings.
- **LLM-based analysis**
  - Sends the latest OHLC rows + computed SMC features to your Ollama model.
  - Strict, machine-readable output:

    ```json
    { "signal": "long" | "short" | "no_trade", "sl": 3389.06, "tp": 3417.74, "confidence": 0.56, "reasons": ["plain English explanation"] }
    ```
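Since downstream code acts on this JSON, it is worth validating before use. A minimal sketch (a hypothetical helper, not code from this repo):

```python
import json

ALLOWED_SIGNALS = {"long", "short", "no_trade"}

def parse_decision(raw: str) -> dict:
    """Parse and sanity-check an LLM trade decision; raise ValueError on violations."""
    d = json.loads(raw)
    if d.get("signal") not in ALLOWED_SIGNALS:
        raise ValueError(f"bad signal: {d.get('signal')!r}")
    if not 0.0 <= float(d.get("confidence", -1)) <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    if d["signal"] != "no_trade":
        sl, tp = float(d["sl"]), float(d["tp"])
        if sl <= 0 or tp <= 0:
            raise ValueError("sl/tp must be positive prices")
    return d

raw = '{"signal": "long", "sl": 3389.06, "tp": 3417.74, "confidence": 0.56, "reasons": ["..."]}'
decision = parse_decision(raw)
print(decision["signal"], decision["confidence"])
```

Rejecting malformed decisions early keeps the executor from acting on hallucinated or truncated model output.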
- **Live trading integration (cTrader OpenAPI)**
  - Realtime candles, open positions & pending orders
  - Market & pending order placement with SL/TP amendment logic
  - Paper and Live modes
- **Multi-agent workflow (production-style roles)**
  - Watcher/Observer, Scout/Pattern Detector, Guardian/Risk, Executor/Trader, Scribe/Journal, Commander/Supervisor
- **Strategy Studio (integrated) and natural-language control (LLM-first)**
  - Create strategies, generate code, run quick backtests, save to `backend/strategies_generated`, and hot-reload.
  - Configure strategies, run backtests, and switch agent settings via chat — no code or sliders required.
  - The system parses your intent, validates parameters, and executes safely with confirmations.
- **Automatic trade journaling**
  - All executed trades (UI, Agent, or Chatbot) are logged to local SQLite (`data/journal.db`).
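Because the journal is plain SQLite, the standard library is enough to inspect or prototype against it. An illustrative schema (the actual table layout in `data/journal.db` may differ):

```python
import sqlite3

# In-memory DB for illustration; the app writes to data/journal.db.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS trades (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ts TEXT NOT NULL,
        source TEXT NOT NULL,        -- 'ui' | 'agent' | 'chatbot'
        symbol TEXT NOT NULL,
        side TEXT NOT NULL,          -- 'long' | 'short'
        sl REAL, tp REAL, confidence REAL
    )
""")
conn.execute(
    "INSERT INTO trades (ts, source, symbol, side, sl, tp, confidence) VALUES (?,?,?,?,?,?,?)",
    ("2024-01-01T00:00:00Z", "agent", "XAUUSD", "long", 3389.06, 3417.74, 0.56),
)
rows = conn.execute("SELECT symbol, side, confidence FROM trades").fetchall()
print(rows)
```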
- **Fast, modern frontend**
  - React 18 + TypeScript + Vite SPA served by NGINX. Lightweight-Charts with overlays (SMA/EMA/VWAP/BB), SL/TP lines, and health chips.
```text
┌─────────────── UI (React + Vite + NGINX) ────────────────┐
│ Dashboard: Manual/Agent control; Lightweight-Charts      │
│ Strategy Studio: create/backtest/save strategies         │
└───────────────▲──────────────────────────────────────────┘
                │ HTTP (FastAPI) via /api proxy
                ▼
┌──────────── Backend (llm-smc) ────────────┐
│ - cTrader client (candles, positions)     │
│ - SMC feature extractor + LLM analyzer    │
│ - Multi-agent runner + controller         │
│ - Strategy registry (auto-load saved)     │
│ - Order execution                         │
└───────────────▲───────────────┬───────────┘
                │               │
                │               ▼
                │        ┌──────────────┐
                │        │    Ollama    │  (e.g., llama3.2)
                │        └──────────────┘
                │
                ▼
   cTrader OpenAPI (live feed & orders)
```
```mermaid
graph TD
    subgraph Main Dashboard
        HD[Header - Strategy Select]
        AN[Run AI Analysis]
        AS[Agent Settings]
        CB[Chatbot]
    end
    subgraph Strategy Studio
        S[StrategyChat]
        R[Result Panel]
    end
    subgraph Backend Services
        H[FastAPI]
        PA[ProgrammerAgent]
        BA[BacktestingAgent]
        CTD[cTrader OpenAPI]
        OLL[Ollama LLM]
        FS[(strategies_generated)]
    end
    %% Dashboard <-> Backend
    HD -->|"GET /api/agent/status"| H
    AN -->|"POST /api/analyze"| H
    AS -->|"POST /api/agent/config"| H
    CB -->|"POST /api/chat/stream"| H
    %% Studio <-> Backend
    S -->|"POST /api/agent/execute_task"| H
    H -->|"task_type = strategy"| PA
    H -->|"task_type = backtest"| BA
    PA -->|"generated code → stdout"| R
    BA -->|"metrics: JSON"| R
    R -->|"Save Strategy"| FS
    %% ⚠️ self-loop can be flaky on GitHub; delete if it breaks
    H -->|"reload strategies"| H
    %% Backend integrations
    H -->|"market data / orders"| CTD
    H -->|"LLM prompts"| OLL
```
For a deeper dive into the system components and agent loop, see ARCHITECTURE.md.
- `backend/`
  - `app.py` – FastAPI app and routes (incl. strategies reload/list)
  - `strategy.py` – Base strategies (SMC, RSI) + loader for generated strategies
  - `programmer_agent.py` – Generates indicator/strategy code (Strategy Studio)
  - `backtesting_agent.py` – Quick backtests (SMA crossover; optional vectorbt)
  - `strategies_generated/` – Saved strategies (auto-loaded)
  - `llm_analyzer.py` – LLM orchestration for analysis
  - `ctrader_client.py` – cTrader OpenAPI integration
  - `journal/` – Trade journaling API + DB
  - `agents/` – Autonomous agent runner (optional)
  - plus helpers: `data_fetcher.py`, `indicators.py`, `smc_features.py`, etc.
- `frontend/`
  - `src/App.tsx` – Routes (`/dashboard`, `/strategy-studio`)
  - `src/pages/StrategyStudio/index.tsx` – Strategy Studio page
  - `src/components/` – Header, Chart, SidePanel, Journal, AIOutput, AgentSettings, StrategyChat, CodeDisplay, BacktestResult
  - `src/services/api.ts` – Backend calls (executeTask, strategies reload)
Related docs: ARCHITECTURE.md, STRATEGY_INTEGRATION_PLAN.md, docker_usage_guide.md.
- Clone & configure

  ```bash
  git clone https://github.com/maghdam/GenAI-MultiAgent-TradingSystem.git
  cd GenAI-MultiAgent-TradingSystem
  ```

  Create `backend/.env`:

  ```env
  # ===== cTrader =====
  CTRADER_CLIENT_ID=...
  CTRADER_CLIENT_SECRET=...
  CTRADER_HOST_TYPE=demo
  CTRADER_ACCESS_TOKEN=...
  CTRADER_ACCOUNT_ID=...

  # ===== LLM =====
  OLLAMA_URL=http://ollama:11434
  OLLAMA_MODEL=llama3.2

  # ===== Optional defaults =====
  DEFAULT_SYMBOL=XAUUSD
  ```

- Bring up the stack

  ```bash
  docker compose up --build -d
  ```

- Open the dashboard at http://localhost:8080
- Use Run AI Analysis for one-off insights
- Use Watch current to add the current pair to the agent watchlist
- Click Start Agent to begin the autonomous loop (adjust thresholds in Agent Settings)
Rebuild reminder: after frontend or backend changes, rebuild containers to keep the UI & API in sync:

```bash
docker compose down && docker compose build --no-cache && docker compose up -d
```
| Key | Description |
|---|---|
| `CTRADER_CLIENT_ID` | cTrader client ID |
| `CTRADER_CLIENT_SECRET` | cTrader client secret |
| `CTRADER_HOST_TYPE` | `demo` or `live` |
| `CTRADER_ACCESS_TOKEN` | Auth token |
| `CTRADER_ACCOUNT_ID` | Account ID |
| `OLLAMA_URL` | e.g., `http://ollama:11434` |
| `OLLAMA_MODEL` | Default model, e.g., `llama3.2` |
| `DEFAULT_SYMBOL` | Optional initial chart symbol |
| `API_KEY` | Optional key required via `x-api-key` header for all API calls |
| `ALLOWED_ORIGINS` | Comma-separated CORS origins (default: `http://localhost:8080`) |
| `NEWS_SUMMARY_ENABLED` | Set to `1` to enable external news fetching/summarization |
| `SMC_AMEND_TOL` | Optional SL/TP amend tolerance (price units) for coarse-precision symbols |
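To see how the optional keys compose, here is an illustrative settings loader with the defaults from the table (the backend's real config code may be structured differently):

```python
import os

def load_settings(env=os.environ) -> dict:
    """Read platform settings, applying the documented defaults for optional keys."""
    return {
        "ctrader_host_type": env.get("CTRADER_HOST_TYPE", "demo"),
        "ollama_url": env.get("OLLAMA_URL", "http://ollama:11434"),
        "ollama_model": env.get("OLLAMA_MODEL", "llama3.2"),
        "default_symbol": env.get("DEFAULT_SYMBOL"),   # optional initial chart symbol
        "api_key": env.get("API_KEY"),                 # optional x-api-key auth
        "allowed_origins": env.get("ALLOWED_ORIGINS", "http://localhost:8080").split(","),
        "news_summary_enabled": env.get("NEWS_SUMMARY_ENABLED") == "1",
    }

settings = load_settings({"OLLAMA_MODEL": "llama3.2", "ALLOWED_ORIGINS": "http://a,http://b"})
print(settings["allowed_origins"])
```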
Frontend (Vite) should set `VITE_API_KEY` when `API_KEY` is enabled so the SPA forwards the header on every request.
All `/api/*` requests are proxied to the backend (`llm-smc:4000`) via `frontend/nginx.conf`:

```nginx
location /api {
    proxy_pass http://llm-smc:4000;
}
```

- UI requests `/api/candles` → backend fetches from cTrader → UI renders the chart
- Manual: UI posts to `/api/analyze` → backend builds SMC features → Ollama → returns `{ signal, sl, tp, confidence, reasons }`
- Agent: `backend/agents/runner.py` loops over the watchlist, repeats the analysis, and emits signals; if `autotrade=true` & `mode=live`, it places/updates trades
- Chatbot: `backend/chat/service.py` parses intent and calls services (e.g., `place_order`) with confirmation
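From a script, the manual flow is a single POST to `/api/analyze`. A standard-library sketch that mirrors what the UI sends (field names as documented in this README; uncomment the call once the stack is up):

```python
import json
import urllib.request

def build_analyze_request(symbol, timeframe, api_key=None):
    """Prepare a POST /api/analyze request mirroring what the UI sends."""
    payload = {"symbol": symbol, "timeframe": timeframe, "max_bars": 200, "max_tokens": 256}
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["x-api-key"] = api_key  # only needed when API_KEY is set in backend/.env
    return urllib.request.Request(
        "http://localhost:8080/api/analyze",
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )

req = build_analyze_request("XAUUSD", "H1")
# decision = json.load(urllib.request.urlopen(req))  # requires the stack to be running
print(req.full_url, req.get_method())
```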
When Start Agent is ON, the supervisor wakes up every `interval_sec` and:

- Pulls fresh candles for each `(symbol, timeframe)` in the watchlist
- Builds features and queries the LLM
- Emits a signal with a confidence score
- If `autotrade=true` and mode is Live, the Executor opens/closes positions according to thresholds and SL/TP rules
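Stripped of I/O, one supervisor tick reduces to fetch, analyze, gate, act. An illustrative reduction with stubbed dependencies (hypothetical names; the real loop lives in `backend/agents/runner.py`):

```python
def agent_tick(watchlist, fetch_candles, analyze, execute, *,
               autotrade=False, live=False, min_confidence=0.6):
    """One supervisor wake-up: analyze every pair, trade only when all gates pass."""
    signals = []
    for symbol, timeframe in watchlist:
        candles = fetch_candles(symbol, timeframe)
        decision = analyze(candles)          # -> {signal, sl, tp, confidence, ...}
        signals.append((symbol, timeframe, decision))
        ok = (decision["signal"] != "no_trade"
              and decision["confidence"] >= min_confidence)
        if ok and autotrade and live:        # paper mode never reaches execute()
            execute(symbol, decision)
    return signals

# Stub the dependencies to show the control flow.
executed = []
out = agent_tick(
    [("XAUUSD", "H1")],
    fetch_candles=lambda s, tf: [],
    analyze=lambda c: {"signal": "long", "sl": 1.0, "tp": 2.0, "confidence": 0.7},
    execute=lambda s, d: executed.append(s),
    autotrade=True, live=True,
)
print(len(out), executed)
```

Keeping the gates (`autotrade`, `live`, confidence threshold) in one place makes the "signals-only" default safe by construction.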
- Watcher/Observer: fetches OHLC for each watchlist pair
- Scout/Pattern Detector: computes SMC features & prompt inputs
- Analyzer: queries the LLM and parses a structured `TradeDecision`
- Guardian/Risk: enforces confidence/SL/TP rules
- Executor/Trader: places, amends, or closes orders when `autotrade` is ON (Live only)
- Scribe/Journal: records signals and executed trades
- Commander/Supervisor: manages the watchlist; starts/stops loops per pair
- Health & LLM
  - `GET /api/health` → `{ status, connected }`
  - `GET /api/llm_status` → `{ ollama: 200 | "unreachable", model }`
- Market Data
  - `GET /api/symbols`
  - `GET /api/candles?symbol=EURUSD&timeframe=M15&indicators=SMA%20(20)&indicators=VWAP`
- Manual Analysis
  - `POST /api/analyze` — optional overrides: `model`, `max_bars`, `max_tokens`, `options`
- Trading
  - `POST /api/execute_trade`
  - `GET /api/open_positions`
  - `GET /api/pending_orders`
- Agent Control
  - `GET /api/agent/config`
  - `POST /api/agent/config`
  - `POST /api/agent/watchlist/add?symbol=XAUUSD&timeframe=H1`
  - `POST /api/agent/watchlist/remove?symbol=XAUUSD&timeframe=H1`
  - `GET /api/agent/signals?n=10`
  - `GET /api/agent/status`
- Journal
  - `GET /api/journal/trades`
- Chat
  - `POST /api/chat/stream` (streaming assistant)
Strategy Studio is a dedicated page focused on strategy creation and backtesting, integrated with the same backend.
- Access: `http://localhost:8080/strategy-studio` (also linked from the dashboard)
- Create strategies: prompt, e.g., “Create an SMA crossover strategy”; the result shows copyable code
- Backtest: select Symbol, Timeframe, and Bars; returns compact metrics (Total Return, Win Rate, Max Drawdown, Sharpe, etc.)
- Save: “Save Strategy” persists code to `backend/strategies_generated/<name>.py` (bind-mounted on the host)

Endpoints:

- `POST /api/agent/execute_task`
  - `task_type`: `calculate_indicator | create_strategy | backtest_strategy | save_strategy`
  - Backtest params: `{ symbol, timeframe, num_bars }`
  - Save params: `{ strategy_name, code }`
- `GET|POST /api/strategies/reload` — re-scan `backend/strategies_generated` and register any `signals(df, ...)` strategies
- `GET /api/strategies` — list available strategy names and last load errors
Create your own strategies:

- File location
  - Put files in `backend/strategies_generated/` (e.g., `backend/strategies_generated/my_sma.py`).
- Minimal template

  ```python
  import pandas as pd

  def signals(df: pd.DataFrame, fast: int = 50, slow: int = 200) -> pd.Series:
      """Emit +1 on a bullish cross, -1 on a bearish cross, 0 otherwise."""
      f = df['close'].rolling(fast, min_periods=fast).mean()
      s = df['close'].rolling(slow, min_periods=slow).mean()
      return (f > s).astype(int).diff().fillna(0)
  ```

- Make it appear in the UI
  - Use Strategy Studio → Save Strategy (auto-reload), click "Reload Strategies" in the header, or call `GET /api/strategies/reload`.
  - Verify with `GET /api/strategies`.

Notes

- Keep top-level code unindented (no spaces before `import`/`def`).
- You can add keyword params to `signals(...)` (e.g., `fast`, `slow`).
- For richer backtests, install `vectorbt` in the backend image.
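Note that the template's `.diff()` turns the per-bar long/flat state into cross events (+1 on a bullish cross, -1 on a bearish cross). A quick pandas-only sanity check on synthetic data (separate from the built-in backtester):

```python
import pandas as pd

# Synthetic close series: downtrend then uptrend, so the fast MA crosses the slow MA once.
close = pd.Series([10, 9, 8, 7, 6, 7, 8, 9, 10, 11, 12, 13], dtype=float)
df = pd.DataFrame({"close": close})

def signals(df, fast=3, slow=6):
    f = df["close"].rolling(fast, min_periods=fast).mean()
    s = df["close"].rolling(slow, min_periods=slow).mean()
    return (f > s).astype(int).diff().fillna(0)

events = signals(df)                 # nonzero only at cross bars
position = signals(df).cumsum()      # reconstruct per-bar position from cross events
print(events[events != 0].to_dict(), position.iloc[-1])
```

Checking a strategy against a series whose crossings you can count by hand catches off-by-one and NaN-handling mistakes before you run it on live candles.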
For implementation notes, see STRATEGY_INTEGRATION_PLAN.md.
This system treats plain human language, such as English, as the control surface. You can configure strategies, run backtests, and change agent settings via chat — no coding or UI sliders required.
What you can say:
- “Analyze XAUUSD on M5 with an SMA crossover fast 20 slow 50.”
- “Backtest RSI length 14 on EURUSD H1 for 5000 bars with 2 bps fee and 1 bps slippage.”
- “Switch the agent to use my ‘ma’ strategy with fast 20 slow 50 and enable it.”
- “Save this strategy as ‘ma_fast20_slow50’ and reload strategies.”
- “Reload strategies and set the agent back to SMC.”
How it works:
- Intent parsing: the system extracts action (analyze, backtest, save, set strategy), instrument, timeframe, strategy, and parameters (e.g., fast/slow, RSI length, fees, slippage).
- Validation: parameters are coerced/clamped to safe ranges (e.g., fast ≥ 2, slow ≥ fast+1, fees/slippage ≥ 0).
- Tool execution: the request is routed to the right tool (Analyze, Backtest, Save Strategy, Reload Strategies, Agent Config).
- Confirmations: for agent changes or trading, the system presents the intended update and asks for confirmation before applying.
Tips for phrasing:
- Be explicit when you want more control: “fast 20 slow 50”, “length 14”, “for 5000 bars”, “2 bps fee”.
- Reference saved strategies by name: “use strategy ‘ma’” or “save as ‘ma_mytest’”.
- Ask for a reload when you add a new file manually: “reload strategies”.
Transparency:
- After parsing, the UI shows the interpreted parameters (where applicable) so you can confirm what will be executed.
- The backend returns a human‑readable summary and the applied parameters.
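The parse → validate pipeline can be pictured with a toy regex parser that applies the clamping rules quoted above (the real parser is LLM-driven, so this is only a sketch):

```python
import re

def parse_params(text: str) -> dict:
    """Extract numeric parameters like 'fast 20 slow 50' or 'length 14' from a chat message."""
    params = {}
    for key in ("fast", "slow", "length", "bars"):
        m = re.search(rf"\b{key}\s+(\d+)", text, re.IGNORECASE)
        if m:
            params[key] = int(m.group(1))
    return params

def clamp_ma_params(params: dict) -> dict:
    """Coerce MA-crossover params into safe ranges: fast >= 2, slow >= fast + 1."""
    fast = max(2, params.get("fast", 50))
    slow = max(fast + 1, params.get("slow", 200))
    return {**params, "fast": fast, "slow": slow}

p = clamp_ma_params(parse_params("Analyze XAUUSD on M5 with an SMA crossover fast 20 slow 50"))
print(p)
```

Clamping rather than rejecting keeps the chat flow forgiving: a request for "fast 1 slow 1" still produces a valid configuration instead of an error.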
- Status chips: cTrader connectivity + current LLM model
- Indicators: add SMA/EMA/VWAP/BB to server-side candle fetch
- AI Output: renders JSON decision + explanation
- SL/TP: drawn on chart when provided
- Recent Signals: latest agent outputs (click to preview)
- Open Positions / Pending Orders: live from cTrader
- AI Assistant: chat widget (bottom-right)
Watch current

- Adds the current `symbol:timeframe` to the agent watchlist
- Frontend wiring: `frontend/src/services/api.ts` (`addToWatchlist`) and `frontend/src/App.tsx` (`handleWatchCurrent`)

Start/Stop Agent

- The toggle sends supported fields to `/api/agent/config` and refreshes status upon success
Global defaults (`backend/.env`):

```env
OLLAMA_URL=http://ollama:11434
OLLAMA_MODEL=llama3.2
```

Per-request overrides (`/api/analyze`):

```json
{
  "symbol": "XAUUSD",
  "timeframe": "H1",
  "indicators": ["SMA (20)", "EMA (20)"],
  "model": "llama3.2",
  "max_bars": 200,
  "max_tokens": 256,
  "options": { "num_thread": 6 }
}
```

Tips for CPU speed

- Stick with a small model such as `llama3.2`
- Use `max_bars` ~ 150-250 and `max_tokens` ~ 192-256
- Ensure `OLLAMA_URL` points to your running Ollama service
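Conceptually, each request's overrides win over the env-derived defaults; a sketch of that merge (the `DEFAULTS` values and hard caps here are hypothetical, chosen to encode the CPU tips above):

```python
# Hypothetical defaults; the real ones come from backend/.env.
DEFAULTS = {"model": "llama3.2", "max_bars": 400, "max_tokens": 512, "options": {}}

def effective_llm_config(request: dict) -> dict:
    """Per-request overrides win over defaults; unknown keys are ignored."""
    cfg = {**DEFAULTS, **{k: v for k, v in request.items() if k in DEFAULTS}}
    # CPU-friendly caps in the spirit of the tips above (illustrative policy).
    cfg["max_bars"] = min(cfg["max_bars"], 250)
    cfg["max_tokens"] = min(cfg["max_tokens"], 256)
    return cfg

cfg = effective_llm_config({"model": "llama3.2", "max_bars": 200, "symbol": "XAUUSD"})
print(cfg)
```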
- LLM feels slow on CPU → use `llama3.2` and reduce `max_bars`/`max_tokens`
- Agent not trading → set Mode=Live and Autotrade=On; confirm the cTrader connection
- No symbols → wait for cTrader to load or verify `.env`
- Watchlist & “Watch current” → `POST /api/agent/watchlist/add?...` should return `{ ok: true }`; confirm via `/api/agent/status`
- Tasks not running → ensure Start Agent is enabled; `/api/health` shows `connected: true`; `/api/llm_status` shows a reachable Ollama
Dependency versions are pinned to keep builds reproducible. Recent updates resolved dependency conflicts (notably around Twisted) so the stack starts reliably in Docker.
- Fundamental Analysis: integrate news/event data into the chatbot
- More strategies (MACD, Volume Profile, Order Flow)
- Backtesting & walk-forward
- Message-bus multi-agent comms + memory
- Risk dashboard (exposure, VaR)
- Cloud deploy templates (Render / Fly.io)
- Optional chart image analysis (vision model)
This project is licensed under the MIT License — see LICENSE.
This project is for education and research. It is not financial advice. Trading involves substantial risk. Do not use live trading with a real account before extensive testing on demo environments.




