- CRITICAL: Database can be wrong, Drift is source of truth
- Incident Dec 9: Database showed -$19.33, Drift showed -$22.21 actual loss ($2.88 missing)
- Root cause: Retry loop chaos caused a multi-chunk close, only the first chunk was recorded
- User mandate: "drift tells the truth not you" - always verify with the Drift API
- Pattern: Query Drift → Compare → Report discrepancies → Correct database
- This is NON-NEGOTIABLE for a real money trading system
AI Agent Instructions for Trading Bot v4
🚀 NEW AGENT QUICK START
First Time Here? Follow this sequence to get up to speed:
1. Read this file first (.github/copilot-instructions.md)
   - Contains all AI agent guidelines and development standards
   - Top 10 Critical Pitfalls summary (see docs/COMMON_PITFALLS.md for full 72 pitfalls)
   - Financial system verification requirements (MANDATORY reading)
2. Navigate with docs/README.md (Documentation Hub)
   - Comprehensive documentation structure with 8 organized categories
   - Multiple navigation methods: by topic, date, or file type
   - Quick Start workflows for different development tasks
   - Links to all subdirectories: setup, architecture, bugs, roadmaps, etc.
3. Get project context from main README.md
   - Live system status and configuration
   - Architecture overview and key features
   - File structure and deployment information
4. Explore specific topics via category subdirectories as needed
   - docs/setup/ - Configuration and environment setup
   - docs/architecture/ - Technical design and system overview
   - docs/bugs/ - Known issues and critical fixes
   - docs/roadmaps/ - Planned features and optimization phases
   - docs/guides/ - Step-by-step implementation guides
   - docs/deployments/ - Deployment procedures and verification
   - docs/analysis/ - Performance analysis and data studies
   - docs/history/ - Project evolution and milestones
Key Principle: "NOTHING gets lost" - all documentation is cross-referenced, interconnected, and comprehensive.
🔍 "DO I ALREADY HAVE THIS?" - Quick Feature Discovery
Before implementing ANY feature, check if it already exists! This system has 70+ features built over months of development.
Quick Reference Table
| "I want to..." | Existing Feature | Search Term |
|---|---|---|
| Re-enter after stop-out | Stop Hunt Revenge System - Auto re-enters quality 85+ signals after price reverses through original entry | grep -i "stop hunt revenge" |
| Scale position by quality | Adaptive Leverage System - 10x for quality 95+, 5x for borderline signals | grep -i "adaptive leverage" |
| Test different timeframes | Multi-Timeframe Data Collection - Parallel data collection for 5min/15min/1H/4H/Daily | grep -i "multi-timeframe" |
| Monitor blocked signals | BlockedSignal Tracker - Tracks quality-blocked signals with price analysis | grep -i "blockedsignal" |
| Survive server failures | HA Failover - Secondary server with auto DNS failover (90s detection) | grep -i "high availability" |
| Validate re-entries | Re-Entry Analytics System - Fresh TradingView data + recent performance scoring | grep -i "re-entry analytics" |
| Backtest parameters | Distributed Cluster Backtester - 65,536 combo sweep on EPYC cluster | grep -iE "cluster|backtester" |
| Handle RPC rate limits | Retry with Exponential Backoff - 5s → 10s → 20s retry for 429 errors (sketch below) | grep -i "retryWithBackoff" |
| Track best/worst P&L | MAE/MFE Tracking - Built into Position Manager, updated every 2s | grep -iE "mae|mfe" |
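A minimal sketch of the retry-with-exponential-backoff pattern referenced in the table above (5s → 10s → 20s on 429s). The function name matches the search term, but the signature, attempt count, and error-shape detection here are assumptions, not the codebase's actual helper:
// Hypothetical sketch; the real helper may differ in signature and error detection.
async function retryWithBackoff<T>(fn: () => Promise<T>, attempts = 3, baseDelayMs = 5_000): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (err: any) {
      const isRateLimit = err?.status === 429 || /429/.test(String(err?.message))
      if (!isRateLimit || i === attempts - 1) throw err
      const delayMs = baseDelayMs * 2 ** i // 5s → 10s → 20s
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  throw new Error('unreachable')
}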
Quick Search Commands
# Search main documentation
grep -i "KEYWORD" .github/copilot-instructions.md
# Search all documentation
grep -ri "KEYWORD" docs/
# Check live system logs
docker logs trading-bot-v4 | grep -i "KEYWORD" | tail -20
# List database tables (shows what data is tracked)
docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "\dt"
# Check environment variables
cat .env | grep -i "KEYWORD"
# Search codebase
grep -r "KEYWORD" lib/ app/ --include="*.ts"
Feature Discovery by Category
📊 Entry/Exit Logic:
- ATR-based TP/SL (dynamic targets based on volatility)
- TP2-as-runner (40% runner after TP1, configurable)
- ADX-based runner SL (adaptive positioning by trend strength)
- Adaptive trailing stop (real-time 1-min ADX adjustments)
- Emergency stop (-2% hard limit)
🛡️ Risk Management:
- Adaptive leverage (quality-based position sizing)
- Direction-specific thresholds (LONG 90+, SHORT 80+)
- Per-symbol sizing (SOL/ETH independent controls)
- Phantom trade auto-closure (size mismatch detection)
- Dual stops (soft TRIGGER_LIMIT + hard TRIGGER_MARKET)
🔄 Re-Entry & Recovery:
- Stop Hunt Revenge (auto re-entry after reversal)
- Re-Entry Analytics (validation with fresh data)
- Market Data Cache (5-min expiry TradingView data)
📈 Monitoring & Analysis:
- Position Manager (2s price checks, MAE/MFE tracking)
- BlockedSignal Tracker (quality-blocked signal analysis)
- Multi-timeframe collection (parallel data gathering)
- Rate limit monitoring (429 error tracking + analytics)
- Drift health monitor (memory leak detection + auto-restart)
🏗️ High Availability:
- Secondary server (Hostinger standby)
- Database replication (PostgreSQL streaming)
- DNS auto-failover (90s detection via INWX API)
- Orphan position recovery (startup validation)
🔧 Developer Tools:
- Distributed cluster (EPYC parameter sweep)
- Test suite (113 tests, 7 test files)
- CI/CD pipeline (GitHub Actions)
- Persistent logger (survives container restarts)
Decision Flowchart: Does This Feature Exist?
┌─────────────────────────────────────────────────────────────┐
│ 1. Search copilot-instructions.md │
│ grep -i "feature-name" .github/copilot-instructions.md │
│ │ │
│ ▼ │
│ Found? ──YES──► READ THE SECTION │
│ │ │
│ NO │
│ ▼ │
│ 2. Search docs/ directory │
│ grep -ri "feature-name" docs/ │
│ │ │
│ ▼ │
│ Found? ──YES──► READ THE DOCUMENTATION │
│ │ │
│ NO │
│ ▼ │
│ 3. Check database schema │
│ cat prisma/schema.prisma | grep -i "related-table" │
│ │ │
│ ▼ │
│ Found? ──YES──► FEATURE LIKELY EXISTS │
│ │ │
│ NO │
│ ▼ │
│ 4. Check docker logs │
│ docker logs trading-bot-v4 | grep -i "feature" | tail │
│ │ │
│ ▼ │
│ Found? ──YES──► FEATURE IS ACTIVE │
│ │ │
│ NO │
│ ▼ │
│ 5. Check git history │
│ git log --oneline --all | grep -i "feature" | head -10 │
│ │ │
│ ▼ │
│ Found? ──YES──► MAY BE ARCHIVED/DISABLED │
│ │ │
│ NO │
│ ▼ │
│ FEATURE DOES NOT EXIST - SAFE TO BUILD │
└─────────────────────────────────────────────────────────────┘
Why This Matters: Historical Examples
| Feature | Built Date | Trigger Event | Value |
|---|---|---|---|
| Stop Hunt Revenge | Nov 20, 2025 | Quality 90 signal stopped out, missed $490 profit on 8.8% reversal | Captures reversal moves |
| Adaptive Leverage | Nov 24, 2025 | Quality 95+ signals had 100% win rate, wanted to scale winners | 2× profit on high quality |
| HA Failover | Nov 25, 2025 | Server went down during active trades | Zero-downtime protection |
| Phantom Detection | Nov 16, 2025 | Position opened with wrong size, no monitoring | Prevents unprotected positions |
| BlockedSignal Tracker | Nov 22, 2025 | Needed data to optimize quality thresholds | Data-driven threshold tuning |
Don't rebuild what exists. Enhance what's proven.
⚠️ CRITICAL: VERIFICATION MANDATE - READ THIS FIRST ⚠️
THIS IS A REAL MONEY TRADING SYSTEM - EVERY CHANGE AFFECTS USER'S FINANCIAL FUTURE
🚨 IRON-CLAD RULES - NO EXCEPTIONS 🚨
1. NEVER SAY "DONE", "FIXED", "WORKING", OR "DEPLOYED" WITHOUT 100% VERIFICATION
This is NOT optional. This is NOT negotiable. This is the MOST IMPORTANT rule in this entire document.
"Working" means:
- ✅ Code deployed (container restarted AFTER commit timestamp)
- ✅ Logs show expected behavior in production
- ✅ Database state matches expectations (SQL verification)
- ✅ Test trade executed successfully (when applicable)
- ✅ All metrics calculated correctly (manual verification)
- ✅ Edge cases tested (0%, 100%, boundaries)
"Working" does NOT mean:
- ❌ "Code looks correct"
- ❌ "Should work in theory"
- ❌ "TypeScript compiled successfully"
- ❌ "Tests passed locally"
- ❌ "Committed to git"
2. TEST EVERY CHANGE IN PRODUCTION
Financial code verification requirements:
- Position Manager changes: Execute test trade, watch full cycle (TP1 → TP2 → exit)
- API endpoints: curl test with real payloads, verify database records
- Calculations: Add console.log for EVERY step, verify units (USD vs tokens, % vs decimal)
- Exit logic: Test actual TP1/TP2/SL triggers, not just code paths
3. DEPLOYMENT VERIFICATION IS MANDATORY
Before declaring anything "deployed":
# 1. Check container start time
docker logs trading-bot-v4 | grep "Server starting" | head -1
# 2. Check latest commit time
git log -1 --format='%ai'
# 3. Verify container NEWER than commit
# If container older: CODE NOT DEPLOYED, FIX NOT ACTIVE
# 4. Test feature-specific behavior
docker logs -f trading-bot-v4 | grep "expected new log message"
Container start time OLDER than commit = FIX NOT DEPLOYED = DO NOT SAY "FIXED"
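The two checks above can also be scripted. This is a minimal sketch, assuming Node's child_process plus the same docker/git commands shown above (it uses the ISO variants of the timestamp formats so Date can parse them); it is not an existing script in the repo:
import { execSync } from 'node:child_process'

// Container start time (docker) vs latest commit time (git, ISO format for parsing).
const containerStartedAt = new Date(
  execSync("docker inspect trading-bot-v4 --format='{{.State.StartedAt}}'").toString().trim(),
)
const latestCommitAt = new Date(execSync("git log -1 --format='%aI'").toString().trim())

if (containerStartedAt > latestCommitAt) {
  console.log('✅ Container restarted after latest commit - code is deployed')
} else {
  console.log('❌ Container older than latest commit - CODE NOT DEPLOYED, FIX NOT ACTIVE')
}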
4. DOCUMENT VERIFICATION RESULTS
Every change must include:
- What was tested
- How it was verified
- Actual logs/SQL results showing correct behavior
- Edge cases covered
- What user should watch for on next real trade
WHY THIS MATTERS:
User is building from $901 → $100,000+ with this system. Every bug costs money. Every unverified change is a financial risk. This is not a hobby project - this is the user's financial future.
Declaring something "working" without verification = causing financial loss
5. ALWAYS CHECK DOCUMENTATION BEFORE MAKING SUGGESTIONS
This is MANDATORY. This is NOT negotiable. DO NOT waste user's time with questions already answered in documentation.
Before making ANY suggestion or asking ANY question:
- ✅ Check .github/copilot-instructions.md (THIS FILE - contains system knowledge, patterns, pitfalls)
- ✅ Check docs/README.md (Documentation hub with organized categories)
- ✅ Check main README.md (Live system status and configuration)
- ✅ Search docs/ subdirectories for specific topics (setup, architecture, bugs, roadmaps, guides)
- ✅ Grep search for keywords related to the topic
- ✅ Check Common Pitfalls section (bugs #1-71) for known issues
Examples of WASTING USER TIME (DO NOT DO THIS):
- ❌ Asking about TradingView rate limits when docs/HELIUS_RATE_LIMITS.md exists
- ❌ Asking about configuration when ENV variables documented
- ❌ Proposing solutions to bugs already fixed (check Common Pitfalls)
- ❌ Questions about architecture already explained in docs
Correct Workflow:
- Read user request
- SEARCH DOCUMENTATION FIRST (copilot-instructions.md + docs/ directory)
- Check if question is already answered
- Check if suggestion is already implemented
- Check if issue is already documented
- ONLY THEN make suggestions or ask questions
Why This Matters:
- User has spent MONTHS documenting this system comprehensively
- Asking already-answered questions = disrespecting the user's documentation effort
- "NOTHING gets lost" is the project principle - USE the documentation
- This is a financial system - wasting time = wasting money
- User expects AI to be KNOWLEDGEABLE, not forgetful
Red Flags Indicating You Didn't Check Docs:
- User responds: "we already have this documented"
- User responds: "check the docs first"
- User responds: "this is in Common Pitfalls"
- User responds: "read the roadmap"
- User has to point you to existing documentation
This rule applies to EVERYTHING: Features, bugs, configuration, architecture, deployment, troubleshooting, optimization, analysis.
📋 MANDATORY: ROADMAP MAINTENANCE - NO EXCEPTIONS
THIS IS A CRITICAL REQUIREMENT - NOT OPTIONAL
Why Roadmap Updates Are MANDATORY
User discovered critical documentation bug (Nov 27, 2025):
- Roadmap said: "Phase 3: Smart Entry Timing - NOT STARTED"
- Reality: Fully deployed as Phase 7.1 (718-line smart-entry-timer.ts operational)
- User confusion: "i thought that was already implemented?" → User was RIGHT
- Result: Documentation misleading, wasted time investigating "next feature" already deployed
IRON-CLAD RULES for Roadmap Updates
1. UPDATE ROADMAP IMMEDIATELY AFTER DEPLOYMENT
- ✅ Phase completed → Mark as COMPLETE with deployment date
- ✅ Phase started → Update status to IN PROGRESS
- ✅ Expected impact realized → Document actual data vs expected
- ✅ Commit roadmap changes SAME SESSION as feature deployment
2. VERIFY ROADMAP ACCURACY BEFORE RECOMMENDING FEATURES
- ❌ NEVER suggest implementing features based ONLY on roadmap status
- ✅ ALWAYS grep codebase for existing implementation before recommending
- ✅ Check: Does file exist? Is it integrated? Is ENV variable set?
- ✅ Example: Phase 3 "not started" but smart-entry-timer.ts exists = roadmap WRONG
3. MAINTAIN PHASE NUMBERING CONSISTENCY
- If code says "Phase 7.1" but roadmap says "Phase 3", consolidate naming
- Update ALL references (roadmap files, code comments, documentation)
- Prevent confusion from multiple names for same feature
4. ROADMAP FILES TO UPDATE
- 1MIN_DATA_ENHANCEMENTS_ROADMAP.md (main detailed roadmap)
- docs/1MIN_DATA_ENHANCEMENTS_ROADMAP.md (documentation copy)
- OPTIMIZATION_MASTER_ROADMAP.md (high-level consolidated view)
- Website roadmap API endpoint (if applicable)
- This file's "When Making Changes" section (if new pattern learned)
5. ROADMAP UPDATE CHECKLIST When completing ANY feature or phase:
- Mark phase status: NOT STARTED → IN PROGRESS → COMPLETE
- Add deployment date: ✅ COMPLETE (Nov 27, 2025)
- Document actual impact vs expected (after 50-100 trades data)
- Update phase numbering if inconsistencies exist
- Commit with message: "docs: Update roadmap - Phase X complete"
- Verify website roadmap updated (if applicable)
6. BEFORE RECOMMENDING "NEXT FEATURE"
# 1. Read roadmap to identify highest-impact "NOT STARTED" feature
cat 1MIN_DATA_ENHANCEMENTS_ROADMAP.md | grep "NOT STARTED"
# 2. VERIFY it's actually not implemented (grep for files/classes)
grep -r "SmartEntryTimer" lib/
grep -r "SMART_ENTRY_ENABLED" .env
# 3. If files exist → ROADMAP WRONG, update it first
# 4. Only then recommend truly unimplemented features
WHY THIS MATTERS:
User relies on roadmap for strategic planning. Wrong roadmap = wrong decisions = wasted development time = delayed profit optimization. In a real money system, time wasted = money not earned.
Outdated roadmap = wasted user time = lost profits
📝 MANDATORY: DOCUMENTATION + GIT COMMIT: INSEPARABLE WORKFLOW - NUMBER ONE PRIORITY
⚠️ CRITICAL: THIS IS THE #1 MANDATORY RULE - DOCUMENTATION GOES HAND-IN-HAND WITH EVERY GIT COMMIT ⚠️
USER MANDATE (Dec 1, 2025): "in the actual documentation it shall be a number one priority mandatory thing, that which each git commit and push there must be an update to the documentation. this HAS to go hand in hand"
Universal Rule: EVERY Git Commit REQUIRES Documentation Update
IRON-CLAD WORKFLOW - NO EXCEPTIONS:
# ❌ WRONG (INCOMPLETE - NEVER DO THIS):
git add [files]
git commit -m "feat: Added new feature"
git push
# STOP! This is INCOMPLETE work. Documentation is MISSING.
# ✅ CORRECT (COMPLETE - ALWAYS DO THIS):
git add [files]
git commit -m "feat: Added new feature"
# MANDATORY NEXT STEP - UPDATE DOCUMENTATION:
# Edit .github/copilot-instructions.md with:
# - What changed and why
# - New patterns/insights/learnings
# - Configuration changes
# - API endpoints added/modified
# - Database schema changes
# - Integration points affected
git add .github/copilot-instructions.md
git commit -m "docs: Document new feature insights and patterns"
git push
# ✅ NOW work is COMPLETE - Code + Documentation together
This is NOT a suggestion. This is NOT optional. This is MANDATORY.
Code without documentation = INCOMPLETE WORK = DO NOT PUSH
Why This is #1 Priority (User's Direct Mandate):
- "I am sick and tired of reminding you" - User has repeatedly emphasized this
- This is a real money trading system - Undocumented changes cause financial losses
- Knowledge preservation - Insights are lost without documentation
- Future AI agents - Need complete context to maintain system integrity
- Time savings - Documented patterns prevent re-investigation
- Financial protection - Trading system knowledge prevents costly errors
When Documentation is MANDATORY (EVERY TIME):
You MUST update .github/copilot-instructions.md when:
- ✅ Adding ANY new feature or component
- ✅ Fixing ANY bug (add to Common Pitfalls section)
- ✅ Changing configuration (ENV variables, defaults, precedence)
- ✅ Modifying API endpoints (add to API Endpoints section)
- ✅ Updating database schema (add to Important fields section)
- ✅ Discovering system behaviors or quirks
- ✅ Implementing optimizations or enhancements
- ✅ Adding new integrations or dependencies
- ✅ Changing data flows or architecture
- ✅ Learning ANY lesson worth remembering
If you learned something valuable → Document it BEFORE pushing
If you solved a problem → Document the solution BEFORE pushing
If you discovered a pattern → Document the pattern BEFORE pushing
The Correct Mindset:
- Documentation is NOT separate work - It's part of completing the task
- Documentation is NOT optional - It's a requirement for "done"
- Documentation is NOT an afterthought - It's planned from the start
- Every git commit is a learning opportunity - Capture the knowledge
Examples of Commits Requiring Documentation:
# Scenario 1: Bug fix reveals system behavior
git commit -m "fix: Correct P&L calculation for partial closes"
# → MUST document: Why averageExitPrice doesn't work, must use realizedPnL
# → MUST add to: Common Pitfalls section
# Scenario 2: New feature with integration requirements
git commit -m "feat: Smart Entry Validation Queue system"
# → MUST document: How it works, when it triggers, integration points
# → MUST add to: Critical Components section
# Scenario 3: Performance optimization reveals insight
git commit -m "perf: Adaptive leverage based on quality score"
# → MUST document: Quality thresholds, why tiers chosen, expected impact
# → MUST add to: Configuration System or relevant section
# Scenario 4: Data analysis reveals filtering requirement
git commit -m "fix: Exclude manual trades from indicator analysis"
# → MUST document: signalSource field, SQL filtering patterns, why it matters
# → MUST add to: Important fields and Analysis patterns sections
Red Flags That Documentation is Missing:
- ❌ User says: "please add in the documentation"
- ❌ User asks: "is this documented?"
- ❌ User asks: "everything documented?"
- ❌ Code commit has NO corresponding documentation commit
- ❌ Bug fix with NO Common Pitfall entry
- ❌ New feature with NO integration notes
- ❌ You push code without updating copilot-instructions.md
Integration with Existing Sections:
When documenting, update these sections as appropriate:
- Common Pitfalls: Add bugs/mistakes/lessons learned
- Critical Components: Add new systems/services
- Configuration System: Add new ENV variables
- When Making Changes: Add new development patterns
- API Endpoints: Add new routes and their purposes
- Database Schema: Add new tables/fields and their meaning
- Architecture Overview: Add new integrations or data flows
Remember:
Documentation is not bureaucracy - it's protecting future profitability by preserving hard-won knowledge. In a real money trading system, forgotten lessons = repeated mistakes = financial losses.
Git commit + Documentation = Complete work. One without the other = Incomplete.
This is the user's #1 priority. Make it yours too.
IRON-CLAD RULE: UPDATE THIS FILE FOR EVERY SIGNIFICANT CHANGE
When to update .github/copilot-instructions.md (MANDATORY):
- New system behaviors discovered (like 1-minute signal direction field artifacts)
- Data integrity requirements (what fields are meaningful vs meaningless)
- Analysis patterns (how to query data correctly, what to filter out)
- Architecture changes (new components, integrations, data flows)
- Database schema additions (new tables, fields, their purpose and usage)
- Configuration patterns (ENV variables, feature flags, precedence rules)
- Common mistakes (add to Common Pitfalls section immediately)
- Verification procedures (how to test features, what to check)
This file is the PRIMARY KNOWLEDGE BASE for all future AI agents and developers.
What MUST be documented here:
- ✅ Why things work the way they do (not just what they do)
- ✅ What fields/data should be filtered out in analysis
- ✅ How to correctly query and interpret database data
- ✅ Known artifacts and quirks (like direction field in 1-min signals)
- ✅ Data collection vs trading signal distinctions
- ✅ When features are truly deployed vs just committed
DO NOT make user remind you to update this file. It's AUTOMATIC:
Change → Code → Test → Git Commit → UPDATE COPILOT-INSTRUCTIONS.MD → Git Commit
If you implement something without documenting it here, the work is INCOMPLETE.
What qualifies as "valuable insights" requiring documentation:
- System behaviors discovered during implementation or debugging
- Lessons learned from bugs, failures, or unexpected outcomes
- Design decisions and WHY specific approaches were chosen
- Integration patterns that future changes must follow
- Data integrity rules discovered through analysis
- Common mistakes that cost time/money to discover
- Verification procedures that proved critical
- Performance insights from production data
Why this matters:
- Knowledge preservation: Insights are lost without documentation
- Future AI agents: Need context to avoid repeating mistakes
- Time savings: Documented patterns prevent re-investigation
- Financial protection: Trading system knowledge prevents costly errors
- User expectation: "please add in the documentation" shouldn't be necessary
The mindset:
- Every git commit = potential learning opportunity
- If you learned something valuable → document it
- If you solved a tricky problem → document the solution
- If you discovered a pattern → document the pattern
- Documentation is not separate work - it's part of completing the task
📚 Common Pitfalls Documentation Structure (Dec 5, 2025)
Purpose: Centralized documentation of all production incidents, bugs, and lessons learned from real trading operations.
Documentation Reorganization (PR #1):
- Problem Solved: Original copilot-instructions.md was 6,575 lines with 72 pitfalls mixed throughout
- Solution: Extracted to dedicated docs/COMMON_PITFALLS.md (1,556 lines)
- Result: 45% reduction in main file size (6,575 → 3,608 lines)
New Structure:
docs/COMMON_PITFALLS.md
├── Quick Reference Table (all 72 pitfalls with severity, category, date)
├── 🔴 CRITICAL Pitfalls (Financial/Data Integrity)
│ ├── Race Conditions & Duplicates (#27, #41, #48, #49, #59, #60, #61, #67)
│ ├── P&L Calculation Errors (#41, #49, #50, #54, #57)
│ └── SDK/API Integration (#2, #24, #36, #44, #66)
├── ⚠️ HIGH Pitfalls (System Stability)
│ ├── Deployment & Verification (#1, #31, #47)
│ └── Database Operations (#29, #35, #58)
├── 🟡 MEDIUM Pitfalls (Performance/UX)
├── 🔵 LOW Pitfalls (Code Quality)
├── Pattern Analysis (common root causes)
└── Contributing Guidelines (how to add new pitfalls)
Top 10 Critical Pitfalls (Summary):
- Position Manager Never Monitors (#77) - Logs say "added" but isMonitoring=false = $1,000+ losses
- Silent SL Placement Failure (#76) - placeExitOrders() returns SUCCESS with 2/3 orders, no SL protection
- Orphan Cleanup Removes Active Orders (#78) - cancelAllOrders() affects ALL positions on symbol
- Wrong Year in SQL Queries (#75) - Query 2024 dates when current is 2025 = 12× inflated results
- Drift SDK Memory Leak (#1) - JS heap OOM after 10+ hours → Smart health monitoring
- Wrong RPC Provider (#2) - Alchemy breaks Drift SDK → Use Helius only
- P&L Compounding Race Condition (#48, #49, #61) - Multiple closures → Atomic Map.delete()
- Database-First Pattern (#29) - Save DB before Position Manager
- Container Deployment Verification (#31) - Always check container timestamp
- External Closure Race Condition (#67) - 16 duplicate notifications → Atomic lock
How to Use:
- Quick lookup: Check Quick Reference Table in docs/COMMON_PITFALLS.md
- By category: Navigate to severity/category sections
- Pattern recognition: See Pattern Analysis for common root causes
- Adding new pitfalls: Follow Contributing Guidelines template
When Adding New Pitfalls:
- Add full details to docs/COMMON_PITFALLS.md with the standard template
- Assign severity (🔴 Critical, ⚠️ High, 🟡 Medium, 🔵 Low)
- Include: symptom, incident details, root cause, fix, prevention, code example
- Update Quick Reference Table
- If more critical than existing Top 10, update this section
🎯 BlockedSignal Minute-Precision Tracking (Dec 2, 2025 - OPTIMIZED)
Purpose: Track exact minute-by-minute price movements for blocked signals to determine EXACTLY when TP1/TP2 would have been hit
CRITICAL: Data Contamination Discovery (Dec 5, 2025):
- Problem: All TradingView alerts (15min, 1H, 4H, Daily) were attached to OLD v9 version with different settings
- Impact: 31 BlockedSignal records from wrong indicator version (multi-timeframe data collection)
- Solution: Marked contaminated data with blockReason='DATA_COLLECTION_OLD_V9_VERSION'
- Exception: 1-minute data (11,398 records) kept as DATA_COLLECTION_ONLY - not affected by the alert version issue (pure market data sampling)
- SQL Filter: Exclude old data with WHERE blockReason != 'DATA_COLLECTION_OLD_V9_VERSION' (see the sketch after this list)
- Fresh Start: New signals from corrected alerts will use blockReason='DATA_COLLECTION_ONLY'
- Database State: Old data preserved for historical reference, clearly marked to prevent analysis contamination
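For code-level analysis the same filter can be expressed through Prisma. This is a hedged sketch: the model/field names (prisma.blockedSignal, blockReason) are assumed from the schema naming used in this document, and the helper name is illustrative:
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// Exclude contaminated v9-era records from BlockedSignal analysis.
async function loadCleanBlockedSignals() {
  return prisma.blockedSignal.findMany({
    where: { blockReason: { not: 'DATA_COLLECTION_OLD_V9_VERSION' } },
  })
}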
Critical Optimization (Dec 2, 2025):
- Original Threshold: 30 minutes (arbitrary, inefficient)
- User Insight: "we have 1 minute data, so use it"
- Optimized Threshold: 1 minute (matches data granularity)
- Performance Impact: 30× faster processing (96.7% reduction in wait time)
- Result: 0 signals → 15 signals immediately eligible for analysis
System Architecture:
Data Collection: Every 1 minute (MarketData table) ✓
Processing Wait: 1 minute (OPTIMIZED from 30 min) ✓
Analysis Detail: Every 1 minute (480 points/8h) ✓
Result Storage: Exact minute timestamps ✓
Perfect alignment - all components at 1-minute granularity
Validation Results (Dec 2, 2025):
- Batch Processing: 15 signals analyzed immediately after optimization
- Win Rate (recent 25): 48% TP1 hits, 0 SL losses
- Historical Baseline: 15.8% TP1 win rate (7,427 total signals)
- Recent Performance: 3× better than historical baseline
- Exact Timestamps:
- Signal cmiolsiaq005: Created 13:18:02, TP1 13:26:04 (8.0 min exactly)
- Signal cmiolv2hw005: Created 13:20:01, TP1 13:26:04 (6.0 min exactly)
Code Location:
// File: lib/analysis/blocked-signal-tracker.ts, Line 528
// CRITICAL FIX (Dec 2, 2025): Changed from 30min to 1min
// Rationale: We collect 1-minute data, so use it! No reason to wait longer.
// Impact: 30× faster processing eligibility (0 → 15 signals immediately qualified)
const oneMinuteAgo = new Date(Date.now() - 1 * 60 * 1000)
Why This Matters:
- Matches Data Granularity: 1-minute data collection = 1-minute processing threshold
- Eliminates Arbitrary Delays: No reason to wait 30 minutes when data is available
- Immediate Analysis: Signals qualify for batch processing within 1 minute of creation
- Exact Precision: Database stores exact minute timestamps (6-8 min resolution typical)
- User Philosophy: "we have 1 minute data, so use it" - use available precision fully
Database Fields (Minute Precision):
- signalCreatedTime - Exact timestamp when signal generated (YYYY-MM-DD HH:MM:SS)
- tp1HitTime - Exact minute when TP1 target reached
- tp2HitTime - Exact minute when TP2 target reached
- slHitTime - Exact minute when SL triggered
- minutesToTP1 - Decimal minutes from signal to TP1 (e.g., 6.0, 8.0)
- minutesToTP2 - Decimal minutes from signal to TP2
- minutesToSL - Decimal minutes from signal to SL
Git Commits:
- d156abc "docs: Add mandatory git workflow and critical feedback requirements" (Dec 2, 2025)
- [Next] "perf: Optimize BlockedSignal processing threshold from 30min to 1min"
Lesson Learned: When you have high-resolution data (1 minute), use it immediately. Arbitrary delays (30 minutes) waste processing time without providing value. Match all system components to the same granularity for consistency and efficiency.
📊 1-Minute Data Collection System (Nov 27, 2025)
Purpose: Real-time market data collection via TradingView 1-minute alerts for Phase 7.1/7.2/7.3 enhancements
Data Flow:
- TradingView 1-minute chart → Alert fires every minute with metrics
- n8n Parse Signal Enhanced → Bot execute endpoint
- Timeframe='1' detected → Saved to BlockedSignal (DATA_COLLECTION_ONLY)
- Market data cache updated every 60 seconds
- Used by: Smart Entry Timer validation, Revenge system ADX checks, Adaptive trailing stops
CRITICAL: Direction Field is Meaningless
- All 1-minute signals in BlockedSignal have direction='long' populated
- This is an artifact of TradingView alert syntax (requires buy/sell trigger word to fire)
- These are NOT trading signals - they are pure market data samples
- For analysis: ALWAYS filter out or ignore direction field for timeframe='1'
- Focus on: ADX, ATR, RSI, volumeRatio, pricePosition (actual market conditions)
- Example wrong query: WHERE timeframe='1' AND direction='long' AND signalQualityScore >= 90
- Example correct query: WHERE timeframe='1' AND signalQualityScore >= 90 (no direction filter)
Database Fields:
- timeframe='1' → 1-minute data collection
- blockReason='DATA_COLLECTION_ONLY' → Not a blocked trade, just a data sample
- direction='long' → IGNORE THIS (TradingView artifact, not real direction)
- signalQualityScore → Quality score calculated but NOT used for execution threshold
- adx, atr, rsi, volumeRatio, pricePosition → THESE ARE THE REAL DATA
Why This Matters:
- Prevents confusion when analyzing 1-minute data
- Ensures correct SQL queries for market condition analysis
- Direction-based analysis on 1-min data is meaningless and misleading
- Future developers won't waste time investigating "why all signals are long"
Mission & Financial Goals
Primary Objective: Build wealth systematically from $106 → $100,000+ through algorithmic trading
Current Phase: Phase 1 - Survival & Proof (Nov 2025 - Jan 2026)
- Current Capital: $540 USDC (zero debt, 100% health)
- Total Invested: $546 ($106 initial + $440 deposits)
- Trading P&L: -$6 (early v6/v7 testing before v8 optimization)
- Target: $2,500 by end of Phase 1 (Month 2.5) - 4.6x growth from current
- Strategy: Aggressive compounding, 0 withdrawals, data-driven optimization
- Position Sizing: 100% of free collateral (~$540 at 15x leverage = ~$8,100 notional)
- Risk Tolerance: HIGH - Proof-of-concept mode with increased capital cushion
- Win Target: 15-20% monthly returns to reach $2,500 (more achievable with larger base)
- Trades Executed: 170+ (as of Nov 19, 2025)
Why This Matters for AI Agents:
- Every dollar counts at this stage - optimize for profitability, not just safety
- User needs this system to work for long-term financial goals ($300-500/month withdrawals starting Month 3)
- No changes that reduce win rate unless they improve profit factor
- System must prove itself before scaling (see TRADING_GOALS.md for full 8-phase roadmap)
Key Constraints:
- Can't afford extended drawdowns (limited capital)
- Must maintain 60%+ win rate to compound effectively
- Quality over quantity - only trade 81+ signal quality scores (raised from 60 on Nov 21, 2025 after v8 success)
- After 3 consecutive losses, STOP and review system
Architecture Overview
Type: Autonomous cryptocurrency trading bot with Next.js 15 frontend + Solana/Drift Protocol backend
Data Flow: TradingView → n8n webhook → Next.js API → Drift Protocol (Solana DEX) → Real-time monitoring → Auto-exit
CRITICAL: RPC Provider Choice
- MUST use Alchemy RPC (https://solana-mainnet.g.alchemy.com/v2/YOUR_API_KEY)
- DO NOT use Helius free tier - causes catastrophic rate limiting (239 errors in 10 minutes)
- Helius free: 10 req/sec sustained = TOO LOW for trade execution + Position Manager monitoring
- Alchemy free: 300M compute units/month = adequate for bot operations
- Symptom if wrong RPC: Trades hit SL immediately, duplicate closes, Position Manager loses tracking, database save failures
- Fixed Nov 14, 2025: Switched to Alchemy, system now works perfectly (TP1/TP2/runner all functioning)
Key Design Principle: Dual-layer redundancy - every trade has both on-chain orders (Drift) AND software monitoring (Position Manager) as backup.
Exit Strategy: ATR-Based TP2-as-Runner system (CURRENT - Nov 17, 2025):
- ATR-BASED TP/SL (PRIMARY): TP1/TP2/SL calculated from ATR × multipliers
- TP1: ATR × 2.0 (typically ~0.86%, closes 60% default)
- TP2: ATR × 4.0 (typically ~1.72%, activates trailing stop)
- SL: ATR × 3.0 (typically ~1.29%)
- Safety bounds: MIN/MAX caps prevent extremes
- Falls back to fixed % if ATR unavailable
- Runner: 40% remaining after TP1 (configurable via TAKE_PROFIT_1_SIZE_PERCENT=60)
- Runner SL after TP1: ADX-based adaptive positioning (Nov 19, 2025; tiers sketched after this list):
- ADX < 20: SL at 0% (breakeven) - Weak trend, preserve TP1 profit
- ADX 20-25: SL at -0.3% - Moderate trend, some retracement room
- ADX > 25: SL at -0.55% - Strong trend, full retracement tolerance
- Rationale: Entry at candle close = always at top, natural -1% to -1.5% pullbacks common
- Risk management: Only accept runner drawdown on high-probability strong trends
- Worst case examples: ADX 18 → +$38.70 total, ADX 29 → +$22.20 if runner stops (but likely catches big move)
- Trailing Stop: ATR-based with ADX multiplier (Nov 19, 2025 enhancement):
- Base: ATR × 1.5 multiplier
- ADX-based widening (graduated):
- ADX > 30: 1.5× multiplier (very strong trends)
- ADX 25-30: 1.25× multiplier (strong trends)
- ADX < 25: 1.0× multiplier (base trail, weak/moderate trends)
- Profit acceleration: Profit > 2%: additional 1.3× multiplier
- Combined effect: ADX 29.3 + 2% profit = trail multiplier 1.5 × 1.3 = 1.95×
- Purpose: Capture more of massive trend moves (e.g., 38% MFE trades)
- Backward compatible: Trades without ADX use base 1.5× multiplier
- Activates after TP2 trigger
- Benefits: Regime-agnostic (adapts to bull/bear automatically), asset-agnostic (SOL vs BTC different ATR), trend-strength adaptive (wider trail for strong trends)
- Note: All UI displays dynamically calculate runner% as 100 - TAKE_PROFIT_1_SIZE_PERCENT
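The runner-SL tiers and trailing-stop multipliers above can be summarized in a small sketch. Function names are illustrative (not the Position Manager's actual API), the tier values come from the bullets above, and the exact way the ADX and profit factors compose in production code may differ:
// Illustrative helpers only - names are hypothetical; values from the exit-strategy bullets above.

// Runner SL after TP1, as a percent relative to entry (0 = breakeven).
function runnerStopLossPercent(adx: number): number {
  if (adx > 25) return -0.55   // strong trend: full retracement tolerance
  if (adx >= 20) return -0.3   // moderate trend: some retracement room
  return 0                     // weak trend: breakeven, preserve TP1 profit
}

// Trailing stop distance after TP2: ATR × 1.5 base, widened by ADX tier and profit acceleration.
function trailingStopDistance(atr: number, adx: number, profitPercent: number): number {
  let multiplier = 1.5
  if (adx > 30) multiplier *= 1.5
  else if (adx >= 25) multiplier *= 1.25
  if (profitPercent > 2) multiplier *= 1.3
  return atr * multiplier
}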
Exit Reason Tracking (Nov 24, 2025 - TRAILING_SL Distinction):
- Regular SL: Stop loss hit before TP2 reached (initial stop loss or breakeven SL after TP1)
- TRAILING_SL: Stop loss hit AFTER TP2 trigger when trailing stop is active (runner protection)
- Detection Logic:
  - If tp2Hit=true AND trailingStopActive=true AND price pulled back from peak (>1%)
  - Then exitReason='TRAILING_SL' (not regular 'SL') - distinguishes runner exits from early stops
- Database: Both stored in the same exitReason column, but TRAILING_SL is a separate value
- Analytics UI: Trailing stops display with purple styling + 🏃 emoji, regular SL shows blue
- Purpose: Analyze runner system performance separately from hard stop losses (see the sketch after this list)
- Code locations:
  - Position Manager exit detection: lib/trading/position-manager.ts line ~937, ~1457
  - External closure handler: lib/trading/position-manager.ts line ~927-945
  - Frontend display: app/analytics/page.tsx line ~776-792
- Implementation: Nov 24, 2025 (commit 9d7932f)
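A hedged sketch of the TRAILING_SL vs SL classification described above. Field names follow the description (tp2Hit, trailingStopActive); the helper itself is illustrative, not the actual code at line ~937:
// Illustrative only. LONG shown; for SHORT positions the "peak" is the lowest price since TP2.
interface RunnerExitState {
  tp2Hit: boolean
  trailingStopActive: boolean
  peakPrice: number      // best price seen since TP2 trigger
  currentPrice: number
}

function classifyStopExit(state: RunnerExitState): 'TRAILING_SL' | 'SL' {
  const pullbackPercent = ((state.peakPrice - state.currentPrice) / state.peakPrice) * 100
  const isRunnerExit = state.tp2Hit && state.trailingStopActive && pullbackPercent > 1
  return isRunnerExit ? 'TRAILING_SL' : 'SL'
}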
Per-Symbol Configuration: SOL and ETH have independent enable/disable toggles and position sizing:
- SOLANA_ENABLED, SOLANA_POSITION_SIZE, SOLANA_LEVERAGE (defaults: true, 100%, 15x)
- ETHEREUM_ENABLED, ETHEREUM_POSITION_SIZE, ETHEREUM_LEVERAGE (defaults: true, 100%, 1x)
- BTC and other symbols fall back to global settings (MAX_POSITION_SIZE_USD, LEVERAGE)
- Priority: Per-symbol ENV → Market config → Global ENV → Defaults (see the sketch after this list)
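A minimal sketch of that precedence chain for leverage, assuming process.env access; the symbol matching, market-config shape, and final default are illustrative assumptions:
// Per-symbol ENV → market config → global ENV → default. ENV names from the list above.
function resolveLeverage(symbol: string, marketConfig?: { leverage?: number }): number {
  const perSymbolEnv = symbol.startsWith('SOL')
    ? process.env.SOLANA_LEVERAGE
    : symbol.startsWith('ETH')
      ? process.env.ETHEREUM_LEVERAGE
      : undefined
  if (perSymbolEnv) return Number(perSymbolEnv)
  if (marketConfig?.leverage !== undefined) return marketConfig.leverage
  if (process.env.LEVERAGE) return Number(process.env.LEVERAGE)
  return 1 // assumed fallback
}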
Signal Quality System: Filters trades based on 5 metrics (ATR, ADX, RSI, volumeRatio, pricePosition) scored 0-100. Direction-specific thresholds (Nov 28, 2025): LONG signals require 90+, SHORT signals require 80+. Scores stored in database for future optimization.
Frequency penalties (overtrading / flip-flop / alternating) now ignore 1-minute data-collection alerts automatically: getRecentSignals() filters to timeframe='5' (or whatever timeframe is being scored) and drops blockReason='DATA_COLLECTION_ONLY'. This prevents the overtrading penalty from triggering when the 1-minute telemetry feeds multiple samples per minute for BlockedSignal analysis.
Direction-Specific Quality Thresholds (Nov 28, 2025):
- LONG threshold: 90 (straightforward)
- SHORT threshold: 80 (more permissive due to higher baseline difficulty)
- Configuration: MIN_SIGNAL_QUALITY_SCORE_LONG=90, MIN_SIGNAL_QUALITY_SCORE_SHORT=80 in .env
- Fallback logic: Direction-specific ENV → Global ENV → Default (60), sketched after this list
- Helper function: getMinQualityScoreForDirection(direction, config) in config/trading.ts
- Implementation: check-risk endpoint uses direction-specific thresholds before execution
- See: docs/DIRECTION_SPECIFIC_QUALITY_THRESHOLDS.md for historical analysis
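A hedged sketch of the fallback logic; the real helper in config/trading.ts also receives a config argument and may differ in detail, and the global ENV name used here is an assumption:
// Direction-specific ENV → global ENV → default 60.
function getMinQualityScoreForDirection(direction: 'long' | 'short'): number {
  const directional = direction === 'long'
    ? process.env.MIN_SIGNAL_QUALITY_SCORE_LONG
    : process.env.MIN_SIGNAL_QUALITY_SCORE_SHORT
  if (directional) return Number(directional)
  if (process.env.MIN_SIGNAL_QUALITY_SCORE) return Number(process.env.MIN_SIGNAL_QUALITY_SCORE) // global ENV name assumed
  return 60
}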
Adaptive Leverage System (Nov 24, 2025 - RISK-ADJUSTED POSITION SIZING):
- Purpose: Automatically adjust leverage based on signal quality score - high confidence gets full leverage, borderline signals get reduced risk exposure
- Quality-Based Leverage Tiers:
- Quality 95-100: 15x leverage ($540 × 15x = $8,100 notional position)
- Quality 90-94: 10x leverage ($540 × 10x = $5,400 notional position)
- Quality <90: Blocked by direction-specific thresholds
- Risk Impact: Quality 90-94 signals save $2,700 exposure (33% risk reduction) vs fixed 15x
- Data-Driven Justification: v8 indicator quality 95+ = 100% WR (4/4 wins), quality 90-94 more volatile
- Configuration: USE_ADAPTIVE_LEVERAGE=true, HIGH_QUALITY_LEVERAGE=15, LOW_QUALITY_LEVERAGE=10, QUALITY_LEVERAGE_THRESHOLD=95 in .env
- Implementation: Quality score calculated EARLY in the execute endpoint (before position sizing), passed to getActualPositionSizeForSymbol(qualityScore); leverage determined via the getLeverageForQualityScore() helper (see the sketch after this list)
- Log Message: 📊 Adaptive leverage: Quality X → Yx leverage (threshold: 95)
- Trade-off: ~$21 less profit on borderline wins, but ~$21 less loss on borderline stops = better risk-adjusted returns
- Future Enhancements: Multi-tier (20x for 97+, 5x for 85-89), per-direction multipliers, streak-based adjustments
- See: ADAPTIVE_LEVERAGE_SYSTEM.md for complete implementation details, code examples, monitoring procedures
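A hedged sketch of the tier logic behind getLeverageForQualityScore(); ENV names come from the configuration bullet above, while the behavior when adaptive leverage is disabled is an assumption:
// Quality 95+ → HIGH_QUALITY_LEVERAGE, 90-94 → LOW_QUALITY_LEVERAGE (scores <90 are blocked upstream).
function getLeverageForQualityScore(qualityScore: number): number {
  const threshold = Number(process.env.QUALITY_LEVERAGE_THRESHOLD ?? 95)
  const high = Number(process.env.HIGH_QUALITY_LEVERAGE ?? 15)
  const low = Number(process.env.LOW_QUALITY_LEVERAGE ?? 10)
  if (process.env.USE_ADAPTIVE_LEVERAGE !== 'true') return high // disabled-path fallback assumed
  return qualityScore >= threshold ? high : low
}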
Timeframe-Aware Scoring: Signal quality thresholds adjust based on timeframe (5min vs daily):
- 5min: ADX 12+ trending (vs 18+ for daily), ATR 0.2-0.7% healthy (vs 0.4%+ for daily)
- Anti-chop filter: -20 points for extreme sideways (ADX <10, ATR <0.25%, Vol <0.9x)
- Pass the timeframe param to scoreSignalQuality() from TradingView alerts (e.g., timeframe: "5")
MAE/MFE Tracking: Every trade tracks Maximum Favorable Excursion (best profit %) and Maximum Adverse Excursion (worst loss %) updated every 2s. Used for data-driven optimization of TP/SL levels.
Manual Trading via Telegram: Send plain-text messages like long sol, short eth, long btc to open positions instantly (bypasses n8n, calls /api/trading/execute directly with preset healthy metrics). CRITICAL: Manual trades are marked with signalSource='manual' and excluded from TradingView indicator analysis (prevents data contamination).
Telegram Manual Trade Presets (Nov 17, 2025 - Data-Driven):
- ATR: 0.43 (median from 162 SOL trades, Nov 2024-Nov 2025)
- ADX: 32 (strong trend assumption)
- RSI: 58 long / 42 short (neutral-favorable)
- Volume: 1.2x average (healthy)
- Price Position: 45 long / 55 short (mid-range)
- Purpose: Enables quick manual entries when TradingView signals unavailable (preset values sketched after this list)
- Note: Re-entry analytics validate against fresh TradingView data when cached (<5min)
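The presets above, collected into one illustrative object (values from the list; the object shape and variable name are not the bot's actual type):
// Data-driven preset metrics for manual Telegram entries (Nov 17, 2025 values).
const MANUAL_TRADE_PRESETS = {
  long:  { atr: 0.43, adx: 32, rsi: 58, volumeRatio: 1.2, pricePosition: 45 },
  short: { atr: 0.43, adx: 32, rsi: 42, volumeRatio: 1.2, pricePosition: 55 },
} as const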
Manual Trade Quality Bypass (Dec 4, 2025 - USER MANDATE):
- User requirement: "when i say short or long it shall do it straight away and DO it"
- Manual trades (timeframe='manual') bypass ALL quality scoring checks
- Execute endpoint detects the isManualTrade flag and skips quality threshold validation (see the sketch after this list)
- Logs show: ✅ MANUAL TRADE BYPASS: Quality scoring skipped (Telegram command - executes immediately)
- Purpose: Instant execution for user-initiated trades without automated filtering
- Implementation: app/api/trading/execute/route.ts line ~237-242 (commit 0982578, Dec 4, 2025)
- Behavior: Manual trades execute regardless of ADX/ATR/RSI/quality score
- --force flag: No longer needed (all manual trades bypass by default)
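A minimal sketch of the bypass check, assuming the payload carries timeframe/signalSource fields as described elsewhere in this file; the actual condition in app/api/trading/execute/route.ts may differ:
// Illustrative only; manual Telegram trades execute immediately, regardless of quality score.
function shouldBypassQualityCheck(payload: { timeframe?: string; signalSource?: string }): boolean {
  return payload.timeframe === 'manual' || payload.signalSource === 'manual'
}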
Re-Entry Analytics System (OPTIONAL VALIDATION): Manual trades CAN be validated before execution using fresh TradingView data:
- Market data cached from TradingView signals (5min expiry)
- /api/analytics/reentry-check scores re-entry based on fresh metrics + recent performance
- Telegram bot blocks low-quality re-entries unless the --force flag is used
- Penalty for recent losing trades, bonus for winning streaks
- Note: Analytics check is advisory only - manual trades execute even if rejected by analytics
Smart Validation Queue (Dec 7, 2025 - TIMEOUT EXTENDED):
- Purpose: Monitor blocked signals for 30 minutes to confirm price moves
- Timeout: 30 minutes (extended from 10 min based on data analysis)
- Rationale: Analysis of 10 blocked signals showed 30% hit TP1, most moves develop after 15-30 minutes
- Example: Quality 70 signal (ADX 29.7) hit TP1 at 0.41% after 30+ minutes ($22 profit missed with 10-min timeout)
- Protection: -0.4% drawdown limit prevents holding bad signals too long
- Configuration: entryWindowMinutes: 30 in smart-validation-queue.ts
- Trade-off: Slightly longer hold on losing signals, but data shows most profitable moves take 15-30 min to develop
- Implementation: lib/trading/smart-validation-queue.ts line 105
- Status: ✅ DEPLOYED Dec 7, 2025 10:30 CET (commit c9c987a)
🧪 Test Infrastructure (Dec 5, 2025 - PR #2)
Purpose: Comprehensive integration test suite for Position Manager - the 1,938-line core trading logic managing real capital.
Test Suite Structure:
tests/
├── setup.ts # Global mocks (Drift, Pyth, DB, Telegram)
├── helpers/
│ └── trade-factory.ts # Factory functions for mock trades
└── integration/
└── position-manager/
├── tp1-detection.test.ts # 16 tests - TP1 triggers for LONG/SHORT
├── breakeven-sl.test.ts # 14 tests - SL moves to entry after TP1
├── adx-runner-sl.test.ts # 18 tests - ADX-based runner SL tiers
├── trailing-stop.test.ts # 16 tests - ATR-based trailing with peak tracking
├── edge-cases.test.ts # 15 tests - Token vs USD, phantom detection
├── price-verification.test.ts # 18 tests - Size AND price verification
└── decision-helpers.test.ts # 16 tests - shouldStopLoss, shouldTakeProfit1/2
Total: 7 test files, 113 tests
Test Configuration:
- Framework: Jest + ts-jest
- Config: jest.config.js at project root (created by PR #2)
- Coverage Threshold: 60% minimum
- Mocks: Drift SDK, Pyth price feeds, PostgreSQL, Telegram notifications
How to Run Tests:
# Run all tests
npm test
# Run tests in watch mode (development)
npm run test:watch
# Run with coverage report
npm run test:coverage
# Run specific test file
npm test -- tests/integration/position-manager/tp1-detection.test.ts
Trade Factory Helpers:
import { createLongTrade, createShortTrade, createTradeAfterTP1 } from '../helpers/trade-factory'
// Create basic trades
const longTrade = createLongTrade({ entryPrice: 140, adx: 26.9 })
const shortTrade = createShortTrade({ entryPrice: 140, atr: 0.43 })
// Create runner after TP1
const runner = createTradeAfterTP1('short', { positionSize: 8000 })
Common Pitfalls Prevented by Tests:
- #24: Position.size tokens vs USD conversion
- #43: TP1 false detection without price verification
- #45: Wrong entry price for breakeven SL (must use DB entry, not Drift)
- #52: ADX-based runner SL tier calculations
- #54: MAE/MFE stored as percentages, not dollars
- #67: Duplicate closure race conditions
Test Data (Standard Values):
| Parameter | LONG | SHORT |
|---|---|---|
| Entry | $140.00 | $140.00 |
| TP1 (+0.86%) | $141.20 | $138.80 |
| TP2 (+1.72%) | $142.41 | $137.59 |
| SL (-0.92%) | $138.71 | $141.29 |
| ATR | 0.43 | 0.43 |
| ADX | 26.9 | 26.9 |
| Position Size | $8,000 | $8,000 |
Why Tests Matter:
- Position Manager handles real money ($540 capital, targeting $100k)
- Zero test coverage before this PR despite 170+ trades and 71 documented bugs
- Prevents regressions when refactoring critical trading logic
- Validates calculations match documented behavior
🔄 CI/CD Pipeline (Dec 5, 2025 - PR #5)
Purpose: Automated quality gates ensuring code reliability before deployment to production trading system.
Workflows:
1. Test Workflow (test.yml)
Triggers: Push/PR to main/master/develop
- npm ci # Install dependencies
- npm test # Run 113 tests
- npm run build # Verify TypeScript compiles
Blocking: ✅ PRs cannot merge if tests fail
2. Build Workflow (build.yml)
Triggers: Push/PR to main/master
- docker build # Build production image
- Buildx caching # Layer caching for speed
Blocking: ✅ PRs cannot merge if Docker build fails
3. Lint Workflow (lint.yml)
Triggers: Every push/PR
- ESLint check # Code quality
- console.log scan # Find debug statements in production code
- TypeScript strict # Type checking
Blocking: ⚠️ Warnings only (does not block merge)
4. Security Workflow (security.yml)
Triggers: Push/PR + weekly schedule
- npm audit # Check for vulnerable dependencies
- Secret scanning # Basic credential detection
Blocking: ✅ Fails on high/critical vulnerabilities
Status Badges (README.md):



Branch Protection Recommendations: Enable in GitHub Settings → Branches → Add rule:
- ✅ Require status checks to pass (test, build)
- ✅ Require PR reviews before merging
- ✅ Require branches to be up to date
Troubleshooting Common Failures:
| Failure | Cause | Fix |
|---|---|---|
| Test failure | Position Manager logic changed | Update tests or fix regression |
| Build failure | TypeScript error | Check npm run build locally |
| Lint warning | console.log in code | Remove or use proper logging |
| Security alert | Vulnerable dependency | npm audit fix or update package |
Why CI/CD Matters:
- Real money at stake: Bugs cost actual dollars
- Confidence to deploy: Green pipeline = safe to merge
- Fast feedback: Know within minutes if change breaks something
- Professional practice: Industry standard for production systems
VERIFICATION MANDATE: Financial Code Requires Proof
CRITICAL: THIS IS A REAL MONEY TRADING SYSTEM - NOT A TOY PROJECT
Core Principle: In trading systems, "working" means "verified with real data", NOT "code looks correct".
NEVER declare something working without:
- Observing actual logs showing expected behavior
- Verifying database state matches expectations
- Comparing calculated values to source data
- Testing with real trades when applicable
- CONFIRMING CODE IS DEPLOYED - Check container start time vs commit time
- VERIFYING ALL RELATED FIXES DEPLOYED - Multi-fix sessions require complete deployment verification
CODE COMMITTED ≠ CODE DEPLOYED
- Git commit at 15:56 means NOTHING if container started at 15:06
- ALWAYS verify: docker logs trading-bot-v4 | grep "Server starting" | head -1
- Compare container start time to commit timestamp
- If container older than commit: CODE NOT DEPLOYED, FIX NOT ACTIVE
- Never say "fixed" or "protected" until deployment verified
MULTI-FIX DEPLOYMENT VERIFICATION When multiple related fixes are developed in same session:
# 1. Check container start time
docker inspect trading-bot-v4 --format='{{.State.StartedAt}}'
# Example: 2025-11-16T09:28:20.757451138Z
# 2. Check all commit timestamps
git log --oneline --format='%h %ai %s' -5
# Example output:
# b23dde0 2025-11-16 09:25:10 fix: Add needsVerification field
# c607a66 2025-11-16 09:00:42 critical: Fix close verification
# 673a493 2025-11-16 08:45:21 critical: Fix breakeven SL
# 3. Verify container newer than ALL commits
# Container 09:28:20 > Latest commit 09:25:10 ✅ ALL FIXES DEPLOYED
# 4. Test-specific verification for each fix
docker logs -f trading-bot-v4 | grep "expected log message from fix"
DEPLOYMENT CHECKLIST FOR MULTI-FIX SESSIONS:
- All commits pushed to git
- Container rebuilt successfully (no TypeScript errors)
- Container restarted with --force-recreate
- Container start time > ALL commit timestamps
- Specific log messages from each fix observed (if testable)
- Database state reflects changes (if applicable)
Example: Nov 16, 2025 Session (Breakeven SL + Close Verification)
- Fix 1: Breakeven SL (commit 673a493, 08:45:21)
- Fix 2: Close verification (commit c607a66, 09:00:42)
- Fix 3: TypeScript interface (commit b23dde0, 09:25:10)
- Container restart: 09:28:20 ✅ All three fixes deployed
- Verification: Log messages include "Using original entry price" and "Waiting 5s for Drift state"
Critical Path Verification Requirements
MANDATORY: ALWAYS VERIFY DRIFT STATE BEFORE ANY POSITION OPERATIONS (Dec 9, 2025)
- NEVER trust bot logs, API responses, or database state alone
- ALWAYS query Drift API first: curl -X POST /api/trading/sync-positions -H "Authorization: Bearer $API_SECRET_KEY"
- Verify actual position.size, entry price, current P&L from the Drift response
- Only AFTER Drift verification: proceed with close, modify orders, or state changes
- Incident: Agent closed position based on stale bot data when user explicitly said NOT to close
- Why: Bot logs showed "closed" but Drift still had open position - catastrophic if user wants to keep position open
- This is NON-NEGOTIABLE - verify Drift state before ANY position operation
MANDATORY: ALWAYS VERIFY DATABASE WITH DRIFT API BEFORE REPORTING NUMBERS (Dec 9, 2025)
- NEVER trust database P&L, exitPrice, or trade details without Drift confirmation
- ALWAYS cross-check database against Drift when reporting losses/gains to user
- Query Drift account health: curl http://localhost:3001/api/drift/account-health for the actual balance
- Compare database totalCollateral with actual Drift balance - the database can be wrong
- Incident (Dec 9, 2025): Database showed -$19.33 loss, Drift showed -$22.21 actual loss ($2.88 missing)
- Root Cause: Retry loop chaos caused position to close in multiple chunks, only first chunk recorded
- User Frustration: "drift tells the truth not you" - agent trusted incomplete database
- Why This Matters: In real money system, wrong numbers = wrong financial decisions
- The Rule: QUERY DRIFT FIRST → COMPARE TO DATABASE → REPORT DISCREPANCIES → CORRECT DATABASE
- Verification Pattern:
```bash
# 1. Check Drift account balance
curl -s http://localhost:3001/api/drift/account-health | jq '.totalCollateral'
# 2. Query database for trade details
psql -c "SELECT realizedPnL FROM Trade WHERE id='...'"
# 3. If mismatch: Correct database to match Drift reality
psql -c "UPDATE Trade SET realizedPnL = DRIFT_ACTUAL WHERE id='...'"
```
- This is NON-NEGOTIABLE - Drift is source of truth for financial data, not database
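A TypeScript sketch of the same reconciliation, assuming the Prisma `Trade` model and the `/api/drift/account-health` endpoint described in this doc; the database correction is deliberately left as a manual, reviewed step:

```typescript
// verify-pnl.ts - illustrative sketch only: cross-check a recorded trade against Drift.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function reportDriftVsDatabase(tradeId: string) {
  // 1. Query Drift first - it is the source of truth for financial data
  const res = await fetch('http://localhost:3001/api/drift/account-health')
  const health = await res.json() // { totalCollateral, freeCollateral, ... }

  // 2. Then read what the database recorded for the trade
  const trade = await prisma.trade.findUnique({ where: { id: tradeId } })
  if (!trade) throw new Error(`Trade ${tradeId} not found in database`)

  // 3. Report the discrepancy; correcting the database is a manual, reviewed step
  console.log('Drift totalCollateral :', health.totalCollateral)
  console.log('DB realizedPnL        :', trade.realizedPnL)
  console.log('If these disagree, correct the database to match Drift - never the reverse.')
}
```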
Position Manager Changes:
- Execute test trade with DRY_RUN=false (small size)
- Watch docker logs for full TP1 → TP2 → exit cycle
- SQL query: verify `tp1Hit`, `slMovedToBreakeven`, `currentSize` match Position Manager logs
- Compare Position Manager tracked size to actual Drift position size
- Check exit reason matches actual trigger (TP1/TP2/SL/trailing)
- VERIFY VIA DRIFT API before declaring anything "working" or "closed"
Exit Logic Changes (TP/SL/Trailing):
- Log EXPECTED values (TP1 price, SL price after breakeven, trailing stop distance)
- Log ACTUAL values from Drift position and Position Manager state
- Verify: Does TP1 hit when price crosses TP1? Does SL move to breakeven?
- Test: Open position, let it hit TP1, verify 75% closed + SL moved
- Document: What SHOULD happen vs what ACTUALLY happened
API Endpoint Changes:
- curl test with real payload from TradingView/n8n
- Check response JSON matches expectations
- Verify database record created with correct fields
- Check Telegram notification shows correct values (leverage, size, etc.)
- SQL query: confirm all fields populated correctly
Calculation Changes (P&L, Position Sizing, Percentages):
- Add console.log for EVERY step of calculation
- Verify units match (tokens vs USD, percent vs decimal, etc.)
- SQL query with manual calculation: does code result match hand calculation?
- Test edge cases: 0%, 100%, negative values, very small/large numbers
SDK/External Data Integration:
- Log raw SDK response to verify assumptions about data format
- NEVER trust documentation - verify with console.log
- Example: position.size doc said "USD" but logs showed "tokens"
- Document actual behavior in Common Pitfalls section
Red Flags Requiring Extra Verification
High-Risk Changes:
- Unit conversions (tokens ↔ USD, percent ↔ decimal)
- State transitions (TP1 hit → move SL to breakeven)
- Configuration precedence (per-symbol vs global vs defaults)
- Display values from complex calculations (leverage, size, P&L)
- Timing-dependent logic (grace periods, cooldowns, race conditions)
Verification Steps for Each:
- Before declaring working: Show proof (logs, SQL results, test output)
- After deployment: Monitor first real trade closely, verify behavior
- Edge cases: Test boundary conditions (0, 100%, max leverage, min size)
- Regression: Check that fix didn't break other functionality
🔴 EXAMPLE: What NOT To Do (Nov 25, 2025 - Health Monitor Bug)
What the AI agent did WRONG:
- ❌ Fixed code (moved interceptWebSocketErrors() call)
- ❌ Built Docker image successfully
- ❌ Deployed container
- ❌ Saw "Drift health monitor started" in logs
- ❌ DECLARED IT "WORKING" AND "DEPLOYED" ← CRITICAL ERROR
- ❌ Did NOT verify error interception was actually functioning
- ❌ Did NOT test the health API to see if errors were being recorded
- ❌ Did NOT add logging to confirm the fix was executing
What ACTUALLY happened:
- Code was deployed ✅
- Monitor was starting ✅
- But error interception was still broken ❌
- System still vulnerable to memory leak ❌
- User had to point out: "Never say it's done without testing"
What the AI agent SHOULD have done:
- ✅ Fix code
- ✅ Build and deploy
- ✅ ADD LOGGING to confirm fix executes: `console.log('🔧 Setting up error interception...')`
- ✅ Verify logs show the new message
- ✅ TEST THE API: `curl http://localhost:3001/api/drift/health`
- ✅ Verify errorCount field exists and updates
- ✅ SIMULATE ERRORS or wait for natural errors
- ✅ Verify errorCount increases when errors occur
- ✅ ONLY THEN declare it "working"
The lesson:
- Deployment ≠ Working
- Logs showing service started ≠ Feature functioning
- "Code looks correct" ≠ Verified with real data
- ALWAYS ADD LOGGING for critical changes
- ALWAYS TEST THE FEATURE before declaring success
SQL Verification Queries
After Position Manager changes:
-- Verify TP1 detection worked correctly
SELECT
symbol, entryPrice, currentSize, realizedPnL,
tp1Hit, slMovedToBreakeven, exitReason,
TO_CHAR(createdAt, 'MM-DD HH24:MI') as time
FROM "Trade"
WHERE exitReason IS NULL -- Open positions
OR createdAt > NOW() - INTERVAL '1 hour' -- Recent closes
ORDER BY createdAt DESC
LIMIT 5;
-- Compare Position Manager state to expectations
SELECT configSnapshot->'positionManagerState' as pm_state
FROM "Trade"
WHERE symbol = 'SOL-PERP' AND exitReason IS NULL;
After calculation changes:
-- Verify P&L calculations
SELECT
symbol, direction, entryPrice, exitPrice,
positionSize, realizedPnL,
-- Manual calculation:
CASE
WHEN direction = 'long' THEN
positionSize * ((exitPrice - entryPrice) / entryPrice)
ELSE
positionSize * ((entryPrice - exitPrice) / entryPrice)
END as expected_pnl,
-- Difference:
realizedPnL - CASE
WHEN direction = 'long' THEN
positionSize * ((exitPrice - entryPrice) / entryPrice)
ELSE
positionSize * ((entryPrice - exitPrice) / entryPrice)
END as pnl_difference
FROM "Trade"
WHERE exitReason IS NOT NULL
AND createdAt > NOW() - INTERVAL '24 hours'
ORDER BY createdAt DESC
LIMIT 10;
Example: How Position.size Bug Should Have Been Caught
What went wrong:
- Read code: "Looks like it's comparing sizes correctly"
- Declared: "Position Manager is working!"
- Didn't verify with actual trade
What should have been done:
// In Position Manager monitoring loop - ADD THIS LOGGING:
console.log('🔍 VERIFICATION:', {
positionSizeRaw: position.size, // What SDK returns
positionSizeUSD: position.size * currentPrice, // Converted to USD
trackedSizeUSD: trade.currentSize, // What we're tracking
ratio: (position.size * currentPrice) / trade.currentSize,
tp1ShouldTrigger: (position.size * currentPrice) < trade.currentSize * 0.95
})
Then observe logs on actual trade:
🔍 VERIFICATION: {
positionSizeRaw: 12.28, // ← AH! This is SOL tokens, not USD!
positionSizeUSD: 1950.84, // ← Correct USD value
trackedSizeUSD: 1950.00,
ratio: 1.0004, // ← Should be near 1.0 when position full
tp1ShouldTrigger: false // ← Correct
}
Lesson: One console.log would have exposed the bug immediately.
CRITICAL: Documentation is MANDATORY (No Exceptions)
THIS IS A REAL MONEY TRADING SYSTEM - DOCUMENTATION IS NOT OPTIONAL
IRON-CLAD RULE: Every git commit MUST include updated copilot-instructions.md documentation. NO EXCEPTIONS.
Why this is non-negotiable:
- This is a financial system handling real money - incomplete documentation = financial losses
- Future AI agents need complete context to maintain data integrity
- User relies on documentation to understand what changed and why
- Undocumented fixes are forgotten fixes - they get reintroduced as bugs
- Common Pitfalls section prevents repeating expensive mistakes
MANDATORY workflow for ALL changes:
- Implement fix/feature
- Test thoroughly
- UPDATE copilot-instructions.md (Common Pitfalls, Architecture, etc.)
- Git commit code changes
- Git commit documentation changes
- Push both commits
What MUST be documented:
- Bug fixes: Add to Common Pitfalls section with:
- Symptom, Root Cause, Real incident details
- Complete before/after code showing the fix
- Files changed, commit hash, deployment timestamp
- Lesson learned for future AI agents
- New features: Update Architecture Overview, Critical Components, API Endpoints
- Database changes: Update Important fields section, add filtering requirements
- Configuration changes: Update Configuration System section
- Breaking changes: Add to "When Making Changes" section
Recent examples of MANDATORY documentation:
- Common Pitfall #56: Ghost orders after external closures (commit `a3a6222`)
- Common Pitfall #57: P&L calculation inaccuracy (commit `8e600c8`)
- Common Pitfall #55: BlockedSignalTracker Pyth cache bug (commit `6b00303`)
If you commit code without updating documentation:
- User will be annoyed (rightfully so)
- Future AI agents will lack context
- Bug will likely recur
- System integrity degrades
This is not a suggestion - it's a requirement. Documentation updates are part of the definition of "done" for any change.
Deployment Checklist
MANDATORY PRE-DEPLOYMENT VERIFICATION:
- Check container start time: `docker logs trading-bot-v4 | grep "Server starting" | head -1`
- Compare to commit timestamp: Container MUST be newer than code changes
- If container older: STOP - Code not deployed, fix not active
- Never declare "fixed" or "working" until container restarted with new code
Before marking feature complete:
- Code review completed
- Unit tests pass (if applicable)
- Integration test with real API calls
- Logs show expected behavior
- Database state verified with SQL
- Edge cases tested
- Container restarted and verified running new code
- Documentation updated (including Common Pitfalls if applicable)
- User notified of what to verify during first real trade
When to Escalate to User
Don't say "it's working" if:
- You haven't observed actual logs showing the expected behavior
- SQL query shows unexpected values
- Test trade behaved differently than expected
- You're unsure about unit conversions or SDK behavior
- Change affects money (position sizing, P&L, exits)
- Container hasn't been restarted since code commit
Instead say:
- "Code is updated. Need to verify with test trade - watch for [specific log message]"
- "Fixed, but requires verification: check database shows [expected value]"
- "Deployed. First real trade should show [behavior]. If not, there's still a bug."
- "Code committed but NOT deployed - container running old version, fix not active yet"
Docker Build Best Practices
CRITICAL: Prevent build interruptions with background execution + live monitoring
Docker builds take 40-70 seconds and are easily interrupted by terminal issues. Use this pattern:
# Start build in background with live log tail
cd /home/icke/traderv4 && docker compose build trading-bot > /tmp/docker-build-live.log 2>&1 & BUILD_PID=$!; echo "Build started, PID: $BUILD_PID"; tail -f /tmp/docker-build-live.log
Why this works:
- Build runs in background (`&`) - immune to terminal disconnects/Ctrl+C
- Output redirected to log file - can review later if needed
- `tail -f` shows real-time progress - see compilation, linting, errors
- Can Ctrl+C the `tail -f` without killing the build - build continues
- Verification after: `tail -50 /tmp/docker-build-live.log` to check success
Success indicators:
- `✓ Compiled successfully in 27s`
- `✓ Generating static pages (30/30)`
- `#22 naming to docker.io/library/traderv4-trading-bot done`
- `DONE X.Xs` on final step
Failure indicators:
- `Failed to compile.`
- `Type error:`
- `ERROR: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1`
After successful build:
# Deploy new container
docker compose up -d --force-recreate trading-bot
# Verify it started
docker logs --tail=30 trading-bot-v4
# Confirm deployed version
docker logs trading-bot-v4 | grep "Server starting" | head -1
DO NOT use: docker compose build trading-bot in foreground - one network hiccup kills 60s of work
When to Actually Rebuild vs Restart vs Nothing
⚠️ CRITICAL: Stop rebuilding unnecessarily - costs 40-70 seconds downtime per rebuild
See docs/ZERO_DOWNTIME_CHANGES.md for complete guide
Quick Decision Matrix:
| Change Type | Action | Downtime | When |
|---|---|---|---|
| Documentation (`.md`) | NONE | 0s | Just commit and push |
| Workflows (`.json`, `.pinescript`) | NONE | 0s | Import manually to TradingView/n8n |
| ENV variables (`.env`) | RESTART | 5-10s | `docker compose restart trading-bot` |
| Database schema | MIGRATE + RESTART | 10-15s | `prisma migrate` + restart |
| Code (`.ts`, `.tsx`, `.js`) | REBUILD | 40-70s | TypeScript must recompile |
| Dependencies (`package.json`) | REBUILD | 40-70s | `npm install` required |
Smart Batching Strategy:
- DON'T: Rebuild after every single code change (6× rebuilds = 6 minutes downtime)
- DO: Batch related changes together (6 fixes → 1 rebuild = 50 seconds total)
Example (GOOD):
# 1. Make multiple code changes
vim lib/trading/position-manager.ts
vim app/api/trading/execute/route.ts
vim lib/notifications/telegram.ts
# 2. Commit all together
git add -A && git commit -m "fix: Multiple improvements"
# 3. ONE rebuild for everything
docker compose build trading-bot
docker compose up -d --force-recreate trading-bot
# Total: 50 seconds (not 150 seconds)
Recent Mistakes to Avoid (Nov 27, 2025):
- ❌ Rebuilt for documentation updates (should be git commit only)
- ❌ Rebuilt for n8n workflow changes (should be manual import)
- ❌ Rebuilt 4 times for 4 code changes (should batch into 1 rebuild)
- ✅ Result: 200 seconds downtime that could have been 50 seconds
Docker Cleanup After Builds
CRITICAL: Prevent disk full issues from build cache accumulation
Docker builds create intermediate layers (1.3+ GB per build) that accumulate over time. Build cache can reach 40-50 GB after frequent rebuilds.
After successful deployment, clean up:
# Remove dangling images (old builds)
docker image prune -f
# Remove build cache (biggest space hog - 40+ GB typical)
docker builder prune -f
# Optional: Remove dangling volumes (if no important data)
docker volume prune -f
# Check space saved
docker system df
When to run:
- After each successful deployment (recommended)
- Weekly if building frequently
- When disk space warnings appear
- Before major updates/migrations
Space typically freed:
- Dangling images: 2-5 GB
- Build cache: 40-50 GB
- Dangling volumes: 0.5-1 GB
- Total: 40-55 GB per cleanup
What's safe to delete:
- `<none>` tagged images (old builds)
- Dangling volumes (orphaned from removed containers)
What NOT to delete:
- Named volumes (contain data: `trading-bot-postgres`, etc.)
- Active containers
- Tagged images currently in use
Docker Optimization & Build Cache Management (Nov 26, 2025)
Purpose: Prevent Docker cache accumulation (40+ GB) through automated cleanup and BuildKit optimizations
Three-Layer Optimization Strategy:
1. Multi-Stage Builds (ALREADY IMPLEMENTED)
# Dockerfile already uses multi-stage pattern:
FROM node:20-alpine AS deps # Install dependencies
FROM node:20-alpine AS builder # Build application
FROM node:20-alpine AS runner # Final minimal image
# Benefits:
# - Smaller final images (only runtime dependencies)
# - Faster builds (caches each stage independently)
# - Better layer reuse
2. BuildKit Auto-Cleanup (Nov 26, 2025)
# /etc/docker/daemon.json configuration:
{
"features": {
"buildkit": true
},
"builder": {
"gc": {
"enabled": true,
"defaultKeepStorage": "20GB"
}
}
}
# Restart Docker to apply:
sudo systemctl restart docker
# Verify BuildKit active:
docker buildx version # Should show v0.14.1+
Auto-Cleanup Behavior:
- Threshold: 20GB build cache limit
- Action: Automatically garbage collects when exceeded
- Safety: Keeps recent layers for build speed
- Monitoring: Check current usage:
docker system df
Current Disk Usage Baseline (Nov 26, 2025):
- Build Cache: 11.13GB (healthy, under 20GB threshold)
- Images: 59.2GB (33.3GB reclaimable)
- Volumes: 8.5GB (7.9GB reclaimable)
- Containers: 232.9MB
3. Automated Cleanup Script (READY TO USE)
# Script: /home/icke/traderv4/cleanup_trading_bot.sh (94 lines)
# Executable: -rwxr-xr-x (already set)
# Features:
# - Step 1: Keeps last 2 trading-bot images (rollback safety)
# - Step 2: Removes dangling images (untagged layers)
# - Step 3: Prunes build cache (biggest space saver)
# - Step 4: Safe volume handling (protects postgres)
# - Reporting: Shows disk space before/after
# Manual usage (recommended after builds):
cd /home/icke/traderv4
docker compose build trading-bot && ./cleanup_trading_bot.sh
# Automated usage (daily cleanup at 2 AM):
# Add to crontab: crontab -e
0 2 * * * /home/icke/traderv4/cleanup_trading_bot.sh
# Check current disk usage:
docker system df
Script Safety Measures:
- Never removes: Named volumes (trading-bot-postgres, etc.)
- Never removes: Running containers
- Never removes: Tagged images currently in use
- Keeps: Last 2 trading-bot images for quick rollback
- Reports: Space freed after cleanup (typical: 40-50 GB)
When to Run Cleanup:
- After builds: Most effective, immediate cleanup
- Weekly: If building frequently during development
- On demand: When disk space warnings appear
- Before deployments: Clean slate for major updates
Typical Space Savings:
- Manual script run: 40-50 GB (build cache + dangling images)
- BuildKit auto-cleanup: Maintains 20GB cap automatically
- Combined approach: Prevents accumulation entirely
Monitoring Commands:
# Check current disk usage
docker system df
# Detailed breakdown
docker system df -v
# Check BuildKit cache
docker buildx du
# Verify auto-cleanup threshold
grep -A10 "builder" /etc/docker/daemon.json
Why This Matters:
- Problem: User previously hit 40GB cache accumulation
- Solution: BuildKit auto-cleanup (20GB cap) + manual script (on-demand)
- Result: System self-maintains, prevents disk full scenarios
- Team benefit: Documented process for all developers
Implementation Status:
- ✅ Multi-stage builds: Already present in Dockerfile (builder → runner)
- ✅ BuildKit auto-cleanup: Configured in daemon.json (20GB threshold)
- ✅ Cleanup script: Exists and ready (/home/icke/traderv4/cleanup_trading_bot.sh)
- ✅ Docker daemon: Restarted with new config (BuildKit v0.14.1 active)
- ✅ Current state: Healthy (11.13GB cache, under threshold)
Multi-Timeframe Price Tracking System (Nov 19, 2025)
Purpose: Automated data collection and analysis for signals across multiple timeframes (5min, 15min, 1H, 4H, Daily) to determine which timeframe produces the best trading results. Also tracks quality-blocked signals to analyze if threshold adjustments are filtering too many winners.
Architecture:
- 5min signals: Execute trades (production)
- 15min/1H/4H/Daily signals: Save to BlockedSignal table with `blockReason='DATA_COLLECTION_ONLY'` (routing sketch after this list)
- Quality-blocked signals: Save with `blockReason='QUALITY_SCORE_TOO_LOW'` (Nov 21: threshold raised to 91+)
- Background tracker: Runs every 5 minutes, monitors price movements for 30 minutes
- Analysis: After 50+ signals per category, compare win rates and profit potential
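A simplified TypeScript sketch of the routing decision above; field and block-reason names mirror this doc, while the helper functions are stand-ins, not the actual execute endpoint code:

```typescript
// Illustrative routing sketch - only 5min signals trade; other timeframes are data collection.
interface IncomingSignal {
  symbol: string
  direction: 'long' | 'short'
  timeframe: string   // "5", "15", "60", "240", "D"
  atr: number
  adx: number
  rsi: number
}

// Stand-ins for the real database/execution helpers described elsewhere in this doc
async function saveBlockedSignal(record: Record<string, unknown>): Promise<void> {
  console.log('📊 Saved for analysis:', record.symbol, record.blockReason)
}
async function executeProductionTrade(signal: IncomingSignal): Promise<void> {
  console.log('🚀 Executing 5min production trade:', signal.symbol, signal.direction)
}

async function routeSignal(signal: IncomingSignal): Promise<void> {
  if (signal.timeframe !== '5') {
    // Higher timeframes never risk money - they feed the win-rate comparison
    await saveBlockedSignal({ ...signal, blockReason: 'DATA_COLLECTION_ONLY' })
    return
  }
  await executeProductionTrade(signal) // quality scoring, sizing, etc. happen here
}

routeSignal({ symbol: 'SOL-PERP', direction: 'long', timeframe: '60', atr: 0.43, adx: 24, rsi: 55 })
```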
Components:
- **BlockedSignalTracker** (`lib/analysis/blocked-signal-tracker.ts`)
  - Background job running every 5 minutes
  - Tracks BOTH quality-blocked AND data collection signals (Nov 22, 2025 enhancement)
  - Tracks price at 1min, 5min, 15min, 30min intervals
  - Detects if TP1/TP2/SL would have been hit using ATR-based targets
  - Records max favorable/adverse excursion (MFE/MAE)
  - Auto-completes after 30 minutes (`analysisComplete=true`)
  - Singleton pattern: Use `getBlockedSignalTracker()` or `startBlockedSignalTracking()`
  - Purpose: Validate if quality 91 threshold filters winners or losers (data-driven optimization)
- **Database Schema** (BlockedSignal table)
  ```
  entryPrice            FLOAT    -- Price at signal time (baseline)
  priceAfter1Min        FLOAT?   -- Price 1 minute after
  priceAfter5Min        FLOAT?   -- Price 5 minutes after
  priceAfter15Min       FLOAT?   -- Price 15 minutes after
  priceAfter30Min       FLOAT?   -- Price 30 minutes after
  wouldHitTP1           BOOLEAN? -- Would TP1 have been hit?
  wouldHitTP2           BOOLEAN? -- Would TP2 have been hit?
  wouldHitSL            BOOLEAN? -- Would SL have been hit?
  maxFavorablePrice     FLOAT?   -- Price at max profit
  maxAdversePrice       FLOAT?   -- Price at max loss
  maxFavorableExcursion FLOAT?   -- Best profit % during 30min
  maxAdverseExcursion   FLOAT?   -- Worst loss % during 30min
  analysisComplete      BOOLEAN  -- Tracking finished (30min elapsed)
  ```
- **API Endpoints**
  - `GET /api/analytics/signal-tracking` - View tracking status, metrics, recent signals
  - `POST /api/analytics/signal-tracking` - Manually trigger tracking update (auth required)
- **Integration Points**
  - Execute endpoint: Captures entry price when saving DATA_COLLECTION_ONLY signals
  - Startup: Auto-starts tracker via `initializePositionManagerOnStartup()`
  - Check-risk endpoint: Bypasses quality checks for non-5min signals (lines 147-159)
How It Works:
- TradingView sends 15min/1H/4H/Daily signal → n8n →
/api/trading/execute - Execute endpoint detects
timeframe !== '5' - Gets current price from Pyth, saves to BlockedSignal with
entryPrice - Background tracker wakes every 5 minutes
- Queries current price, calculates profit % based on direction
- Checks if TP1 (~0.86%), TP2 (~1.72%), or SL (~1.29%) would have hit
- Updates price fields at appropriate intervals (1/5/15/30 min)
- Tracks MFE/MAE throughout 30-minute window
- After 30 minutes, marks
analysisComplete=true
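A hedged TypeScript sketch of the would-have-hit check the tracker performs at each wake-up; the percent targets are the ATR-based examples quoted above, and the function is illustrative rather than the tracker's actual code:

```typescript
// Illustrative check: given entry and a later price, would TP1/TP2/SL have been hit?
interface WouldHitResult {
  profitPercent: number
  wouldHitTP1: boolean
  wouldHitTP2: boolean
  wouldHitSL: boolean
}

function checkWouldHit(
  direction: 'long' | 'short',
  entryPrice: number,
  currentPrice: number,
  targets = { tp1: 0.86, tp2: 1.72, sl: 1.29 } // % values from the example above
): WouldHitResult {
  // Profit % is direction-aware: shorts profit when price falls
  const move = ((currentPrice - entryPrice) / entryPrice) * 100
  const profitPercent = direction === 'long' ? move : -move

  return {
    profitPercent,
    wouldHitTP1: profitPercent >= targets.tp1,
    wouldHitTP2: profitPercent >= targets.tp2,
    wouldHitSL: profitPercent <= -targets.sl,
  }
}

// Example: short from $140 that dropped to $138.50 → ≈ +1.07% profit, TP1 would have hit
console.log(checkWouldHit('short', 140, 138.5))
```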
Analysis Queries (After 50+ signals per timeframe):
-- Compare win rates across timeframes
SELECT
timeframe,
COUNT(*) as total_signals,
COUNT(CASE WHEN wouldHitTP1 = true THEN 1 END) as tp1_wins,
COUNT(CASE WHEN wouldHitSL = true THEN 1 END) as sl_losses,
ROUND(100.0 * COUNT(CASE WHEN wouldHitTP1 = true THEN 1 END) / COUNT(*), 1) as win_rate,
ROUND(AVG(maxFavorableExcursion), 2) as avg_mfe,
ROUND(AVG(maxAdverseExcursion), 2) as avg_mae
FROM "BlockedSignal"
WHERE analysisComplete = true
AND blockReason = 'DATA_COLLECTION_ONLY'
GROUP BY timeframe
ORDER BY win_rate DESC;
Decision Making: After sufficient data collected:
- Multi-timeframe: Compare 5min vs 15min vs 1H vs 4H vs Daily win rates
- Quality threshold: Analyze if blocked signals (quality <91) would've been winners
- Evaluation: Signal frequency vs win rate trade-off, threshold optimization
- Query example:
-- Would quality-blocked signals have been winners?
SELECT
COUNT(*) as blocked_count,
SUM(CASE WHEN "wouldHitTP1" THEN 1 ELSE 0 END) as would_be_winners,
SUM(CASE WHEN "wouldHitSL" THEN 1 ELSE 0 END) as would_be_losers,
ROUND(100.0 * SUM(CASE WHEN "wouldHitTP1" THEN 1 ELSE 0 END) / COUNT(*), 1) as missed_win_rate
FROM "BlockedSignal"
WHERE "blockReason" = 'QUALITY_SCORE_TOO_LOW'
AND "analysisComplete" = true;
- Action: Adjust thresholds or switch production timeframe based on data
Key Features:
- Autonomous: No manual work needed, runs in background
- Accurate: Uses same TP/SL calculations as live trades (ATR-based)
- Risk-free: Data collection only, no money at risk
- Comprehensive: Tracks best/worst case scenarios (MFE/MAE)
- API accessible: Check status anytime via
/api/analytics/signal-tracking
Current Status (Nov 26, 2025):
- ✅ System deployed and running in production
- ✅ Enhanced Nov 22: Now tracks quality-blocked signals (QUALITY_SCORE_TOO_LOW) in addition to multi-timeframe data collection
- ✅ Enhanced Nov 26: Quality scoring now calculated for ALL timeframes (not just 5min production signals)
- Execute endpoint calculates
scoreSignalQuality()BEFORE timeframe check (line 112) - Data collection signals now get real quality scores (not hardcoded 0)
- BlockedSignal records include:
signalQualityScore(0-100),signalQualityVersion('v9'),minScoreRequired(90/95) - Enables SQL queries:
WHERE signalQualityScore >= minScoreRequiredto compare quality-filtered win rates - Commit:
dbada47"feat: Calculate quality scores for all timeframes (not just 5min)"
- Execute endpoint calculates
- ✅ TradingView alerts configured for 15min and 1H
- ✅ Background tracker runs every 5 minutes autonomously
- 📊 Data collection: Multi-timeframe (50+ per timeframe) + quality-blocked (20-30 signals)
- 🎯 Dual goals:
- Determine which timeframe has best win rate (now with quality filtering capability)
- Validate if quality 91 threshold filters winners or losers
- 📈 First result (Nov 21, 16:50): Quality 80 signal blocked (weak ADX 16.6), would have profited +0.52% (+$43) within 1 minute - FALSE NEGATIVE confirmed
Critical Components
1. Persistent Logger System (lib/utils/persistent-logger.ts)
Purpose: Survive-container-restarts logging for critical errors and trade failures
Key features:
- Writes to `/app/logs/errors.log` (Docker volume mounted from host)
- Logs survive container restarts, rebuilds, crashes
- Daily log rotation with 30-day retention
- Structured JSON logging with timestamps, context, stack traces
- Used for database save failures, Drift API errors, critical incidents
Usage:
import { persistentLogger } from '../utils/persistent-logger'
try {
await createTrade({...})
} catch (error) {
persistentLogger.logError('DATABASE_SAVE_FAILED', error, {
symbol: 'SOL-PERP',
entryPrice: 133.69,
transactionSignature: '5Yx2...',
// ALL data needed to reconstruct trade
})
throw error
}
Infrastructure:
- Docker volume: `./logs:/app/logs` (docker-compose.yml line 63)
- Directory: `/home/icke/traderv4/logs/` with `.gitkeep`
- Log format: `{"timestamp":"2025-11-21T00:40:14.123Z","context":"DATABASE_SAVE_FAILED","error":"...","stack":"...","metadata":{...}}`
Why it matters:
- Console logs disappear on container restart
- Database failures need persistent record for recovery
- Enables post-mortem analysis of incidents
- Orphan position detection can reference logs to reconstruct trades
Implemented: Nov 21, 2025 as part of 5-layer database protection system
2. Phantom Trade Auto-Closure System
Purpose: Automatically close positions when size mismatch detected (position opened but wrong size)
When triggered:
- Position opened on Drift successfully
- Expected size: $50 (50% @ 1x leverage)
- Actual size: $1.37 (7% fill - likely oracle price stale or exchange rejection)
- Size ratio < 50% threshold → phantom detected
Automated response (all happens in <1 second):
- Immediate closure: Market order closes 100% of phantom position
- Database logging: Creates trade record with `status='phantom'`, saves P&L
- n8n notification: Returns HTTP 200 with full details (not 500 - allows workflow to continue)
- Telegram alert: Message includes entry/exit prices, P&L, reason, transaction IDs
Why auto-close instead of manual intervention:
- User may be asleep, away from devices, unavailable for hours
- Unmonitored position = unlimited risk exposure
- Position Manager won't track phantom (by design)
- No TP/SL protection, no trailing stop, no monitoring
- Better to exit with small loss/gain than leave position exposed
- Re-entry always possible if setup was actually good
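A minimal TypeScript sketch of the size-mismatch check that triggers this path; the 50% ratio threshold and the incident figures come from this doc, while the function itself is illustrative:

```typescript
// Illustrative phantom detection: compare actual filled size (USD) to expected size (USD).
const PHANTOM_FILL_RATIO = 0.5 // below 50% of expected size → treat as phantom (per this doc)

function isPhantomFill(expectedSizeUSD: number, actualSizeUSD: number): boolean {
  if (expectedSizeUSD <= 0) return false
  return actualSizeUSD / expectedSizeUSD < PHANTOM_FILL_RATIO
}

// Example from the incident described above: expected $48.75, filled $1.37 (≈2.8%)
if (isPhantomFill(48.75, 1.37)) {
  console.log("⚠️ Phantom fill detected → close 100% immediately, record status='phantom'")
  // closePosition(symbol, 100) would follow here in the real execute endpoint
}
```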
Example notification:
⚠️ PHANTOM TRADE AUTO-CLOSED
Symbol: SOL-PERP
Direction: LONG
Expected Size: $48.75
Actual Size: $1.37 (2.8%)
Entry: $168.50
Exit: $168.45
P&L: -$0.02
Reason: Size mismatch detected - likely oracle price issue or exchange rejection
Action: Position auto-closed for safety (unmonitored positions = risk)
TX: 5Yx2Fm8vQHKLdPaw...
Database tracking:
- `status='phantom'` field identifies these trades
- `isPhantom=true`, `phantomReason='ORACLE_PRICE_MISMATCH'`
- `expectedSizeUSD`, `actualSizeUSD` fields for analysis
- Exit reason: `'manual'` (phantom auto-close category)
- Enables post-trade analysis of phantom frequency and patterns
Code location: app/api/trading/execute/route.ts lines 322-445
3. Signal Quality Scoring (lib/trading/signal-quality.ts)
Purpose: Unified quality validation system that scores trading signals 0-100 based on 5 market metrics
Timeframe-aware thresholds:
scoreSignalQuality({
atr, adx, rsi, volumeRatio, pricePosition,
timeframe?: string // "5" for 5min, undefined for higher timeframes
})
5min chart adjustments:
- ADX healthy range: 12-22 (vs 18-30 for daily)
- ATR healthy range: 0.2-0.7% (vs 0.4%+ for daily)
- Anti-chop filter: -20 points for extreme sideways (ADX <10, ATR <0.25%, Vol <0.9x)
Price position penalties (all timeframes):
- Long at 90-95%+ range: -15 to -30 points (chasing highs)
- Short at <5-10% range: -15 to -30 points (chasing lows)
- Prevents flip-flop losses from entering range extremes
Key behaviors:
- Returns score 0-100 and detailed breakdown object
- Minimum score 91 required to execute trade (raised Nov 21, 2025)
- Called by both `/api/trading/check-risk` and `/api/trading/execute`
- Scores saved to database for post-trade analysis
Data-Proven Threshold (Nov 21, 2025):
- Analysis of 7 v8 trades revealed perfect separation:
- All 4 winners: Quality 95, 95, 100, 105 (100% success rate ≥95)
- All 3 losers: Quality 80, 90, 90 (100% failure rate ≤90)
- 91 threshold eliminates borderline entries (ADX 18-20 weak trends)
- Would have prevented all historical losses totaling -$624.90
- Pattern validates that quality ≥95 signals are high-probability setups
Threshold Validation In Progress (Nov 22, 2025):
- Discovery: First quality-blocked signal (quality 80, ADX 16.6) would have profited +0.52% (+$43)
- User observation: "Green dots shot up" - visual confirmation of missed opportunity
- System response: BlockedSignalTracker now tracks quality-blocked signals (QUALITY_SCORE_TOO_LOW)
- Data collection target: 20-30 blocked signals over 2-4 weeks
- Decision criteria:
- If blocked signals show <40% win rate → Keep threshold at 91 (correct filtering)
- If blocked signals show 50%+ win rate → Lower to 85 (too restrictive)
- If quality 80-84 wins but 85-90 loses → Adjust to 85 threshold
- Possible outcomes: Keep 91, lower to 85, adjust ADX/RSI weights, add context filters
4. Position Manager Health Monitoring System (lib/health/position-manager-health.ts)
Purpose: Detect Position Manager failures within 30 seconds to prevent $1,000+ loss scenarios
CRITICAL (Dec 8, 2025): Created after discovering three bugs that caused $1,000+ losses:
- Bug #77: Position Manager logs "added" but never actually monitors (isMonitoring=false)
- Bug #76: placeExitOrders() returns SUCCESS but SL order missing (silent failure)
- Bug #78: Orphan detection removes active position orders (cancelAllOrders affects all)
Key Functions:
checkPositionManagerHealth(): Returns comprehensive health check result- DB open trades vs PM monitoring status
- PM has trades but monitoring OFF
- Missing SL orders (checks slOrderTx, softStopOrderTx, hardStopOrderTx)
- Missing TP1/TP2 orders
- DB vs PM vs Drift count mismatches
startPositionManagerHealthMonitor(): Runs automatically every 30 seconds- Logs CRITICAL alerts when issues found
- Silent operation when system healthy
- Started automatically in startup sequence
Health Checks Performed:
- DB open trades but PM not monitoring → CRITICAL ALERT
- PM has trades but monitoring OFF → CRITICAL ALERT
- Open positions missing SL orders → CRITICAL ALERT per position
- Open positions missing TP orders → WARNING per position
- DB vs PM trade count mismatch → WARNING
- PM vs Drift position count mismatch → WARNING
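A hedged TypeScript sketch of the first two checks in this list; the input shapes are simplified, and the real `checkPositionManagerHealth()` lives in lib/health/position-manager-health.ts:

```typescript
// Simplified health-check sketch: DB vs Position Manager monitoring state.
interface HealthIssue { severity: 'CRITICAL' | 'WARNING'; message: string }

function checkMonitoringState(
  dbOpenTradeCount: number,
  pmTrackedTradeCount: number,
  pmIsMonitoring: boolean
): HealthIssue[] {
  const issues: HealthIssue[] = []

  if (dbOpenTradeCount > 0 && pmTrackedTradeCount === 0) {
    issues.push({ severity: 'CRITICAL', message: 'DB has open trades but Position Manager is not tracking any' })
  }
  if (pmTrackedTradeCount > 0 && !pmIsMonitoring) {
    // Bug #77 pattern: trades in the Map but the monitoring loop never started
    issues.push({ severity: 'CRITICAL', message: 'PM has trades in Map but monitoring flag is false' })
  }
  if (dbOpenTradeCount !== pmTrackedTradeCount) {
    issues.push({ severity: 'WARNING', message: `DB (${dbOpenTradeCount}) vs PM (${pmTrackedTradeCount}) trade count mismatch` })
  }
  return issues
}
```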
Alert Format:
🚨 CRITICAL: Position Manager not monitoring!
DB: 2 open trades
PM: 2 trades in Map
Monitoring: false ← BUG!
🚨 CRITICAL: Position cmix773hk019gn307fjjhbikx missing SL order
Symbol: SOL-PERP
Size: $2,003
slOrderTx: NULL
softStopOrderTx: NULL
hardStopOrderTx: NULL
Integration:
- File:
lib/startup/init-position-manager.tsline ~78 - Starts automatically after Drift state verifier
- Runs alongside: data cleanup, blocked signals, stop hunt, smart validation
- No manual intervention needed
Test Suite:
- File:
tests/integration/position-manager/monitoring-verification.test.ts(201 lines) - 4 test suites, 8 test cases:
- "CRITICAL: Monitoring Actually Starts" (4 tests)
- "CRITICAL: Price Updates Actually Trigger Checks" (2 tests)
- "CRITICAL: Monitoring Stops When No Trades" (2 tests)
- "CRITICAL: Error Handling Doesnt Break Monitoring" (1 test)
- Validates: startMonitoring() calls Pyth monitor, isMonitoring flag set, price updates processed
- Mocks: drift/client, pyth/price-monitor, database/trades, notifications/telegram
Why This Matters:
- This is a REAL MONEY system - Position Manager is the safety net
- User lost $1,000+ because PM said "monitoring" but wasn't
- Positions appeared protected but had no monitoring whatsoever
- Health monitor detects failures within 30 seconds
- Prevents catastrophic silent failures
Deployment Status:
- ✅ Code complete and committed (Dec 8, 2025)
- ⏳ Deployment pending (Docker build blocked by DNS)
- ✅ Startup integration complete
- ✅ Test suite created
5. Position Manager (lib/trading/position-manager.ts)
Purpose: Software-based monitoring loop that checks prices every 2 seconds and closes positions via market orders
CRITICAL BUG (#77): Logs say "added to monitoring" but isMonitoring stays false - see Health Monitoring System above for detection
Singleton pattern: Always use getInitializedPositionManager() - never instantiate directly
const positionManager = await getInitializedPositionManager()
await positionManager.addTrade(activeTrade)
Key behaviors:
- Tracks
ActiveTradeobjects in a Map - TP2-as-Runner system: TP1 (configurable %, default 60%) → TP2 trigger (no close, activate trailing) → Runner (remaining 40%) with ATR-based trailing stop
- ADX-based runner SL after TP1 (Nov 19, 2025): Adaptive positioning based on trend strength
- ADX < 20: SL at 0% (breakeven) - Weak trend, preserve capital
- ADX 20-25: SL at -0.3% - Moderate trend, some retracement room
- ADX > 25: SL at -0.55% - Strong trend, full retracement tolerance
- Implementation: Checks `trade.adxAtEntry` in the TP1 handler, calculates SL dynamically
- Logging: Shows ADX and selected SL: `🔒 ADX-based runner SL: 29.3 → -0.55%`
- Rationale: Entry at candle close = top of candle, -1% to -1.5% pullbacks are normal
- Data collection: After 50-100 trades, will optimize ADX thresholds (20/25) based on stop-out rates
- On-chain order synchronization: After TP1 hits, calls `cancelAllOrders()` then `placeExitOrders()` with updated SL price (uses `retryWithBackoff()` for rate limit handling)
- PHASE 7.3: Adaptive Trailing Stop with Real-Time ADX (Nov 27, 2025 - DEPLOYED):
- Purpose: Dynamically adjust trailing stop based on current trend strength changes, not static entry-time ADX
- Implementation: Queries market data cache for fresh 1-minute ADX every monitoring loop (2-second interval)
- Adaptive Multiplier Logic:
- Base:
trailingStopAtrMultiplier(1.5×) × ATR percentage - Current ADX Strength Tier (uses fresh 1-min ADX):
- Current ADX > 30: 1.5× multiplier (very strong trend) - log "📈 1-min ADX very strong"
- Current ADX 25-30: 1.25× multiplier (strong trend) - log "📈 1-min ADX strong"
- Current ADX < 25: 1.0× base multiplier
- ADX Acceleration Bonus (NEW): If ADX increased >5 points since entry → Additional 1.3× multiplier
- Example: Entry ADX 22.5 → Current ADX 29.5 (+7 points) → Widens trail to capture extended move
- Log: "🚀 ADX acceleration (+X points): Trail multiplier Y× → Z×"
- ADX Deceleration Penalty (NEW): If ADX decreased >3 points since entry → 0.7× multiplier (tightens trail)
- Log: "⚠️ ADX deceleration (-X points): tighter to protect"
- Profit Acceleration (existing): Profit > 2% → Additional 1.3× multiplier
- Log: "💰 Large profit (X%): Trail multiplier Y× → Z×"
- Combined Max: 1.5 (base) × 1.5 (strong ADX) × 1.3 (acceleration) × 1.3 (profit) = 3.16× multiplier
- Base:
- Example Calculation:
- Entry: SOL $140.00, ADX 22.5, ATR 0.43
- After 30 min: Price $143.50 (+2.5%), Current ADX 29.5 (+7 points)
- OLD (entry ADX): 0.43 / 140 × 100 = 0.307% → 0.307% × 1.5 = 0.46% trail = stop at $142.84
- NEW (adaptive): 0.307% × 1.5 (base) × 1.25 (strong) × 1.3 (accel) × 1.3 (profit) = 0.99% trail = stop at $141.93
- Impact: $0.91 more room (2.15× wider) = captures $43 MFE instead of $23
- Logging:
- "📊 1-min ADX update: Entry X → Current Y (±Z change)" - Shows ADX progression
- "📊 Adaptive trailing: ATR X (Y%) × Z× = W%" - Shows final trail calculation
- Fallback: Uses `trade.adxAtEntry` if market cache unavailable (backward compatible)
- Safety: Trail distance clamped between min/max % bounds (0.25%-0.9%)
- Code: `lib/trading/position-manager.ts` lines 1356-1450, imports `getMarketDataCache()`
- Expected Impact: +$2,000-3,000 over 100 trades by capturing trend acceleration moves (like MA crossover ADX 22.5→29.5 pattern)
- Risk Profile: Only affects 25% runner position (main 75% already closed at TP1)
- See: `PHASE_7.3_ADAPTIVE_TRAILING_DEPLOYED.md` and `1MIN_DATA_ENHANCEMENTS_ROADMAP.md` Phase 7.3 section (a worked multiplier sketch follows the Key behaviors list below)
- Trailing stop: Activates when TP2 price hit, tracks `peakPrice` and trails dynamically
- Closes positions via `closePosition()` market orders when targets hit
- Acts as backup if on-chain orders don't fill
- State persistence: Saves to database, restores on restart via `configSnapshot.positionManagerState`
- Startup validation: On container restart, cross-checks last 24h "closed" trades against Drift to detect orphaned positions (see `lib/startup/init-position-manager.ts`)
- Grace period for new trades: Skips "external closure" detection for positions <30 seconds old (Drift positions take 5-10s to propagate)
- Exit reason detection: Uses trade state flags (`tp1Hit`, `tp2Hit`) and realized P&L to determine exit reason, NOT current price (avoids misclassification when price moves after order fills)
- Real P&L calculation: Calculates actual profit based on entry vs exit price, not SDK's potentially incorrect values
- Rate limit-aware exit: On 429 errors during close, keeps trade in monitoring (doesn't mark closed), retries naturally on next price update
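A worked TypeScript sketch of two of the calculations above: the ADX-tiered runner SL after TP1 and the Phase 7.3 adaptive trail multiplier. The tiers, bonuses and clamps are the values quoted in this section; this is an illustration, not the Position Manager source:

```typescript
// ADX-tiered runner SL after TP1 (tier values from this doc)
function runnerSlPercentForAdx(adxAtEntry: number): number {
  if (adxAtEntry < 20) return 0      // weak trend → breakeven
  if (adxAtEntry <= 25) return -0.3  // moderate trend → small retracement room
  return -0.55                       // strong trend → full retracement tolerance
}

// Phase 7.3 adaptive trailing distance (% of price), clamped to 0.25%-0.9%
function adaptiveTrailPercent(opts: {
  atr: number; entryPrice: number; currentAdx: number; entryAdx: number; profitPercent: number
}): number {
  const atrPercent = (opts.atr / opts.entryPrice) * 100
  let multiplier = 1.5                                        // base trailingStopAtrMultiplier

  if (opts.currentAdx > 30) multiplier *= 1.5                 // very strong 1-min trend
  else if (opts.currentAdx >= 25) multiplier *= 1.25          // strong trend

  if (opts.currentAdx - opts.entryAdx > 5) multiplier *= 1.3       // ADX acceleration bonus
  else if (opts.entryAdx - opts.currentAdx > 3) multiplier *= 0.7  // deceleration penalty

  if (opts.profitPercent > 2) multiplier *= 1.3               // large-profit widening

  const trail = atrPercent * multiplier
  return Math.max(0.25, Math.min(0.9, trail))                 // safety clamp from this doc
}

// Example from the section above: ATR 0.43, entry $140, ADX 22.5 → 29.5, profit +2.5%
console.log(runnerSlPercentForAdx(29.3))   // → -0.55
console.log(adaptiveTrailPercent({
  atr: 0.43, entryPrice: 140, currentAdx: 29.5, entryAdx: 22.5, profitPercent: 2.5,
}))                                        // ≈ 0.9 (clamped at the 0.9% ceiling)
```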
6. Telegram Bot (telegram_command_bot.py)
Purpose: Python-based Telegram bot for manual trading commands and position status monitoring
Manual trade commands via plain text:
# User sends plain text message (not slash commands)
"long sol" → Validates via analytics, then opens SOL-PERP long
"short eth" → Validates via analytics, then opens ETH-PERP short
"long btc --force" → Skips analytics validation, opens BTC-PERP long immediately
Key behaviors:
- MessageHandler processes all text messages (not just commands)
- Maps user-friendly symbols (sol, eth, btc) to Drift format (SOL-PERP, etc.)
- Analytics validation: Calls
/api/analytics/reentry-checkbefore execution- Blocks trades with score <55 unless
--forceflag used - Uses fresh TradingView data (<5min old) when available
- Falls back to historical metrics with penalty
- Considers recent trade performance (last 3 trades)
- Blocks trades with score <55 unless
- Calls
/api/trading/executedirectly with preset healthy metrics (ATR=0.45, ADX=32, RSI=58/42) - Bypasses n8n workflow and TradingView requirements
- 60-second timeout for API calls
- Responds with trade confirmation or analytics rejection message
Status command:
/status → Returns JSON of open positions from Drift
Implementation details:
- Uses
python-telegram-botlibrary - Deployed via
docker-compose.telegram-bot.yml - Requires
TELEGRAM_BOT_TOKENandTELEGRAM_CHANNEL_IDin .env - API calls to
http://trading-bot:3000/api/trading/execute
Drift client integration:
- Singleton pattern: Use
initializeDriftService()andgetDriftService()- maintains single connection
const driftService = await initializeDriftService()
const health = await driftService.getAccountHealth()
- Wallet handling: Supports both JSON array
[91,24,...]and base58 string formats from Phantom wallet
7. Rate Limit Monitoring (lib/drift/orders.ts + app/api/analytics/rate-limits)
Purpose: Track and analyze Solana RPC rate limiting (429 errors) to prevent silent failures
Helius RPC Limits (Free Tier):
- Burst: 100 requests/second
- Sustained: 10 requests/second
- Monthly: 100k requests
- See
docs/HELIUS_RATE_LIMITS.mdfor upgrade recommendations
Retry mechanism with exponential backoff (Nov 14, 2025 - Updated):
await retryWithBackoff(async () => {
return await driftClient.cancelOrders(...)
}, maxRetries = 3, baseDelay = 5000) // Increased from 2s to 5s
Progression: 5s → 10s → 20s (vs old 2s → 4s → 8s) Rationale: Gives Helius time to recover, reduces cascade pressure by 2.5x
Database logging: Three event types in SystemEvent table:
- `rate_limit_hit`: Each 429 error (logged with attempt #, delay, error snippet)
- `rate_limit_recovered`: Successful retry (logged with total time, retry count)
- `rate_limit_exhausted`: Failed after max retries (CRITICAL - order operation failed)
Analytics endpoint:
curl http://localhost:3001/api/analytics/rate-limits
Returns: Total hits/recoveries/failures, hourly patterns, recovery times, success rate
Key behaviors:
- Only RPC calls wrapped: `cancelAllOrders()`, `placeExitOrders()`, `closePosition()`
- Position Manager monitoring: Event-driven via Pyth WebSocket (not polling)
- Rate limit-aware exit: Position Manager keeps monitoring on 429 errors (retries naturally)
- Logs to both console and database for post-trade analysis
Monitoring queries: See docs/RATE_LIMIT_MONITORING.md for SQL queries
Startup Position Validation (Nov 14, 2025 - Added): On container startup, cross-checks last 24h of "closed" trades against actual Drift positions:
- If DB says closed but Drift shows open → reopens in DB to restore Position Manager tracking
- Prevents orphaned positions from failed close transactions
- Logs: `🔴 CRITICAL: ${symbol} marked as CLOSED in DB but still OPEN on Drift!`
- Implementation: `lib/startup/init-position-manager.ts` - `validateOpenTrades()`
8. Order Placement (lib/drift/orders.ts)
Critical functions:
- `openPosition()` - Opens market position with transaction confirmation
- `closePosition()` - Closes position with transaction confirmation
- `placeExitOrders()` - Places TP/SL orders on-chain
- `cancelAllOrders()` - Cancels all reduce-only orders for a market
CRITICAL BUG (#76 - Dec 8, 2025): placeExitOrders() can return SUCCESS with missing SL order
- Symptom: Logs "Exit orders placed: [2 signatures]" but SL missing (expected 3)
- Impact: Position completely unprotected from downside
- Detection: Health monitor checks slOrderTx/softStopOrderTx/hardStopOrderTx every 30s
- Fix required: Validate signatures.length before returning, add error handling around SL placement
CRITICAL: Transaction Confirmation Pattern
Both openPosition() and closePosition() MUST confirm transactions on-chain:
const txSig = await driftClient.placePerpOrder(orderParams)
console.log('⏳ Confirming transaction on-chain...')
const connection = driftService.getConnection()
const confirmation = await connection.confirmTransaction(txSig, 'confirmed')
if (confirmation.value.err) {
throw new Error(`Transaction failed: ${JSON.stringify(confirmation.value.err)}`)
}
console.log('✅ Transaction confirmed on-chain')
Without this, the SDK returns signatures for transactions that never execute, causing phantom trades/closes.
CRITICAL: Drift SDK position.size is BASE ASSET TOKENS, not USD
The Drift SDK returns position.size as token quantity (SOL/ETH/BTC), NOT USD notional:
// CORRECT: Convert tokens to USD by multiplying by current price
const positionSizeUSD = Math.abs(position.size) * currentPrice
// WRONG: Using position.size directly as USD (off by 150x+ for SOL!)
const positionSizeUSD = Math.abs(position.size)
This affects Position Manager's TP1/TP2 detection - if position.size is not converted to USD before comparing to tracked USD values, the system will never detect partial closes correctly. See Common Pitfall #22 for the full bug details and fix applied Nov 12, 2025.
Solana RPC Rate Limiting with Exponential Backoff Solana RPC endpoints return 429 errors under load. Always use retry logic for order operations:
export async function retryWithBackoff<T>(
operation: () => Promise<T>,
maxRetries: number = 3,
initialDelay: number = 5000 // Increased from 2000ms to 5000ms (Nov 14, 2025)
): Promise<T> {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await operation()
} catch (error: any) {
if (error?.message?.includes('429') && attempt < maxRetries - 1) {
const delay = initialDelay * Math.pow(2, attempt)
console.log(`⏳ Rate limited, retrying in ${delay/1000}s... (attempt ${attempt + 1}/${maxRetries})`)
await new Promise(resolve => setTimeout(resolve, delay))
continue
}
throw error
}
}
throw new Error('Max retries exceeded')
}
// Usage in cancelAllOrders
await retryWithBackoff(() => driftClient.cancelOrders(...))
Note: Increased from 2s to 5s base delay to give Helius RPC more recovery time. See docs/HELIUS_RATE_LIMITS.md for detailed analysis.
Without this, order cancellations fail silently during TP1→breakeven order updates, leaving ghost orders that cause incorrect fills.
Dual Stop System (USE_DUAL_STOPS=true):
// Soft stop: TRIGGER_LIMIT at -1.5% (avoids wicks)
// Hard stop: TRIGGER_MARKET at -2.5% (guarantees exit)
Order types:
- Entry: MARKET (immediate execution)
- TP1/TP2: LIMIT reduce-only orders
- Soft SL: TRIGGER_LIMIT reduce-only
- Hard SL: TRIGGER_MARKET reduce-only
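A small TypeScript sketch of how the two stop prices relate to entry under this dual-stop setup; the -1.5% / -2.5% offsets are the values quoted above, and actual order placement goes through `placeExitOrders()`:

```typescript
// Illustrative dual-stop price calculation (long and short aware)
function dualStopPrices(entryPrice: number, direction: 'long' | 'short') {
  const softPct = 1.5 // TRIGGER_LIMIT - avoids wicks
  const hardPct = 2.5 // TRIGGER_MARKET - guarantees exit
  const sign = direction === 'long' ? -1 : 1 // stops sit below a long, above a short

  return {
    softStop: entryPrice * (1 + (sign * softPct) / 100),
    hardStop: entryPrice * (1 + (sign * hardPct) / 100),
  }
}

// Long SOL from $140 → soft stop ≈ $137.90, hard stop ≈ $136.50
console.log(dualStopPrices(140, 'long'))
```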
9. Database (lib/database/trades.ts + prisma/schema.prisma)
Purpose: PostgreSQL via Prisma ORM for trade history and analytics
Models: Trade, PriceUpdate, SystemEvent, DailyStats, BlockedSignal
Singleton pattern: Use getPrismaClient() - never instantiate PrismaClient directly
Key functions:
- `createTrade()` - Save trade after execution (includes dual stop TX signatures + signalQualityScore)
- `updateTradeExit()` - Record exit with P&L
- `addPriceUpdate()` - Track price movements (called by Position Manager)
- `getTradeStats()` - Win rate, profit factor, avg win/loss
- `getLastTrade()` - Fetch most recent trade for analytics dashboard
- `createBlockedSignal()` - Save blocked signals for data-driven optimization analysis
- `getRecentBlockedSignals()` - Query recent blocked signals
- `getBlockedSignalsForAnalysis()` - Fetch signals needing price analysis (future automation)
Important fields:
signalSource(String?) - Identifies trade origin: 'tradingview', 'manual', or NULL (old trades)- CRITICAL: Manual Telegram trades are marked
signalSource='manual'and excluded from TradingView indicator analysis - Use filter:
WHERE ("signalSource" IS NULL OR "signalSource" != 'manual')for indicator optimization queries - See
docs/MANUAL_TRADE_FILTERING.mdfor complete SQL filtering guide
- CRITICAL: Manual Telegram trades are marked
signalQualityScore(Int?) - 0-100 score for data-driven optimizationsignalQualityVersion(String?) - Tracks which scoring logic was used ('v1', 'v2', 'v3', 'v4')- v1: Original logic (price position < 5% threshold)
- v2: Added volume compensation for low ADX (2025-11-07)
- v3: Stricter breakdown requirements: positions < 15% require (ADX > 18 AND volume > 1.2x) OR (RSI < 35 for shorts / RSI > 60 for longs)
- v4: CURRENT - Blocked signals tracking enabled for data-driven threshold optimization (2025-11-11)
- All new trades tagged with current version for comparative analysis
maxFavorableExcursion/maxAdverseExcursion- Track best/worst P&L during trade lifetimemaxFavorablePrice/maxAdversePrice- Track prices at MFE/MAE pointsconfigSnapshot(Json) - Stores Position Manager state for crash recoveryatr,adx,rsi,volumeRatio,pricePosition- Context metrics from TradingView
BlockedSignal model fields (NEW):
- Signal metrics:
atr,adx,rsi,volumeRatio,pricePosition,timeframe - Quality scoring:
signalQualityScore,signalQualityVersion,scoreBreakdown(JSON),minScoreRequired - Indicator provenance (Nov 28, 2025):
indicatorVersionnow stored for every blocked signal (defaults tov5if alert omits it). Older rows haveNULLhere—only new entries track v8/v9/v10 so quality vs indicator comparisons work going forward. - Block tracking:
blockReason(QUALITY_SCORE_TOO_LOW, COOLDOWN_PERIOD, HOURLY_TRADE_LIMIT, etc.),blockDetails - Future analysis:
priceAfter1/5/15/30Min,wouldHitTP1/TP2/SL,analysisComplete - Automatically saved by check-risk endpoint when signals are blocked
- Enables data-driven optimization: collect 10-20 blocked signals → analyze patterns → adjust thresholds
Per-symbol functions:
- `getLastTradeTimeForSymbol(symbol)` - Get last trade time for specific coin (enables per-symbol cooldown)
- Each coin (SOL/ETH/BTC) has independent cooldown timer to avoid missed opportunities
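A hedged TypeScript sketch of the per-symbol cooldown check; the last-trade timestamp would come from `getLastTradeTimeForSymbol()` above (return shape assumed to be `Date | null`), and the 60-minute window is an illustrative assumption - the real value comes from config:

```typescript
// Illustrative per-symbol cooldown check
const COOLDOWN_MINUTES = 60 // assumption for illustration - the real value is configured

function isSymbolInCooldown(lastTradeTime: Date | null, now = new Date()): boolean {
  if (!lastTradeTime) return false // never traded → no cooldown
  const minutesSince = (now.getTime() - lastTradeTime.getTime()) / 60_000
  return minutesSince < COOLDOWN_MINUTES
}

// SOL being in cooldown does not block ETH or BTC - each symbol is checked independently
console.log(isSymbolInCooldown(new Date(Date.now() - 30 * 60_000))) // true (30 min ago < 60)
```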
ATR-Based Risk Management (Nov 17, 2025)
Purpose: Regime-agnostic TP/SL system that adapts to market volatility automatically instead of using fixed percentages that work in one market regime but fail in another.
Core Concept: ATR (Average True Range) measures actual market volatility - when volatility increases (trending markets), targets expand proportionally. When volatility decreases (choppy markets), targets tighten. This solves the "bull/bear optimization bias" problem where fixed % targets optimized in bearish markets underperform in bullish conditions.
Calculation Formula:
function calculatePercentFromAtr(
atrValue: number, // Absolute ATR value (e.g., 0.43 for SOL)
entryPrice: number, // Position entry price (e.g., $140)
multiplier: number, // ATR multiplier (2.0, 4.0, 3.0)
minPercent: number, // Safety floor (e.g., 0.5%)
maxPercent: number // Safety ceiling (e.g., 1.5%)
): number {
// Convert absolute ATR to percentage of price
const atrPercent = (atrValue / entryPrice) * 100
// Apply multiplier (TP1=2x, TP2=4x, SL=3x)
const targetPercent = atrPercent * multiplier
// Clamp between min/max bounds for safety
return Math.max(minPercent, Math.min(maxPercent, targetPercent))
}
Example Calculation (SOL at $140 with ATR 0.43):
// ATR as percentage: 0.43 / 140 = 0.00307 = 0.307%
// TP1 (close 60%):
// 0.307% × 2.0 = 0.614% → clamped to [0.5%, 1.5%] = 0.614%
// Price target: $140 × 1.00614 = $140.86
// TP2 (activate trailing):
// 0.307% × 4.0 = 1.228% → clamped to [1.0%, 3.0%] = 1.228%
// Price target: $140 × 1.01228 = $141.72
// SL (emergency exit):
// 0.307% × 3.0 = 0.921% → clamped to [0.8%, 2.0%] = 0.921%
// Price target: $140 × 0.99079 = $138.71
Configuration (ENV variables):
# Enable ATR-based system
USE_ATR_BASED_TARGETS=true
# ATR multipliers (tuned for SOL volatility)
ATR_MULTIPLIER_TP1=2.0 # TP1: 2× ATR (first target)
ATR_MULTIPLIER_TP2=4.0 # TP2: 4× ATR (trailing stop activation)
ATR_MULTIPLIER_SL=3.0 # SL: 3× ATR (stop loss)
# Safety bounds (prevent extreme targets)
MIN_TP1_PERCENT=0.5 # Don't go below 0.5% for TP1
MAX_TP1_PERCENT=1.5 # Don't go above 1.5% for TP1
MIN_TP2_PERCENT=1.0 # Don't go below 1.0% for TP2
MAX_TP2_PERCENT=3.0 # Don't go above 3.0% for TP2
MIN_SL_PERCENT=0.8 # Don't go below 0.8% for SL
MAX_SL_PERCENT=2.0 # Don't go above 2.0% for SL
# Legacy fallback (used when ATR unavailable)
STOP_LOSS_PERCENT=-1.5
TAKE_PROFIT_1_PERCENT=0.8
TAKE_PROFIT_2_PERCENT=0.7
Data-Driven ATR Values:
- SOL-PERP: Median ATR 0.43 (from 162 trades, Nov 2024-Nov 2025)
- Range: 0.0-1.17 (extreme outliers during high volatility)
- Typical: 0.32%-0.40% of price
- Used in Telegram manual trade presets
- ETH-PERP: TBD (collect 50+ trades with ATR tracking)
- BTC-PERP: TBD (collect 50+ trades with ATR tracking)
When ATR is Available:
- TradingView signals include `atr` field in webhook payload
- Execute endpoint calculates dynamic TP/SL using ATR × multipliers
- Logs show: `📊 ATR-based targets: TP1 0.86%, TP2 1.72%, SL 1.29%`
- Database saves `atrAtEntry` for post-trade analysis
When ATR is NOT Available:
- Falls back to fixed percentages from ENV (STOP_LOSS_PERCENT, etc.)
- Logs show: `⚠️ No ATR data, using fixed percentages`
- Less optimal but still functional
Regime-Agnostic Benefits:
- Bull markets: Higher volatility → ATR increases → targets expand automatically
- Bear markets: Lower volatility → ATR decreases → targets tighten automatically
- Asset-agnostic: SOL volatility ≠ BTC volatility, ATR adapts to each
- No re-optimization needed: System adapts in real-time without manual tuning
Performance Analysis (Nov 17, 2025):
- Old fixed targets: v6 shorts captured 3% of avg +20.74% MFE moves (TP2 at +0.7%)
- New ATR targets: TP2 at ~1.72% + 40% runner with trailing stop
- Expected improvement: Capture 8-10% of move (3× better than fixed targets)
- Real-world validation: Awaiting 50+ trades with ATR-based exits for statistical confirmation
Code Locations:
- `config/trading.ts` - ATR multiplier fields in TradingConfig interface
- `app/api/trading/execute/route.ts` - calculatePercentFromAtr() function
- `telegram_command_bot.py` - MANUAL_METRICS with ATR 0.43
- `.env` - ATR_MULTIPLIER_* and MIN/MAX_*_PERCENT variables
Integration with TradingView: Ensure alerts include ATR field:
{
"symbol": "{{ticker}}",
"direction": "{{strategy.order.action}}",
"atr": {{ta.atr(14)}}, // CRITICAL: Include 14-period ATR
"adx": {{ta.dmi(14, 14)}},
"rsi": {{ta.rsi(14)}},
// ... other fields
}
Lesson Learned (Nov 17, 2025): Optimizing fixed % targets in one market regime (bearish Nov 2024) creates bias that fails when market shifts (bullish Dec 2024+). ATR-based targets eliminate this bias by adapting to actual volatility, not historical patterns. This is the correct long-term solution for regime-agnostic trading.
Configuration System
Three-layer merge:
DEFAULT_TRADING_CONFIG(config/trading.ts)- Environment variables (.env) via
getConfigFromEnv() - Runtime overrides via
getMergedConfig(overrides)
Always use: getMergedConfig() to get final config - never read env vars directly in business logic
Per-symbol position sizing: Use getPositionSizeForSymbol(symbol, config) which returns { size, leverage, enabled }
const { size, leverage, enabled } = getPositionSizeForSymbol('SOL-PERP', config)
if (!enabled) {
return NextResponse.json({ success: false, error: 'Symbol trading disabled' }, { status: 400 })
}
Symbol normalization: TradingView sends "SOLUSDT" → must convert to "SOL-PERP" for Drift
const driftSymbol = normalizeTradingViewSymbol(body.symbol)
Adaptive Leverage Configuration:
- Helper function: `getLeverageForQualityScore(qualityScore, config)` returns leverage tier based on quality
- Quality threshold: Configured via `QUALITY_LEVERAGE_THRESHOLD` (default: 95)
- Leverage tiers: HIGH_QUALITY_LEVERAGE (default: 15x), LOW_QUALITY_LEVERAGE (default: 10x)
- Integration: Pass `qualityScore` parameter to `getActualPositionSizeForSymbol(symbol, config, qualityScore?)`
- Flow: Quality score → getLeverageForQualityScore() → returns 15x or 10x → applied to position sizing
- Logging: System logs adaptive leverage decisions for monitoring and validation
// Example usage in execute endpoint
const qualityResult = scoreSignalQuality({ atr, adx, rsi, volumeRatio, pricePosition, timeframe })
const { size, leverage } = getActualPositionSizeForSymbol(driftSymbol, config, qualityResult.score)
// leverage is now 15x for quality ≥95, or 10x for quality 90-94
API Endpoints Architecture
Authentication: All /api/trading/* endpoints (except /test) require Authorization: Bearer API_SECRET_KEY
Pattern: Each endpoint follows the same flow (a skeletal sketch follows the endpoint list below):
- Auth check
- Get config via
getMergedConfig() - Initialize Drift service
- Check account health
- Execute operation
- Save to database
- Add to Position Manager if applicable
Key endpoints:
- `/api/trading/execute` - Main entry point from n8n (production, requires auth), auto-caches market data
- `/api/trading/check-risk` - Pre-execution validation (duplicate check, quality score ≥91, per-symbol cooldown, rate limits, symbol enabled check, saves blocked signals automatically)
- `/api/trading/test` - Test trades from settings UI (no auth required, respects symbol enable/disable)
- `/api/trading/close` - Manual position closing (requires symbol normalization)
- `/api/trading/sync-positions` - Force Position Manager sync with Drift (POST, requires auth) - restores tracking for orphaned positions
- `/api/trading/cancel-orders` - Manual order cleanup (for stuck/ghost orders after rate limit failures)
- `/api/trading/positions` - Query open positions from Drift
- `/api/trading/market-data` - Webhook for TradingView market data updates (GET for debug, POST for data)
- `/api/drift/account-health` - GET account metrics (Dec 1, 2025) - Returns { totalCollateral, freeCollateral, totalLiability, marginRatio } from Drift Protocol for real-time UI display
- `/api/settings` - Get/update config (writes to .env file, includes per-symbol settings and direction-specific leverage thresholds)
- `/api/analytics/last-trade` - Fetch most recent trade details for dashboard (includes quality score)
- `/api/analytics/reentry-check` - Validate manual re-entry with fresh TradingView data + recent performance
- `/api/analytics/version-comparison` - Compare performance across signal quality logic versions (v1/v2/v3/v4)
- `/api/restart` - Create restart flag for watch-restart.sh script
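A skeletal TypeScript sketch of the shared endpoint flow described above, in Next.js route-handler style; `getMergedConfig()` and `initializeDriftService()` are the helpers this doc references, mocked here so the sketch stands alone:

```typescript
// Illustrative Next.js route-handler skeleton following the shared endpoint flow above.
import { NextResponse } from 'next/server'

const getMergedConfig = () => ({ dryRun: true })              // mock of the real config helper
const initializeDriftService = async () => ({                 // mock of the real Drift singleton
  getAccountHealth: async () => ({ totalCollateral: 0 }),
})

export async function POST(request: Request) {
  // 1. Auth check - all /api/trading/* endpoints except /test require this header
  const auth = request.headers.get('authorization')
  if (auth !== `Bearer ${process.env.API_SECRET_KEY}`) {
    return NextResponse.json({ success: false, error: 'Unauthorized' }, { status: 401 })
  }

  // 2-4. Config, Drift service, account health
  const config = getMergedConfig()
  const driftService = await initializeDriftService()
  const health = await driftService.getAccountHealth()

  // 5-7. Execute the operation, save to database FIRST, then add to Position Manager
  //      (see the execution-order rules in Critical Workflows below)
  return NextResponse.json({ success: true, dryRun: config.dryRun, health })
}
```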
Critical Workflows
Execute Trade (Production)
TradingView alert → n8n Parse Signal Enhanced (extracts metrics + timeframe + MA crossover flags)
↓ /api/trading/check-risk [validates quality score ≥91, checks duplicates, per-symbol cooldown]
↓ /api/trading/execute
↓ normalize symbol (SOLUSDT → SOL-PERP)
↓ getMergedConfig()
↓ scoreSignalQuality({ ..., timeframe }) [CRITICAL: calculate EARLY for ALL timeframes - line 112, Nov 26]
↓ IF timeframe !== '5': Save to BlockedSignal with quality scores → return success
↓ IF timeframe === '5': Continue to execution (production trade)
↓ getPositionSizeForSymbol(qualityScore) [adaptive leverage based on quality score]
↓ openPosition() [MARKET order with adaptive leverage]
↓ calculate dual stop prices if enabled
↓ placeExitOrders() [on-chain TP1/TP2/SL orders]
↓ createTrade() [CRITICAL: save to database FIRST - see Common Pitfall #27]
↓ positionManager.addTrade() [ONLY after DB save succeeds - prevents unprotected positions]
n8n Parse Signal Enhanced Workflow (Nov 27, 2025 - Updated Dec 7, 2025):
- File:
workflows/trading/parse_signal_enhanced.json - CRITICAL: Symbol Normalization Happens HERE (Dec 7, 2025 discovery):
- TradingView sends raw symbol (SOLUSDT, FARTCOIN, etc.)
- n8n extracts symbol from message body and normalizes to Drift format (*-PERP)
- Bot receives ALREADY NORMALIZED symbols (SOL-PERP, FARTCOIN-PERP)
- Bot normalization code is NOT used - n8n does it first
- To add new symbols: Update n8n workflow regex + mapping logic, then import to n8n
- Symbol Extraction Regex (Dec 7, 2025):
```javascript
const symbolMatch = body.match(/\b(FARTCOIN|FART|SOL|BTC|ETH)\b/i); // CRITICAL: FARTCOIN checked BEFORE SOL (substring match issue)
if (matched === 'FARTCOIN' || matched === 'FART') {
  symbol = 'FARTCOIN-PERP';
} else {
  symbol = matched + '-PERP'; // SOL → SOL-PERP, BTC → BTC-PERP, etc.
}
```
- Extracts from TradingView alerts:
- Standard metrics: symbol, direction, timeframe, ATR, ADX, RSI, VOL, POS, MAGAP, signalPrice, indicatorVersion
- MA Crossover Detection (NEW):
`isMACrossover`, `isDeathCross`, `isGoldenCross` flags
- Detection logic: Searches for "crossing" keyword (case-insensitive) in alert message
- `isMACrossover = true` if "crossing" found
- `isDeathCross = true` if MA50 crossing below MA200 (short/sell direction)
- `isGoldenCross = true` if MA50 crossing above MA200 (long/buy direction)
- Purpose: Enables data collection for MA crossover pattern validation (ADX weak→strong hypothesis)
- TradingView Alert Setup: "MA50&200 Crossing" condition, once per bar close, 5-minute chart
- Goal: Collect 5-10 crossover examples to validate v9's early detection pattern (signals 35 min before actual cross)
CRITICAL EXECUTION ORDER (Nov 26, 2025 - Multi-Timeframe Quality Scoring): Quality scoring MUST happen BEFORE timeframe filtering - this is NOT arbitrary:
- All timeframes (5min, 15min, 1H, 4H, Daily) need real quality scores for analysis
- Data collection signals (15min+) save to BlockedSignal with full quality metadata
- Enables SQL queries: `WHERE blockReason = 'DATA_COLLECTION_ONLY' AND signalQualityScore >= X`
- Purpose: Compare quality-filtered win rates across timeframes to determine optimal trading interval
- Old flow: Timeframe check → Quality score only for 5min → Data collection signals get hardcoded 0
- New flow: Quality score ALL signals → Timeframe routing → Data collection gets real scores
CRITICAL EXECUTION ORDER (Nov 24, 2025 - Adaptive Leverage): The order of quality scoring → position sizing is NOT arbitrary - it's a requirement:
- Quality score MUST be calculated BEFORE position sizing
- Adaptive leverage depends on quality score value
- Old flow: Open position → Calculate quality → Save to DB (quality used for records only)
- New flow: Calculate quality → Determine leverage → Open position with adaptive size
- Never calculate quality after position opening - leverage must be determined first
CRITICAL EXECUTION ORDER (Nov 13, 2025 Fix): The order of database save → Position Manager add is NOT arbitrary - it's a safety requirement (see the sketch after this list):
- If database save fails, API returns HTTP 500 with critical warning
- User sees: "CLOSE POSITION MANUALLY IMMEDIATELY" with transaction signature
- Position Manager only tracks database-persisted trades
- Container restarts can restore all positions from database
- Never add to Position Manager before database save - creates unprotected positions
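A hedged sketch of that ordering inside the execute endpoint (the `openResult` shape, function name, and response wording below are assumptions; `createTrade()` and `positionManager.addTrade()` are the documented calls):

```typescript
// Sketch of the database-first ordering (not the actual execute route code).
import { NextResponse } from 'next/server'

declare const tradeParams: unknown
declare const openResult: { txSignature: string } // assumed shape for the opened position
declare function createTrade(params: unknown): Promise<{ id: string }>
declare const positionManager: { addTrade(trade: { id: string }): void }

export async function persistThenTrack() {
  try {
    const trade = await createTrade(tradeParams)  // 1. DB save must succeed first
    positionManager.addTrade(trade)               // 2. only then start Position Manager tracking
    return NextResponse.json({ success: true, tradeId: trade.id })
  } catch (dbError) {
    // Position exists on-chain but is untracked - fail loudly so the user closes it
    console.error('❌ DB save failed - CLOSE POSITION MANUALLY IMMEDIATELY', dbError)
    return NextResponse.json(
      { success: false, error: 'CLOSE POSITION MANUALLY IMMEDIATELY', txSignature: openResult.txSignature },
      { status: 500 }
    )
  }
}
```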
Position Monitoring Loop
Position Manager every 2s:
↓ Verify on-chain position still exists (detect external closures)
↓ getPythPriceMonitor().getLatestPrice()
↓ Calculate current P&L and update MAE/MFE metrics
↓ Check emergency stop (-2%) → closePosition(100%)
↓ Check SL hit → closePosition(100%)
↓ Check TP1 hit → closePosition(75%), cancelAllOrders(), placeExitOrders() with SL at breakeven
↓ Check profit lock trigger (+1.2%) → move SL to +configured%
↓ Check TP2 hit → closePosition(80% of remaining), activate runner
↓ Check trailing stop (if runner active) → adjust SL dynamically based on peakPrice
↓ addPriceUpdate() [save to database every N checks]
↓ saveTradeState() [persist Position Manager state + MAE/MFE for crash recovery]
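A condensed sketch of the per-tick decision order shown above (every identifier below is a placeholder; the real loop lives in `lib/trading/position-manager.ts` and reads its thresholds from config):

```typescript
// Condensed sketch of the 2-second monitoring tick - ordering only, not the real code.
interface ActiveTrade { symbol: string; runnerActive: boolean }

declare function pnlPct(t: ActiveTrade, price: number): number
declare function updateMaeMfe(t: ActiveTrade, pnlPercent: number): void
declare function hitStopLoss(t: ActiveTrade, price: number): boolean
declare function hitTp1(t: ActiveTrade, price: number): boolean
declare function hitTp2(t: ActiveTrade, price: number): boolean
declare function closePosition(t: ActiveTrade, percent: number): Promise<void>
declare function cancelAllOrders(symbol: string): Promise<void>
declare function placeExitOrders(t: ActiveTrade, opts: { slAtBreakeven: boolean }): Promise<void>
declare function moveStopToProfitLock(t: ActiveTrade): void
declare function trailStop(t: ActiveTrade, price: number): void

async function monitorTick(trade: ActiveTrade, price: number): Promise<void> {
  const pnlPercent = pnlPct(trade, price)
  updateMaeMfe(trade, pnlPercent)                                   // MAE/MFE updated every tick

  if (pnlPercent <= -2.0) return closePosition(trade, 100)          // emergency stop
  if (hitStopLoss(trade, price)) return closePosition(trade, 100)   // SL hit
  if (hitTp1(trade, price)) {
    await closePosition(trade, 75)                                  // TP1: close 75%
    await cancelAllOrders(trade.symbol)
    await placeExitOrders(trade, { slAtBreakeven: true })           // re-place with SL at breakeven
    return
  }
  if (pnlPercent >= 1.2) moveStopToProfitLock(trade)                // profit lock trigger
  if (hitTp2(trade, price)) {
    await closePosition(trade, 80)                                  // TP2: 80% of remaining
    trade.runnerActive = true                                       // activate runner
  }
  if (trade.runnerActive) trailStop(trade, price)                   // trail SL off peakPrice
}
```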
Settings Update
Web UI → /api/settings POST
↓ Validate new settings
↓ Write to .env file using string replacement
↓ Return success
↓ User clicks "Restart Bot" → /api/restart
↓ Creates /tmp/trading-bot-restart.flag
↓ watch-restart.sh detects flag
↓ Executes: docker restart trading-bot-v4
Docker Context
Multi-stage build: deps → builder → runner (Node 20 Alpine)
Critical Dockerfile steps:
- Install deps with `npm install --production`
- Copy source and `npx prisma generate` (MUST happen before build)
- `npm run build` (Next.js standalone output)
- Runner stage copies standalone + static + node_modules + Prisma client
Container networking:
- External: `trading-bot-v4` on port 3001
- Internal: Next.js on port 3000
- Database: `trading-bot-postgres` on 172.28.0.0/16 network
DATABASE_URL caveat: Use trading-bot-postgres (container name) in .env for runtime, but localhost:5432 for Prisma CLI migrations from host
High Availability Infrastructure (Nov 25, 2025 - PRODUCTION READY)
Status: ✅ FULLY AUTOMATED - Zero-downtime failover validated in production
Architecture Overview:
```
Primary Server (srvdocker02)          Secondary Server (Hostinger)
95.216.52.28:3001                     72.62.39.24:3001
├── trading-bot-v4 (Docker)           ├── trading-bot-v4-secondary (Docker)
├── trading-bot-postgres              ├── trading-bot-postgres (replica)
├── nginx (HTTPS/SSL)                 ├── nginx (HTTPS/SSL)
└── Source: Active deployment         └── Source: Standby (real-time sync)
                    ↓
        DNS: tradervone.v4.dedyn.io
        (INWX automatic failover)
                    ↓
        Monitoring: dns-failover.service
        (systemd service on secondary)
```
Key Components:
- Database Replication (PostgreSQL Streaming)
  - Type: Asynchronous streaming replication
  - Lag: <1 second typical
  - Config: `/home/icke/traderv4/docs/DEPLOY_SECONDARY_MANUAL.md`
  - Verify: `ssh root@72.62.39.24 'docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "SELECT status, write_lag FROM pg_stat_replication;"'`
- DNS Failover Monitor (Automated)
  - Service: `/etc/systemd/system/dns-failover.service`
  - Script: `/usr/local/bin/dns-failover-monitor.py`
  - Check interval: 30 seconds
  - Failure threshold: 3 consecutive failures (90 seconds total)
  - Health endpoint: `http://95.216.52.28:3001/api/health` (must return valid JSON)
  - Logs: `/var/log/dns-failover.log`
  - Status: `ssh root@72.62.39.24 'systemctl status dns-failover'`
- Automatic Failover Sequence:
  ```
  Primary Failure Detected (3 × 30s checks = 90s)
    ↓
  DNS Update via INWX API (<1 second)
  tradervone.v4.dedyn.io: 95.216.52.28 → 72.62.39.24
    ↓
  Secondary Takes Over (0s downtime)
  TradingView webhooks → Secondary bot
    ↓
  Primary Recovery Detected
    ↓
  Automatic Failback (<1 second)
  tradervone.v4.dedyn.io: 72.62.39.24 → 95.216.52.28
  ```
- Live Test Results (Nov 25, 2025 21:53-22:00 CET):
- Detection Time: 90 seconds (3 × 30s health checks)
- Failover Execution: <1 second (DNS update)
- Service Downtime: 0 seconds (seamless takeover)
- Failback: Automatic and immediate when primary recovered
- Total Cycle: ~7 minutes from failure to full restoration
- Result: ✅ Zero downtime, zero duplicate trades, zero data loss
Critical Operational Notes:
- Primary Health Check Firewall: pfSense rule allows Hostinger (72.62.39.24) → srvdocker02:3001 for health checks
- Both Bots on Port 3001: Reverse proxies handle HTTPS, internal port standardized for consistency
- Health Endpoint Requirements: Must return valid JSON (not HTML 404). Monitor uses JSON validation to detect failures.
- Manual Failover (Emergency): `ssh root@72.62.39.24 'python3 /usr/local/bin/manual-dns-switch.py secondary'`
- Update Secondary Bot:
  ```bash
  rsync -avz --exclude 'node_modules' --exclude '.next' --exclude 'logs' \
    /home/icke/traderv4/ root@72.62.39.24:/root/traderv4-secondary/
  ssh root@72.62.39.24 'cd /root/traderv4-secondary && docker compose build trading-bot && docker compose up -d --force-recreate trading-bot'
  ```
Documentation References:
- Deployment Guide: `docs/DEPLOY_SECONDARY_MANUAL.md` (689 lines)
- Roadmap: `HA_SETUP_ROADMAP.md` (all phases complete)
- Git Commits:
  - `99dc736` - Deployment guide with test results
  - `62c7b70` - Roadmap completion documentation
Why This Matters:
- Financial Protection: Trading bot stays online 24/7 even if primary server fails
- Zero Downtime: Automatic failover ensures no missed trading signals
- Data Integrity: Database replication prevents trade history loss
- Peace of Mind: System handles failures autonomously while user sleeps
- Cost: ~$20-30/month for enterprise-grade 99.9%+ uptime
When Making Changes:
- Code Deployments: Deploy to primary first, test, then rsync to secondary
- Database Migrations: Run on primary only (replicates automatically)
- Container Restarts: Primary can be restarted safely, failover protection active
- Testing: Use `docker stop trading-bot-v4` on primary to test failover (verified working)
- Monitor Logs: `ssh root@72.62.39.24 'tail -f /var/log/dns-failover.log'` to watch health checks
Project-Specific Patterns
1. Singleton Services
Never create multiple instances - always use getter functions:
```typescript
const driftService = await initializeDriftService() // NOT: new DriftService()
const positionManager = getPositionManager()         // NOT: new PositionManager()
const prisma = getPrismaClient()                      // NOT: new PrismaClient()
```
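The getter pattern behind these services looks roughly like this (a generic sketch, not the actual implementation):

```typescript
// Generic sketch of the module-level singleton getter pattern.
class PositionManager {
  /* ...real class lives in lib/trading/position-manager.ts... */
}

let instance: PositionManager | null = null

export function getPositionManager(): PositionManager {
  if (!instance) {
    instance = new PositionManager() // constructed exactly once per process
  }
  return instance
}

// Callers never use `new PositionManager()` directly:
const pm = getPositionManager()
```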
2. Price Calculations
Direction matters for long vs short:
```typescript
function calculatePrice(entry: number, percent: number, direction: 'long' | 'short') {
  if (direction === 'long') {
    return entry * (1 + percent / 100) // Long: +1% = higher price
  } else {
    return entry * (1 - percent / 100) // Short: +1% = lower price
  }
}
```
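Quick usage check of the direction handling:

```typescript
// Direction handling for calculatePrice() above
calculatePrice(100, 1, 'long')   // → 101 (long target is above entry)
calculatePrice(100, 1, 'short')  // → 99  (short target is below entry)
```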
3. Error Handling
Database failures should not fail trades - always wrap in try/catch:
```typescript
try {
  await createTrade(params)
  console.log('💾 Trade saved to database')
} catch (dbError) {
  console.error('❌ Failed to save trade:', dbError)
  // Don't fail the trade if database save fails
}
```
4. Reduce-Only Orders
All exit orders MUST be reduce-only (can only close, not open positions):
```typescript
const orderParams = {
  reduceOnly: true, // CRITICAL for TP/SL orders
  // ... other params
}
```
5. Nextcloud Deck Roadmap Sync
Purpose: Visual kanban board for tracking optimization roadmap progress
Key Components:
- `scripts/discover-deck-ids.sh` - Find Nextcloud Deck board/stack IDs
- `scripts/sync-roadmap-to-deck.py` - Sync roadmap files to Deck cards
- `docs/NEXTCLOUD_DECK_SYNC.md` - Complete documentation
Workflow:
```bash
# One-time setup (already done)
bash scripts/discover-deck-ids.sh  # Creates /tmp/deck-config.json

# Always dry-run first to preview changes
python3 scripts/sync-roadmap-to-deck.py --init --dry-run

# Sync roadmap to Deck (creates/updates cards)
python3 scripts/sync-roadmap-to-deck.py --init
```
Stack Mapping:
- 📥 Backlog: Future phases, ideas, ML work (status: FUTURE)
- 📋 Planning: Next phases, ready to implement (status: PENDING, NEXT)
- 🚀 In Progress: Currently active work (status: CURRENT, IN PROGRESS, DEPLOYED)
- ✅ Complete: Finished phases (status: COMPLETE)
Card Structure:
- 3 high-level initiative cards (from `OPTIMIZATION_MASTER_ROADMAP.md`)
- 18 detailed phase cards (from individual roadmap files)
- Total: 21 cards tracking all optimization work
When to Sync:
- After completing a phase (update markdown status → re-sync)
- When starting new phase (move card in Deck UI)
- Weekly during active development to keep visual state current
Important Notes:
- API doesn't support duplicate detection - always use `--dry-run` first
- Manual card deletion required (API returns 405 on DELETE)
- Code blocks auto-removed from descriptions (prevent API errors)
- Card titles cleaned (no markdown, emojis removed for readability)
Testing Commands
```bash
# Local development
npm run dev

# Build production
npm run build && npm start

# Docker build and restart
docker compose build trading-bot
docker compose up -d --force-recreate trading-bot
docker logs -f trading-bot-v4

# Database operations
npx prisma generate  # Generate client
DATABASE_URL="postgresql://...@localhost:5432/..." npx prisma migrate dev
docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "\dt"

# Test trade from UI
# Go to http://localhost:3001/settings
# Click "Test LONG" or "Test SHORT"
```
SQL Analysis Queries
Essential queries for monitoring signal quality and blocked signals. Run via:
docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "YOUR_QUERY"
Phase 1: Monitor Data Collection Progress
```sql
-- Check blocked signals count (target: 10-20 for Phase 2)
SELECT COUNT(*) as total_blocked FROM "BlockedSignal";

-- Score distribution of blocked signals
SELECT
  CASE
    WHEN "signalQualityScore" >= 60 THEN '60-64 (Close Call)'
    WHEN "signalQualityScore" >= 55 THEN '55-59 (Marginal)'
    WHEN "signalQualityScore" >= 50 THEN '50-54 (Weak)'
    ELSE '0-49 (Very Weak)'
  END as tier,
  COUNT(*) as count,
  ROUND(AVG("signalQualityScore")::numeric, 1) as avg_score
FROM "BlockedSignal"
WHERE "blockReason" = 'QUALITY_SCORE_TOO_LOW'
GROUP BY tier
ORDER BY MIN("signalQualityScore") DESC;

-- Recent blocked signals with full details
SELECT
  symbol,
  direction,
  "signalQualityScore" as score,
  ROUND(adx::numeric, 1) as adx,
  ROUND(atr::numeric, 2) as atr,
  ROUND("pricePosition"::numeric, 1) as pos,
  ROUND("volumeRatio"::numeric, 2) as vol,
  "blockReason",
  TO_CHAR("createdAt", 'MM-DD HH24:MI') as time
FROM "BlockedSignal"
ORDER BY "createdAt" DESC
LIMIT 10;
```
Phase 2: Compare Blocked vs Executed Trades
```sql
-- Compare executed trades in 60-69 score range
SELECT
  "signalQualityScore" as score,
  COUNT(*) as trades,
  ROUND(AVG("realizedPnL")::numeric, 2) as avg_pnl,
  ROUND(SUM("realizedPnL")::numeric, 2) as total_pnl,
  ROUND(100.0 * SUM(CASE WHEN "realizedPnL" > 0 THEN 1 ELSE 0 END) / COUNT(*)::numeric, 1) as win_rate
FROM "Trade"
WHERE "exitReason" IS NOT NULL
  AND "signalQualityScore" BETWEEN 60 AND 69
GROUP BY "signalQualityScore"
ORDER BY "signalQualityScore";

-- Block reason breakdown
SELECT
  "blockReason",
  COUNT(*) as count,
  ROUND(AVG("signalQualityScore")::numeric, 1) as avg_score
FROM "BlockedSignal"
GROUP BY "blockReason"
ORDER BY count DESC;
```
Analyze Specific Patterns
```sql
-- Blocked signals at range extremes (price position)
SELECT
  direction,
  "signalQualityScore" as score,
  ROUND("pricePosition"::numeric, 1) as pos,
  ROUND(adx::numeric, 1) as adx,
  ROUND("volumeRatio"::numeric, 2) as vol,
  symbol,
  TO_CHAR("createdAt", 'MM-DD HH24:MI') as time
FROM "BlockedSignal"
WHERE "blockReason" = 'QUALITY_SCORE_TOO_LOW'
  AND ("pricePosition" < 10 OR "pricePosition" > 90)
ORDER BY "signalQualityScore" DESC;

-- ADX distribution in blocked signals
SELECT
  CASE
    WHEN adx >= 25 THEN 'Strong (25+)'
    WHEN adx >= 20 THEN 'Moderate (20-25)'
    WHEN adx >= 15 THEN 'Weak (15-20)'
    ELSE 'Very Weak (<15)'
  END as adx_tier,
  COUNT(*) as count,
  ROUND(AVG("signalQualityScore")::numeric, 1) as avg_score
FROM "BlockedSignal"
WHERE "blockReason" = 'QUALITY_SCORE_TOO_LOW'
  AND adx IS NOT NULL
GROUP BY adx_tier
ORDER BY MIN(adx) DESC;
```
Usage Pattern:
- Run "Monitor Data Collection" queries weekly during Phase 1
- Once 10+ blocked signals collected, run "Compare Blocked vs Executed" queries
- Use "Analyze Specific Patterns" to identify optimization opportunities
- Full query reference: `BLOCKED_SIGNALS_TRACKING.md`
Common Pitfalls
⚠️ CRITICAL REFERENCE: See docs/COMMON_PITFALLS.md for complete list (73 documented issues)
This section contains the TOP 10 MOST CRITICAL pitfalls that every AI agent must know. For full details, category breakdowns, code examples, and historical context, see the complete documentation.
🔴 TOP 10 CRITICAL PITFALLS
1. Position Manager Monitoring Stops Randomly (#73 - CRITICAL - Dec 7, 2025)
- Symptom: PM last update at 23:21 Dec 6, stopped for 90+ minutes, user forced to manually close
- Root Cause: Drift state propagation delay (5+ min) → 60s timeout expires → false "external closure" detection → `activeTrades.delete()` → monitoring stops
- Financial Impact: Real losses during unmonitored period
- THE FIX (3 Safety Layers - DEPLOYED Dec 7, 2025):
- Layer 1: Extended timeout from 60s → 5 minutes (allows Drift state to propagate)
- Layer 2: Double-check with 10s delay before processing external closure (catches false positives)
- Layer 3: Verify Drift has no positions before calling stopMonitoring() (fail-safe)
- Code Locations:
  - Layer 1: `lib/trading/position-manager.ts` line ~792 (timeout extension)
  - Layer 2: `lib/trading/position-manager.ts` line ~603 (double-check logic)
  - Layer 3: `lib/trading/position-manager.ts` line ~1069 (Drift verification)
- Expected Impact: Zero unprotected positions, false positive detection eliminated
- Status: ✅ DEPLOYED Dec 7, 2025 02:47 UTC (commit `ed9e4d5`)
- See: `docs/PM_MONITORING_STOP_ROOT_CAUSE_DEC7_2025.md` for complete analysis
2. Drift SDK Memory Leak (#1) - JavaScript heap OOM after 10+ hours
- Solution: Smart error-based health monitoring (`lib/monitoring/drift-health-monitor.ts`)
- Detection: `interceptWebSocketErrors()` patches console.error
- Action: Restarts if 50+ errors in 30-second window
- Status: Fixed Nov 15, 2025, Enhanced Nov 24, 2025
3. Wrong RPC Provider (#2) - Alchemy breaks Drift SDK subscriptions
- FINAL CONCLUSION: Use Helius RPC, NEVER use Alchemy
- Root Cause: Alchemy rate limits break Drift's burst subscription pattern
- Evidence: 17-71 subscription errors with Alchemy vs 0 with Helius
- Status: Investigation Complete Nov 14, 2025
4. P&L Compounding Race Condition (#48, #49, #59, #60, #61, #67)
- Pattern: Multiple monitoring loops detect same closure → each adds P&L
- Result: $6 real → $92 recorded (15x inflation)
- Fix: Use `Map.delete()` atomic return as deduplication lock (Dec 2, 2025)
- Code: `if (!this.activeTrades.delete(tradeId)) return` - first caller wins (see the sketch below)
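A minimal sketch of that deduplication lock, assuming a simplified handler class (the real code lives in the Position Manager closure handlers):

```typescript
// Sketch of the Map.delete() deduplication lock (illustrative class/method names).
// Map.delete() returns true only for the caller that actually removed the key,
// so concurrent handlers for the same tradeId all bail except the first.
class ClosureHandler {
  private activeTrades = new Map<string, { realizedPnL: number }>()

  async handleClosure(tradeId: string, pnl: number): Promise<void> {
    if (!this.activeTrades.delete(tradeId)) {
      return // another code path already processed this closure - do NOT add P&L again
    }
    // First (and only) caller records the P&L exactly once
    await this.recordPnL(tradeId, pnl)
  }

  private async recordPnL(tradeId: string, pnl: number): Promise<void> {
    console.log(`💰 Recorded P&L for ${tradeId}: ${pnl}`)
  }
}
```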
5. Database-First Pattern (#29) - Save DB before Position Manager
- Rule: `createTrade()` MUST succeed before `positionManager.addTrade()`
- Why: If DB fails, API returns 500 with "CLOSE POSITION MANUALLY"
- Impact: Without this, positions become untracked on container restart
- Status: Fixed Nov 13, 2025
6. Container Deployment Verification (#31)
- Rule: NEVER say "fixed" without checking container timestamp
- Verification: `docker logs trading-bot-v4 | grep "Server starting"` vs `git log -1 --format='%ai'`
- If container older than commit: CODE NOT DEPLOYED, FIX NOT ACTIVE
- Status: Critical lesson from Nov 13, 2025 incident
7. Position.size Tokens vs USD (#24) - SDK returns tokens, not USD
- Bug: Comparing 12.28 tokens to $1,950 → "99.4% reduction" → false TP1
- Fix: `positionSizeUSD = Math.abs(position.size) * currentPrice` (see the sketch below)
- Impact: Without fix, TP1 never triggers correctly
- Status: Fixed Nov 12, 2025
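A small sketch of the conversion (only `position.size` being base-asset tokens comes from the bug description above; the helper name and example numbers are illustrative):

```typescript
// Sketch: convert SDK position size (base-asset tokens) to USD before comparing
// against dollar-denominated targets.
function positionSizeUSD(position: { size: number }, currentPrice: number): number {
  return Math.abs(position.size) * currentPrice
}

// Example: 12.28 SOL at $142 ≈ $1,744 - compare THIS to the $1,950 original size,
// not the raw 12.28 token figure (which would look like a ~99% reduction).
const usd = positionSizeUSD({ size: -12.28 }, 142)
console.log(usd.toFixed(2)) // "1743.76"
```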
8. Ghost Detection Atomic Lock (#67) - Map.delete() as deduplication
- Pattern: Async handlers called by multiple code paths simultaneously
- Solution: `if (!this.activeTrades.delete(tradeId)) { return }` - atomic lock
- Why: JavaScript Map.delete() returns true only for first caller
- Status: Fixed Dec 2, 2025
9. Smart Entry Wrong Price (#66, #68) - Use Pyth price, not webhook
- Bug #66: Symbol format mismatch ("SOLUSDT" vs "SOL-PERP") caused cache miss
- Bug #68: Webhook `signal.price` contained percentage (70.80) not market price ($142)
- Fix: Always use `pythClient.getPrice(symbol)` for calculations
- Status: Fixed Dec 1-3, 2025
10. MFE/MAE Wrong Units (#54) - Store percentages, not dollars
- Bug: Storing $64.08 when should store 0.48% (133× inflation)
- Fix: `trade.maxFavorableExcursion = profitPercent` not `currentPnLDollars`
- Impact: Analytics completely wrong for all trades
- Status: Fixed Nov 23, 2025
Quick Links by Category
- P&L Calculation Errors: #11, #41, #48, #49, #54, #57, #61
- Race Conditions: #27, #28, #59, #60, #67
- SDK/API Issues: #1, #2, #12, #24, #36, #45
- Database Operations: #29, #35, #37, #50, #58
- Configuration: #55, #62
- Smart Entry: #63, #66, #68, #70
- Deployment: #31, #47
📚 Full Documentation: docs/COMMON_PITFALLS.md (73 pitfalls with code examples, git commits, deployment dates)
-
CRITICAL: Wrong Year in SQL Queries - ALWAYS Use Current Year (CRITICAL - Dec 8, 2025):
- Symptom: Query returns 247 rows spanning months when expecting 5-6 recent trades
- Root Cause: Database contains older trades with 2024 timestamps, and the AI agent queried '2024-12-07' instead of '2025-12-07'
- Impact: Reported -$1,616 total loss when actual recent loss was -$137.55 (12× inflation)
- User Dispute: "THE LOSS WAS NOT 63$ but 120,89$. where do you get those numbers from??"
- Root Cause Analysis:
- Database exitTime field contains dates like '2024-12-02', '2024-12-05', etc.
- AI agent wrote query: `WHERE "exitTime" >= '2024-12-07 00:00:00'`
- This matched ALL trades from Oct 2024 onwards (247 rows)
- Should have written: `WHERE "exitTime" >= '2025-12-07 00:00:00'`
- Current date is Dec 8, 2025, not 2024
- MANDATORY SQL Pattern - ALWAYS Check Year:
```sql
-- WRONG: Hardcoded 2024 when current year is 2025
SELECT * FROM "Trade" WHERE "exitTime" >= '2024-12-07 00:00:00';

-- CORRECT: Use current year 2025
SELECT * FROM "Trade" WHERE "exitTime" >= '2025-12-07 00:00:00';

-- BEST: Verify current date first
SELECT NOW()::date as current_date; -- Check what year database thinks it is

-- SAFEST: Use relative dates (past 3 days)
SELECT * FROM "Trade" WHERE "exitTime" >= NOW() - INTERVAL '3 days';
```
- Verification Before Reporting Numbers:
- Check row count - if querying "last 3 days" returns 247 rows, year is WRONG
- Verify date range in results: `SELECT MIN("exitTime"), MAX("exitTime") FROM ...`
- Use `TO_CHAR("exitTime", 'YYYY-MM-DD HH24:MI')` to see full dates including year
- Cross-reference with context: User said "emergency today" → query should return TODAY's data only
- Why This Matters:
- This is a REAL MONEY system - wrong loss figures = incorrect financial decisions
- User was EXACTLY RIGHT with $120.89 figure (actual Dec 5-8 losses)
- AI agent gave wrong numbers due to year mismatch in query
- Wasted user time disputing correct figures
- User mandate: "drift tells the truth not you" - trust user's numbers, verify queries
- Prevention Rules:
- ALWAYS use `NOW()` or `CURRENT_DATE` for relative date queries
- NEVER hardcode year without verifying current year first
- ALWAYS check row counts before declaring results accurate
- When user disputes numbers, re-verify query year immediately
- Include full YYYY-MM-DD in SELECT to catch year mismatches
- Red Flags Indicating Year Mismatch:
- Query for "recent trades" returns 100+ rows
- Date range spans months when expecting days
- User says "that's wrong" and provides different figure
- exitTime dates show 2024 but current date is 2025
- Git commit: [Document wrong year SQL query lesson - Dec 8, 2025]
- Status: ✅ Documented - Future AI agents must verify year in date queries
-
CRITICAL: Silent SL Placement Failure - placeExitOrders() Returns SUCCESS With Missing Orders (CRITICAL - Dec 8, 2025):
- Symptom: Position opened with TP1 and TP2 orders but NO stop loss, completely unprotected from downside
- User Report: "when i opened the manually trade we hade a sl and tp but it was removed by the system"
- Financial Impact: Part of $1,000+ losses - positions left open with no SL protection
- Real Incident (Dec 8, 2025 13:39:24):
- Trade: cmix773hk019gn307fjjhbikx
- Symbol: SOL-PERP LONG at $138.45, size $2,003
- TP1 order EXISTS: 2QzE4q9Q... ($139.31)
- TP2 order EXISTS: 5AQRiwRK... ($140.17)
- SL order MISSING: NULL in database (should be $137.16)
- stopLossPrice: Correctly calculated ($137.1551) and passed to placeExitOrders()
- Logs: "📨 Exit orders placed on-chain: [2 signatures]" (expected 3!)
- Function returned: `{success: true, signatures: [tp1Sig, tp2Sig]}` (SL missing)
- Root Cause:
- File: `lib/drift/orders.ts` function `placeExitOrders()` (lines 252-495)
- Lines 465-473: TRIGGER_MARKET SL placement code exists but never executed
- No "🛡️ Placing SL..." log found in container logs
- No error handling around SL placement section
- Function returns SUCCESS even if signatures.length < 3
- No validation before return statement
- Why It's Silent:
- placeExitOrders() doesn't check signatures.length before returning
- Execute endpoint trusts SUCCESS status without validation
- No alerts, no errors, no indication of failure
- Position appears protected but actually isn't
- How It Bypasses Checks:
- Size check: Position 14.47 SOL >> minOrderSize 0.1 SOL (146× above threshold)
- All inputs valid: stopLossPrice calculated correctly, market exists, wallet has balance
- Code path exists but doesn't execute - unknown reason (rate limit? SDK bug? network?)
- Function returns early or skips SL section without throwing error
- Fix Required (Not Yet Implemented):
```typescript
// In lib/drift/orders.ts at end of placeExitOrders() (around line 490)
const expectedCount = useDualStops ? 4 : 3 // TP1 + TP2 + SL (+ hard SL if dual)
if (signatures.length < expectedCount) {
  console.error(`❌ CRITICAL: Only ${signatures.length}/${expectedCount} exit orders placed!`)
  console.error(`   Expected: TP1 + TP2 + SL${useDualStops ? ' + Hard SL' : ''}`)
  console.error(`   Got: ${signatures.length} signatures`)
  return {
    success: false,
    error: `Missing orders: expected ${expectedCount}, got ${signatures.length}`,
    signatures
  }
}

// Add try/catch around SL placement section (lines 346-476)
// Log errors explicitly if SL placement fails
```
- Execute Endpoint Fix Required:
```typescript
// In app/api/trading/execute/route.ts after placeExitOrders() (around line 940)
const expectedSigs = config.useDualStops ? 4 : 3
if (exitRes.signatures && exitRes.signatures.length < expectedSigs) {
  console.error(`❌ CRITICAL: Missing exit orders!`)
  console.error(`   Expected: ${expectedSigs}, Got: ${exitRes.signatures.length}`)
  await logCriticalError('MISSING_EXIT_ORDERS', {
    symbol,
    expectedCount: expectedSigs,
    actualCount: exitRes.signatures.length,
    tradeId: trade.id
  })
}
```
- File: `lib/health/position-manager-health.ts` (177 lines)
- Function: `checkPositionManagerHealth()` runs every 30 seconds
- Check: Open positions missing SL orders → CRITICAL ALERT per position
- Validates: slOrderTx, softStopOrderTx, hardStopOrderTx all present
- Log format: "🚨 CRITICAL: Position {id} missing SL order (symbol: {symbol}, size: ${size})"
- Started automatically via `lib/startup/init-position-manager.ts` line ~78
- Why This Matters:
- This is a REAL MONEY system - no SL = unlimited loss exposure
- Position can drop 5%, 10%, 20% with no protection
- User may be asleep, away, unavailable for hours
- Silent failures are the most dangerous kind
- Function says "success" but position is unprotected
- Prevention Rules:
- ALWAYS validate signatures.length matches expected count
- NEVER return success without verifying all orders placed
- ADD try/catch around ALL order placement sections
- LOG errors explicitly, don't fail silently
- Health monitor will detect missing orders within 30 seconds
- Execute endpoint must validate placeExitOrders() result
- Red Flags Indicating This Bug:
- Logs show "Exit orders placed: [2 signatures]"
- Database slOrderTx field is NULL
- No "🛡️ Placing SL..." log messages
- placeExitOrders() returned success: true
- Position open with TP1/TP2 but no SL
- Git commit: [Pending - health monitoring deployed, placeExitOrders() fix pending]
- Status: ⚠️ Health monitor deployed (detects issue), root cause fix pending
-
CRITICAL: Position Manager Never Actually Monitors - Logs Say "Added" But isMonitoring Stays False (CRITICAL - Dec 8, 2025):
- Symptom: System logs "✅ Trade added to position manager for monitoring" but position never monitored
- User Report: "we have lost 1000$...... i hope with the new test system this is an issue of the past"
- Financial Impact: $1,000+ losses because positions completely unprotected despite logs saying otherwise
- Real Incident (Dec 8, 2025):
- Trade: cmix773hk019gn307fjjhbikx created at 13:39:24
- Logs: "✅ Trade added to position manager for monitoring"
- Database: `configSnapshot.positionManagerState` = NULL (not monitoring!)
- Reality: No price checks, no TP/SL monitoring, no protection whatsoever
- No Pyth price monitor startup logs found
- No price update logs found
- No "checking conditions" logs found
- Root Cause:
- File: `lib/trading/position-manager.ts` (2027 lines)
- Function: `addTrade()` (lines 257-271) - Adds to Map, calls startMonitoring()
- Function: `startMonitoring()` (lines 482-518) - Calls priceMonitor.start()
- Problem: startMonitoring() exists and looks correct but doesn't execute properly
- No verification that monitoring actually started
- No health check that isMonitoring matches activeTrades.size
- Pyth price monitor never starts (no WebSocket connection logs)
- Why It's Catastrophic:
- System SAYS position is protected
- User trusts the logs
- Position actually has ZERO protection
- No TP/SL checks, no emergency stop, no trailing stop
- Position can move 10%+ with no action
- Database shows NULL for positionManagerState (smoking gun)
- The Deception:
- Log message: "✅ Trade added to position manager for monitoring"
- Reality: Trade added to Map but monitoring never starts
- isMonitoring flag stays false
- No price monitor callbacks registered
- Silent failure - no errors thrown
- Detection: Health Monitoring System (Dec 8, 2025):
- File: `lib/health/position-manager-health.ts` (177 lines)
- Function: `checkPositionManagerHealth()` runs every 30 seconds
- Critical Check #1: DB has open trades but PM not monitoring
- Critical Check #2: PM has trades but isMonitoring = false
- Critical Check #3: DB vs PM trade count mismatch
- Alert format: "🚨 CRITICAL: Position Manager not monitoring! DB: {dbCount} open trades, PM: {pmCount} trades, Monitoring: {isMonitoring}"
- Started automatically via `lib/startup/init-position-manager.ts` line ~78
- Test Suite Created:
- File: `tests/integration/position-manager/monitoring-verification.test.ts` (201 lines)
- Test Suite: "CRITICAL: Monitoring Actually Starts" (4 tests)
  - Validates startMonitoring() calls priceMonitor.start()
  - Validates symbols array passed correctly
  - Validates isMonitoring flag set to true
  - Validates monitoring doesn't start twice
- Test Suite: "CRITICAL: Price Updates Actually Trigger Checks" (2 tests)
- Test Suite: "CRITICAL: Monitoring Stops When No Trades" (2 tests)
- Test Suite: "CRITICAL: Error Handling Doesnt Break Monitoring" (1 test)
- Purpose: Validate Position Manager ACTUALLY monitors, not just logs "added"
- Fix Required (Not Yet Implemented):
```typescript
// In lib/trading/position-manager.ts after startMonitoring() call (around line 269)
// Add verification that monitoring actually started
if (this.activeTrades.size > 0 && !this.isMonitoring) {
  console.error(`❌ CRITICAL: Failed to start monitoring!`)
  console.error(`   Active trades: ${this.activeTrades.size}`)
  console.error(`   isMonitoring: ${this.isMonitoring}`)
  await logCriticalError('MONITORING_START_FAILED', {
    activeTradesCount: this.activeTrades.size,
    symbols: Array.from(this.activeTrades.values()).map(t => t.symbol)
  })
}
```
- Why This Matters:
- This is a REAL MONEY system - no monitoring = no protection
- TP/SL orders can fail, monitoring is the backup
- Position Manager is the "safety net" - if it doesn't work, nothing does
- User trusts logs saying "monitoring" - but it's a lie
- $1,000+ losses prove this is NOT theoretical
- Prevention Rules:
- NEVER trust log messages about state - verify actual state
- Health checks MUST validate isMonitoring matches activeTrades
- Test suite MUST validate monitoring actually starts
- Add verification after startMonitoring() calls
- Health monitor detects failures within 30 seconds
- If monitoring fails to start, throw error immediately
- Red Flags Indicating This Bug:
- Logs say "Trade added to position manager for monitoring"
- Database configSnapshot.positionManagerState is NULL
- No Pyth price monitor startup logs
- No price update logs
- No "checking conditions" logs
- Position moves significantly with no PM action
- Git commit: [Health monitoring deployed Dec 8, 2025 - detects issue within 30 seconds]
- Status: ✅ Health monitor deployed (detects issue), root cause investigation ongoing
-
CRITICAL: Orphan Detection Removes Active Position Orders - CancelAllOrders Affects ALL Positions On Symbol (CRITICAL - Dec 8, 2025):
- Symptom: User opens new position with TP/SL orders, system immediately removes them, position left unprotected
- User Report: "when i opened the manually trade we hade a sl and tp but it was removed by the system"
- Financial Impact: Part of $1,000+ losses - active positions stripped of protection while system tries to close old positions
- Real Incident Timeline (Dec 8, 2025):
- 06:46:23 - Old orphaned position: 14.47 SOL-PERP (DB says closed, Drift says open)
- 13:39:24 - User opens NEW manual SOL-PERP LONG at $138.45, size $2,003
- 13:39:25 - placeExitOrders() places TP1 + TP2 (SL fails silently - Bug #76)
- 13:39:26 - Drift state verifier detects OLD orphan (7 hours old)
- 13:39:27 - System attempts to close orphan via market order
- 13:39:28 - Close fails (Drift state propagation delay 5+ min)
- 13:39:30 - Position Manager removeTrade() calls cancelAllOrders(symbol='SOL-PERP')
- 13:39:31 - cancelAllOrders() cancels ALL SOL-PERP orders (TP1 + TP2 from NEW position)
- Result - NEW position left open with NO TP, NO SL, NO PROTECTION
- Root Cause:
- File: `lib/trading/position-manager.ts` function `removeTrade()` (lines 275-300)
- Code: `await cancelAllOrders(symbol)` - operates on SYMBOL level, not position level
- Problem: Doesn't distinguish between old orphaned position and new active position
- When closing orphan, cancels orders for ALL positions on that symbol
- User's NEW position gets orders removed while orphan cleanup runs
- Why It's Dangerous:
- Orphan detection is GOOD (recovers lost positions)
- But cleanup affects ALL positions on symbol, not just orphan
- If user opens position while orphan cleanup runs, new position loses protection
- Window of vulnerability: 5+ minutes (Drift state propagation delay)
- Multiple close attempts = multiple cancelAllOrders() calls
- Code Evidence:
```typescript
// lib/trading/position-manager.ts lines ~285-300
async removeTrade(tradeId: string, reason: string) {
  const trade = this.activeTrades.get(tradeId)
  if (!trade) return

  try {
    // PROBLEM: This cancels ALL orders for the symbol
    // Doesn't check if other active positions exist on same symbol
    await cancelAllOrders(trade.symbol)
    console.log(`🧹 Cancelled all orders for ${trade.symbol}`)
  } catch (error) {
    console.error(`❌ Error cancelling orders:`, error)
  }

  this.activeTrades.delete(tradeId)
}
```
- Orphan Detection Context:
- File: `lib/startup/init-position-manager.ts` function `detectOrphanedPositions()`
- Runs every 10 minutes via Drift state verifier
- Checks: DB says closed but Drift says open → orphan detected
- Action: Attempts to close orphan position
- Side effect: Calls removeTrade() → cancelAllOrders() → affects ALL positions
- Fix Required (Not Yet Implemented):
```typescript
// Option 1: Check Drift position size before cancelling orders
async removeTrade(tradeId: string, reason: string) {
  const trade = this.activeTrades.get(tradeId)
  if (!trade) return

  try {
    // Verify Drift position is actually closed (size = 0)
    const driftPosition = await getDriftPosition(trade.symbol)
    if (driftPosition && Math.abs(driftPosition.size) > 0.01) {
      console.log(`⚠️ Not cancelling orders - Drift position still open`)
      return
    }

    await cancelAllOrders(trade.symbol)
    console.log(`🧹 Cancelled all orders for ${trade.symbol}`)
  } catch (error) {
    console.error(`❌ Error cancelling orders:`, error)
  }

  this.activeTrades.delete(tradeId)
}

// Option 2: Store order IDs with trade, cancel only those specific orders
// This requires tracking orderIds in trade object
```
- Detection: Health Monitoring System:
- File: `lib/health/position-manager-health.ts`
- Check: Open positions missing TP1/TP2 orders → WARNING
- Check: Open positions missing SL orders → CRITICAL ALERT
- Detects orders removed within 30 seconds
- Logs: "🚨 CRITICAL: Position {id} missing SL order"
- Why This Matters:
- This is a REAL MONEY system - removed orders = lost protection
- Orphan detection is necessary (recovers stuck positions)
- But must not affect active positions on same symbol
- User opens position expecting protection, system removes it
- Silent removal - no notification, no alert
- Prevention Rules:
- NEVER cancel orders without verifying position actually closed
- Check Drift position size = 0 before cancelAllOrders()
- Store order IDs per trade, cancel specific orders only
- Health monitor detects missing orders within 30 seconds
- Add grace period for new positions (skip orphan checks <5 min old)
- Log CRITICAL alert when orders removed from active position
- Red Flags Indicating This Bug:
- Position initially has TP/SL orders
- Orders disappear shortly after opening
- Orphan detection logs around same time
- Multiple close attempts on old position
- cancelAllOrders() logs for symbol
- New position left with no orders
- Git commit: [Health monitoring deployed Dec 8, 2025 - detects missing orders]
- Status: ⚠️ Health monitor deployed (detects issue), root cause fix pending
-
CRITICAL: Smart Validation Queue Never Monitors - In-Memory Queue Lost on Container Restart (CRITICAL - Dec 9, 2025):
- Symptom: Quality 50-89 signals blocked and saved to database, but validation queue never monitors them for price confirmation
- User Report: "the smart validation system should have entered the trade as it shot up shouldnt it?"
- Financial Impact: Missed +$18.56 manual entry (SOL-PERP LONG quality 85, price moved +1.21% in 1 minute = 4× the +0.3% confirmation threshold)
- Real Incident (Dec 9, 2025 15:40):
- Signal: cmiyqy6uf03tcn30722n02lnk
- Quality: 85/90 (blocked correctly per thresholds)
- Entry Price: $134.94
- Price after 1min: $136.57 (+1.21%)
- Confirmation threshold: +0.3%
- System should have: Queued → Monitored → Entered at confirmation
- What happened: Signal saved to database, queue NEVER monitored it
- User had to manually enter → +$18.56 profit
- Root Cause #1 - In-Memory Queue Lost on Restart:
- File: `lib/trading/smart-validation-queue.ts`
- Queue uses `Map<string, QueuedSignal>` in-memory storage
- BlockedSignal records saved to PostgreSQL ✅
- But queue Map is empty after container restart ❌
- startSmartValidation() just created empty singleton, never loaded from database
- Root Cause #2 - Production Logger Silencing:
- logger.log() calls silenced when NODE_ENV=production
- File: `lib/utils/logger.ts` - logger.log() only works in dev mode
- Silent failure - no errors, no indication queue was empty
- THE FIX (Dec 9, 2025 - DEPLOYED):
```typescript
// In lib/trading/smart-validation-queue.ts startSmartValidation()
export async function startSmartValidation(): Promise<void> {
  const queue = getSmartValidationQueue()

  // Query BlockedSignal table for signals within 30-minute entry window
  const thirtyMinutesAgo = new Date(Date.now() - 30 * 60 * 1000)
  const recentBlocked = await prisma.blockedSignal.findMany({
    where: {
      blockReason: 'QUALITY_SCORE_TOO_LOW',
      signalQualityScore: { gte: 50, lt: 90 }, // Marginal quality range
      createdAt: { gte: thirtyMinutesAgo },
    },
  })

  console.log(`🔄 Restoring ${recentBlocked.length} pending signals from database`)

  // Re-queue each signal with original parameters
  for (const signal of recentBlocked) {
    await queue.addSignal({ /* signal params */ })
  }

  console.log(`✅ Smart validation restored ${recentBlocked.length} signals, monitoring started`)
}
```
- Why Both Fixes Were Needed:
- Database restoration: Load pending signals from PostgreSQL on startup
- console.log(): Replace logger.log() calls with console.log() for production visibility
- Without #1: Queue always empty after restart
- Without #2: Couldn't debug why queue was empty (no logs)
- Expected Behavior After Fix:
- Container restart: Queries database for signals within 30-minute window
- Signals found: Re-queued with original entry price, quality score, metrics
- Monitoring starts: 30-second price checks begin immediately
- Logs show: "🔄 Restoring N pending signals from database"
- Confirmation: "👁️ Smart validation monitoring started (checks every 30s)"
- Verification Commands:
```bash
# Check startup logs
docker logs trading-bot-v4 2>&1 | grep -E "(Smart|validation|Restor)"

# Expected output:
# 🧠 Starting smart entry validation system...
# 🔄 Restoring N pending signals from database
# ✅ Smart validation restored N signals, monitoring started
# 👁️ Smart validation monitoring started (checks every 30s)

# If N=0, check database for recent signals
docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c \
  "SELECT COUNT(*) FROM \"BlockedSignal\"
   WHERE \"blockReason\" = 'QUALITY_SCORE_TOO_LOW'
   AND \"signalQualityScore\" BETWEEN 50 AND 89
   AND \"createdAt\" > NOW() - INTERVAL '30 minutes';"
```
- Why This Matters:
- This is a REAL MONEY system - validation queue is designed to catch marginal signals that confirm
- Quality 50-89 signals = borderline setups that need price confirmation
- Without validation: Miss profitable confirmed moves (like +$18.56 opportunity)
- System appeared to work (signals blocked correctly, saved to database)
- But critical validation step never executed (queue empty, monitoring off)
- Prevention Rules:
- NEVER use in-memory-only data structures for critical financial logic
- ALWAYS restore state from database on startup for trading systems
- ALWAYS use console.log() for critical startup messages (not logger.log())
- ALWAYS verify monitoring actually started (check logs for confirmation)
- Add database restoration for ANY system that queues/monitors signals
- Test container restart scenarios to catch in-memory state loss
- Red Flags Indicating This Bug:
- BlockedSignal records exist in database but no queue monitoring logs
- Price moves meet confirmation threshold but no execution
- User manually enters trades that validation queue should have handled
- No "👁️ Smart validation check: N pending signals" logs every 30 seconds
- Telegram shows "⏰ SIGNAL QUEUED FOR VALIDATION" but nothing after
- Files Changed:
- lib/trading/smart-validation-queue.ts (Lines 456-500, 137-175, 117-127)
- Git commit: `2a1badf` "critical: Fix Smart Validation Queue - restore signals from database on startup"
- Deploy Status: ✅ DEPLOYED Dec 9, 2025 17:07 CET
- Status: ✅ Fixed - Queue now restores pending signals on startup, production logging enabled
-
CRITICAL: 1-Minute Market Data Webhook Action Mismatch - Fresh ATR Data Never Arriving (CRITICAL - Dec 9, 2025):
- Symptom: Telegram bot timing out waiting for fresh ATR data, falling back to stale preset (0.43)
- User Report: "for some reason we are not getting fresh atr data from the 1 minute data feed"
- Financial Impact: Manual trades executing with stale volatility metrics instead of fresh real-time data
- Real Incident (Dec 9, 2025 19:00):
- User sent "long sol" via Telegram
- Bot response: "⏳ Waiting for next 1-minute datapoint... Will execute with fresh ATR (max 60s)"
- After 60s: "⚠️ Timeout waiting for fresh data. Using preset ATR: 0.43"
- Cache inspection: Only contained "manual" timeframe data (97 seconds old), no fresh 1-minute data
- Logs: No "Received market data webhook" entries
- Root Cause - Webhook Action Validation Mismatch:
- File: `app/api/trading/market-data/route.ts` lines 64-71
- Endpoint validated: `if (body.action !== 'market_data')` (exact string match)
- TradingView alert sends: `"action": "market_data_1min"` (line 54 in 1min_market_data_feed.pinescript)
- Result: Webhook returned 400 Bad Request, data never cached
- Smart entry timer polled empty cache, timed out after 60 seconds
- THE FIX (Dec 9, 2025 - DEPLOYED):
```typescript
// BEFORE (lines 64-71):
if (body.action !== 'market_data') {
  return NextResponse.json(
    { error: 'Invalid action - expected "market_data"' },
    { status: 400 }
  )
}

// AFTER:
const validActions = ['market_data', 'market_data_1min']
if (!validActions.includes(body.action)) {
  return NextResponse.json(
    { error: `Invalid action - expected one of: ${validActions.join(', ')}` },
    { status: 400 }
  )
}
```
- Why This Fix:
- Endpoint now accepts BOTH action variants
- TradingView 1-minute alerts use "market_data_1min" to distinguish from 5-minute signals
- Higher timeframe alerts (15min, 1H, 4H, Daily) use "market_data"
- Single endpoint serves both data collection systems
- Error message updated to show all valid options
- Build Challenges:
- Initial build used cached layers, fix not included in compiled code
- Required `docker compose build trading-bot --no-cache` to force TypeScript recompilation
- Verification: `docker exec trading-bot-v4 grep "validActions" /app/.next/server/app/api/trading/market-data/route.js`
- Verification Complete (Dec 9, 2025 19:18 CET):
- Manual test: `curl -X POST /api/trading/market-data -d '{"action": "market_data_1min", ...}'`
- Response: `{"success": true, "symbol": "SOL-PERP", "message": "Market data cached and stored successfully"}`
- Cache inspection: Fresh data with ATR 0.55, ADX 28.5, RSI 62, timeframe "1", age 9 seconds
- Logs: "📡 Received market data webhook: { action: 'market_data_1min', symbol: 'SOLUSDT', atr: 0.55 }"
- Logs: "✅ Market data cached for SOL-PERP"
- Expected Behavior After Fix:
- TradingView 1-minute alert fires → webhook accepted → data cached
- Telegram "long sol" command → waits for next datapoint → receives fresh data within 60s
- Bot shows: "✅ Fresh data received | ATR: 0.55 | ADX: 28.5 | RSI: 62.0"
- Trade executes with real-time ATR-based TP/SL targets (not stale preset)
- Why This Matters:
- This is a REAL MONEY system - stale volatility metrics = wrong position sizing
- ATR changes with market conditions (0.43 preset vs 0.55 actual = 28% difference)
- TP/SL targets calculated from ATR multipliers (2.0×, 4.0×, 3.0×)
- Wrong ATR = targets too tight (missed profits) or too wide (unnecessary risk)
- User's manual trades require fresh data for optimal execution
- Prevention Rules:
- NEVER use exact string match for webhook action validation (use array inclusion)
- ALWAYS accept multiple action variants when endpoints serve similar purposes
- ALWAYS verify Docker build includes TypeScript changes (check compiled JS)
- ALWAYS test webhook endpoints with curl before declaring fix working
- Add monitoring alerts when cache shows only stale "manual" timeframe data
- Log webhook rejections with 400 errors for debugging
- Red Flags Indicating This Bug:
- Telegram bot times out waiting for fresh data every time
- Cache only contains "manual" timeframe (not "1" for 1-minute data)
- No "Received market data webhook" logs in container output
- TradingView alerts configured but endpoint returns 400 errors
- Bot always falls back to preset metrics (ATR 0.43, ADX 32, RSI 58/42)
- Files Changed:
- app/api/trading/market-data/route.ts (Lines 64-71 - webhook validation)
- TradingView Alert Setup:
- Alert name: "1-Minute Market Data Feed"
- Chart: SOL-PERP 1-minute timeframe
- Condition: "1min Market Data" indicator, "Once Per Bar Close"
- Webhook URL: n8n or direct bot endpoint
- Alert message: Auto-generated JSON with `"action": "market_data_1min"`
- Expected rate: 1 alert per minute (60/hour per symbol)
- Troubleshooting Commands:
```bash
# Check if webhook firing
docker logs trading-bot-v4 2>&1 | grep "Received market data webhook" | tail -5

# Check cache contents
curl -s http://localhost:3001/api/trading/market-data | jq '.cache."SOL-PERP"'

# Check data age (should be < 60 seconds)
curl -s http://localhost:3001/api/trading/market-data | jq '.cache."SOL-PERP".ageSeconds'

# Monitor webhook hits in real-time
docker logs -f trading-bot-v4 2>&1 | grep "market data webhook"
```
- Git commit: `9668349` "fix: Accept market_data_1min action in webhook endpoint" (Dec 9, 2025)
- Deploy Status: ✅ DEPLOYED Dec 9, 2025 19:18 CET (--no-cache build)
- Status: ✅ Fixed - Endpoint accepts both action variants, fresh data flow operational
- Documentation: `docs/1MIN_ALERT_SETUP_INSTRUCTIONS.md` - Complete setup guide for TradingView alerts
-
CRITICAL: MFE Data Unit Mismatch - ALWAYS Filter by Date (CRITICAL - Dec 5, 2025):
- Symptom: SQL analysis shows "20%+ average MFE" but TP1 (0.6% target) never hits
- Root Cause: Old Trade records stored MFE/MAE in DOLLARS, new records store PERCENTAGES
- Data Corruption Examples:
- Entry $126.51, Peak $128.21 = 1.35% actual move
- But stored as maxFavorableExcursion = 90.73 (dollars, not percent)
- SQL AVG() returns meaningless mix: (1.35 + 90.73 + 0.85 + 87.22) / 4 = 45.04
- Incident (Dec 5, 2025):
- Agent analyzed blocked vs executed signals
- SQL showed executed signals: 20.15% avg MFE (appeared AMAZING)
- Implemented "optimizations": tighter targets, higher TP1 close, 5× leverage
- User questioned: "tp1 barely hits that has nothing to do with our software monitoring does it?"
- Investigation revealed: Only 2/11 trades reached TP1 price
- TRUE MFE after filtering: 0.76% (long), 1.20% (short) - NOT 20%!
- 26× inflation due to unit mismatch in old data
- MANDATORY SQL Pattern:
```sql
-- WRONG: Includes corrupted old data
SELECT AVG("maxFavorableExcursion")
FROM "Trade"
WHERE "signalQualityScore" >= 90;

-- CORRECT: Filter to after Nov 23, 2025 fix
SELECT AVG("maxFavorableExcursion")
FROM "Trade"
WHERE "signalQualityScore" >= 90
  AND "createdAt" >= '2025-11-23'; -- After MFE fix

-- OR: Recalculate from prices (always correct)
SELECT AVG(
  CASE WHEN direction = 'long'
    THEN (("maxFavorablePrice" - "entryPrice") / "entryPrice") * 100
    ELSE (("entryPrice" - "maxFavorablePrice") / "entryPrice") * 100
  END
)
FROM "Trade"
WHERE "signalQualityScore" >= 90;
```
- Why This Matters:
- Verification Before Any MFE/MAE Analysis:
```sql
-- Check if data is percentages or dollars
SELECT
  "entryPrice",
  "maxFavorablePrice",
  "maxFavorableExcursion" as stored,
  CASE WHEN direction = 'long'
    THEN (("maxFavorablePrice" - "entryPrice") / "entryPrice") * 100
    ELSE (("entryPrice" - "maxFavorablePrice") / "entryPrice") * 100
  END as calculated_pct
FROM "Trade"
WHERE "exitReason" IS NOT NULL
ORDER BY "createdAt" DESC
LIMIT 5;

-- If stored ≠ calculated_pct → OLD DATA, use date filter
```
- See Also:
- Git commits: `a15f17f` (revert), `a67a338` (incorrect optimization), `f65aae5` (incorrect docs)
-
CRITICAL SECURITY: .env file tracked in git (CRITICAL - Fixed Dec 5, 2025 - PR #3):
- Symptom: Sensitive credentials exposed in git repository history
- Credentials exposed:
- Database connection strings (PostgreSQL)
- Drift Protocol private keys (wallet access)
- Telegram bot tokens
- API keys and secrets
- RPC endpoints
- Root Cause: `.env` file was tracked in git from initial commit, exposing all secrets to anyone with repository access
- Files modified:
  - `.gitignore` - Added `.env`, `.env.local`, `.env.*.local` patterns
  - `.env` - Removed from git tracking (kept locally)
  - `.env.telegram-bot` - Removed from git tracking (contains bot token)
- Fix Process (Dec 5, 2025):
```bash
# 1. Update .gitignore first (add these lines if not present)
# .env
# .env.local
# .env.*.local

# 2. Remove from git tracking (keeps local file)
git rm --cached .env
git rm --cached .env.telegram-bot

# 3. Commit the fix
git commit -m "security: Remove .env from git tracking"
```
- Impact:
- ✅ Future commits will NOT include .env files
- ✅ Local development unaffected (files still exist locally)
- ⚠️ Historical commits still contain secrets (until git history rewrite)
- POST-FIX ACTIONS REQUIRED:
- Rotate all credentials immediately:
- Database passwords
- Telegram bot token (create new bot if needed)
- Drift Protocol keys (if exposed to public)
- Any API keys in .env
- Verify .env.example exists - Template for new developers
- Consider git history cleanup - Use BFG Repo-Cleaner if secrets were public
- Prevention:
- Always add `.env` to `.gitignore` BEFORE first commit
- Use `.env.example` with placeholder values
- CI/CD should fail if .env detected in commit
- Regular security audits with `git log -p | grep -i password`
- Why This Matters for Trading Bot:
- Private keys = wallet access - Could drain trading account
- Database = trade history - Could manipulate records
- Telegram = notifications - Could send fake alerts
- This is a real money system managing $540 capital
- Verification:
```bash
# Confirm .env is ignored
git check-ignore .env
# Should output: .env

# Confirm .env not tracked
git ls-files | grep "\.env"
# Should output: only .env.example
```
- Git commit: PR #3 on branch `copilot/remove-env-from-git-tracking`
- Status: ✅ Fixed - .env removed from tracking, .gitignore updated
-
CRITICAL: Service Initialization Never Ran - $1,000 Lost (CRITICAL - Dec 5, 2025):
- Symptom: 4 critical services coded correctly but never started for 16 days
- Financial Impact: $700-1,400 in missed opportunities (user estimate: $1,000)
- Duration: Nov 19 - Dec 5, 2025 (16 days)
- Root Cause: Services initialized AFTER validation function with early return
- Code Flow (BROKEN):
```typescript
// lib/startup/init-position-manager.ts
await validateOpenTrades()  // Line 43
// validateOpenTrades() returns early if no trades (line 111)

// SERVICE INITIALIZATION (Lines 59-72) - NEVER REACHED
startDataCleanup()
startBlockedSignalTracking()
await startStopHuntTracking()
await startSmartValidation()
```
- Affected Services:
- Stop Hunt Revenge Tracker (Nov 20) - Never attempted revenge on quality 85+ stop-outs
- Smart Entry Validation (Nov 30) - Manual Telegram trades used stale data instead of fresh TradingView metrics
- Blocked Signal Price Tracker (Nov 19) - No data collected for threshold optimization
- Data Cleanup Service (Dec 2) - Database bloat, no 28-day retention enforcement
- Why It Went Undetected:
- Silent failure: No errors thrown, services simply never initialized
- Logger silencing: Production logger (`logger.log`) silenced by `NODE_ENV=production`
- Split logging: Some logs appeared (from service functions), others didn't (from init function)
- Common trigger: Bug only occurred when `openTrades.length === 0` (frequent in production)
- Financial Breakdown:
- Stop hunt revenge: $300-600 lost (missed reversal opportunities)
- Smart validation: $200-400 lost (stale data caused bad entries)
- Blocked signals: $200-400 lost (suboptimal quality thresholds)
- Total: $700-1,400 over 16 days
- Fix (Dec 5, 2025):
```typescript
// CORRECT ORDER:
// 1. Start services FIRST (lines 34-50)
startDataCleanup()
startBlockedSignalTracking()
await startStopHuntTracking()
await startSmartValidation()

// 2. THEN validate (line 56) - can return early safely
await validateAllOpenTrades()
await validateOpenTrades()  // Early return OK now

// 3. Finally init Position Manager
const manager = await getInitializedPositionManager()
```
- Logging Fix: Changed `logger.log()` to `console.log()` for production visibility
- Verification:
  ```bash
  $ docker logs trading-bot-v4 | grep -E "🧹|🔬|🎯|🧠|📊"
  🧹 Starting data cleanup service...
  🔬 Starting blocked signal price tracker...
  🎯 Starting stop hunt revenge tracker...
  📊 No active stop hunts - tracker will start when needed
  🧠 Starting smart entry validation system...
  ```
- Test suite (PR #2): 113 tests covering Position Manager - add service initialization tests
- CI/CD pipeline (PR #5): Automated quality gates - add service startup validation
- Startup health check: Verify all expected services initialized, throw error if missing
- Production logging standard: Critical operations use `console.log()`, not `logger.log()`
- Lessons Learned:
- Service initialization order matters - never place critical services after functions with early returns
- Silent failures are dangerous - add explicit verification that services started
- Production logging must be visible - logger utilities that silence logs = debugging nightmare
- Test real-world conditions - bug only occurred with `NODE_ENV=production` + `openTrades.length === 0`
- Timeline:
- Nov 19: Blocked Signal Tracker deployed (never ran)
- Nov 20: Stop Hunt Revenge deployed (never ran)
- Nov 30: Smart Validation deployed (never ran)
- Dec 2: Data Cleanup deployed (never ran)
- Dec 5: Bug discovered and fixed
- Result: 16 days of development with 0 production execution
- Git commits: `51b63f4` (service order fix), `f6c9a7b` (console.log fix), `35c2d7f` (stop hunt logs fix)
- Full documentation: `docs/CRITICAL_SERVICE_INITIALIZATION_BUG_DEC5_2025.md`
- Status: ✅ Fixed - All services now start on every container restart, verified in production logs
File Conventions
- API routes: `app/api/[feature]/[action]/route.ts` (Next.js 15 App Router)
- Services: `lib/[service]/[module].ts` (drift, pyth, trading, database)
- Config: Single source in `config/trading.ts` with env merging
- Types: Define interfaces in same file as implementation (not separate types directory)
- Console logs: Use emojis for visual scanning: 🎯 🚀 ✅ ❌ 💰 📊 🛡️
Re-Entry Analytics System (Phase 1)
Purpose: Validate manual Telegram trades using fresh TradingView data + recent performance analysis
Components:
- Market Data Cache (`lib/trading/market-data-cache.ts`)
  - Singleton service storing TradingView metrics
  - 5-minute expiry on cached data
  - Tracks: ATR, ADX, RSI, volume ratio, price position, timeframe
- Market Data Webhook (`app/api/trading/market-data/route.ts`)
  - Receives TradingView alerts every 1-5 minutes
  - POST: Updates cache with fresh metrics
  - GET: View cached data (debugging)
- Re-Entry Check Endpoint (`app/api/analytics/reentry-check/route.ts`)
  - Validates manual trade requests
  - Uses fresh TradingView data if available (<5min old)
  - Falls back to historical metrics from last trade
  - Scores signal quality + applies performance modifiers (see the sketch below):
    - -20 points if last 3 trades lost money (avgPnL < -5%)
    - +10 points if last 3 trades won (avgPnL > +5%, WR >= 66%)
    - -5 points for stale data, -10 points for no data
  - Minimum score: 55 (vs 60 for new signals)
- Auto-Caching (`app/api/trading/execute/route.ts`)
  - Every trade signal from TradingView auto-caches metrics
  - Ensures fresh data available for manual re-entries
- Telegram Integration (`telegram_command_bot.py`)
  - Calls `/api/analytics/reentry-check` before executing manual trades
  - Shows data freshness ("✅ FRESH 23s old" vs "⚠️ Historical")
  - Blocks low-quality re-entries unless `--force` flag used
  - Fail-open: Proceeds if analytics check fails
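The re-entry scoring modifiers above can be summarized in a short sketch (illustrative only; `scoreReEntry` and its inputs are invented names, not the actual reentry-check route code):

```typescript
interface RecentPerformance {
  avgPnlPercent: number   // average P&L % of the last 3 closed trades
  winRate: number         // 0-100 over the same window
}

// Illustrative only - mirrors the documented modifiers, not the real route handler.
function scoreReEntry(
  baseQualityScore: number,
  recent: RecentPerformance,
  dataAgeSeconds: number | null // null = no cached TradingView data
): { score: number; allowed: boolean } {
  let score = baseQualityScore

  // Recent-performance modifiers
  if (recent.avgPnlPercent < -5) score -= 20
  else if (recent.avgPnlPercent > 5 && recent.winRate >= 66) score += 10

  // Data-freshness modifiers (5-minute cache expiry)
  if (dataAgeSeconds === null) score -= 10
  else if (dataAgeSeconds > 300) score -= 5

  // Manual re-entries use a lower floor (55) than new signals (60)
  return { score, allowed: score >= 55 }
}
```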
User Flow:
User: "long sol"
↓ Check cache for SOL-PERP
↓ Fresh data? → Use real TradingView metrics
↓ Stale/missing? → Use historical + penalty
↓ Score quality + recent performance
↓ Score >= 55? → Execute
↓ Score < 55? → Block (unless --force)
TradingView Setup: Create alerts that fire every 1-5 minutes with this webhook message:
{
"action": "market_data",
"symbol": "{{ticker}}",
"timeframe": "{{interval}}",
"atr": {{ta.atr(14)}},
"adx": {{ta.dmi(14, 14)}},
"rsi": {{ta.rsi(14)}},
"volumeRatio": {{volume / ta.sma(volume, 20)}},
"pricePosition": {{(close - ta.lowest(low, 100)) / (ta.highest(high, 100) - ta.lowest(low, 100)) * 100}},
"currentPrice": {{close}}
}
Webhook URL: https://your-domain.com/api/trading/market-data
Per-Symbol Trading Controls
Purpose: Independent enable/disable toggles and position sizing for SOL and ETH to support different trading strategies (e.g., ETH for data collection at minimal size, SOL for profit generation).
Configuration Priority:
- Per-symbol ENV vars (highest priority): `SOLANA_ENABLED`, `SOLANA_POSITION_SIZE`, `SOLANA_LEVERAGE`, `ETHEREUM_ENABLED`, `ETHEREUM_POSITION_SIZE`, `ETHEREUM_LEVERAGE`
- Market-specific config (from `MARKET_CONFIGS` in config/trading.ts)
- Global ENV vars (fallback for BTC and other symbols): `MAX_POSITION_SIZE_USD`, `LEVERAGE`
- Default config (lowest priority)
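As a rough sketch, the priority order above would resolve like this for SOL position size (assumed helper shape; the real logic is `getPositionSizeForSymbol()` in config/trading.ts):

```typescript
// Illustrative resolution of the documented priority order for SOL-PERP.
// The real getPositionSizeForSymbol() may differ in detail.
function resolveSolPositionSize(
  env: NodeJS.ProcessEnv,
  marketConfigSize: number | undefined, // from MARKET_CONFIGS
  defaultSize: number                   // DEFAULT_TRADING_CONFIG fallback
): number {
  // 1. Per-symbol ENV var wins if present
  if (env.SOLANA_POSITION_SIZE) return Number(env.SOLANA_POSITION_SIZE)
  // 2. Market-specific config
  if (marketConfigSize !== undefined) return marketConfigSize
  // 3. Global ENV fallback
  if (env.MAX_POSITION_SIZE_USD) return Number(env.MAX_POSITION_SIZE_USD)
  // 4. Default config (lowest priority)
  return defaultSize
}
```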
Settings UI: app/settings/page.tsx has dedicated sections:
- 💎 Solana section: Toggle + position size + leverage + risk calculator
- ⚡ Ethereum section: Toggle + position size + leverage + risk calculator
- 💰 Global fallback: For BTC-PERP and future symbols
Example usage:
// In execute/test endpoints
const { size, leverage, enabled } = getPositionSizeForSymbol(driftSymbol, config)
if (!enabled) {
return NextResponse.json({
success: false,
error: 'Symbol trading disabled'
}, { status: 400 })
}
Test buttons: Settings UI has symbol-specific test buttons:
- 💎 Test SOL LONG/SHORT (disabled when `SOLANA_ENABLED=false`)
- ⚡ Test ETH LONG/SHORT (disabled when `ETHEREUM_ENABLED=false`)
When Making Changes
- Adding new config: Update DEFAULT_TRADING_CONFIG + getConfigFromEnv() + .env file
- Adding database fields: Update prisma/schema.prisma →
`npx prisma migrate dev` → `npx prisma generate` → rebuild Docker
- Changing order logic: Test with DRY_RUN=true first, use small position sizes ($10)
- API endpoint changes: Update both endpoint + corresponding n8n workflow JSON (Check Risk and Execute Trade nodes)
- Docker changes: Rebuild with
`docker compose build trading-bot` then restart container
- Modifying quality score logic: Update BOTH
`/api/trading/check-risk` and `/api/trading/execute` endpoints, ensure timeframe-aware thresholds are synchronized
- Exit strategy changes: Modify Position Manager logic + update on-chain order placement in
placeExitOrders() - TradingView alert changes:
- Ensure alerts pass
timeframefield (e.g.,"timeframe": "5") to enable proper signal quality scoring - CRITICAL: Include
atrfield for ATR-based TP/SL system:"atr": {{ta.atr(14)}} - Without ATR, system falls back to less optimal fixed percentages
- Ensure alerts pass
- ATR-based risk management changes:
- Update multipliers or bounds in
.env(ATR_MULTIPLIER_TP1/TP2/SL, MIN/MAX_*_PERCENT) - Test with known ATR values to verify calculation (e.g., SOL ATR 0.43)
- Log shows:
📊 ATR-based targets: TP1 X.XX%, TP2 Y.YY%, SL Z.ZZ% - Verify targets fall within safety bounds (TP1: 0.5-1.5%, TP2: 1.0-3.0%, SL: 0.8-2.0%)
- Update Telegram manual trade presets if median ATR changes (currently 0.43 for SOL)
- Update multipliers or bounds in
- Position Manager changes: ALWAYS run tests BEFORE deployment, then validate in production
- CRITICAL (Dec 8, 2025): Health monitoring system detects PM failures within 30 seconds
- Health checks:
docker logs -f trading-bot-v4 | grep "🏥" - Expected: "🏥 Starting Position Manager health monitor (every 30 sec)..."
- If issues: "🚨 CRITICAL: Position Manager not monitoring!" or "🚨 CRITICAL: Position {id} missing SL order"
- STEP 1 - Run tests locally (MANDATORY):
npm test # Run all 113 tests (takes ~30 seconds) # OR run specific test file: npm test tests/integration/position-manager/tp1-detection.test.ts - Why mandatory: Tests catch bugs (tokens vs USD, TP1 false detection, wrong SL price) BEFORE they cost real money
- If tests fail: Fix the issue or update tests - DO NOT deploy broken code
- STEP 2 - Deploy and validate with test trade:
- Use
/api/trading/testendpoint or Telegramlong sol --force - Monitor
docker logs -f trading-bot-v4for full cycle - Verify TP1 hit → 75% close → SL moved to breakeven
- SQL: Check
`tp1Hit`, `slMovedToBreakeven`, `currentSize` in Trade table
- Compare: Position Manager logs vs actual Drift position size
- Phase 7.3 Adaptive trailing stop verification (Nov 27, 2025+):
- Watch for "📊 1-min ADX update: Entry X → Current Y (±Z change)" every 60 seconds
- Verify ADX acceleration bonus: "🚀 ADX acceleration (+X points)"
- Verify ADX deceleration penalty: "⚠️ ADX deceleration (-X points)"
- Check final calculation: "📊 Adaptive trailing: ATR X (Y%) × Z× = W%"
- Confirm multiplier adjusts dynamically (not static like old system)
- Example: ADX 22.5→29.5 should show multiplier increase from 1.5× to 2.4×+
- Trailing stop changes:
- CRITICAL (Nov 27, 2025): Phase 7.3 uses REAL-TIME 1-minute ADX, not entry-time ADX
- Code location:
lib/trading/position-manager.tslines 1356-1450 - Queries
getMarketDataCache()for fresh ADX every monitoring loop (2-second interval) - Adaptive multipliers: Base 1.5× + ADX strength tier (1.0×-1.5×) + acceleration (1.3×) + deceleration (0.7×) + profit (1.3×)
- Test with known ADX progression: Entry 22.5 → Current 29.5 = expect acceleration bonus
- Fallback: Uses
trade.adxAtEntryif cache unavailable (backward compatible) - Log shows: "📊 Adaptive trailing: ATR 0.43 (0.31%) × 3.16× = 0.99%"
- Expected: Trail width changes dynamically as ADX changes (captures acceleration, protects on deceleration)
- Calculation changes: Add verbose logging and verify with SQL
- Log every intermediate step, especially unit conversions
- Never assume SDK data format - log raw values to verify
- SQL query with manual calculation to compare results
- Test boundary cases: 0%, 100%, min/max values
- Adaptive leverage changes: When modifying quality-based leverage tiers
- Quality score MUST be calculated BEFORE position sizing (execute endpoint line ~172)
- Update
`getLeverageForQualityScore()` helper in config/trading.ts
- Test with known quality scores to verify tier selection (95+ = 15x, 90-94 = 10x)
- Log shows:
`📊 Adaptive leverage: Quality X → Yx leverage (threshold: 95)`
- Update ENV variables: USE_ADAPTIVE_LEVERAGE, HIGH_QUALITY_LEVERAGE, LOW_QUALITY_LEVERAGE, QUALITY_LEVERAGE_THRESHOLD
- Monitor first 10-20 trades to verify correct leverage applied
- DEPLOYMENT VERIFICATION (MANDATORY): Before declaring ANY fix working:
- Check container start time vs commit timestamp
- If container older than commit: CODE NOT DEPLOYED
- Restart container and verify new code is running
- Never say "fixed" or "protected" without deployment confirmation
- This is a REAL MONEY system - unverified fixes cause losses
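The container-vs-commit check above can be scripted; a hedged sketch (assumed container name `trading-bot-v4`, run from the repo root, not part of the actual codebase):

```typescript
// Hypothetical verification script - compares the running container's start
// time against the latest commit timestamp, per the checklist above.
import { execSync } from 'node:child_process'

function sh(cmd: string): string {
  return execSync(cmd, { encoding: 'utf8' }).trim()
}

const containerStartedAt = new Date(
  sh(`docker inspect -f '{{.State.StartedAt}}' trading-bot-v4`)
)
const lastCommitAt = new Date(sh(`git log -1 --format=%cI`))

if (containerStartedAt < lastCommitAt) {
  console.error('🚨 CODE NOT DEPLOYED: container is older than the latest commit - rebuild and restart before declaring anything fixed')
  process.exit(1)
}
console.log('✅ Container restarted after latest commit - deployed code is current')
```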
- GIT COMMIT AND PUSH (MANDATORY): After completing ANY feature, fix, or significant change:
- ALWAYS commit changes with descriptive message
- ALWAYS push to remote repository
- User should NOT have to ask for this - it's part of completion
- DUAL REMOTE SETUP (Dec 5, 2025):
- origin: Production Gitea (ssh://git@127.0.0.1:222/root/trading_bot_v4.git)
- github: Copilot PR workflow (https://github.com/mindesbunister/trading_bot_v4.git)
- Post-commit hook automatically pushes to github after every commit
- Manual push to origin required: `git push origin master`
- Verify sync status: `git log --oneline -1 && git remote -v && git branch -vv`
- Commit message format:
  ```bash
  git add -A
  git commit -m "type: brief description

  - Bullet point details
  - Files changed
  - Why the change was needed
  "
  # Hook auto-pushes to github
  git push origin master  # Manual push to production
  ```
- Types: `feat:` (new feature), `fix:` (bug fix), `docs:` (documentation), `refactor:` (code restructure), `critical:` (financial/safety critical fixes)
- This is NOT optional - code exists only when committed and pushed
- Automation Setup:
- File:
.git/hooks/post-commit(executable) - Purpose: Auto-sync commits to GitHub for Copilot PR workflow
- Status: Active and verified (Dec 5, 2025)
- Testing: Commits auto-appear on github/master
- Manual setup: Copy hook script if cloning fresh repository
- File:
- Recent examples:
- `test: Verify GitHub auto-sync hook` (de77cfe, Dec 5, 2025)
  - Verified post-commit hook working correctly
  - All remotes synced (origin/master, github/master)
- `fix: Implement Associated Token Account for USDC withdrawals` (c37a9a3, Nov 19, 2025)
  - Fixed PublicKey undefined, ATA resolution, excluded archive
  - Successfully tested $6.58 withdrawal with on-chain confirmation
- `fix: Correct MIN_QUALITY_SCORE to MIN_SIGNAL_QUALITY_SCORE` (Nov 19, 2025)
  - Settings UI using wrong ENV variable name
  - Quality score changes now take effect
- `critical: Fix withdrawal statistics to use actual Drift deposits` (8d53c4b, Nov 19, 2025)
  - Query cumulativeDeposits from Drift ($1,440.61 vs hardcoded $546)
  - Created /api/drift/account-summary endpoint
- DOCKER MAINTENANCE (AFTER BUILDS): Clean up accumulated cache to prevent disk full:
  ```bash
  # Remove dangling images (old builds)
  docker image prune -f
  # Remove build cache (biggest space hog - 40+ GB typical)
  docker builder prune -f
  # Optional: Remove dangling volumes (if no important data)
  docker volume prune -f
  # Check space saved
  docker system df
  ```
- When to run: After successful deployments, weekly if building frequently, when disk warnings appear
- Space freed: Dangling images (2-5 GB), Build cache (40-50 GB), Dangling volumes (0.5-1 GB)
- Safe to delete: `<none>` tagged images, build cache (recreated on next build), dangling volumes
- Keep: Named volumes (`trading-bot-postgres`), active containers, tagged images in use
- Why critical: Docker builds create 1.3+ GB per build, cache accumulates to 40-50 GB without cleanup
- NEXTCLOUD DECK SYNC (MANDATORY): After completing phases or making significant roadmap progress:
- Update roadmap markdown files with new status (🔄 IN PROGRESS, ✅ COMPLETE, 🔜 NEXT)
- Run sync to update Deck cards:
`python3 scripts/sync-roadmap-to-deck.py --init`
- Move cards between stacks in Nextcloud Deck UI to reflect progress visually
- Backlog (📥) → Planning (📋) → In Progress (🚀) → Complete (✅)
- Keep Deck in sync with actual work - it's the visual roadmap tracker
- Documentation: `docs/NEXTCLOUD_DECK_SYNC.md`
- UPDATE COPILOT-INSTRUCTIONS.MD (MANDATORY): After implementing ANY significant feature or system change:
- Document new database fields and their purpose
- Add filtering requirements (e.g., manual vs TradingView trades)
- Update "Important fields" sections with new schema changes
- Add new API endpoints to the architecture overview
- Document data integrity requirements (what must be excluded from analysis)
- Add SQL query patterns for common operations
- Update "When Making Changes" section with new patterns learned
- Create reference docs in `docs/` for complex features (e.g., `MANUAL_TRADE_FILTERING.md`)
- WHY: Future AI agents need complete context to maintain data integrity and avoid breaking analysis
- EXAMPLES: signalSource field for filtering, MAE/MFE tracking, phantom trade detection
- MULTI-TIMEFRAME DATA COLLECTION CHANGES (Nov 26, 2025): When modifying signal processing for different timeframes:
- Quality scoring MUST happen BEFORE timeframe filtering (execute endpoint line 112)
- All timeframes need real quality scores for analysis (not hardcoded 0)
- Data collection signals (15min/1H/4H/Daily) save to BlockedSignal with full quality metadata
- BlockedSignal fields to populate: signalQualityScore, signalQualityVersion, minScoreRequired, scoreBreakdown
- Enables SQL: `WHERE blockReason = 'DATA_COLLECTION_ONLY' AND signalQualityScore >= X`
- Purpose: Compare quality-filtered win rates across timeframes to determine optimal trading interval
- Update Multi-Timeframe section in copilot-instructions.md when changing flow
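For illustration, a quality-filtered pull of data-collection signals could look like this (the `blockedSignal` accessor and field names are assumed from the schema notes above, not copied from the actual Prisma client):

```typescript
// Hypothetical Prisma query - sketch only, verify model/field names against prisma/schema.prisma.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function qualityFilteredDataCollectionSignals(minScore: number) {
  return prisma.blockedSignal.findMany({
    where: {
      blockReason: 'DATA_COLLECTION_ONLY',   // non-5min timeframes saved for analysis
      signalQualityScore: { gte: minScore }, // compare quality-filtered cohorts across timeframes
    },
    orderBy: { signalQualityScore: 'desc' },
  })
}
```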
Development Roadmap
Current Status (Nov 14, 2025):
- 168 trades executed with quality scores and MAE/MFE tracking
- Capital: $97.55 USDC at 100% health (zero debt, all USDC collateral)
- Leverage: 15x SOL (reduced from 20x for safer liquidation cushion)
- Three active optimization initiatives in data collection phase:
- Signal Quality: 0/20 blocked signals collected → need 10-20 for analysis
- Position Scaling: 161 v5 trades, collecting v6 data → need 50+ v6 trades
- ATR-based TP: 1/50 trades with ATR data → need 50 for validation
- Expected combined impact: 35-40% P&L improvement when all three optimizations complete
- Master roadmap: See `OPTIMIZATION_MASTER_ROADMAP.md` for a consolidated view
See SIGNAL_QUALITY_OPTIMIZATION_ROADMAP.md for systematic signal quality improvements:
- Phase 1 (🔄 IN PROGRESS): Collect 10-20 blocked signals with quality scores (1-2 weeks)
- Phase 2 (🔜 NEXT): Analyze patterns and make data-driven threshold decisions
- Phase 3 (🎯 FUTURE): Implement dual-threshold system or other optimizations based on data
- Phase 4 (🤖 FUTURE): Automated price analysis for blocked signals
- Phase 5 (🧠 DISTANT): ML-based scoring weight optimization
See POSITION_SCALING_ROADMAP.md for planned position management optimizations:
- Phase 1 (✅ COMPLETE): Collect data with quality scores (20-50 trades needed)
- Phase 2: ATR-based dynamic targets (adapt to volatility)
- Phase 3: Signal quality-based scaling (high quality = larger runners)
- Phase 4: Direction-based optimization (shorts vs longs have different performance)
- Phase 5 (✅ COMPLETE): TP2-as-runner system implemented - configurable runner (default 25%, adjustable via TAKE_PROFIT_1_SIZE_PERCENT) with ATR-based trailing stop
- Phase 6: ML-based exit prediction (future)
Recent Implementation: TP2-as-runner system provides 5x larger runner (default 25% vs old 5%) for better profit capture on extended moves. When TP2 price is hit, trailing stop activates on full remaining position instead of closing partial amount. Runner size is configurable (100% - TP1 close %).
Blocked Signals Tracking (Nov 11, 2025): System now automatically saves all blocked signals to database for data-driven optimization. See BLOCKED_SIGNALS_TRACKING.md for SQL queries and analysis workflows.
Multi-Timeframe Data Collection (Nov 18-19, 2025): Execute endpoint now supports parallel data collection across timeframes:
- 5min signals: Execute trades (production)
- 15min/1H/4H/Daily signals: Save to BlockedSignal table with
blockReason='DATA_COLLECTION_ONLY' - Enables cross-timeframe performance comparison (which timeframe has best win rate?)
- Zero financial risk - non-5min signals just collect data for future analysis
- TradingView alerts on multiple timeframes → n8n passes
timeframefield → bot routes accordingly - After 50+ trades: SQL analysis to determine optimal timeframe for live trading
- Implementation:
app/api/trading/execute/route.tslines 106-145 - n8n Parse Signal Enhanced (Nov 19): Supports multiple timeframe formats:
"buy 5"→"5"(5 minutes)"buy 15"→"15"(15 minutes)"buy 60"or"buy 1h"→"60"(1 hour)"buy 240"or"buy 4h"→"240"(4 hours)"buy D"or"buy 1d"→"D"(daily)- Extracts indicator version from
IND:v8format
Data-driven approach: Each phase requires validation through SQL analysis before implementation. No premature optimization.
Signal Quality Version Tracking: Database tracks signalQualityVersion field to compare algorithm performance:
- Analytics dashboard shows version comparison: trades, win rate, P&L, extreme position stats
- v4 (current) includes blocked signals tracking for data-driven optimization
- Focus on extreme positions (< 15% range) - v3 aimed to reduce losses from weak ADX entries
- SQL queries in `docs/analysis/SIGNAL_QUALITY_VERSION_ANALYSIS.sql` for deep-dive analysis
- Need 20+ trades per version before meaningful comparison
Indicator Version Tracking (Nov 18-28, 2025): Database tracks indicatorVersion field for TradingView strategy comparison:
- v9: Money Line with Momentum-Based SHORT Filter (Nov 26+) - PRODUCTION SYSTEM
- Built on v8 foundation (0.6% flip threshold, momentum confirmation, anti-whipsaw)
- MA Gap Analysis: +5 to +15 quality points based on MA50-MA200 convergence
- Momentum-Based SHORT Filter (Nov 26, 2025 - CRITICAL ENHANCEMENT):
- REMOVED: RSI filter for SHORTs (data showed RSI 50+ has BEST 68.2% WR)
- ADDED: ADX ≥23 requirement (filters weak chop like ADX 20.7 failure)
- ADDED: Price Position ≥60% (catches tops) OR ≤40% with Vol ≥2.0x (capitulation)
- Rationale: v8 shorted oversold (RSI 25-35), v9 shorts momentum at tops
- Blocks: Weak chop at range bottom
- Catches: Massive downtrends from top of range
- Data Evidence (95 SHORT trades analyzed):
- RSI < 35: 37.5% WR, -$655.23 (4 biggest disasters)
- RSI 50+: 68.2% WR, +$29.88 (BEST performance!)
- Winners: ADX 23.7-26.9, Price Pos 19-64%
- Losers: ADX 21.8-25.4, Price Pos 13.6%
- Quality threshold (Nov 28, 2025): LONG ≥90, SHORT ≥80
- File: `workflows/trading/moneyline_v9_ma_gap.pinescript`
- v8: Money Line Sticky Trend (Nov 18-26) - ARCHIVED
- 8 trades completed (57.1% WR, +$262.70)
- Failure pattern: 5 oversold SHORT disasters (RSI 25-35), 1 weak chop (ADX 20.7)
- Purpose: Baseline for v9 momentum improvements
- ARCHIVED (historical baseline for comparison):
- v5: Buy/Sell Signal strategy (pre-Nov 12) - 36.4% WR, +$25.47
- v6: HalfTrend + BarColor (Nov 12-18) - 48% WR, -$47.70
- v7: v6 with toggles (deprecated - minimal data, no improvements)
- Purpose: v9 is production, archived versions provide baseline for future enhancements
- Analytics UI: v9 highlighted, archived versions greyed out but kept for statistical reference
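A compact restatement of the v9 SHORT momentum filter above (illustrative TypeScript; the production logic lives in the Pine Script indicator):

```typescript
// Illustrative only - mirrors the documented v9 SHORT filter, not the Pine source.
function passesV9ShortMomentumFilter(adx: number, pricePositionPct: number, volumeRatio: number): boolean {
  const strongEnoughTrend = adx >= 23                                // filters weak chop (e.g. the ADX 20.7 failure)
  const atTopOfRange = pricePositionPct >= 60                        // shorting momentum at tops
  const capitulation = pricePositionPct <= 40 && volumeRatio >= 2.0  // high-volume breakdown near range bottom
  return strongEnoughTrend && (atTopOfRange || capitulation)
}
```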
Financial Roadmap Integration: All technical improvements must align with current phase objectives (see top of document):
- Phase 1 (CURRENT): Prove system works, compound aggressively, 60%+ win rate mandatory
- Phase 2-3: Transition to sustainable growth while funding withdrawals
- Phase 4+: Scale capital while reducing risk progressively
- See `TRADING_GOALS.md` for the complete 8-phase plan ($106 → $1M+)
Blocked Signals Analysis: See BLOCKED_SIGNALS_TRACKING.md for:
- SQL queries to analyze blocked signal patterns
- Score distribution and metric analysis
- Comparison with executed trades at similar quality levels
- Future automation of price tracking (would TP1/TP2/SL have hit?)
Telegram Notifications (Nov 16, 2025 - Enhanced Nov 20, 2025)
Position Closure Notifications: System sends direct Telegram messages for all position closures via lib/notifications/telegram.ts
Implemented for:
- TP1 partial closes (NEW Nov 20, 2025): Immediate notification when TP1 hits (60% closed)
- Runner exits: Full close notifications when remaining position exits (TP2/SL/trailing)
- Stop loss triggers (SL, soft SL, hard SL, emergency)
- Manual closures (via API or settings UI)
- Ghost position cleanups (external closure detection)
Notification format:
🎯 POSITION CLOSED
📈 SOL-PERP LONG
💰 P&L: $12.45 (+2.34%)
📊 Size: $48.75
📍 Entry: $168.50
🎯 Exit: $172.45
⏱ Hold Time: 1h 23m
🔚 Exit: TP1 (60% closed, 40% runner remaining)
📈 Max Gain: +3.12%
📉 Max Drawdown: -0.45%
Key Features (Nov 20, 2025):
- Immediate TP1 feedback: User sees profit as soon as TP1 hits, doesn't wait for runner to close
- Partial close details: Exit reason shows percentage split (e.g., "TP1 (60% closed, 40% runner remaining)")
- Separate notifications: TP1 close gets one notification, runner close gets another
- Complete P&L tracking: Each notification shows its portion of realized P&L
Configuration: Requires TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID in .env
Code location:
- `lib/notifications/telegram.ts` - sendPositionClosedNotification()
- `lib/trading/position-manager.ts` - Integrated in executeExit() (both partial and full closes) and handleExternalClosure()
Commits:
- `b1ca454` "feat: Add Telegram notifications for position closures" (Nov 16, 2025)
- `79e7ffe` "feat: Add Telegram notification for TP1 partial closes" (Nov 20, 2025)
Stop Hunt Revenge System (Nov 20, 2025)
Purpose: Automatically re-enters positions after high-quality signals (score 85+) get stopped out, when price reverses back through original entry. Captures the reversal with same position size as original.
Architecture:
- 4-Hour Revenge Window: Monitors for price reversal within 4 hours of stop-out
- Quality Threshold: Only quality score 85+ signals eligible (top-tier setups)
- Position Size: 1.0× original size (same as original - user at 100% allocation)
- One Revenge Per Stop Hunt: Maximum 1 revenge trade per stop-out event
- Monitoring Interval: 30-second price checks for active stop hunts
- Database: StopHunt table (20 fields, 4 indexes) tracks all stop hunt events
Revenge Conditions:
// LONG stopped above entry → Revenge when price drops back below entry
if (direction === 'long' && currentPrice < originalEntryPrice - (0.005 * originalEntryPrice)) {
// Price dropped 0.5% below entry → Stop hunt reversal confirmed
executeRevengeTrade()
}
// SHORT stopped below entry → Revenge when price rises back above entry
if (direction === 'short' && currentPrice > originalEntryPrice + (0.005 * originalEntryPrice)) {
// Price rose 0.5% above entry → Stop hunt reversal confirmed
executeRevengeTrade()
}
How It Works:
- Recording: Position Manager detects SL close with
signalQualityScore >= 85 - Database: Creates StopHunt record with entry price, quality score, ADX, ATR
- Monitoring: Background job checks every 30 seconds for price reversals
- Trigger: Price crosses back through entry + 0.5% buffer within 4 hours
- Execution: Calls
/api/trading/executewith same position size, same direction - Telegram: Sends "🔥 REVENGE TRADE ACTIVATED" notification
- Completion: Updates database with revenge trade ID, marks revengeExecuted=true
Database Schema (StopHunt table):
- Original Trade:
originalTradeId,symbol,direction,stopHuntPrice,originalEntryPrice - Quality Metrics:
originalQualityScore(85+),originalADX,originalATR - Financial:
stopLossAmount(how much user lost),revengeEntryPrice - Timing:
stopHuntTime,revengeTime,revengeExpiresAt(4 hours after stop) - Tracking:
revengeTradeId,revengeExecuted,revengeWindowExpired - Price Extremes:
highestPriceAfterStop,lowestPriceAfterStop(for analysis) - Indexes: symbol, revengeExecuted, revengeWindowExpired, stopHuntTime
Code Components:
// lib/trading/stop-hunt-tracker.ts (293 lines)
class StopHuntTracker {
recordStopHunt() // Save stop hunt to database
startMonitoring() // Begin 30-second checks
checkRevengeOpportunities()// Find active stop hunts needing revenge
shouldExecuteRevenge() // Validate price reversal conditions
executeRevengeTrade() // Call execute API with same size as original (1.0×)
}
// lib/startup/init-position-manager.ts (integration)
await startStopHuntTracking() // Initialize on server startup
// lib/trading/position-manager.ts (recording - ready for next deployment)
if (reason === 'SL' && trade.signalQualityScore >= 85) {
const tracker = getStopHuntTracker()
await tracker.recordStopHunt({ /* trade details */ })
}
Telegram Notification Format:
🔥 REVENGE TRADE ACTIVATED 🔥
Original Trade:
📍 Entry: $142.48 SHORT
❌ Stopped Out: -$138.35
🎯 Quality Score: 90 (ADX 26)
Revenge Trade:
📍 Re-Entry: $138.20 SHORT
💪 Size: Same as original ($8,350)
🎯 Targets: TP1 +0.86%, TP2 +1.72%
Stop Hunt Reversal Confirmed ✓
Time to get our money back!
Singleton Pattern:
// CORRECT: Use getter function
const tracker = getStopHuntTracker()
await tracker.recordStopHunt({ /* params */ })
// WRONG: Direct instantiation creates multiple instances
const tracker = new StopHuntTracker() // ❌ Don't do this
Startup Behavior:
- Container starts → Checks database for active stop hunts (not expired, not executed)
- If activeCount > 0: Starts monitoring immediately, logs count
- If activeCount = 0: Logs "No active stop hunts - tracker will start when needed"
- Monitoring auto-starts when Position Manager records new stop hunt
Common Pitfalls:
- Database query hanging: Fixed with try-catch error handling (Nov 20, 2025)
- Import path errors: Use `'../database/trades'` not `'../database/client'`
- Multiple instances: Always use `getStopHuntTracker()` singleton getter
- Quality threshold: Only 85+ eligible, don't lower without user approval
- Position size math: 1.0× means execute with `originalSize`, same as original trade
- Revenge window: 4 hours from stop-out, not from signal generation
- One revenge limit: Check `revengeExecuted` flag before executing again
Real-World Use Case (Nov 20, 2025 motivation):
- User had v8 signal: Quality 90, ADX 26, called exact top at $141.37
- Stopped at $142.48 for -$138.35 loss
- Price then dropped to $131.32 (8.8% move)
- Missed +$490 potential profit if not stopped
- Revenge system would've re-entered SHORT at ~$141.50 with same size, captured full reversal move
Revenge Timing Enhancement - 90s Confirmation (Nov 26, 2025):
- Problem Identified: Immediate entry at reversal price caused retest stop-outs
- Real Incident (Nov 26, 14:51 CET):
- LONG stopped at $138.00, quality 105
- Price dropped to $136.32 (would trigger immediate revenge)
- Retest bounce to $137.50 (would stop out again at $137.96)
- Actual move: $136 → $144.50 (+$530 opportunity MISSED)
- Root Cause: Entry at candle close = top of move, natural 1-1.5% pullbacks common
- OLD System:
- LONG: Enter immediately when price < entry
- SHORT: Enter immediately when price > entry
- Result: Retest wicks stop out before real move
- NEW System (Option 2 - 90s Confirmation):
- LONG: Require price below entry for 90 seconds (1.5 minutes) before entry
- SHORT: Require price above entry for 90 seconds (1.5 minutes) before entry
- Tracks `firstCrossTime`, resets if price leaves zone
- Logs progress: "⏱️ LONG/SHORT revenge: X.Xmin in zone (need 1.5min)"
- Rationale: Fast enough to catch moves (not full 5min candle), slow enough to filter retest wicks
- Implementation Details:
  ```typescript
  // lib/trading/stop-hunt-tracker.ts (lines 254-310)
  // LONG revenge:
  if (timeInZone >= 90000) { // 90 seconds = 1.5 minutes
    console.log(`✅ LONG revenge: Price held below entry for ${(timeInZone/60000).toFixed(1)}min, confirmed!`)
    return true
  }
  // SHORT revenge:
  if (timeInZone >= 90000) { // 90 seconds = 1.5 minutes
    console.log(`✅ SHORT revenge: Price held above entry for ${(timeInZone/60000).toFixed(1)}min, confirmed!`)
    return true
  }
  ```
- User Insight: "i think atr bands are no good for this kind of stuff" - ATR measures volatility, not support/resistance
- Future Consideration: TradingView signals every 1 minute for better granularity (pending validation)
- Git Commit: `40ddac5` "feat: Revenge timing Option 2 - 90s confirmation (DEPLOYED)"
- Deployed: Nov 26, 2025 20:52:55 CET
- Status: ✅ DEPLOYED and VERIFIED in production
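A minimal sketch of the zone-confirmation bookkeeping described above (hypothetical shape; the real tracker in `lib/trading/stop-hunt-tracker.ts` differs in detail):

```typescript
// Illustrative 90-second confirmation timer for LONG revenge entries.
// Price must stay below the original entry for the full window; leaving the
// zone resets the timer, which is what filters out retest wicks.
const CONFIRMATION_MS = 90_000

let firstCrossTime: number | null = null

function shouldExecuteLongRevenge(currentPrice: number, originalEntryPrice: number, now: number): boolean {
  const inZone = currentPrice < originalEntryPrice
  if (!inZone) {
    firstCrossTime = null // left the zone - start over on the next cross
    return false
  }
  if (firstCrossTime === null) firstCrossTime = now
  const timeInZone = now - firstCrossTime
  if (timeInZone >= CONFIRMATION_MS) return true
  console.log(`⏱️ LONG revenge: ${(timeInZone / 60000).toFixed(1)}min in zone (need 1.5min)`)
  return false
}
```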
Deployment Status:
- ✅ Database schema created (StopHunt table with indexes)
- ✅ Tracker service implemented (293 lines, 8 methods)
- ✅ Startup integration active (initializes on container start)
- ✅ Error handling added (try-catch for database operations)
- ✅ Clean production logs (DEBUG logs removed)
- ⏳ Position Manager recording (code ready, deploys on next Position Manager change)
- ⏳ Real-world validation (waiting for first quality 85+ stop-out)
Git Commits:
- `702e027` "feat: Stop Hunt Revenge System - DEPLOYED (Nov 20, 2025)"
  - Fixed import paths, added error handling, removed debug logs
  - Full system operational, monitoring active
v9 Parameter Optimization & Backtesting (Nov 28-29, 2025)
Purpose: Comprehensive parameter sweep to optimize v9 Money Line indicator for maximum profitability while maintaining quality standards.
Background - v10 Removal (Nov 28, 2025):
- v10 Status: FULLY REMOVED - discovered to be "garbage" during initial backtest analysis
- v10 Problems Discovered:
- Parameter insensitivity: 72 different configurations produced identical $498.12 P&L
- Bug in penalty logic: Price position penalty incorrectly applied to 18.9% position (should only apply to 40-60% chop zone)
- No edge over v9: Despite added complexity, no performance improvement
- Removal Actions (Nov 28, 2025):
- Removed moneyline_v10_adaptive_position_scoring.pinescript
- Removed v10-specific code from backtester modules
- Updated all documentation to remove v10 references
- Docker rebuild completed successfully
- Git commit: `5f77024` "remove: Complete v10 indicator removal - proven garbage"
- Lesson: Parameter insensitivity = no real edge, just noise. Simpler is better.
v9 Baseline Performance:
- Data: Nov 2024 - Nov 2025 SOLUSDT 5-minute OHLCV (139,678 rows)
- Default Parameters: flip_threshold=0.6, ma_gap=0.35, momentum_adx=23, long_pos=70, short_pos=25, cooldown_bars=2, momentum_spacing=3, momentum_cooldown=2
- Results: $405.88 PnL, 569 trades, 60.98% WR, 1.022 PF, -$1,360.58 max DD
- Baseline established: Nov 28, 2025
Adaptive Leverage Implementation (Nov 28, 2025 - Updated Dec 1, 2025):
- Purpose: Increase profit potential while maintaining risk management
- CURRENT Configuration (Dec 1, 2025):
  ```bash
  USE_ADAPTIVE_LEVERAGE=true
  HIGH_QUALITY_LEVERAGE=10              # 10x for high-quality signals
  LOW_QUALITY_LEVERAGE=5                # 5x for borderline signals
  QUALITY_LEVERAGE_THRESHOLD_LONG=95    # LONG quality threshold (configurable via UI)
  QUALITY_LEVERAGE_THRESHOLD_SHORT=90   # SHORT quality threshold (configurable via UI)
  QUALITY_LEVERAGE_THRESHOLD=95         # Backward compatibility fallback
  ```
- Web interface at http://localhost:3001/settings
- Adaptive Leverage Section with 5 configurable fields:
- Enable/Disable toggle (USE_ADAPTIVE_LEVERAGE)
- High Quality Leverage (10x default)
- Low Quality Leverage (5x default)
- LONG Quality Threshold (95 default) - independent control
- SHORT Quality Threshold (90 default) - independent control
- Dynamic Collateral Display: Fetches real-time balance from Drift account
- Position Size Calculator: Shows notional positions for each leverage tier
- API Endpoint: GET /api/drift/account-health returns { totalCollateral, freeCollateral, totalLiability, marginRatio }
- Real-time Updates: Collateral fetched on page load via React useEffect
- Fallback: Uses $560 if Drift API unavailable
- Direction-Specific Thresholds:
- LONGs: Quality ≥95 → 10x, Quality 90-94 → 5x
- SHORTs: Quality ≥90 → 10x, Quality 80-89 → 5x
- Lower quality than thresholds → blocked by execute endpoint
- Expected Impact: 10× profit on high-quality signals, 5× on borderline (2× better than Nov 28 config)
- Status: ✅ ACTIVE in production with full UI control (Dec 1, 2025)
- Commits:
- See: `ADAPTIVE_LEVERAGE_SYSTEM.md` for implementation details
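A sketch of the direction-specific tier selection described above (illustrative only; `getLeverageForQualityScore()` in config/trading.ts is the source of truth):

```typescript
// Illustrative tier selection per the Dec 1, 2025 configuration:
// LONG ≥95 or SHORT ≥90 → HIGH_QUALITY_LEVERAGE, otherwise LOW_QUALITY_LEVERAGE.
// Signals below the execute endpoint's quality floor never reach this point.
interface LeverageConfig {
  highQualityLeverage: number   // e.g. 10
  lowQualityLeverage: number    // e.g. 5
  thresholdLong: number         // e.g. 95
  thresholdShort: number        // e.g. 90
}

function selectLeverage(direction: 'long' | 'short', qualityScore: number, cfg: LeverageConfig): number {
  const threshold = direction === 'long' ? cfg.thresholdLong : cfg.thresholdShort
  const leverage = qualityScore >= threshold ? cfg.highQualityLeverage : cfg.lowQualityLeverage
  console.log(`📊 Adaptive leverage: Quality ${qualityScore} → ${leverage}x leverage (threshold: ${threshold})`)
  return leverage
}
```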
Parameter Sweep Strategy:
- 8 Parameters to Optimize:
- flip_threshold: 0.4, 0.5, 0.6, 0.7 (4 values) - EMA flip confirmation threshold
- ma_gap: 0.20, 0.30, 0.40, 0.50 (4 values) - MA50-MA200 convergence bonus
- momentum_adx: 18, 21, 24, 27 (4 values) - ADX requirement for momentum filter
- momentum_long_pos: 60, 65, 70, 75 (4 values) - Price position for LONG momentum entry
- momentum_short_pos: 20, 25, 30, 35 (4 values) - Price position for SHORT momentum entry
- cooldown_bars: 1, 2, 3, 4 (4 values) - Bars between signals
- momentum_spacing: 2, 3, 4, 5 (4 values) - Bars between momentum confirmations
- momentum_cooldown: 1, 2, 3, 4 (4 values) - Momentum-specific cooldown
- Total Combinations: 4^8 = 65,536 exhaustive search
- Grid Design: 4 values per parameter = balanced between granularity and computation time
Sweep Results - Narrow Grid (27 combinations):
- Date: Nov 28, 2025 (killed early to port to EPYC)
- Top Result: $496.41 PnL (22% improvement over baseline)
- Key Finding: Parameter insensitivity observed again
- Multiple different configurations produced identical results
- Suggests v9 edge comes from core EMA logic, not parameter tuning
- Similar pattern to v10 (but v9 has proven baseline edge)
- Decision: Proceed with exhaustive 65,536 combo search on EPYC to confirm pattern
EPYC Server Exhaustive Sweep (Nov 28-29, 2025):
- Hardware: AMD EPYC 7282 16-Core Processor, Debian 12 Bookworm
- Configuration: 24 workers, 1.60s per combo (4× faster than local 6 workers)
- Total Combinations: 65,536 (full 4^8 grid)
- Duration: ~29 hours estimated
- Output: Top 100 results saved to sweep_v9_exhaustive_epyc.csv
- Setup:
- Package: backtest_v9_sweep.tar.gz (1.1MB compressed)
- Contents: data/solusdt_5m.csv (1.9MB), backtester modules, sweep scripts
- Python env: 3.11.2 with pandas 2.3.3, numpy 2.3.5
- Virtual environment: /home/backtest/.venv/
- Status: ✅ RUNNING (started Nov 28, 2025 ~17:00 UTC, ~17h remaining as of Nov 29)
- Critical Fixes Applied:
- Added `source .venv/bin/activate` to run script (fixes ModuleNotFoundError)
- Kept `--top 100` limit (tests all 65,536, saves top 100 to CSV)
- Proper output naming: sweep_v9_exhaustive_epyc.csv
Backtesting Infrastructure:
- Location: `/home/icke/traderv4/backtester/` and `/home/backtest/` (EPYC)
- Modules:
  - `backtester_core.py` - Core backtesting engine with ATR-based TP/SL
  - `v9_moneyline_ma_gap.py` - v9 indicator logic implementation
  - `moneyline_core.py` - Shared EMA/signal detection logic
- Data: `data/solusdt_5m.csv` - Nov 2024 to Nov 2025 OHLCV (139,678 5-min bars)
- Sweep Script: `scripts/run_backtest_sweep.py` - Multiprocessing parameter grid search
  - Progress bar shows hours/minutes (not seconds) for long-running sweeps
  - Supports --top N to limit output file size
  - Uses multiprocessing.Pool for parallel execution
- Python Environments:
  - Local: Python 3.7.3 with .venv (pandas/numpy)
  - EPYC: Python 3.11.2 with .venv (pandas 2.3.3, numpy 2.3.5)
- Setup Scripts:
  - `setup_epyc.sh` - Installs python3-venv, creates .venv, installs pandas/numpy
  - `run_sweep_epyc.sh` - Executes parameter sweep with proper venv activation
Expected Outcomes:
- If parameter insensitivity persists: v9 edge is in core EMA logic, not tuning
- Action: Use baseline parameters in production
- Conclusion: v9 works because of momentum filter logic, not specific values
- If clear winners emerge: Optimize production parameters
- Action: Update .pinescript with optimal values
- Validation: Confirm via forward testing (50-100 trades)
- If quality thresholds need adjustment:
- SHORT threshold 80 may be too strict (could be missing profitable setups)
- Analyze win rate distribution around thresholds
Post-Sweep Analysis Plan:
- Review top 100 results for parameter clustering
- Check if top performers share common characteristics
- Identify "stability zones" (parameters that consistently perform well)
- Compare exhaustive results to baseline ($405.88) and narrow sweep ($496.41)
- Make production parameter recommendations
- Consider if SHORT quality threshold (80) needs lowering based on blocked signals analysis
Key Files:
- `workflows/trading/moneyline_v9_ma_gap.pinescript` - Production v9 indicator
- `backtester/v9_moneyline_ma_gap.py` - Python implementation for backtesting
- `scripts/run_backtest_sweep.py` - Parameter sweep orchestration
- `run_sweep_epyc.sh` - EPYC execution script (24 workers, venv activation)
- `ADAPTIVE_LEVERAGE_SYSTEM.md` - Adaptive leverage implementation docs
- `INDICATOR_V9_MA_GAP_ROADMAP.md` - v9 development roadmap
Current Production State (Nov 28-29, 2025):
- Indicator: v9 Money Line with MA Gap + Momentum SHORT Filter
- Quality Thresholds: LONG ≥90, SHORT ≥80
- Adaptive Leverage: ACTIVE (5x high quality, 1x borderline)
- Capital: $540 USDC at 100% health
- Expected Profit Boost: 5× on high-quality signals with adaptive leverage
- Backtesting: Exhaustive parameter sweep in progress (17h remaining)
Lessons Learned:
- Parameter insensitivity indicates overfitting: When many configs give identical results, the edge isn't in parameters
- Simpler is better: v10 added complexity but no edge → removed completely
- Quality-based leverage scales winners: 5x on Q95+ signals amplifies edge without increasing borderline risk
- Exhaustive search validates findings: 65,536 combos confirm if pattern is real or sampling artifact
- Python environments matter: Always activate venv before running backtests on remote servers
- Portable packages enable distributed computing: 1.1MB tar.gz enables 16-core EPYC utilization
Cluster Status Detection: Database-First Architecture (Nov 30, 2025)
Purpose: Distributed parameter sweep cluster monitoring system with database-driven status detection
Critical Problem Discovered (Nov 30, 2025):
- Symptom: Web dashboard showed "IDLE" status with 0 active workers despite 22+ worker processes running on EPYC cluster
- Root Cause: SSH-based status detection timing out due to network latency → catch blocks returning "offline" → false negative cluster status
- Impact: System appeared idle when actually processing 4,000 parameter combinations across 2 active chunks
- Financial Risk: In production trading system, false idle status could prevent monitoring of critical distributed processes
Solution: Database-First Status Detection
Architectural Principle: Database is the source of truth for business logic, NOT infrastructure availability
Implementation (app/api/cluster/status/route.ts):
export async function GET(request: NextRequest) {
try {
// CRITICAL FIX (Nov 30, 2025): Check database FIRST before SSH detection
// Database shows actual work state, SSH just provides supplementary metrics
const explorationData = await getExplorationData()
const hasRunningChunks = explorationData.chunks.running > 0
console.log(`📊 Database status: ${explorationData.chunks.running} running chunks`)
// Get SSH status for supplementary metrics (CPU, load, process count)
const [worker1Status, worker2Status] = await Promise.all([
getWorkerStatus('worker1', WORKERS.worker1.host, WORKERS.worker1.port),
getWorkerStatus('worker2', WORKERS.worker2.host, WORKERS.worker2.port, {
proxyJump: WORKERS.worker1.host
})
])
// DATABASE-FIRST: Override SSH "offline" status if database shows running chunks
const workers = [worker1Status, worker2Status].map(w => {
if (hasRunningChunks && w.status === 'offline') {
console.log(`✅ ${w.name}: Database shows running chunks - overriding SSH offline to active`)
return {
...w,
status: 'active' as const,
activeProcesses: w.activeProcesses || 1
}
}
return w
})
// DATABASE-FIRST cluster status
let clusterStatus: 'active' | 'idle' = 'idle'
if (hasRunningChunks) {
clusterStatus = 'active'
console.log('✅ Cluster status: ACTIVE (database shows running chunks)')
} else if (workers.some(w => w.status === 'active')) {
clusterStatus = 'active'
console.log('✅ Cluster status: ACTIVE (workers detected via SSH)')
}
return NextResponse.json({
cluster: {
status: clusterStatus,
activeWorkers: workers.filter(w => w.status === 'active').length,
totalStrategiesExplored: explorationData.strategies.explored,
totalStrategiesToExplore: explorationData.strategies.total,
},
workers,
chunks: {
pending: explorationData.chunks.pending,
running: explorationData.chunks.running,
completed: explorationData.chunks.completed,
total: explorationData.chunks.total,
},
})
} catch (error) {
console.error('❌ Error getting cluster status:', error)
return NextResponse.json({ error: 'Failed to get cluster status' }, { status: 500 })
}
}
Why This Approach:
- Database persistence: SQLite exploration.db records chunk assignments with status='running'
- Business logic integrity: Work state exists in database regardless of SSH availability
- SSH supplementary only: Process counts, CPU metrics are nice-to-have, not critical
- Network resilience: SSH timeouts don't cause false negative status
- Single source of truth: All cluster control operations write to database first
Verification Methodology (Nov 30, 2025):
Before Fix:
curl -s http://localhost:3001/api/cluster/status | jq '.cluster'
{
"status": "idle",
"activeWorkers": 0,
"totalStrategiesExplored": 0,
"totalStrategiesToExplore": 4096
}
After Fix:
curl -s http://localhost:3001/api/cluster/status | jq '.cluster'
{
"status": "active",
"activeWorkers": 2,
"totalStrategiesExplored": 0,
"totalStrategiesToExplore": 4096
}
Container Logs Showing Fix Working:
📊 Database status: 2 running chunks
✅ worker1: Database shows running chunks - overriding SSH offline to active
✅ worker2: Database shows running chunks - overriding SSH offline to active
✅ Cluster status: ACTIVE (database shows running chunks)
Database State Verification:
sqlite3 cluster/exploration.db "SELECT id, start_combo, end_combo, status, assigned_worker FROM chunks WHERE status='running';"
v9_chunk_000000|0|2000|running|worker1
v9_chunk_000001|2000|4000|running|worker2
SSH Process Verification (Manual):
ssh root@10.10.254.106 "ps aux | grep [p]ython | grep backtest | wc -l"
22 # 22 worker processes actively running
ssh root@10.10.254.106 "ssh root@10.20.254.100 'ps aux | grep [p]ython | grep backtest | wc -l'"
18 # 18 worker processes on worker2 via hop
Cluster Control System:
Start Button (app/cluster/page.tsx):
{status.cluster.status === 'idle' ? (
<button
onClick={() => handleControl('start')}
className="bg-green-600 hover:bg-green-700"
>
▶️ Start Cluster
</button>
) : (
<button
onClick={() => handleControl('stop')}
className="bg-red-600 hover:bg-red-700"
>
⏹️ Stop Cluster
</button>
)}
Control API (app/api/cluster/control/route.ts):
- start: Runs distributed_coordinator.py → creates chunks in database → starts workers via SSH
- stop: Kills coordinator process → workers auto-stop when chunks complete → database cleanup
- status: Returns coordinator process status (supplementary to database status)
Database Schema (exploration.db):
CREATE TABLE chunks (
id TEXT PRIMARY KEY, -- v9_chunk_000000, v9_chunk_000001, etc.
start_combo INTEGER NOT NULL, -- Starting combination index (0, 2000, 4000, etc.)
end_combo INTEGER NOT NULL, -- Ending combination index (exclusive)
total_combos INTEGER NOT NULL, -- Total combinations in chunk (2000)
status TEXT NOT NULL, -- 'pending', 'running', 'completed', 'failed'
assigned_worker TEXT, -- 'worker1', 'worker2', NULL for pending
started_at INTEGER, -- Unix timestamp when work started
completed_at INTEGER, -- Unix timestamp when work completed
created_at INTEGER DEFAULT (strftime('%s', 'now'))
);
CREATE TABLE strategies (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chunk_id TEXT NOT NULL,
params TEXT NOT NULL, -- JSON of parameter values
pnl REAL NOT NULL,
win_rate REAL NOT NULL,
profit_factor REAL NOT NULL,
max_drawdown REAL NOT NULL,
total_trades INTEGER NOT NULL,
created_at INTEGER DEFAULT (strftime('%s', 'now')),
FOREIGN KEY (chunk_id) REFERENCES chunks(id)
);
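A hypothetical sketch of the database-first read against this schema (assuming better-sqlite3; the real helper is `getExplorationData()` in cluster/lib/db.ts):

```typescript
// Hypothetical shape of the database-first status read - sketch only.
import Database from 'better-sqlite3'

interface ChunkCounts { pending: number; running: number; completed: number; total: number }

export function readChunkCounts(dbPath = 'cluster/exploration.db'): ChunkCounts {
  const db = new Database(dbPath, { readonly: true })
  try {
    const rows = db
      .prepare(`SELECT status, COUNT(*) AS n FROM chunks GROUP BY status`)
      .all() as { status: string; n: number }[]
    const byStatus = Object.fromEntries(rows.map((r) => [r.status, r.n]))
    // Business rule: running chunks in the database mean the cluster is ACTIVE,
    // regardless of what SSH probing says.
    return {
      pending: byStatus['pending'] ?? 0,
      running: byStatus['running'] ?? 0,
      completed: byStatus['completed'] ?? 0,
      total: rows.reduce((sum, r) => sum + r.n, 0),
    }
  } finally {
    db.close()
  }
}
```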
Deployment Details:
- Container: trading-bot-v4 on port 3001
- Build Time: Nov 30 21:12 UTC (TypeScript compilation 77.4s)
- Restart Time: Nov 30 21:18 UTC with
--force-recreate - Volume Mount:
./cluster:/app/cluster(database persistence) - Git Commits:
Telegram Notifications (Dec 2, 2025):
- Purpose: Alert user when parameter sweep completes or stops prematurely
- Implementation: Added to
v9_advanced_coordinator.py(197 lines) - Credentials:
- Bot Token:
8240234365:AAEm6hg_XOm54x8ctnwpNYreFKRAEvWU3uY - Chat ID:
579304651 - Source:
/home/icke/traderv4/.env
- Bot Token:
- Notifications Sent:
- Startup: When coordinator starts (includes worker count, total combos, start time)
- Completion: When all chunks finish (includes duration stats, completion time)
- Premature Stop: When coordinator receives SIGINT/SIGTERM (crash/manual kill)
- Technical Details:
- Uses
urllib.requestfor HTTP POST to Telegram Bot API - Signal handlers registered for graceful shutdown detection
- Messages formatted with HTML parse mode for bold/structure
- 10-second timeout on HTTP requests
- Errors logged but don't crash coordinator
- Uses
- Code Locations:
send_telegram_message()function: Lines ~25-45signal_handler()function: Lines ~47-55- Startup notification: In
main()after banner - Completion notification: When
pending == 0 and running == 0
- Deployment: Dec 2, 2025 08:08:24 (coordinator PID 1477050)
- User Benefit: "works through entire dataset without having to check all the time"
Lessons Learned:
- Infrastructure availability ≠ business logic state
- SSH timeouts are infrastructure failures
- Running chunks in database are business state
- Never let infrastructure failures dictate false business states
- Database as source of truth
- All state-changing operations write to database first
- Status detection reads from database first
- External checks (SSH, API calls) are supplementary metrics only
- Fail-open vs fail-closed
- SSH timeout → assume active if database says so (fail-open)
- Database unavailable → hard error, don't guess (fail-closed)
- Business logic requires authoritative data source
- Verification before declaration
- curl test confirmed API response changed
- Log analysis confirmed database-first logic executing
- Manual SSH verification confirmed workers actually running
- NEVER say "fixed" without testing deployed container
- Conditional UI rendering
- Stop button already existed in codebase
- Shown conditionally based on cluster status
- Status detection fix made Stop button visible automatically
- Search codebase before claiming features are "missing"
Documentation References:
- Full technical details: `cluster/STATUS_DETECTION_FIX_COMPLETE.md`
- Database queries: `cluster/lib/db.ts` - getExplorationData()
- Worker management: `cluster/distributed_coordinator.py` - chunk creation and assignment
- Status API: `app/api/cluster/status/route.ts` - database-first implementation
Current Operational State (Nov 30, 2025):
- Cluster: ACTIVE with 2 workers processing 4,000 combinations
- Database: 2 chunks status='running' (0-2000 on worker1, 2000-4000 on worker2)
- Remaining: 96 combinations (4000-4096) will be assigned after current chunks complete
- Dashboard: Shows accurate "active" status with 2 active workers
- SSH Status: May show "offline" due to latency, but database override ensures accurate cluster status
Integration Points
- n8n: Expects exact response format from `/api/trading/execute` (see n8n-complete-workflow.json)
- Drift Protocol: Uses SDK v2.75.0 - check docs at docs.drift.trade for API changes
- Pyth Network: WebSocket + HTTP fallback for price feeds (handles reconnection)
- PostgreSQL: Version 16-alpine, must be running before bot starts
- EPYC Cluster: Database-first status detection via SQLite exploration.db (SSH supplementary)
Key Mental Model: Think of this as two parallel systems (on-chain orders + software monitoring) working together. The Position Manager is the "backup brain" that constantly watches and acts if on-chain orders fail. Both write to the same database for complete trade history.
Cluster Mental Model: Database is the authoritative source of cluster state. SSH detection is supplementary metrics. If database shows running chunks, cluster is active regardless of SSH availability. Infrastructure failures don't change business logic state.