# AI Agent Instructions for Trading Bot v4 ## πŸš€ NEW AGENT QUICK START **First Time Here?** Follow this sequence to get up to speed: 1. **Read this file first** (`.github/copilot-instructions.md`) - Contains all AI agent guidelines and development standards - Top 10 Critical Pitfalls summary (see `docs/COMMON_PITFALLS.md` for full 72 pitfalls) - Financial system verification requirements (MANDATORY reading) 2. **Navigate with** `docs/README.md` (Documentation Hub) - Comprehensive documentation structure with 8 organized categories - Multiple navigation methods: by topic, date, or file type - Quick Start workflows for different development tasks - Links to all subdirectories: setup, architecture, bugs, roadmaps, etc. 3. **Get project context** from main `README.md` - Live system status and configuration - Architecture overview and key features - File structure and deployment information 4. **Explore specific topics** via category subdirectories as needed - `docs/setup/` - Configuration and environment setup - `docs/architecture/` - Technical design and system overview - `docs/bugs/` - Known issues and critical fixes - `docs/roadmaps/` - Planned features and optimization phases - `docs/guides/` - Step-by-step implementation guides - `docs/deployments/` - Deployment procedures and verification - `docs/analysis/` - Performance analysis and data studies - `docs/history/` - Project evolution and milestones **Key Principle:** "NOTHING gets lost" - all documentation is cross-referenced, interconnected, and comprehensive. --- ## πŸ” "DO I ALREADY HAVE THIS?" - Quick Feature Discovery **Before implementing ANY feature, check if it already exists!** This system has 70+ features built over months of development. ### Quick Reference Table | "I want to..." | Existing Feature | Search Term | |----------------|------------------|-------------| | Re-enter after stop-out | **Stop Hunt Revenge System** - Auto re-enters quality 85+ signals after price reverses through original entry | `grep -i "stop hunt revenge"` | | Scale position by quality | **Adaptive Leverage System** - 10x for quality 95+, 5x for borderline signals | `grep -i "adaptive leverage"` | | Test different timeframes | **Multi-Timeframe Data Collection** - Parallel data collection for 5min/15min/1H/4H/Daily | `grep -i "multi-timeframe"` | | Monitor blocked signals | **BlockedSignal Tracker** - Tracks quality-blocked signals with price analysis | `grep -i "blockedsignal"` | | Survive server failures | **HA Failover** - Secondary server with auto DNS failover (90s detection) | `grep -i "high availability"` | | Validate re-entries | **Re-Entry Analytics System** - Fresh TradingView data + recent performance scoring | `grep -i "re-entry analytics"` | | Backtest parameters | **Distributed Cluster Backtester** - 65,536 combo sweep on EPYC cluster | `grep -i "cluster\|backtester"` | | Handle RPC rate limits | **Retry with Exponential Backoff** - 5s β†’ 10s β†’ 20s retry for 429 errors | `grep -i "retryWithBackoff"` | | Track best/worst P&L | **MAE/MFE Tracking** - Built into Position Manager, updated every 2s | `grep -i "mae\|mfe"` | ### Quick Search Commands ```bash # Search main documentation grep -i "KEYWORD" .github/copilot-instructions.md # Search all documentation grep -ri "KEYWORD" docs/ # Check live system logs docker logs trading-bot-v4 | grep -i "KEYWORD" | tail -20 # List database tables (shows what data is tracked) docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "\dt" # Check environment variables cat .env | grep 
-i "KEYWORD" # Search codebase grep -r "KEYWORD" lib/ app/ --include="*.ts" ``` ### Feature Discovery by Category **πŸ“Š Entry/Exit Logic:** - ATR-based TP/SL (dynamic targets based on volatility) - TP2-as-runner (40% runner after TP1, configurable) - ADX-based runner SL (adaptive positioning by trend strength) - Adaptive trailing stop (real-time 1-min ADX adjustments) - Emergency stop (-2% hard limit) **πŸ›‘οΈ Risk Management:** - Adaptive leverage (quality-based position sizing) - Direction-specific thresholds (LONG 90+, SHORT 80+) - Per-symbol sizing (SOL/ETH independent controls) - Phantom trade auto-closure (size mismatch detection) - Dual stops (soft TRIGGER_LIMIT + hard TRIGGER_MARKET) **πŸ”„ Re-Entry & Recovery:** - Stop Hunt Revenge (auto re-entry after reversal) - Re-Entry Analytics (validation with fresh data) - Market Data Cache (5-min expiry TradingView data) **πŸ“ˆ Monitoring & Analysis:** - Position Manager (2s price checks, MAE/MFE tracking) - BlockedSignal Tracker (quality-blocked signal analysis) - Multi-timeframe collection (parallel data gathering) - Rate limit monitoring (429 error tracking + analytics) - Drift health monitor (memory leak detection + auto-restart) **πŸ—οΈ High Availability:** - Secondary server (Hostinger standby) - Database replication (PostgreSQL streaming) - DNS auto-failover (90s detection via INWX API) - Orphan position recovery (startup validation) **πŸ”§ Developer Tools:** - Distributed cluster (EPYC parameter sweep) - Test suite (113 tests, 7 test files) - CI/CD pipeline (GitHub Actions) - Persistent logger (survives container restarts) ### Decision Flowchart: Does This Feature Exist? ``` β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ 1. Search copilot-instructions.md β”‚ β”‚ grep -i "feature-name" .github/copilot-instructions.md β”‚ β”‚ β”‚ β”‚ β”‚ β–Ό β”‚ β”‚ Found? ──YES──► READ THE SECTION β”‚ β”‚ β”‚ β”‚ β”‚ NO β”‚ β”‚ β–Ό β”‚ β”‚ 2. Search docs/ directory β”‚ β”‚ grep -ri "feature-name" docs/ β”‚ β”‚ β”‚ β”‚ β”‚ β–Ό β”‚ β”‚ Found? ──YES──► READ THE DOCUMENTATION β”‚ β”‚ β”‚ β”‚ β”‚ NO β”‚ β”‚ β–Ό β”‚ β”‚ 3. Check database schema β”‚ β”‚ cat prisma/schema.prisma | grep -i "related-table" β”‚ β”‚ β”‚ β”‚ β”‚ β–Ό β”‚ β”‚ Found? ──YES──► FEATURE LIKELY EXISTS β”‚ β”‚ β”‚ β”‚ β”‚ NO β”‚ β”‚ β–Ό β”‚ β”‚ 4. Check docker logs β”‚ β”‚ docker logs trading-bot-v4 | grep -i "feature" | tail β”‚ β”‚ β”‚ β”‚ β”‚ β–Ό β”‚ β”‚ Found? ──YES──► FEATURE IS ACTIVE β”‚ β”‚ β”‚ β”‚ β”‚ NO β”‚ β”‚ β–Ό β”‚ β”‚ 5. Check git history β”‚ β”‚ git log --oneline --all | grep -i "feature" | head -10 β”‚ β”‚ β”‚ β”‚ β”‚ β–Ό β”‚ β”‚ Found? 
──YES──► MAY BE ARCHIVED/DISABLED β”‚ β”‚ β”‚ β”‚ β”‚ NO β”‚ β”‚ β–Ό β”‚ β”‚ FEATURE DOES NOT EXIST - SAFE TO BUILD β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ``` ### Why This Matters: Historical Examples | Feature | Built Date | Trigger Event | Value | |---------|------------|---------------|-------| | **Stop Hunt Revenge** | Nov 20, 2025 | Quality 90 signal stopped out, missed $490 profit on 8.8% reversal | Captures reversal moves | | **Adaptive Leverage** | Nov 24, 2025 | Quality 95+ signals had 100% win rate, wanted to scale winners | 2Γ— profit on high quality | | **HA Failover** | Nov 25, 2025 | Server went down during active trades | Zero-downtime protection | | **Phantom Detection** | Nov 16, 2025 | Position opened with wrong size, no monitoring | Prevents unprotected positions | | **BlockedSignal Tracker** | Nov 22, 2025 | Needed data to optimize quality thresholds | Data-driven threshold tuning | **Don't rebuild what exists. Enhance what's proven.** --- ## ⚠️ CRITICAL: VERIFICATION MANDATE - READ THIS FIRST ⚠️ **THIS IS A REAL MONEY TRADING SYSTEM - EVERY CHANGE AFFECTS USER'S FINANCIAL FUTURE** ### 🚨 IRON-CLAD RULES - NO EXCEPTIONS 🚨 **1. NEVER SAY "DONE", "FIXED", "WORKING", OR "DEPLOYED" WITHOUT 100% VERIFICATION** This is NOT optional. This is NOT negotiable. This is the MOST IMPORTANT rule in this entire document. **"Working" means:** - βœ… Code deployed (container restarted AFTER commit timestamp) - βœ… Logs show expected behavior in production - βœ… Database state matches expectations (SQL verification) - βœ… Test trade executed successfully (when applicable) - βœ… All metrics calculated correctly (manual verification) - βœ… Edge cases tested (0%, 100%, boundaries) **"Working" does NOT mean:** - ❌ "Code looks correct" - ❌ "Should work in theory" - ❌ "TypeScript compiled successfully" - ❌ "Tests passed locally" - ❌ "Committed to git" **2. TEST EVERY CHANGE IN PRODUCTION** Financial code verification requirements: - **Position Manager changes:** Execute test trade, watch full cycle (TP1 β†’ TP2 β†’ exit) - **API endpoints:** curl test with real payloads, verify database records - **Calculations:** Add console.log for EVERY step, verify units (USD vs tokens, % vs decimal) - **Exit logic:** Test actual TP1/TP2/SL triggers, not just code paths **3. DEPLOYMENT VERIFICATION IS MANDATORY** Before declaring anything "deployed": ```bash # 1. Check container start time docker logs trading-bot-v4 | grep "Server starting" | head -1 # 2. Check latest commit time git log -1 --format='%ai' # 3. Verify container NEWER than commit # If container older: CODE NOT DEPLOYED, FIX NOT ACTIVE # 4. Test feature-specific behavior docker logs -f trading-bot-v4 | grep "expected new log message" ``` **Container start time OLDER than commit = FIX NOT DEPLOYED = DO NOT SAY "FIXED"** **4. DOCUMENT VERIFICATION RESULTS** Every change must include: - What was tested - How it was verified - Actual logs/SQL results showing correct behavior - Edge cases covered - What user should watch for on next real trade **WHY THIS MATTERS:** User is building from $901 β†’ $100,000+ with this system. Every bug costs money. Every unverified change is a financial risk. This is not a hobby project - this is the user's financial future. **Declaring something "working" without verification = causing financial loss** **5. 
ALWAYS CHECK DOCUMENTATION BEFORE MAKING SUGGESTIONS**

This is MANDATORY. This is NOT negotiable. DO NOT waste user's time with questions already answered in documentation.

**Before making ANY suggestion or asking ANY question:**
- βœ… Check `.github/copilot-instructions.md` (THIS FILE - contains system knowledge, patterns, pitfalls)
- βœ… Check `docs/README.md` (Documentation hub with organized categories)
- βœ… Check main `README.md` (Live system status and configuration)
- βœ… Search `docs/` subdirectories for specific topics (setup, architecture, bugs, roadmaps, guides)
- βœ… Grep search for keywords related to the topic
- βœ… Check Common Pitfalls section (bugs #1-71) for known issues

**Examples of WASTING USER TIME (DO NOT DO THIS):**
- ❌ Asking about TradingView rate limits when `docs/HELIUS_RATE_LIMITS.md` exists
- ❌ Suggesting features already documented in roadmaps
- ❌ Asking about configuration when ENV variables documented
- ❌ Proposing solutions to bugs already fixed (check Common Pitfalls)
- ❌ Questions about architecture already explained in docs

**Correct Workflow:**
1. Read user request
2. **SEARCH DOCUMENTATION FIRST** (copilot-instructions.md + docs/ directory)
3. Check if question is already answered
4. Check if suggestion is already implemented
5. Check if issue is already documented
6. **ONLY THEN** make suggestions or ask questions

**Why This Matters:**
- User has spent MONTHS documenting this system comprehensively
- Asking already-answered questions = disrespecting user's documentation effort
- "NOTHING gets lost" is the project principle - USE the documentation
- This is a financial system - wasting time = wasting money
- User expects AI to be KNOWLEDGEABLE, not forgetful

**Red Flags Indicating You Didn't Check Docs:**
- User responds: "we already have this documented"
- User responds: "check the docs first"
- User responds: "this is in Common Pitfalls"
- User responds: "read the roadmap"
- User has to point you to existing documentation

**This rule applies to EVERYTHING:** Features, bugs, configuration, architecture, deployment, troubleshooting, optimization, analysis.

---

## πŸ“‹ MANDATORY: ROADMAP MAINTENANCE - NO EXCEPTIONS

**THIS IS A CRITICAL REQUIREMENT - NOT OPTIONAL**

### Why Roadmap Updates Are MANDATORY

User discovered critical documentation bug (Nov 27, 2025):
- Roadmap said: "Phase 3: Smart Entry Timing - NOT STARTED"
- Reality: Fully deployed as Phase 7.1 (718-line smart-entry-timer.ts operational)
- User confusion: "i thought that was already implemented?" β†’ User was RIGHT
- **Result:** Documentation misleading, wasted time investigating "next feature" already deployed

### IRON-CLAD RULES for Roadmap Updates

**1. UPDATE ROADMAP IMMEDIATELY AFTER DEPLOYMENT**
- βœ… Phase completed β†’ Mark as COMPLETE with deployment date
- βœ… Phase started β†’ Update status to IN PROGRESS
- βœ… Expected impact realized β†’ Document actual data vs expected
- βœ… Commit roadmap changes SAME SESSION as feature deployment

**2. VERIFY ROADMAP ACCURACY BEFORE RECOMMENDING FEATURES**
- ❌ NEVER suggest implementing features based ONLY on roadmap status
- βœ… ALWAYS grep codebase for existing implementation before recommending
- βœ… Check: Does file exist? Is it integrated? Is ENV variable set?
- βœ… Example: Phase 3 "not started" but smart-entry-timer.ts exists = roadmap WRONG

**3. 
MAINTAIN PHASE NUMBERING CONSISTENCY** - If code says "Phase 7.1" but roadmap says "Phase 3", consolidate naming - Update ALL references (roadmap files, code comments, documentation) - Prevent confusion from multiple names for same feature **4. ROADMAP FILES TO UPDATE** - `1MIN_DATA_ENHANCEMENTS_ROADMAP.md` (main detailed roadmap) - `docs/1MIN_DATA_ENHANCEMENTS_ROADMAP.md` (documentation copy) - `OPTIMIZATION_MASTER_ROADMAP.md` (high-level consolidated view) - Website roadmap API endpoint (if applicable) - This file's "When Making Changes" section (if new pattern learned) **5. ROADMAP UPDATE CHECKLIST** When completing ANY feature or phase: - [ ] Mark phase status: NOT STARTED β†’ IN PROGRESS β†’ COMPLETE - [ ] Add deployment date: βœ… COMPLETE (Nov 27, 2025) - [ ] Document actual impact vs expected (after 50-100 trades data) - [ ] Update phase numbering if inconsistencies exist - [ ] Commit with message: "docs: Update roadmap - Phase X complete" - [ ] Verify website roadmap updated (if applicable) **6. BEFORE RECOMMENDING "NEXT FEATURE"** ```bash # 1. Read roadmap to identify highest-impact "NOT STARTED" feature cat 1MIN_DATA_ENHANCEMENTS_ROADMAP.md | grep "NOT STARTED" # 2. VERIFY it's actually not implemented (grep for files/classes) grep -r "SmartEntryTimer" lib/ grep -r "SMART_ENTRY_ENABLED" .env # 3. If files exist β†’ ROADMAP WRONG, update it first # 4. Only then recommend truly unimplemented features ``` **WHY THIS MATTERS:** User relies on roadmap for strategic planning. Wrong roadmap = wrong decisions = wasted development time = delayed profit optimization. In a real money system, time wasted = money not earned. **Outdated roadmap = wasted user time = lost profits** --- ## πŸ“ MANDATORY: DOCUMENTATION + GIT COMMIT: INSEPARABLE WORKFLOW - NUMBER ONE PRIORITY **⚠️ CRITICAL: THIS IS THE #1 MANDATORY RULE - DOCUMENTATION GOES HAND-IN-HAND WITH EVERY GIT COMMIT ⚠️** **USER MANDATE (Dec 1, 2025): "in the actual documentation it shall be a number one priority mandatory thing, that which each git commit and push there must be an update to the documentation. this HAS to go hand in hand"** ### Universal Rule: EVERY Git Commit REQUIRES Documentation Update **IRON-CLAD WORKFLOW - NO EXCEPTIONS:** ```bash # ❌ WRONG (INCOMPLETE - NEVER DO THIS): git add [files] git commit -m "feat: Added new feature" git push # STOP! This is INCOMPLETE work. Documentation is MISSING. # βœ… CORRECT (COMPLETE - ALWAYS DO THIS): git add [files] git commit -m "feat: Added new feature" # MANDATORY NEXT STEP - UPDATE DOCUMENTATION: # Edit .github/copilot-instructions.md with: # - What changed and why # - New patterns/insights/learnings # - Configuration changes # - API endpoints added/modified # - Database schema changes # - Integration points affected git add .github/copilot-instructions.md git commit -m "docs: Document new feature insights and patterns" git push # βœ… NOW work is COMPLETE - Code + Documentation together ``` **This is NOT a suggestion. This is NOT optional. This is MANDATORY.** **Code without documentation = INCOMPLETE WORK = DO NOT PUSH** ### Why This is #1 Priority (User's Direct Mandate): 1. **"I am sick and tired of reminding you"** - User has repeatedly emphasized this 2. **This is a real money trading system** - Undocumented changes cause financial losses 3. **Knowledge preservation** - Insights are lost without documentation 4. **Future AI agents** - Need complete context to maintain system integrity 5. **Time savings** - Documented patterns prevent re-investigation 6. 
**Financial protection** - Trading system knowledge prevents costly errors ### When Documentation is MANDATORY (EVERY TIME): **You MUST update .github/copilot-instructions.md when:** - βœ… Adding ANY new feature or component - βœ… Fixing ANY bug (add to Common Pitfalls section) - βœ… Changing configuration (ENV variables, defaults, precedence) - βœ… Modifying API endpoints (add to API Endpoints section) - βœ… Updating database schema (add to Important fields section) - βœ… Discovering system behaviors or quirks - βœ… Implementing optimizations or enhancements - βœ… Adding new integrations or dependencies - βœ… Changing data flows or architecture - βœ… Learning ANY lesson worth remembering **If you learned something valuable β†’ Document it BEFORE pushing** **If you solved a problem β†’ Document the solution BEFORE pushing** **If you discovered a pattern β†’ Document the pattern BEFORE pushing** ### The Correct Mindset: - **Documentation is NOT separate work** - It's part of completing the task - **Documentation is NOT optional** - It's a requirement for "done" - **Documentation is NOT an afterthought** - It's planned from the start - **Every git commit is a learning opportunity** - Capture the knowledge ### Examples of Commits Requiring Documentation: ```bash # Scenario 1: Bug fix reveals system behavior git commit -m "fix: Correct P&L calculation for partial closes" # β†’ MUST document: Why averageExitPrice doesn't work, must use realizedPnL # β†’ MUST add to: Common Pitfalls section # Scenario 2: New feature with integration requirements git commit -m "feat: Smart Entry Validation Queue system" # β†’ MUST document: How it works, when it triggers, integration points # β†’ MUST add to: Critical Components section # Scenario 3: Performance optimization reveals insight git commit -m "perf: Adaptive leverage based on quality score" # β†’ MUST document: Quality thresholds, why tiers chosen, expected impact # β†’ MUST add to: Configuration System or relevant section # Scenario 4: Data analysis reveals filtering requirement git commit -m "fix: Exclude manual trades from indicator analysis" # β†’ MUST document: signalSource field, SQL filtering patterns, why it matters # β†’ MUST add to: Important fields and Analysis patterns sections ``` ### Red Flags That Documentation is Missing: - ❌ User says: "please add in the documentation" - ❌ User asks: "is this documented?" - ❌ User asks: "everything documented?" - ❌ Code commit has NO corresponding documentation commit - ❌ Bug fix with NO Common Pitfall entry - ❌ New feature with NO integration notes - ❌ You push code without updating copilot-instructions.md ### Integration with Existing Sections: When documenting, update these sections as appropriate: - **Common Pitfalls:** Add bugs/mistakes/lessons learned - **Critical Components:** Add new systems/services - **Configuration System:** Add new ENV variables - **When Making Changes:** Add new development patterns - **API Endpoints:** Add new routes and their purposes - **Database Schema:** Add new tables/fields and their meaning - **Architecture Overview:** Add new integrations or data flows ### Remember: **Documentation is not bureaucracy - it's protecting future profitability** by preserving hard-won knowledge. In a real money trading system, forgotten lessons = repeated mistakes = financial losses. **Git commit + Documentation = Complete work. One without the other = Incomplete.** **This is the user's #1 priority. 
Make it yours too.**

### IRON-CLAD RULE: UPDATE THIS FILE FOR EVERY SIGNIFICANT CHANGE

**When to update .github/copilot-instructions.md (MANDATORY):**

1. **New system behaviors discovered** (like 1-minute signal direction field artifacts)
2. **Data integrity requirements** (what fields are meaningful vs meaningless)
3. **Analysis patterns** (how to query data correctly, what to filter out)
4. **Architecture changes** (new components, integrations, data flows)
5. **Database schema additions** (new tables, fields, their purpose and usage)
6. **Configuration patterns** (ENV variables, feature flags, precedence rules)
7. **Common mistakes** (add to Common Pitfalls section immediately)
8. **Verification procedures** (how to test features, what to check)

**This file is the PRIMARY KNOWLEDGE BASE for all future AI agents and developers.**

**What MUST be documented here:**
- βœ… Why things work the way they do (not just what they do)
- βœ… What fields/data should be filtered out in analysis
- βœ… How to correctly query and interpret database data
- βœ… Known artifacts and quirks (like direction field in 1-min signals)
- βœ… Data collection vs trading signal distinctions
- βœ… When features are truly deployed vs just committed

**DO NOT make user remind you to update this file. It's AUTOMATIC:**

```
Change β†’ Code β†’ Test β†’ Git Commit β†’ UPDATE COPILOT-INSTRUCTIONS.MD β†’ Git Commit
```

**If you implement something without documenting it here, the work is INCOMPLETE.**

**What qualifies as "valuable insights" requiring documentation:**

1. **System behaviors discovered** during implementation or debugging
2. **Lessons learned** from bugs, failures, or unexpected outcomes
3. **Design decisions** and WHY specific approaches were chosen
4. **Integration patterns** that future changes must follow
5. **Data integrity rules** discovered through analysis
6. **Common mistakes** that cost time/money to discover
7. **Verification procedures** that proved critical
8. **Performance insights** from production data

---

## πŸ“š Common Pitfalls Documentation Structure (Dec 5, 2025)

**Purpose:** Centralized documentation of all production incidents, bugs, and lessons learned from real trading operations. 
**Documentation Reorganization (PR #1):** - **Problem Solved:** Original copilot-instructions.md was 6,575 lines with 72 pitfalls mixed throughout - **Solution:** Extracted to dedicated `docs/COMMON_PITFALLS.md` (1,556 lines) - **Result:** 45% reduction in main file size (6,575 β†’ 3,608 lines) **New Structure:** ``` docs/COMMON_PITFALLS.md β”œβ”€β”€ Quick Reference Table (all 72 pitfalls with severity, category, date) β”œβ”€β”€ πŸ”΄ CRITICAL Pitfalls (Financial/Data Integrity) β”‚ β”œβ”€β”€ Race Conditions & Duplicates (#27, #41, #48, #49, #59, #60, #61, #67) β”‚ β”œβ”€β”€ P&L Calculation Errors (#41, #49, #50, #54, #57) β”‚ └── SDK/API Integration (#2, #24, #36, #44, #66) β”œβ”€β”€ ⚠️ HIGH Pitfalls (System Stability) β”‚ β”œβ”€β”€ Deployment & Verification (#1, #31, #47) β”‚ └── Database Operations (#29, #35, #58) β”œβ”€β”€ 🟑 MEDIUM Pitfalls (Performance/UX) β”œβ”€β”€ πŸ”΅ LOW Pitfalls (Code Quality) β”œβ”€β”€ Pattern Analysis (common root causes) └── Contributing Guidelines (how to add new pitfalls) ``` **Top 10 Critical Pitfalls (Summary):** 1. **Position Manager Never Monitors (#77)** - Logs say "added" but isMonitoring=false = $1,000+ losses 2. **Silent SL Placement Failure (#76)** - placeExitOrders() returns SUCCESS with 2/3 orders, no SL protection 3. **Orphan Cleanup Removes Active Orders (#78)** - cancelAllOrders() affects ALL positions on symbol 4. **Wrong Year in SQL Queries (#75)** - Query 2024 dates when current is 2025 = 12Γ— inflated results 5. **Drift SDK Memory Leak (#1)** - JS heap OOM after 10+ hours β†’ Smart health monitoring 6. **Wrong RPC Provider (#2)** - Alchemy breaks Drift SDK β†’ Use Helius only 7. **P&L Compounding Race Condition (#48, #49, #61)** - Multiple closures β†’ Atomic Map.delete() 8. **Database-First Pattern (#29)** - Save DB before Position Manager 9. **Container Deployment Verification (#31)** - Always check container timestamp 10. **External Closure Race Condition (#67)** - 16 duplicate notifications β†’ Atomic lock **How to Use:** - **Quick lookup:** Check Quick Reference Table in `docs/COMMON_PITFALLS.md` - **By category:** Navigate to severity/category sections - **Pattern recognition:** See Pattern Analysis for common root causes - **Adding new pitfalls:** Follow Contributing Guidelines template **When Adding New Pitfalls:** 1. Add full details to `docs/COMMON_PITFALLS.md` with standard template 2. Assign severity (πŸ”΄ Critical, ⚠️ High, 🟑 Medium, πŸ”΅ Low) 3. Include: symptom, incident details, root cause, fix, prevention, code example 4. Update Quick Reference Table 5. 
If more critical than existing Top 10, update this section --- ## 🎯 BlockedSignal Minute-Precision Tracking (Dec 2, 2025 - OPTIMIZED) **Purpose:** Track exact minute-by-minute price movements for blocked signals to determine EXACTLY when TP1/TP2 would have been hit **CRITICAL: Data Contamination Discovery (Dec 5, 2025):** - **Problem:** All TradingView alerts (15min, 1H, 4H, Daily) were attached to OLD v9 version with different settings - **Impact:** 31 BlockedSignal records from wrong indicator version (multi-timeframe data collection) - **Solution:** Marked contaminated data with `blockReason='DATA_COLLECTION_OLD_V9_VERSION'` - **Exception:** 1-minute data (11,398 records) kept as `DATA_COLLECTION_ONLY` - not affected by alert version issue (pure market data sampling) - **SQL Filter:** Exclude old data: `WHERE blockReason != 'DATA_COLLECTION_OLD_V9_VERSION'` - **Fresh Start:** New signals from corrected alerts will use `blockReason='DATA_COLLECTION_ONLY'` - **Database State:** Old data preserved for historical reference, clearly marked to prevent analysis contamination **Critical Optimization (Dec 2, 2025):** - **Original Threshold:** 30 minutes (arbitrary, inefficient) - **User Insight:** "we have 1 minute data, so use it" - **Optimized Threshold:** 1 minute (matches data granularity) - **Performance Impact:** 30Γ— faster processing (96.7% reduction in wait time) - **Result:** 0 signals β†’ 15 signals immediately eligible for analysis **System Architecture:** ``` Data Collection: Every 1 minute (MarketData table) βœ“ Processing Wait: 1 minute (OPTIMIZED from 30 min) βœ“ Analysis Detail: Every 1 minute (480 points/8h) βœ“ Result Storage: Exact minute timestamps βœ“ Perfect alignment - all components at 1-minute granularity ``` **Validation Results (Dec 2, 2025):** - **Batch Processing:** 15 signals analyzed immediately after optimization - **Win Rate (recent 25):** 48% TP1 hits, 0 SL losses - **Historical Baseline:** 15.8% TP1 win rate (7,427 total signals) - **Recent Performance:** 3Γ— better than historical baseline - **Exact Timestamps:** - Signal cmiolsiaq005: Created 13:18:02, TP1 13:26:04 (8.0 min exactly) - Signal cmiolv2hw005: Created 13:20:01, TP1 13:26:04 (6.0 min exactly) **Code Location:** ```typescript // File: lib/analysis/blocked-signal-tracker.ts, Line 528 // CRITICAL FIX (Dec 2, 2025): Changed from 30min to 1min // Rationale: We collect 1-minute data, so use it! No reason to wait longer. 
// Impact: 30Γ— faster processing eligibility (0 β†’ 15 signals immediately qualified)
const oneMinuteAgo = new Date(Date.now() - 1 * 60 * 1000)
```

**Why This Matters:**
- **Matches Data Granularity:** 1-minute data collection = 1-minute processing threshold
- **Eliminates Arbitrary Delays:** No reason to wait 30 minutes when data is available
- **Immediate Analysis:** Signals qualify for batch processing within 1 minute of creation
- **Exact Precision:** Database stores exact minute timestamps (6-8 min resolution typical)
- **User Philosophy:** "we have 1 minute data, so use it" - use available precision fully

**Database Fields (Minute Precision):**
- `signalCreatedTime` - Exact timestamp when signal generated (YYYY-MM-DD HH:MM:SS)
- `tp1HitTime` - Exact minute when TP1 target reached
- `tp2HitTime` - Exact minute when TP2 target reached
- `slHitTime` - Exact minute when SL triggered
- `minutesToTP1` - Decimal minutes from signal to TP1 (e.g., 6.0, 8.0)
- `minutesToTP2` - Decimal minutes from signal to TP2
- `minutesToSL` - Decimal minutes from signal to SL

**Git Commits:**
- d156abc "docs: Add mandatory git workflow and critical feedback requirements" (Dec 2, 2025)
- [Next] "perf: Optimize BlockedSignal processing threshold from 30min to 1min"

**Lesson Learned:** When you have high-resolution data (1 minute), use it immediately. Arbitrary delays (30 minutes) waste processing time without providing value. Match all system components to the same granularity for consistency and efficiency.

---

## πŸ“Š 1-Minute Data Collection System (Nov 27, 2025)

**Purpose:** Real-time market data collection via TradingView 1-minute alerts for Phase 7.1/7.2/7.3 enhancements

**Data Flow:**
- TradingView 1-minute chart β†’ Alert fires every minute with metrics
- n8n Parse Signal Enhanced β†’ Bot execute endpoint
- Timeframe='1' detected β†’ Saved to BlockedSignal (DATA_COLLECTION_ONLY)
- Market data cache updated every 60 seconds
- Used by: Smart Entry Timer validation, Revenge system ADX checks, Adaptive trailing stops

**CRITICAL: Direction Field is Meaningless**
- All 1-minute signals in BlockedSignal have `direction='long'` populated
- This is an **artifact of TradingView alert syntax** (requires buy/sell trigger word to fire)
- These are **NOT trading signals** - they are pure market data samples
- **For analysis:** ALWAYS filter out or ignore direction field for timeframe='1'
- **Focus on:** ADX, ATR, RSI, volumeRatio, pricePosition (actual market conditions)
- **Example wrong query:** `WHERE timeframe='1' AND direction='long' AND signalQualityScore >= 90`
- **Example correct query:** `WHERE timeframe='1' AND signalQualityScore >= 90` (no direction filter)

**Database Fields:**
- `timeframe='1'` β†’ 1-minute data collection
- `blockReason='DATA_COLLECTION_ONLY'` β†’ Not a blocked trade, just data sample
- `direction='long'` β†’ IGNORE THIS (TradingView artifact, not real direction)
- `signalQualityScore` β†’ Quality score calculated but NOT used for execution threshold
- `adx`, `atr`, `rsi`, `volumeRatio`, `pricePosition` β†’ **THESE ARE THE REAL DATA**

**Why This Matters:**
- Prevents confusion when analyzing 1-minute data
- Ensures correct SQL queries for market condition analysis
- Direction-based analysis on 1-min data is meaningless and misleading
- Future developers won't waste time investigating "why all signals are long"

---

## Mission & Financial Goals

**Primary Objective:** Build wealth systematically from $106 β†’ $100,000+ through algorithmic trading

**Current Phase:** Phase 1 - Survival 
& Proof (Nov 2025 - Jan 2026) - **Current Capital:** $540 USDC (zero debt, 100% health) - **Total Invested:** $546 ($106 initial + $440 deposits) - **Trading P&L:** -$6 (early v6/v7 testing before v8 optimization) - **Target:** $2,500 by end of Phase 1 (Month 2.5) - 4.6x growth from current - **Strategy:** Aggressive compounding, 0 withdrawals, data-driven optimization - **Position Sizing:** 100% of free collateral (~$540 at 15x leverage = ~$8,100 notional) - **Risk Tolerance:** HIGH - Proof-of-concept mode with increased capital cushion - **Win Target:** 15-20% monthly returns to reach $2,500 (more achievable with larger base) - **Trades Executed:** 170+ (as of Nov 19, 2025) **Why This Matters for AI Agents:** - Every dollar counts at this stage - optimize for profitability, not just safety - User needs this system to work for long-term financial goals ($300-500/month withdrawals starting Month 3) - No changes that reduce win rate unless they improve profit factor - System must prove itself before scaling (see `TRADING_GOALS.md` for full 8-phase roadmap) **Key Constraints:** - Can't afford extended drawdowns (limited capital) - Must maintain 60%+ win rate to compound effectively - Quality over quantity - only trade 81+ signal quality scores (raised from 60 on Nov 21, 2025 after v8 success) - After 3 consecutive losses, STOP and review system ## Architecture Overview **Type:** Autonomous cryptocurrency trading bot with Next.js 15 frontend + Solana/Drift Protocol backend **Data Flow:** TradingView β†’ n8n webhook β†’ Next.js API β†’ Drift Protocol (Solana DEX) β†’ Real-time monitoring β†’ Auto-exit **CRITICAL: RPC Provider Choice** - **MUST use Alchemy RPC** (https://solana-mainnet.g.alchemy.com/v2/YOUR_API_KEY) - **DO NOT use Helius free tier** - causes catastrophic rate limiting (239 errors in 10 minutes) - Helius free: 10 req/sec sustained = TOO LOW for trade execution + Position Manager monitoring - Alchemy free: 300M compute units/month = adequate for bot operations - **Symptom if wrong RPC:** Trades hit SL immediately, duplicate closes, Position Manager loses tracking, database save failures - **Fixed Nov 14, 2025:** Switched to Alchemy, system now works perfectly (TP1/TP2/runner all functioning) **Key Design Principle:** Dual-layer redundancy - every trade has both on-chain orders (Drift) AND software monitoring (Position Manager) as backup. 
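To make the dual-layer principle concrete, here is a minimal sketch of the ordering it implies (database record first per Pitfall #29, then on-chain exit orders, then software monitoring). All names and signatures below are illustrative assumptions, not the real API; the actual logic lives in the execute endpoint and `lib/trading/position-manager.ts`.

```typescript
// Illustrative sketch only - names and signatures are assumptions, not the real API.
// It shows the documented ordering: DB record first (Pitfall #29), then on-chain
// exit orders on Drift, then software monitoring as the redundant second layer.

interface ExecutedTrade {
  id: string
  symbol: string               // e.g. 'SOL-PERP'
  direction: 'long' | 'short'
  entryPrice: number
  positionSizeUsd: number
}

interface ExitOrderResult {
  tp1Placed: boolean
  tp2Placed: boolean
  stopLossPlaced: boolean
}

// Dependencies are injected so the sketch stays self-contained; in the real bot these
// would be the Prisma client, the Drift order helpers, and the Position Manager singleton.
interface DualLayerDeps {
  saveTrade: (trade: ExecutedTrade) => Promise<void>
  placeExitOrders: (trade: ExecutedTrade) => Promise<ExitOrderResult>
  startMonitoring: (trade: ExecutedTrade) => Promise<void>
}

export async function openWithDualLayerProtection(
  trade: ExecutedTrade,
  deps: DualLayerDeps
): Promise<void> {
  // Layer 0: persist BEFORE anything else (database-first pattern) so a crash
  // mid-flow never leaves an untracked position.
  await deps.saveTrade(trade)

  // Layer 1: on-chain protection - TP/SL orders placed directly on Drift.
  const orders = await deps.placeExitOrders(trade)
  if (!orders.stopLossPlaced) {
    // Documented failure mode (Pitfall #76): never treat partial placement as success.
    throw new Error(`Stop loss order missing for ${trade.symbol} - refusing to continue`)
  }

  // Layer 2: software protection - Position Manager polls prices (~2s) and can
  // close the position even if the on-chain orders fail or are cancelled.
  await deps.startMonitoring(trade)
}
```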
**Exit Strategy:** ATR-Based TP2-as-Runner system (CURRENT - Nov 17, 2025): - **ATR-BASED TP/SL** (PRIMARY): TP1/TP2/SL calculated from ATR Γ— multipliers - TP1: ATR Γ— 2.0 (typically ~0.86%, closes 60% default) - TP2: ATR Γ— 4.0 (typically ~1.72%, activates trailing stop) - SL: ATR Γ— 3.0 (typically ~1.29%) - Safety bounds: MIN/MAX caps prevent extremes - Falls back to fixed % if ATR unavailable - **Runner:** 40% remaining after TP1 (configurable via `TAKE_PROFIT_1_SIZE_PERCENT=60`) - **Runner SL after TP1:** ADX-based adaptive positioning (Nov 19, 2025): - ADX < 20: SL at 0% (breakeven) - Weak trend, preserve TP1 profit - ADX 20-25: SL at -0.3% - Moderate trend, some retracement room - ADX > 25: SL at -0.55% - Strong trend, full retracement tolerance - **Rationale:** Entry at candle close = always at top, natural -1% to -1.5% pullbacks common - **Risk management:** Only accept runner drawdown on high-probability strong trends - Worst case examples: ADX 18 β†’ +$38.70 total, ADX 29 β†’ +$22.20 if runner stops (but likely catches big move) - **Trailing Stop:** ATR-based with ADX multiplier (Nov 19, 2025 enhancement): - Base: ATR Γ— 1.5 multiplier - **ADX-based widening (graduated):** - ADX > 30: 1.5Γ— multiplier (very strong trends) - ADX 25-30: 1.25Γ— multiplier (strong trends) - ADX < 25: 1.0Γ— multiplier (base trail, weak/moderate trends) - **Profit acceleration:** Profit > 2%: additional 1.3Γ— multiplier - **Combined effect:** ADX 29.3 + 2% profit = trail multiplier 1.5 Γ— 1.3 = 1.95Γ— - **Purpose:** Capture more of massive trend moves (e.g., 38% MFE trades) - **Backward compatible:** Trades without ADX use base 1.5Γ— multiplier - Activates after TP2 trigger - **Benefits:** Regime-agnostic (adapts to bull/bear automatically), asset-agnostic (SOL vs BTC different ATR), trend-strength adaptive (wider trail for strong trends) - **Note:** All UI displays dynamically calculate runner% as `100 - TAKE_PROFIT_1_SIZE_PERCENT` **Exit Reason Tracking (Nov 24, 2025 - TRAILING_SL Distinction):** - **Regular SL:** Stop loss hit before TP2 reached (initial stop loss or breakeven SL after TP1) - **TRAILING_SL:** Stop loss hit AFTER TP2 trigger when trailing stop is active (runner protection) - **Detection Logic:** * If `tp2Hit=true` AND `trailingStopActive=true` AND price pulled back from peak (>1%) * Then `exitReason='TRAILING_SL'` (not regular 'SL') * Distinguishes runner exits from early stops - **Database:** Both stored in same `exitReason` column, but TRAILING_SL separate value - **Analytics UI:** Trailing stops display with purple styling + πŸƒ emoji, regular SL shows blue - **Purpose:** Analyze runner system performance separately from hard stop losses - **Code locations:** * Position Manager exit detection: `lib/trading/position-manager.ts` line ~937, ~1457 * External closure handler: `lib/trading/position-manager.ts` line ~927-945 * Frontend display: `app/analytics/page.tsx` line ~776-792 - **Implementation:** Nov 24, 2025 (commit 9d7932f) **Per-Symbol Configuration:** SOL and ETH have independent enable/disable toggles and position sizing: - `SOLANA_ENABLED`, `SOLANA_POSITION_SIZE`, `SOLANA_LEVERAGE` (defaults: true, 100%, 15x) - `ETHEREUM_ENABLED`, `ETHEREUM_POSITION_SIZE`, `ETHEREUM_LEVERAGE` (defaults: true, 100%, 1x) - BTC and other symbols fall back to global settings (`MAX_POSITION_SIZE_USD`, `LEVERAGE`) - **Priority:** Per-symbol ENV β†’ Market config β†’ Global ENV β†’ Defaults **Signal Quality System:** Filters trades based on 5 metrics (ATR, ADX, RSI, volumeRatio, pricePosition) 
scored 0-100. **Direction-specific thresholds (Nov 28, 2025):** LONG signals require 90+, SHORT signals require 80+. Scores stored in database for future optimization. Frequency penalties (overtrading / flip-flop / alternating) now ignore 1-minute data-collection alerts automatically: `getRecentSignals()` filters to timeframe='5' (or whatever timeframe is being scored) and drops `blockReason='DATA_COLLECTION_ONLY'`. This prevents the overtrading penalty from triggering when the 1-minute telemetry feeds multiple samples per minute for BlockedSignal analysis. **Direction-Specific Quality Thresholds (Nov 28, 2025):** - **LONG threshold:** 90 (straightforward) - **SHORT threshold:** 80 (more permissive due to higher baseline difficulty) - **Configuration:** `MIN_SIGNAL_QUALITY_SCORE_LONG=90`, `MIN_SIGNAL_QUALITY_SCORE_SHORT=80` in .env - **Fallback logic:** Direction-specific ENV β†’ Global ENV β†’ Default (60) - **Helper function:** `getMinQualityScoreForDirection(direction, config)` in config/trading.ts - **Implementation:** check-risk endpoint uses direction-specific thresholds before execution - **See:** `docs/DIRECTION_SPECIFIC_QUALITY_THRESHOLDS.md` for historical analysis **Adaptive Leverage System (Nov 24, 2025 - RISK-ADJUSTED POSITION SIZING):** - **Purpose:** Automatically adjust leverage based on signal quality score - high confidence gets full leverage, borderline signals get reduced risk exposure - **Quality-Based Leverage Tiers:** - **Quality 95-100:** 15x leverage ($540 Γ— 15x = $8,100 notional position) - **Quality 90-94:** 10x leverage ($540 Γ— 10x = $5,400 notional position) - **Quality <90:** Blocked by direction-specific thresholds - **Risk Impact:** Quality 90-94 signals save $2,700 exposure (33% risk reduction) vs fixed 15x - **Data-Driven Justification:** v8 indicator quality 95+ = 100% WR (4/4 wins), quality 90-94 more volatile - **Configuration:** `USE_ADAPTIVE_LEVERAGE=true`, `HIGH_QUALITY_LEVERAGE=15`, `LOW_QUALITY_LEVERAGE=10`, `QUALITY_LEVERAGE_THRESHOLD=95` in .env - **Implementation:** Quality score calculated EARLY in execute endpoint (before position sizing), passed to `getActualPositionSizeForSymbol(qualityScore)`, leverage determined via `getLeverageForQualityScore()` helper - **Log Message:** `πŸ“Š Adaptive leverage: Quality X β†’ Yx leverage (threshold: 95)` - **Trade-off:** ~$21 less profit on borderline wins, but ~$21 less loss on borderline stops = better risk-adjusted returns - **Future Enhancements:** Multi-tier (20x for 97+, 5x for 85-89), per-direction multipliers, streak-based adjustments - **See:** `ADAPTIVE_LEVERAGE_SYSTEM.md` for complete implementation details, code examples, monitoring procedures **Timeframe-Aware Scoring:** Signal quality thresholds adjust based on timeframe (5min vs daily): - 5min: ADX 12+ trending (vs 18+ for daily), ATR 0.2-0.7% healthy (vs 0.4%+ for daily) - Anti-chop filter: -20 points for extreme sideways (ADX <10, ATR <0.25%, Vol <0.9x) - Pass `timeframe` param to `scoreSignalQuality()` from TradingView alerts (e.g., `timeframe: "5"`) **MAE/MFE Tracking:** Every trade tracks Maximum Favorable Excursion (best profit %) and Maximum Adverse Excursion (worst loss %) updated every 2s. Used for data-driven optimization of TP/SL levels. **Manual Trading via Telegram:** Send plain-text messages like `long sol`, `short eth`, `long btc` to open positions instantly (bypasses n8n, calls `/api/trading/execute` directly with preset healthy metrics). 
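As a rough illustration (not the verified payload contract), a Telegram `long sol` command conceptually boils down to a direct POST against the execute endpoint carrying the preset metrics documented below. Every field name, the port, and the auth header in this sketch are assumptions.

```typescript
// Hypothetical sketch of the Telegram 'long sol' flow - field names, endpoint auth,
// and payload shape are assumptions built from the documented preset metrics below.
async function sendManualLongSol(): Promise<void> {
  const payload = {
    symbol: 'SOL-PERP',
    direction: 'long',
    timeframe: 'manual',      // documented: manual trades bypass quality scoring
    signalSource: 'manual',   // documented: excluded from TradingView indicator analysis
    atr: 0.43,                // median ATR preset (data-driven, Nov 17, 2025)
    adx: 32,                  // strong trend assumption
    rsi: 58,                  // long preset (42 for shorts)
    volumeRatio: 1.2,         // healthy volume preset
    pricePosition: 45,        // mid-range preset (55 for shorts)
  }

  // Direct call to the bot, skipping n8n (port 3001 per the ops commands in this doc).
  await fetch('http://localhost:3001/api/trading/execute', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.API_SECRET_KEY}`, // auth header assumed
    },
    body: JSON.stringify(payload),
  })
}
```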
**CRITICAL:** Manual trades are marked with `signalSource='manual'` and excluded from TradingView indicator analysis (prevents data contamination). **Telegram Manual Trade Presets (Nov 17, 2025 - Data-Driven):** - ATR: 0.43 (median from 162 SOL trades, Nov 2024-Nov 2025) - ADX: 32 (strong trend assumption) - RSI: 58 long / 42 short (neutral-favorable) - Volume: 1.2x average (healthy) - Price Position: 45 long / 55 short (mid-range) - Purpose: Enables quick manual entries when TradingView signals unavailable - Note: Re-entry analytics validate against fresh TradingView data when cached (<5min) **Manual Trade Quality Bypass (Dec 4, 2025 - USER MANDATE):** - **User requirement:** "when i say short or long it shall do it straight away and DO it" - Manual trades (`timeframe='manual'`) bypass ALL quality scoring checks - Execute endpoint detects `isManualTrade` flag and skips quality threshold validation - Logs show: `βœ… MANUAL TRADE BYPASS: Quality scoring skipped (Telegram command - executes immediately)` - **Purpose:** Instant execution for user-initiated trades without automated filtering - **Implementation:** `app/api/trading/execute/route.ts` line ~237-242 (commit 0982578, Dec 4, 2025) - **Behavior:** Manual trades execute regardless of ADX/ATR/RSI/quality score - **--force flag:** No longer needed (all manual trades bypass by default) **Re-Entry Analytics System (OPTIONAL VALIDATION):** Manual trades CAN be validated before execution using fresh TradingView data: - Market data cached from TradingView signals (5min expiry) - `/api/analytics/reentry-check` scores re-entry based on fresh metrics + recent performance - Telegram bot blocks low-quality re-entries unless `--force` flag used - Uses real TradingView ADX/ATR/RSI when available, falls back to historical data - Penalty for recent losing trades, bonus for winning streaks - **Note:** Analytics check is advisory only - manual trades execute even if rejected by analytics **Smart Validation Queue (Dec 10, 2025 - TWO-STAGE CONFIRMATION):** - **Purpose:** Monitor blocked signals to confirm price moves before execution (two-stage confirm) - **Timeout:** 90 minutes (was 30 minutes) with 30-second checks; restores last 90 minutes on startup - **Confirmation:** +0.15% move in trade direction triggers execution; abandon at -0.4% against (unchanged) - **Rationale:** Blocked-signal analysis showed rapid +0.15% confirms capture TP1/TP2 while retaining low false positives - **Configuration:** `entryWindowMinutes: 90`, `confirmationThreshold: 0.15`, `maxDrawdown: -0.4` in smart-validation-queue.ts - **Trade-off:** Longer watch window aligned to two-stage approach; still bounded by drawdown guard - **Implementation:** lib/trading/smart-validation-queue.ts line 105 - **Status:** βœ… UPDATED Dec 10, 2025 15:00 CET (two-stage thresholds live) ## πŸ§ͺ Test Infrastructure (Dec 5, 2025 - PR #2) **Purpose:** Comprehensive integration test suite for Position Manager - the 1,938-line core trading logic managing real capital. 
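Before the structure breakdown, a simplified sketch of the style these integration tests follow. The factory import matches the documented helper, but the decision-helper import path and its `(trade, currentPrice)` signature are assumptions for illustration only.

```typescript
// Illustrative only - shouldTakeProfit1's real signature and module path may differ.
import { createLongTrade } from '../helpers/trade-factory'
import { shouldTakeProfit1 } from '../../../lib/trading/position-manager' // path assumed

describe('TP1 detection (LONG) - sketch', () => {
  it('flags TP1 once price crosses the +0.86% target', () => {
    // Standard test values (see table further below): entry $140.00, TP1 $141.20
    const trade = createLongTrade({ entryPrice: 140, adx: 26.9 })

    expect(shouldTakeProfit1(trade, 141.25)).toBe(true)   // just above TP1
    expect(shouldTakeProfit1(trade, 140.50)).toBe(false)  // well below TP1
  })
})
```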
**Test Suite Structure:** ``` tests/ β”œβ”€β”€ setup.ts # Global mocks (Drift, Pyth, DB, Telegram) β”œβ”€β”€ helpers/ β”‚ └── trade-factory.ts # Factory functions for mock trades └── integration/ β”œβ”€β”€ drift-state-verifier/ β”‚ β”œβ”€β”€ position-verification.test.ts # Identity verification, fail-open bias, cooldown persistence (Bug #82) β”‚ └── cooldown-enforcement.test.ts # Retry cooldown enforcement and logging (Bug #80) └── position-manager/ β”œβ”€β”€ tp1-detection.test.ts # 16 tests - TP1 triggers for LONG/SHORT β”œβ”€β”€ breakeven-sl.test.ts # 14 tests - SL moves to entry after TP1 β”œβ”€β”€ adx-runner-sl.test.ts # 18 tests - ADX-based runner SL tiers β”œβ”€β”€ trailing-stop.test.ts # 16 tests - ATR-based trailing with peak tracking β”œβ”€β”€ edge-cases.test.ts # 15 tests - Token vs USD, phantom detection β”œβ”€β”€ price-verification.test.ts # 18 tests - Size AND price verification β”œβ”€β”€ decision-helpers.test.ts # 16 tests - shouldStopLoss, shouldTakeProfit1/2 β”œβ”€β”€ monitoring-verification.test.ts # Ensures monitoring actually starts and price updates trigger checks └── pure-runner-profit-widening.test.ts # Profit-based trailing widening after TP2 ``` **Total:** 13 test files, 162 tests (full suite green as of Dec 10, 2025 before rebuild) **Test Configuration:** - **Framework:** Jest + ts-jest - **Config:** `jest.config.js` at project root (created by PR #2) - **Coverage Threshold:** 60% minimum - **Mocks:** Drift SDK, Pyth price feeds, PostgreSQL, Telegram notifications **How to Run Tests:** ```bash # Run all tests npm test # Run tests in watch mode (development) npm run test:watch # Run with coverage report npm run test:coverage # Run specific test file npm test -- tests/integration/position-manager/tp1-detection.test.ts ``` **Trade Factory Helpers:** ```typescript import { createLongTrade, createShortTrade, createTradeAfterTP1 } from '../helpers/trade-factory' // Create basic trades const longTrade = createLongTrade({ entryPrice: 140, adx: 26.9 }) const shortTrade = createShortTrade({ entryPrice: 140, atr: 0.43 }) // Create runner after TP1 const runner = createTradeAfterTP1('short', { positionSize: 8000 }) ``` **Common Pitfalls Prevented by Tests:** - **#24:** Position.size tokens vs USD conversion - **#43:** TP1 false detection without price verification - **#45:** Wrong entry price for breakeven SL (must use DB entry, not Drift) - **#52:** ADX-based runner SL tier calculations - **#54:** MAE/MFE stored as percentages, not dollars - **#67:** Duplicate closure race conditions **Test Data (Standard Values):** | Parameter | LONG | SHORT | |-----------|------|-------| | Entry | $140.00 | $140.00 | | TP1 (+0.86%) | $141.20 | $138.80 | | TP2 (+1.72%) | $142.41 | $137.59 | | SL (-0.92%) | $138.71 | $141.29 | | ATR | 0.43 | 0.43 | | ADX | 26.9 | 26.9 | | Position Size | $8,000 | $8,000 | **Why Tests Matter:** - Position Manager handles **real money** ($540 capital, targeting $100k) - Zero test coverage before this PR despite 170+ trades and 71 documented bugs - Prevents regressions when refactoring critical trading logic - Validates calculations match documented behavior --- ## πŸ”„ CI/CD Pipeline (Dec 5, 2025 - PR #5) **Purpose:** Automated quality gates ensuring code reliability before deployment to production trading system. **Workflows:** ### 1. 
Test Workflow (`test.yml`) **Triggers:** Push/PR to main/master/develop ```yaml - npm ci # Install dependencies - npm test # Run 113 tests - npm run build # Verify TypeScript compiles ``` **Blocking:** βœ… PRs cannot merge if tests fail ### 2. Build Workflow (`build.yml`) **Triggers:** Push/PR to main/master ```yaml - docker build # Build production image - Buildx caching # Layer caching for speed ``` **Blocking:** βœ… PRs cannot merge if Docker build fails ### 3. Lint Workflow (`lint.yml`) **Triggers:** Every push/PR ```yaml - ESLint check # Code quality - console.log scan # Find debug statements in production code - TypeScript strict # Type checking ``` **Blocking:** ⚠️ Warnings only (does not block merge) ### 4. Security Workflow (`security.yml`) **Triggers:** Push/PR + weekly schedule ```yaml - npm audit # Check for vulnerable dependencies - Secret scanning # Basic credential detection ``` **Blocking:** βœ… Fails on high/critical vulnerabilities **Status Badges (README.md):** ```markdown ![Tests](https://github.com/mindesbunister/trading_bot_v4/workflows/Tests/badge.svg) ![Docker Build](https://github.com/mindesbunister/trading_bot_v4/workflows/Docker%20Build/badge.svg) ![Security](https://github.com/mindesbunister/trading_bot_v4/workflows/Security/badge.svg) ``` **Branch Protection Recommendations:** Enable in GitHub Settings β†’ Branches β†’ Add rule: - βœ… Require status checks to pass (test, build) - βœ… Require PR reviews before merging - βœ… Require branches to be up to date **Troubleshooting Common Failures:** | Failure | Cause | Fix | |---------|-------|-----| | Test failure | Position Manager logic changed | Update tests or fix regression | | Build failure | TypeScript error | Check `npm run build` locally | | Lint warning | console.log in code | Remove or use proper logging | | Security alert | Vulnerable dependency | `npm audit fix` or update package | **Why CI/CD Matters:** - **Real money at stake:** Bugs cost actual dollars - **Confidence to deploy:** Green pipeline = safe to merge - **Fast feedback:** Know within minutes if change breaks something - **Professional practice:** Industry standard for production systems --- ## VERIFICATION MANDATE: Financial Code Requires Proof **CRITICAL: THIS IS A REAL MONEY TRADING SYSTEM - NOT A TOY PROJECT** **Core Principle:** In trading systems, "working" means "verified with real data", NOT "code looks correct". **NEVER declare something working without:** 1. Observing actual logs showing expected behavior 2. Verifying database state matches expectations 3. Comparing calculated values to source data 4. Testing with real trades when applicable 5. **CONFIRMING CODE IS DEPLOYED** - Check container start time vs commit time 6. **VERIFYING ALL RELATED FIXES DEPLOYED** - Multi-fix sessions require complete deployment verification **CODE COMMITTED β‰  CODE DEPLOYED** - Git commit at 15:56 means NOTHING if container started at 15:06 - ALWAYS verify: `docker logs trading-bot-v4 | grep "Server starting" | head -1` - Compare container start time to commit timestamp - If container older than commit: **CODE NOT DEPLOYED, FIX NOT ACTIVE** - Never say "fixed" or "protected" until deployment verified **MULTI-FIX DEPLOYMENT VERIFICATION** When multiple related fixes are developed in same session: ```bash # 1. Check container start time docker inspect trading-bot-v4 --format='{{.State.StartedAt}}' # Example: 2025-11-16T09:28:20.757451138Z # 2. 
Check all commit timestamps git log --oneline --format='%h %ai %s' -5 # Example output: # b23dde0 2025-11-16 09:25:10 fix: Add needsVerification field # c607a66 2025-11-16 09:00:42 critical: Fix close verification # 673a493 2025-11-16 08:45:21 critical: Fix breakeven SL # 3. Verify container newer than ALL commits # Container 09:28:20 > Latest commit 09:25:10 βœ… ALL FIXES DEPLOYED # 4. Test-specific verification for each fix docker logs -f trading-bot-v4 | grep "expected log message from fix" ``` **DEPLOYMENT CHECKLIST FOR MULTI-FIX SESSIONS:** - [ ] All commits pushed to git - [ ] Container rebuilt successfully (no TypeScript errors) - [ ] Container restarted with `--force-recreate` - [ ] Container start time > ALL commit timestamps - [ ] Specific log messages from each fix observed (if testable) - [ ] Database state reflects changes (if applicable) **Example: Nov 16, 2025 Session (Breakeven SL + Close Verification)** - Fix 1: Breakeven SL (commit 673a493, 08:45:21) - Fix 2: Close verification (commit c607a66, 09:00:42) - Fix 3: TypeScript interface (commit b23dde0, 09:25:10) - Container restart: 09:28:20 βœ… All three fixes deployed - Verification: Log messages include "Using original entry price" and "Waiting 5s for Drift state" ### Critical Path Verification Requirements **MANDATORY: ALWAYS VERIFY DRIFT STATE BEFORE ANY POSITION OPERATIONS (Dec 9, 2025)** - **NEVER trust bot logs, API responses, or database state alone** - **ALWAYS query Drift API first:** `curl -X POST /api/trading/sync-positions -H "Authorization: Bearer $API_SECRET_KEY"` - **Verify actual position.size, entry price, current P&L from Drift response** - **Only AFTER Drift verification:** proceed with close, modify orders, or state changes - **Incident:** Agent closed position based on stale bot data when user explicitly said NOT to close - **Why:** Bot logs showed "closed" but Drift still had open position - catastrophic if user wants to keep position open - **This is NON-NEGOTIABLE** - verify Drift state before ANY position operation **MANDATORY: ALWAYS VERIFY DATABASE WITH DRIFT API BEFORE REPORTING NUMBERS (Dec 9, 2025)** - **NEVER trust database P&L, exitPrice, or trade details without Drift confirmation** - **ALWAYS cross-check database against Drift when reporting losses/gains to user** - **Query Drift account health:** `curl http://localhost:3001/api/drift/account-health` for actual balance - **Compare database totalCollateral with actual Drift balance** - database can be wrong - **Incident (Dec 9, 2025):** Database showed -$19.33 loss, Drift showed -$22.21 actual loss ($2.88 missing) - **Root Cause:** Retry loop chaos caused position to close in multiple chunks, only first chunk recorded - **User Frustration:** "drift tells the truth not you" - agent trusted incomplete database - **Why This Matters:** In real money system, wrong numbers = wrong financial decisions - **The Rule:** QUERY DRIFT FIRST β†’ COMPARE TO DATABASE β†’ REPORT DISCREPANCIES β†’ CORRECT DATABASE - **Verification Pattern:** ```bash # 1. Check Drift account balance curl -s http://localhost:3001/api/drift/account-health | jq '.totalCollateral' # 2. Query database for trade details psql -c "SELECT realizedPnL FROM Trade WHERE id='...'" # 3. 
If mismatch: Correct database to match Drift reality psql -c "UPDATE Trade SET realizedPnL = DRIFT_ACTUAL WHERE id='...'" ``` - **This is NON-NEGOTIABLE** - Drift is source of truth for financial data, not database **Position Manager Changes:** - [ ] Execute test trade with DRY_RUN=false (small size) - [ ] Watch docker logs for full TP1 β†’ TP2 β†’ exit cycle - [ ] SQL query: verify `tp1Hit`, `slMovedToBreakeven`, `currentSize` match Position Manager logs - [ ] Compare Position Manager tracked size to actual Drift position size - [ ] Check exit reason matches actual trigger (TP1/TP2/SL/trailing) - [ ] **VERIFY VIA DRIFT API** before declaring anything "working" or "closed" **Exit Logic Changes (TP/SL/Trailing):** - [ ] Log EXPECTED values (TP1 price, SL price after breakeven, trailing stop distance) - [ ] Log ACTUAL values from Drift position and Position Manager state - [ ] Verify: Does TP1 hit when price crosses TP1? Does SL move to breakeven? - [ ] Test: Open position, let it hit TP1, verify 75% closed + SL moved - [ ] Document: What SHOULD happen vs what ACTUALLY happened **API Endpoint Changes:** - [ ] curl test with real payload from TradingView/n8n - [ ] Check response JSON matches expectations - [ ] Verify database record created with correct fields - [ ] Check Telegram notification shows correct values (leverage, size, etc.) - [ ] SQL query: confirm all fields populated correctly **Calculation Changes (P&L, Position Sizing, Percentages):** - [ ] Add console.log for EVERY step of calculation - [ ] Verify units match (tokens vs USD, percent vs decimal, etc.) - [ ] SQL query with manual calculation: does code result match hand calculation? - [ ] Test edge cases: 0%, 100%, negative values, very small/large numbers **SDK/External Data Integration:** - [ ] Log raw SDK response to verify assumptions about data format - [ ] NEVER trust documentation - verify with console.log - [ ] Example: position.size doc said "USD" but logs showed "tokens" - [ ] Document actual behavior in Common Pitfalls section ### Red Flags Requiring Extra Verification **High-Risk Changes:** - Unit conversions (tokens ↔ USD, percent ↔ decimal) - State transitions (TP1 hit β†’ move SL to breakeven) - Configuration precedence (per-symbol vs global vs defaults) - Display values from complex calculations (leverage, size, P&L) - Timing-dependent logic (grace periods, cooldowns, race conditions) **Verification Steps for Each:** 1. **Before declaring working**: Show proof (logs, SQL results, test output) 2. **After deployment**: Monitor first real trade closely, verify behavior 3. **Edge cases**: Test boundary conditions (0, 100%, max leverage, min size) 4. **Regression**: Check that fix didn't break other functionality ### πŸ”΄ EXAMPLE: What NOT To Do (Nov 25, 2025 - Health Monitor Bug) **What the AI agent did WRONG:** 1. ❌ Fixed code (moved interceptWebSocketErrors() call) 2. ❌ Built Docker image successfully 3. ❌ Deployed container 4. ❌ Saw "Drift health monitor started" in logs 5. ❌ **DECLARED IT "WORKING" AND "DEPLOYED"** ← CRITICAL ERROR 6. ❌ Did NOT verify error interception was actually functioning 7. ❌ Did NOT test the health API to see if errors were being recorded 8. ❌ Did NOT add logging to confirm the fix was executing **What ACTUALLY happened:** - Code was deployed βœ… - Monitor was starting βœ… - But error interception was still broken ❌ - System still vulnerable to memory leak ❌ - User had to point out: "Never say it's done without testing" **What the AI agent SHOULD have done:** 1. βœ… Fix code 2. 
βœ… Build and deploy 3. βœ… **ADD LOGGING** to confirm fix executes: `console.log('πŸ”§ Setting up error interception...')` 4. βœ… Verify logs show the new message 5. βœ… **TEST THE API**: `curl http://localhost:3001/api/drift/health` 6. βœ… Verify errorCount field exists and updates 7. βœ… **SIMULATE ERRORS** or wait for natural errors 8. βœ… Verify errorCount increases when errors occur 9. βœ… **ONLY THEN** declare it "working" **The lesson:** - Deployment β‰  Working - Logs showing service started β‰  Feature functioning - "Code looks correct" β‰  Verified with real data - **ALWAYS ADD LOGGING** for critical changes - **ALWAYS TEST THE FEATURE** before declaring success ### SQL Verification Queries **After Position Manager changes:** ```sql -- Verify TP1 detection worked correctly SELECT symbol, entryPrice, currentSize, realizedPnL, tp1Hit, slMovedToBreakeven, exitReason, TO_CHAR(createdAt, 'MM-DD HH24:MI') as time FROM "Trade" WHERE exitReason IS NULL -- Open positions OR createdAt > NOW() - INTERVAL '1 hour' -- Recent closes ORDER BY createdAt DESC LIMIT 5; -- Compare Position Manager state to expectations SELECT configSnapshot->'positionManagerState' as pm_state FROM "Trade" WHERE symbol = 'SOL-PERP' AND exitReason IS NULL; ``` **After calculation changes:** ```sql -- Verify P&L calculations SELECT symbol, direction, entryPrice, exitPrice, positionSize, realizedPnL, -- Manual calculation: CASE WHEN direction = 'long' THEN positionSize * ((exitPrice - entryPrice) / entryPrice) ELSE positionSize * ((entryPrice - exitPrice) / entryPrice) END as expected_pnl, -- Difference: realizedPnL - CASE WHEN direction = 'long' THEN positionSize * ((exitPrice - entryPrice) / entryPrice) ELSE positionSize * ((entryPrice - exitPrice) / entryPrice) END as pnl_difference FROM "Trade" WHERE exitReason IS NOT NULL AND createdAt > NOW() - INTERVAL '24 hours' ORDER BY createdAt DESC LIMIT 10; ``` ### Example: How Position.size Bug Should Have Been Caught **What went wrong:** - Read code: "Looks like it's comparing sizes correctly" - Declared: "Position Manager is working!" - Didn't verify with actual trade **What should have been done:** ```typescript // In Position Manager monitoring loop - ADD THIS LOGGING: console.log('πŸ” VERIFICATION:', { positionSizeRaw: position.size, // What SDK returns positionSizeUSD: position.size * currentPrice, // Converted to USD trackedSizeUSD: trade.currentSize, // What we're tracking ratio: (position.size * currentPrice) / trade.currentSize, tp1ShouldTrigger: (position.size * currentPrice) < trade.currentSize * 0.95 }) ``` Then observe logs on actual trade: ``` πŸ” VERIFICATION: { positionSizeRaw: 12.28, // ← AH! This is SOL tokens, not USD! positionSizeUSD: 1950.84, // ← Correct USD value trackedSizeUSD: 1950.00, ratio: 1.0004, // ← Should be near 1.0 when position full tp1ShouldTrigger: false // ← Correct } ``` **Lesson:** One console.log would have exposed the bug immediately. ### CRITICAL: Documentation is MANDATORY (No Exceptions) **THIS IS A REAL MONEY TRADING SYSTEM - DOCUMENTATION IS NOT OPTIONAL** **IRON-CLAD RULE:** Every git commit MUST include updated copilot-instructions.md documentation. 
**NO EXCEPTIONS.** **Why this is non-negotiable:** - This is a financial system handling real money - incomplete documentation = financial losses - Future AI agents need complete context to maintain data integrity - User relies on documentation to understand what changed and why - Undocumented fixes are forgotten fixes - they get reintroduced as bugs - Common Pitfalls section prevents repeating expensive mistakes **MANDATORY workflow for ALL changes:** 1. Implement fix/feature 2. Test thoroughly 3. **UPDATE copilot-instructions.md** (Common Pitfalls, Architecture, etc.) 4. Git commit code changes 5. Git commit documentation changes 6. Push both commits **What MUST be documented:** - **Bug fixes:** Add to Common Pitfalls section with: * Symptom, Root Cause, Real incident details * Complete before/after code showing the fix * Files changed, commit hash, deployment timestamp * Lesson learned for future AI agents - **New features:** Update Architecture Overview, Critical Components, API Endpoints - **Database changes:** Update Important fields section, add filtering requirements - **Configuration changes:** Update Configuration System section - **Breaking changes:** Add to "When Making Changes" section **Recent examples of MANDATORY documentation:** - Common Pitfall #56: Ghost orders after external closures (commit a3a6222) - Common Pitfall #57: P&L calculation inaccuracy (commit 8e600c8) - Common Pitfall #55: BlockedSignalTracker Pyth cache bug (commit 6b00303) **If you commit code without updating documentation:** - User will be annoyed (rightfully so) - Future AI agents will lack context - Bug will likely recur - System integrity degrades **This is not a suggestion - it's a requirement.** Documentation updates are part of the definition of "done" for any change. ### Deployment Checklist **MANDATORY PRE-DEPLOYMENT VERIFICATION:** - [ ] Check container start time: `docker logs trading-bot-v4 | grep "Server starting" | head -1` - [ ] Compare to commit timestamp: Container MUST be newer than code changes - [ ] If container older: **STOP - Code not deployed, fix not active** - [ ] Never declare "fixed" or "working" until container restarted with new code Before marking feature complete: - [ ] Code review completed - [ ] Unit tests pass (if applicable) - [ ] Integration test with real API calls - [ ] Logs show expected behavior - [ ] Database state verified with SQL - [ ] Edge cases tested - [ ] **Container restarted and verified running new code** - [ ] Documentation updated (including Common Pitfalls if applicable) - [ ] User notified of what to verify during first real trade ### When to Escalate to User **Don't say "it's working" if:** - You haven't observed actual logs showing the expected behavior - SQL query shows unexpected values - Test trade behaved differently than expected - You're unsure about unit conversions or SDK behavior - Change affects money (position sizing, P&L, exits) - **Container hasn't been restarted since code commit** **Instead say:** - "Code is updated. Need to verify with test trade - watch for [specific log message]" - "Fixed, but requires verification: check database shows [expected value]" - "Deployed. First real trade should show [behavior]. If not, there's still a bug." - **"Code committed but NOT deployed - container running old version, fix not active yet"** ### Docker Build Best Practices **CRITICAL: Prevent build interruptions with background execution + live monitoring** Docker builds take 40-70 seconds and are easily interrupted by terminal issues. 
Use this pattern: ```bash # Start build in background with live log tail cd /home/icke/traderv4 && docker compose build trading-bot > /tmp/docker-build-live.log 2>&1 & BUILD_PID=$!; echo "Build started, PID: $BUILD_PID"; tail -f /tmp/docker-build-live.log ``` **Why this works:** - Build runs in background (`&`) - immune to terminal disconnects/Ctrl+C - Output redirected to log file - can review later if needed - `tail -f` shows real-time progress - see compilation, linting, errors - Can Ctrl+C the `tail -f` without killing build - build continues - Verification after: `tail -50 /tmp/docker-build-live.log` to check success **Success indicators:** - `βœ“ Compiled successfully in 27s` - `βœ“ Generating static pages (30/30)` - `#22 naming to docker.io/library/traderv4-trading-bot done` - `DONE X.Xs` on final step **Failure indicators:** - `Failed to compile.` - `Type error:` - `ERROR: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1` **After successful build:** ```bash # Deploy new container docker compose up -d --force-recreate trading-bot # Verify it started docker logs --tail=30 trading-bot-v4 # Confirm deployed version docker logs trading-bot-v4 | grep "Server starting" | head -1 ``` **DO NOT use:** `docker compose build trading-bot` in foreground - one network hiccup kills 60s of work ### When to Actually Rebuild vs Restart vs Nothing **⚠️ CRITICAL: Stop rebuilding unnecessarily - costs 40-70 seconds downtime per rebuild** **See `docs/ZERO_DOWNTIME_CHANGES.md` for complete guide** **Quick Decision Matrix:** | Change Type | Action | Downtime | When | |------------|--------|----------|------| | Documentation (`.md`) | **NONE** | 0s | Just commit and push | | Workflows (`.json`, `.pinescript`) | **NONE** | 0s | Import manually to TradingView/n8n | | ENV variables (`.env`) | **RESTART** | 5-10s | `docker compose restart trading-bot` | | Database schema | **MIGRATE + RESTART** | 10-15s | `prisma migrate + restart` | | Code (`.ts`, `.tsx`, `.js`) | **REBUILD** | 40-70s | TypeScript must recompile | | Dependencies (`package.json`) | **REBUILD** | 40-70s | npm install required | **Smart Batching Strategy:** - **DON'T:** Rebuild after every single code change (6Γ— rebuilds = 6 minutes downtime) - **DO:** Batch related changes together (6 fixes β†’ 1 rebuild = 50 seconds total) **Example (GOOD):** ```bash # 1. Make multiple code changes vim lib/trading/position-manager.ts vim app/api/trading/execute/route.ts vim lib/notifications/telegram.ts # 2. Commit all together git add -A && git commit -m "fix: Multiple improvements" # 3. ONE rebuild for everything docker compose build trading-bot docker compose up -d --force-recreate trading-bot # Total: 50 seconds (not 150 seconds) ``` **Recent Mistakes to Avoid (Nov 27, 2025):** - ❌ Rebuilt for documentation updates (should be git commit only) - ❌ Rebuilt for n8n workflow changes (should be manual import) - ❌ Rebuilt 4 times for 4 code changes (should batch into 1 rebuild) - βœ… Result: 200 seconds downtime that could have been 50 seconds ### Docker Cleanup After Builds **CRITICAL: Prevent disk full issues from build cache accumulation** Docker builds create intermediate layers (1.3+ GB per build) that accumulate over time. Build cache can reach 40-50 GB after frequent rebuilds. 
**After successful deployment, clean up:**
```bash
# Remove dangling images (old builds)
docker image prune -f

# Remove build cache (biggest space hog - 40+ GB typical)
docker builder prune -f

# Optional: Remove dangling volumes (if no important data)
docker volume prune -f

# Check space saved
docker system df
```

**When to run:**
- After each successful deployment (recommended)
- Weekly if building frequently
- When disk space warnings appear
- Before major updates/migrations

**Space typically freed:**
- Dangling images: 2-5 GB
- Build cache: 40-50 GB
- Dangling volumes: 0.5-1 GB
- **Total: 40-55 GB per cleanup**

**What's safe to delete:**
- `<none>` tagged images (old builds)
- Build cache (recreated on next build)
- Dangling volumes (orphaned from removed containers)

**What NOT to delete:**
- Named volumes (contain data: `trading-bot-postgres`, etc.)
- Active containers
- Tagged images currently in use

### Docker Optimization & Build Cache Management (Nov 26, 2025)

**Purpose:** Prevent Docker cache accumulation (40+ GB) through automated cleanup and BuildKit optimizations

**Three-Layer Optimization Strategy:**

**1. Multi-Stage Builds (ALREADY IMPLEMENTED)**
```dockerfile
# Dockerfile already uses multi-stage pattern:
FROM node:20-alpine AS deps      # Install dependencies
FROM node:20-alpine AS builder   # Build application
FROM node:20-alpine AS runner    # Final minimal image

# Benefits:
# - Smaller final images (only runtime dependencies)
# - Faster builds (caches each stage independently)
# - Better layer reuse
```

**2. BuildKit Auto-Cleanup (Nov 26, 2025)**
```bash
# /etc/docker/daemon.json configuration:
{
  "features": { "buildkit": true },
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "20GB"
    }
  }
}

# Restart Docker to apply:
sudo systemctl restart docker

# Verify BuildKit active:
docker buildx version  # Should show v0.14.1+
```

**Auto-Cleanup Behavior:**
- **Threshold:** 20GB build cache limit
- **Action:** Automatically garbage collects when exceeded
- **Safety:** Keeps recent layers for build speed
- **Monitoring:** Check current usage: `docker system df`

**Current Disk Usage Baseline (Nov 26, 2025):**
- Build Cache: 11.13GB (healthy, under 20GB threshold)
- Images: 59.2GB (33.3GB reclaimable)
- Volumes: 8.5GB (7.9GB reclaimable)
- Containers: 232.9MB

**3. Automated Cleanup Script (READY TO USE)**
```bash
# Script: /home/icke/traderv4/cleanup_trading_bot.sh (94 lines)
# Executable: -rwxr-xr-x (already set)

# Features:
# - Step 1: Keeps last 2 trading-bot images (rollback safety)
# - Step 2: Removes dangling images (untagged layers)
# - Step 3: Prunes build cache (biggest space saver)
# - Step 4: Safe volume handling (protects postgres)
# - Reporting: Shows disk space before/after

# Manual usage (recommended after builds):
cd /home/icke/traderv4
docker compose build trading-bot && ./cleanup_trading_bot.sh

# Automated usage (daily cleanup at 2 AM):
# Add to crontab: crontab -e
0 2 * * * /home/icke/traderv4/cleanup_trading_bot.sh

# Check current disk usage:
docker system df
```

**Script Safety Measures:**
- **Never removes:** Named volumes (trading-bot-postgres, etc.)
- **Never removes:** Running containers
- **Never removes:** Tagged images currently in use
- **Keeps:** Last 2 trading-bot images for quick rollback
- **Reports:** Space freed after cleanup (typical: 40-50 GB)

**When to Run Cleanup:**
1. **After builds:** Most effective, immediate cleanup
2. **Weekly:** If building frequently during development
3. **On demand:** When disk space warnings appear
4. 
**Before deployments:** Clean slate for major updates **Typical Space Savings:** - Manual script run: 40-50 GB (build cache + dangling images) - BuildKit auto-cleanup: Maintains 20GB cap automatically - Combined approach: Prevents accumulation entirely **Monitoring Commands:** ```bash # Check current disk usage docker system df # Detailed breakdown docker system df -v # Check BuildKit cache docker buildx du # Verify auto-cleanup threshold grep -A10 "builder" /etc/docker/daemon.json ``` **Why This Matters:** - **Problem:** User previously hit 40GB cache accumulation - **Solution:** BuildKit auto-cleanup (20GB cap) + manual script (on-demand) - **Result:** System self-maintains, prevents disk full scenarios - **Team benefit:** Documented process for all developers **Implementation Status:** - βœ… Multi-stage builds: Already present in Dockerfile (builder β†’ runner) - βœ… BuildKit auto-cleanup: Configured in daemon.json (20GB threshold) - βœ… Cleanup script: Exists and ready (/home/icke/traderv4/cleanup_trading_bot.sh) - βœ… Docker daemon: Restarted with new config (BuildKit v0.14.1 active) - βœ… Current state: Healthy (11.13GB cache, under threshold) --- ## Multi-Timeframe Price Tracking System (Nov 19, 2025) **Purpose:** Automated data collection and analysis for signals across multiple timeframes (5min, 15min, 1H, 4H, Daily) to determine which timeframe produces the best trading results. **Also tracks quality-blocked signals** to analyze if threshold adjustments are filtering too many winners. **Architecture:** - **5min signals:** Execute trades (production) - **15min/1H/4H/Daily signals:** Save to BlockedSignal table with `blockReason='DATA_COLLECTION_ONLY'` - **Quality-blocked signals:** Save with `blockReason='QUALITY_SCORE_TOO_LOW'` (Nov 21: threshold raised to 91+) - **Background tracker:** Runs every 5 minutes, monitors price movements for 30 minutes - **Analysis:** After 50+ signals per category, compare win rates and profit potential **Components:** 1. **BlockedSignalTracker** (`lib/analysis/blocked-signal-tracker.ts`) - Background job running every 5 minutes - **Tracks BOTH quality-blocked AND data collection signals** (Nov 22, 2025 enhancement) - Tracks price at 1min, 5min, 15min, 30min intervals - Detects if TP1/TP2/SL would have been hit using ATR-based targets - Records max favorable/adverse excursion (MFE/MAE) - Auto-completes after 30 minutes (`analysisComplete=true`) - Singleton pattern: Use `getBlockedSignalTracker()` or `startBlockedSignalTracking()` - **Purpose:** Validate if quality 91 threshold filters winners or losers (data-driven optimization) 2. **Database Schema** (BlockedSignal table) ```sql entryPrice FLOAT -- Price at signal time (baseline) priceAfter1Min FLOAT? -- Price 1 minute after priceAfter5Min FLOAT? -- Price 5 minutes after priceAfter15Min FLOAT? -- Price 15 minutes after priceAfter30Min FLOAT? -- Price 30 minutes after wouldHitTP1 BOOLEAN? -- Would TP1 have been hit? wouldHitTP2 BOOLEAN? -- Would TP2 have been hit? wouldHitSL BOOLEAN? -- Would SL have been hit? maxFavorablePrice FLOAT? -- Price at max profit maxAdversePrice FLOAT? -- Price at max loss maxFavorableExcursion FLOAT? -- Best profit % during 30min maxAdverseExcursion FLOAT? -- Worst loss % during 30min analysisComplete BOOLEAN -- Tracking finished (30min elapsed) ``` 3. **API Endpoints** - `GET /api/analytics/signal-tracking` - View tracking status, metrics, recent signals - `POST /api/analytics/signal-tracking` - Manually trigger tracking update (auth required) 4. 
**Integration Points** - Execute endpoint: Captures entry price when saving DATA_COLLECTION_ONLY signals - Startup: Auto-starts tracker via `initializePositionManagerOnStartup()` - Check-risk endpoint: Bypasses quality checks for non-5min signals (lines 147-159) **How It Works:** 1. TradingView sends 15min/1H/4H/Daily signal β†’ n8n β†’ `/api/trading/execute` 2. Execute endpoint detects `timeframe !== '5'` 3. Gets current price from Pyth, saves to BlockedSignal with `entryPrice` 4. Background tracker wakes every 5 minutes 5. Queries current price, calculates profit % based on direction 6. Checks if TP1 (~0.86%), TP2 (~1.72%), or SL (~1.29%) would have hit 7. Updates price fields at appropriate intervals (1/5/15/30 min) 8. Tracks MFE/MAE throughout 30-minute window 9. After 30 minutes, marks `analysisComplete=true` **Analysis Queries (After 50+ signals per timeframe):** ```sql -- Compare win rates across timeframes SELECT timeframe, COUNT(*) as total_signals, COUNT(CASE WHEN wouldHitTP1 = true THEN 1 END) as tp1_wins, COUNT(CASE WHEN wouldHitSL = true THEN 1 END) as sl_losses, ROUND(100.0 * COUNT(CASE WHEN wouldHitTP1 = true THEN 1 END) / COUNT(*), 1) as win_rate, ROUND(AVG(maxFavorableExcursion), 2) as avg_mfe, ROUND(AVG(maxAdverseExcursion), 2) as avg_mae FROM "BlockedSignal" WHERE analysisComplete = true AND blockReason = 'DATA_COLLECTION_ONLY' GROUP BY timeframe ORDER BY win_rate DESC; ``` **Decision Making:** After sufficient data collected: - **Multi-timeframe:** Compare 5min vs 15min vs 1H vs 4H vs Daily win rates - **Quality threshold:** Analyze if blocked signals (quality <91) would've been winners - **Evaluation:** Signal frequency vs win rate trade-off, threshold optimization - **Query example:** ```sql -- Would quality-blocked signals have been winners? 
SELECT COUNT(*) as blocked_count, SUM(CASE WHEN "wouldHitTP1" THEN 1 ELSE 0 END) as would_be_winners, SUM(CASE WHEN "wouldHitSL" THEN 1 ELSE 0 END) as would_be_losers, ROUND(100.0 * SUM(CASE WHEN "wouldHitTP1" THEN 1 ELSE 0 END) / COUNT(*), 1) as missed_win_rate FROM "BlockedSignal" WHERE "blockReason" = 'QUALITY_SCORE_TOO_LOW' AND "analysisComplete" = true; ``` - **Action:** Adjust thresholds or switch production timeframe based on data **Key Features:** - **Autonomous:** No manual work needed, runs in background - **Accurate:** Uses same TP/SL calculations as live trades (ATR-based) - **Risk-free:** Data collection only, no money at risk - **Comprehensive:** Tracks best/worst case scenarios (MFE/MAE) - **API accessible:** Check status anytime via `/api/analytics/signal-tracking` **Current Status (Nov 26, 2025):** - βœ… System deployed and running in production - βœ… **Enhanced Nov 22:** Now tracks quality-blocked signals (QUALITY_SCORE_TOO_LOW) in addition to multi-timeframe data collection - βœ… **Enhanced Nov 26:** Quality scoring now calculated for ALL timeframes (not just 5min production signals) - Execute endpoint calculates `scoreSignalQuality()` BEFORE timeframe check (line 112) - Data collection signals now get real quality scores (not hardcoded 0) - BlockedSignal records include: `signalQualityScore` (0-100), `signalQualityVersion` ('v9'), `minScoreRequired` (90/95) - Enables SQL queries: `WHERE signalQualityScore >= minScoreRequired` to compare quality-filtered win rates - Commit: dbada47 "feat: Calculate quality scores for all timeframes (not just 5min)" - βœ… TradingView alerts configured for 15min and 1H - βœ… Background tracker runs every 5 minutes autonomously - πŸ“Š **Data collection:** Multi-timeframe (50+ per timeframe) + quality-blocked (20-30 signals) - 🎯 **Dual goals:** 1. Determine which timeframe has best win rate (now with quality filtering capability) 2. Validate if quality 91 threshold filters winners or losers - πŸ“ˆ **First result (Nov 21, 16:50):** Quality 80 signal blocked (weak ADX 16.6), would have profited +0.52% (+$43) within 1 minute - **FALSE NEGATIVE confirmed** --- ## Critical Components ### 1. 
Persistent Logger System (lib/utils/persistent-logger.ts) **Purpose:** Survive-container-restarts logging for critical errors and trade failures **Key features:** - Writes to `/app/logs/errors.log` (Docker volume mounted from host) - Logs survive container restarts, rebuilds, crashes - Daily log rotation with 30-day retention - Structured JSON logging with timestamps, context, stack traces - Used for database save failures, Drift API errors, critical incidents **Usage:** ```typescript import { persistentLogger } from '../utils/persistent-logger' try { await createTrade({...}) } catch (error) { persistentLogger.logError('DATABASE_SAVE_FAILED', error, { symbol: 'SOL-PERP', entryPrice: 133.69, transactionSignature: '5Yx2...', // ALL data needed to reconstruct trade }) throw error } ``` **Infrastructure:** - Docker volume: `./logs:/app/logs` (docker-compose.yml line 63) - Directory: `/home/icke/traderv4/logs/` with `.gitkeep` - Log format: `{"timestamp":"2025-11-21T00:40:14.123Z","context":"DATABASE_SAVE_FAILED","error":"...","stack":"...","metadata":{...}}` **Why it matters:** - Console logs disappear on container restart - Database failures need persistent record for recovery - Enables post-mortem analysis of incidents - Orphan position detection can reference logs to reconstruct trades **Implemented:** Nov 21, 2025 as part of 5-layer database protection system ### 2. Phantom Trade Auto-Closure System **Purpose:** Automatically close positions when size mismatch detected (position opened but wrong size) **When triggered:** - Position opened on Drift successfully - Expected size: $50 (50% @ 1x leverage) - Actual size: $1.37 (7% fill - likely oracle price stale or exchange rejection) - Size ratio < 50% threshold β†’ phantom detected **Automated response (all happens in <1 second):** 1. **Immediate closure:** Market order closes 100% of phantom position 2. **Database logging:** Creates trade record with `status='phantom'`, saves P&L 3. **n8n notification:** Returns HTTP 200 with full details (not 500 - allows workflow to continue) 4. **Telegram alert:** Message includes entry/exit prices, P&L, reason, transaction IDs **Why auto-close instead of manual intervention:** - User may be asleep, away from devices, unavailable for hours - Unmonitored position = unlimited risk exposure - Position Manager won't track phantom (by design) - No TP/SL protection, no trailing stop, no monitoring - Better to exit with small loss/gain than leave position exposed - Re-entry always possible if setup was actually good **Example notification:** ``` ⚠️ PHANTOM TRADE AUTO-CLOSED Symbol: SOL-PERP Direction: LONG Expected Size: $48.75 Actual Size: $1.37 (2.8%) Entry: $168.50 Exit: $168.45 P&L: -$0.02 Reason: Size mismatch detected - likely oracle price issue or exchange rejection Action: Position auto-closed for safety (unmonitored positions = risk) TX: 5Yx2Fm8vQHKLdPaw... ``` **Database tracking:** - `status='phantom'` field identifies these trades - `isPhantom=true`, `phantomReason='ORACLE_PRICE_MISMATCH'` - `expectedSizeUSD`, `actualSizeUSD` fields for analysis - Exit reason: `'manual'` (phantom auto-close category) - Enables post-trade analysis of phantom frequency and patterns **Code location:** `app/api/trading/execute/route.ts` lines 322-445 ### 2. 
Signal Quality Scoring (`lib/trading/signal-quality.ts`) **Purpose:** Unified quality validation system that scores trading signals 0-100 based on 5 market metrics **Timeframe-aware thresholds:** ```typescript scoreSignalQuality({ atr, adx, rsi, volumeRatio, pricePosition, timeframe?: string // "5" for 5min, undefined for higher timeframes }) ``` **5min chart adjustments:** - ADX healthy range: 12-22 (vs 18-30 for daily) - ATR healthy range: 0.2-0.7% (vs 0.4%+ for daily) - Anti-chop filter: -20 points for extreme sideways (ADX <10, ATR <0.25%, Vol <0.9x) **Price position penalties (all timeframes):** - Long at 90-95%+ range: -15 to -30 points (chasing highs) - Short at <5-10% range: -15 to -30 points (chasing lows) - Prevents flip-flop losses from entering range extremes **Key behaviors:** - Returns score 0-100 and detailed breakdown object - Minimum score 91 required to execute trade (raised Nov 21, 2025) - Called by both `/api/trading/check-risk` and `/api/trading/execute` - Scores saved to database for post-trade analysis **Data-Proven Threshold (Nov 21, 2025):** - Analysis of 7 v8 trades revealed perfect separation: - **All 4 winners**: Quality 95, 95, 100, 105 (100% success rate β‰₯95) - **All 3 losers**: Quality 80, 90, 90 (100% failure rate ≀90) - 91 threshold eliminates borderline entries (ADX 18-20 weak trends) - Would have prevented all historical losses totaling -$624.90 - Pattern validates that quality β‰₯95 signals are high-probability setups **Threshold Validation In Progress (Nov 22, 2025):** - **Discovery:** First quality-blocked signal (quality 80, ADX 16.6) would have profited +0.52% (+$43) - **User observation:** "Green dots shot up" - visual confirmation of missed opportunity - **System response:** BlockedSignalTracker now tracks quality-blocked signals (QUALITY_SCORE_TOO_LOW) - **Data collection target:** 20-30 blocked signals over 2-4 weeks - **Decision criteria:** * If blocked signals show <40% win rate β†’ Keep threshold at 91 (correct filtering) * If blocked signals show 50%+ win rate β†’ Lower to 85 (too restrictive) * If quality 80-84 wins but 85-90 loses β†’ Adjust to 85 threshold - **Possible outcomes:** Keep 91, lower to 85, adjust ADX/RSI weights, add context filters ### 2. Position Manager Health Monitoring System (`lib/health/position-manager-health.ts`) **Purpose:** Detect Position Manager failures within 30 seconds to prevent $1,000+ loss scenarios **CRITICAL (Dec 8, 2025):** Created after discovering three bugs that caused $1,000+ losses: - Bug #77: Position Manager logs "added" but never actually monitors (isMonitoring=false) - Bug #76: placeExitOrders() returns SUCCESS but SL order missing (silent failure) - Bug #78: Orphan detection removes active position orders (cancelAllOrders affects all) **Key Functions:** - `checkPositionManagerHealth()`: Returns comprehensive health check result - DB open trades vs PM monitoring status - PM has trades but monitoring OFF - Missing SL orders (checks slOrderTx, softStopOrderTx, hardStopOrderTx) - Missing TP1/TP2 orders - DB vs PM vs Drift count mismatches - `startPositionManagerHealthMonitor()`: Runs automatically every 30 seconds - Logs CRITICAL alerts when issues found - Silent operation when system healthy - Started automatically in startup sequence **Health Checks Performed:** 1. **DB open trades but PM not monitoring** β†’ CRITICAL ALERT 2. **PM has trades but monitoring OFF** β†’ CRITICAL ALERT 3. **Open positions missing SL orders** β†’ CRITICAL ALERT per position 4. 
**Open positions missing TP orders** β†’ WARNING per position 5. **DB vs PM trade count mismatch** β†’ WARNING 6. **PM vs Drift position count mismatch** β†’ WARNING **Alert Format:** ``` 🚨 CRITICAL: Position Manager not monitoring! DB: 2 open trades PM: 2 trades in Map Monitoring: false ← BUG! 🚨 CRITICAL: Position cmix773hk019gn307fjjhbikx missing SL order Symbol: SOL-PERP Size: $2,003 slOrderTx: NULL softStopOrderTx: NULL hardStopOrderTx: NULL ``` **Integration:** - File: `lib/startup/init-position-manager.ts` line ~78 - Starts automatically after Drift state verifier - Runs alongside: data cleanup, blocked signals, stop hunt, smart validation - No manual intervention needed **Test Suite:** - File: `tests/integration/position-manager/monitoring-verification.test.ts` (201 lines) - 4 test suites, 8 test cases: * "CRITICAL: Monitoring Actually Starts" (4 tests) * "CRITICAL: Price Updates Actually Trigger Checks" (2 tests) * "CRITICAL: Monitoring Stops When No Trades" (2 tests) * "CRITICAL: Error Handling Doesnt Break Monitoring" (1 test) - Validates: startMonitoring() calls Pyth monitor, isMonitoring flag set, price updates processed - Mocks: drift/client, pyth/price-monitor, database/trades, notifications/telegram **Why This Matters:** - **This is a REAL MONEY system** - Position Manager is the safety net - User lost $1,000+ because PM said "monitoring" but wasn't - Positions appeared protected but had no monitoring whatsoever - Health monitor detects failures within 30 seconds - Prevents catastrophic silent failures **Deployment Status:** - βœ… Code complete and committed (Dec 8, 2025) - ⏳ Deployment pending (Docker build blocked by DNS) - βœ… Startup integration complete - βœ… Test suite created ### 3. Position Manager (`lib/trading/position-manager.ts`) **Purpose:** Software-based monitoring loop that checks prices every 2 seconds and closes positions via market orders **CRITICAL BUG (#77):** Logs say "added to monitoring" but isMonitoring stays false - see Health Monitoring System above for detection **Singleton pattern:** Always use `getInitializedPositionManager()` - never instantiate directly ```typescript const positionManager = await getInitializedPositionManager() await positionManager.addTrade(activeTrade) ``` **Key behaviors:** - Tracks `ActiveTrade` objects in a Map - **TP2-as-Runner system**: TP1 (configurable %, default 60%) β†’ TP2 trigger (no close, activate trailing) β†’ Runner (remaining 40%) with ATR-based trailing stop - **ADX-based runner SL after TP1 (Nov 19, 2025):** Adaptive positioning based on trend strength - ADX < 20: SL at 0% (breakeven) - Weak trend, preserve capital - ADX 20-25: SL at -0.3% - Moderate trend, some retracement room - ADX > 25: SL at -0.55% - Strong trend, full retracement tolerance - **Implementation:** Checks `trade.adxAtEntry` in TP1 handler, calculates SL dynamically - **Logging:** Shows ADX and selected SL: `πŸ”’ ADX-based runner SL: 29.3 β†’ -0.55%` - **Rationale:** Entry at candle close = top of candle, -1% to -1.5% pullbacks are normal - **Data collection:** After 50-100 trades, will optimize ADX thresholds (20/25) based on stop-out rates - **On-chain order synchronization:** After TP1 hits, calls `cancelAllOrders()` then `placeExitOrders()` with updated SL price (uses `retryWithBackoff()` for rate limit handling) - **PHASE 7.3: Adaptive Trailing Stop with Real-Time ADX (Nov 27, 2025 - DEPLOYED):** - **Purpose:** Dynamically adjust trailing stop based on current trend strength changes, not static entry-time ADX - **Implementation:** 
Queries market data cache for fresh 1-minute ADX every monitoring loop (2-second interval) - **Adaptive Multiplier Logic:** * **Base:** `trailingStopAtrMultiplier` (1.5Γ—) Γ— ATR percentage * **Current ADX Strength Tier (uses fresh 1-min ADX):** - Current ADX > 30: 1.5Γ— multiplier (very strong trend) - log "πŸ“ˆ 1-min ADX very strong" - Current ADX 25-30: 1.25Γ— multiplier (strong trend) - log "πŸ“ˆ 1-min ADX strong" - Current ADX < 25: 1.0Γ— base multiplier * **ADX Acceleration Bonus (NEW):** If ADX increased >5 points since entry β†’ Additional 1.3Γ— multiplier - Example: Entry ADX 22.5 β†’ Current ADX 29.5 (+7 points) β†’ Widens trail to capture extended move - Log: "πŸš€ ADX acceleration (+X points): Trail multiplier YΓ— β†’ ZΓ—" * **ADX Deceleration Penalty (NEW):** If ADX decreased >3 points since entry β†’ 0.7Γ— multiplier (tightens trail) - Log: "⚠️ ADX deceleration (-X points): tighter to protect" * **Profit Acceleration (existing):** Profit > 2% β†’ Additional 1.3Γ— multiplier - Log: "πŸ’° Large profit (X%): Trail multiplier YΓ— β†’ ZΓ—" * **Combined Max:** 1.5 (base) Γ— 1.5 (strong ADX) Γ— 1.3 (acceleration) Γ— 1.3 (profit) = **3.16Γ— multiplier** - **Example Calculation:** * Entry: SOL $140.00, ADX 22.5, ATR 0.43 * After 30 min: Price $143.50 (+2.5%), Current ADX 29.5 (+7 points) * OLD (entry ADX): 0.43 / 140 Γ— 100 = 0.307% β†’ 0.307% Γ— 1.5 = 0.46% trail = stop at $142.84 * NEW (adaptive): 0.307% Γ— 1.5 (base) Γ— 1.25 (strong) Γ— 1.3 (accel) Γ— 1.3 (profit) = 0.99% trail = stop at $141.93 * **Impact:** $0.91 more room (2.15Γ— wider) = captures $43 MFE instead of $23 - **Logging:** * "πŸ“Š 1-min ADX update: Entry X β†’ Current Y (Β±Z change)" - Shows ADX progression * "πŸ“Š Adaptive trailing: ATR X (Y%) Γ— ZΓ— = W%" - Shows final trail calculation - **Fallback:** Uses `trade.adxAtEntry` if market cache unavailable (backward compatible) - **Safety:** Trail distance clamped between min/max % bounds (0.25%-0.9%) - **Code:** `lib/trading/position-manager.ts` lines 1356-1450, imports `getMarketDataCache()` - **Expected Impact:** +$2,000-3,000 over 100 trades by capturing trend acceleration moves (like MA crossover ADX 22.5β†’29.5 pattern) - **Risk Profile:** Only affects 25% runner position (main 75% already closed at TP1) - **See:** `PHASE_7.3_ADAPTIVE_TRAILING_DEPLOYED.md` and `1MIN_DATA_ENHANCEMENTS_ROADMAP.md` Phase 7.3 section - Trailing stop: Activates when TP2 price hit, tracks `peakPrice` and trails dynamically - Closes positions via `closePosition()` market orders when targets hit - Acts as backup if on-chain orders don't fill - State persistence: Saves to database, restores on restart via `configSnapshot.positionManagerState` - **Startup validation:** On container restart, cross-checks last 24h "closed" trades against Drift to detect orphaned positions (see `lib/startup/init-position-manager.ts`) - **Grace period for new trades:** Skips "external closure" detection for positions <30 seconds old (Drift positions take 5-10s to propagate) - **Exit reason detection:** Uses trade state flags (`tp1Hit`, `tp2Hit`) and realized P&L to determine exit reason, NOT current price (avoids misclassification when price moves after order fills) - **Real P&L calculation:** Calculates actual profit based on entry vs exit price, not SDK's potentially incorrect values - **Rate limit-aware exit:** On 429 errors during close, keeps trade in monitoring (doesn't mark closed), retries naturally on next price update ### 3. 
Telegram Bot (`telegram_command_bot.py`) **Purpose:** Python-based Telegram bot for manual trading commands and position status monitoring **Manual trade commands via plain text:** ```python # User sends plain text message (not slash commands) "long sol" β†’ Validates via analytics, then opens SOL-PERP long "short eth" β†’ Validates via analytics, then opens ETH-PERP short "long btc --force" β†’ Skips analytics validation, opens BTC-PERP long immediately ``` **Key behaviors:** - MessageHandler processes all text messages (not just commands) - Maps user-friendly symbols (sol, eth, btc) to Drift format (SOL-PERP, etc.) - **Analytics validation:** Calls `/api/analytics/reentry-check` before execution - Blocks trades with score <55 unless `--force` flag used - Uses fresh TradingView data (<5min old) when available - Falls back to historical metrics with penalty - Considers recent trade performance (last 3 trades) - Calls `/api/trading/execute` directly with preset healthy metrics (ATR=0.45, ADX=32, RSI=58/42) - Bypasses n8n workflow and TradingView requirements - 60-second timeout for API calls - Responds with trade confirmation or analytics rejection message **Status command:** ```python /status β†’ Returns JSON of open positions from Drift ``` **Implementation details:** - Uses `python-telegram-bot` library - Deployed via `docker-compose.telegram-bot.yml` - Requires `TELEGRAM_BOT_TOKEN` and `TELEGRAM_CHANNEL_ID` in .env - API calls to `http://trading-bot:3000/api/trading/execute` **Drift client integration:** - Singleton pattern: Use `initializeDriftService()` and `getDriftService()` - maintains single connection ```typescript const driftService = await initializeDriftService() const health = await driftService.getAccountHealth() ``` - Wallet handling: Supports both JSON array `[91,24,...]` and base58 string formats from Phantom wallet ### 4. Rate Limit Monitoring (`lib/drift/orders.ts` + `app/api/analytics/rate-limits`) **Purpose:** Track and analyze Solana RPC rate limiting (429 errors) to prevent silent failures **Helius RPC Limits (Free Tier):** - **Burst:** 100 requests/second - **Sustained:** 10 requests/second - **Monthly:** 100k requests - See `docs/HELIUS_RATE_LIMITS.md` for upgrade recommendations **Retry mechanism with exponential backoff (Nov 14, 2025 - Updated):** ```typescript await retryWithBackoff(async () => { return await driftClient.cancelOrders(...) 
}, maxRetries = 3, baseDelay = 5000) // Increased from 2s to 5s ``` **Progression:** 5s β†’ 10s β†’ 20s (vs old 2s β†’ 4s β†’ 8s) **Rationale:** Gives Helius time to recover, reduces cascade pressure by 2.5x **Database logging:** Three event types in SystemEvent table: - `rate_limit_hit`: Each 429 error (logged with attempt #, delay, error snippet) - `rate_limit_recovered`: Successful retry (logged with total time, retry count) - `rate_limit_exhausted`: Failed after max retries (CRITICAL - order operation failed) **Analytics endpoint:** ```bash curl http://localhost:3001/api/analytics/rate-limits ``` Returns: Total hits/recoveries/failures, hourly patterns, recovery times, success rate **Key behaviors:** - Only RPC calls wrapped: `cancelAllOrders()`, `placeExitOrders()`, `closePosition()` - Position Manager monitoring: Event-driven via Pyth WebSocket (not polling) - Rate limit-aware exit: Position Manager keeps monitoring on 429 errors (retries naturally) - Logs to both console and database for post-trade analysis **Monitoring queries:** See `docs/RATE_LIMIT_MONITORING.md` for SQL queries **Startup Position Validation (Nov 14, 2025 - Added):** On container startup, cross-checks last 24h of "closed" trades against actual Drift positions: - If DB says closed but Drift shows open β†’ reopens in DB to restore Position Manager tracking - Prevents orphaned positions from failed close transactions - Logs: `πŸ”΄ CRITICAL: ${symbol} marked as CLOSED in DB but still OPEN on Drift!` - Implementation: `lib/startup/init-position-manager.ts` - `validateOpenTrades()` ### 5. Order Placement (`lib/drift/orders.ts`) **Critical functions:** - `openPosition()` - Opens market position with transaction confirmation - `closePosition()` - Closes position with transaction confirmation - `placeExitOrders()` - Places TP/SL orders on-chain - `cancelAllOrders()` - Cancels all reduce-only orders for a market **CRITICAL BUG (#76 - Dec 8, 2025):** placeExitOrders() can return SUCCESS with missing SL order - Symptom: Logs "Exit orders placed: [2 signatures]" but SL missing (expected 3) - Impact: Position completely unprotected from downside - Detection: Health monitor checks slOrderTx/softStopOrderTx/hardStopOrderTx every 30s - Fix required: Validate signatures.length before returning, add error handling around SL placement - Additional guard (Dec 10, 2025): tp2SizePercent of 0 or undefined now normalizes to 100% of remaining size so TP2 orders are placed and validation counts stay aligned with expected signatures. **CRITICAL: Transaction Confirmation Pattern** Both `openPosition()` and `closePosition()` MUST confirm transactions on-chain: ```typescript const txSig = await driftClient.placePerpOrder(orderParams) console.log('⏳ Confirming transaction on-chain...') const connection = driftService.getConnection() const confirmation = await connection.confirmTransaction(txSig, 'confirmed') if (confirmation.value.err) { throw new Error(`Transaction failed: ${JSON.stringify(confirmation.value.err)}`) } console.log('βœ… Transaction confirmed on-chain') ``` Without this, the SDK returns signatures for transactions that never execute, causing phantom trades/closes. **CRITICAL: Drift SDK position.size is BASE ASSET TOKENS, not USD** The Drift SDK returns `position.size` as token quantity (SOL/ETH/BTC), NOT USD notional: ```typescript // CORRECT: Convert tokens to USD by multiplying by current price const positionSizeUSD = Math.abs(position.size) * currentPrice // WRONG: Using position.size directly as USD (off by 150x+ for SOL!) 
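// e.g. the 12.28 SOL position from the verification log above would be treated as $12.28 instead of ~$1,950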
const positionSizeUSD = Math.abs(position.size)
```

**This affects Position Manager's TP1/TP2 detection** - if position.size is not converted to USD before comparing to tracked USD values, the system will never detect partial closes correctly. See Common Pitfall #22 for the full bug details and fix applied Nov 12, 2025.

**Solana RPC Rate Limiting with Exponential Backoff**

Solana RPC endpoints return 429 errors under load. Always use retry logic for order operations:

```typescript
export async function retryWithBackoff<T>(
  operation: () => Promise<T>,
  maxRetries: number = 3,
  initialDelay: number = 5000 // Increased from 2000ms to 5000ms (Nov 14, 2025)
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await operation()
    } catch (error: any) {
      if (error?.message?.includes('429') && attempt < maxRetries - 1) {
        const delay = initialDelay * Math.pow(2, attempt)
        console.log(`⏳ Rate limited, retrying in ${delay/1000}s... (attempt ${attempt + 1}/${maxRetries})`)
        await new Promise(resolve => setTimeout(resolve, delay))
        continue
      }
      throw error
    }
  }
  throw new Error('Max retries exceeded')
}

// Usage in cancelAllOrders
await retryWithBackoff(() => driftClient.cancelOrders(...))
```

**Note:** Increased from 2s to 5s base delay to give Helius RPC more recovery time. See `docs/HELIUS_RATE_LIMITS.md` for detailed analysis. Without this, order cancellations fail silently during TP1β†’breakeven order updates, leaving ghost orders that cause incorrect fills.

**Dual Stop System** (USE_DUAL_STOPS=true):
```typescript
// Soft stop: TRIGGER_LIMIT at -1.5% (avoids wicks)
// Hard stop: TRIGGER_MARKET at -2.5% (guarantees exit)
```

**Order types:**
- Entry: MARKET (immediate execution)
- TP1/TP2: LIMIT reduce-only orders
- Soft SL: TRIGGER_LIMIT reduce-only
- Hard SL: TRIGGER_MARKET reduce-only

### 6. Database (`lib/database/trades.ts` + `prisma/schema.prisma`)

**Purpose:** PostgreSQL via Prisma ORM for trade history and analytics

**Models:** Trade, PriceUpdate, SystemEvent, DailyStats, BlockedSignal

**Singleton pattern:** Use `getPrismaClient()` - never instantiate PrismaClient directly

**Key functions:**
- `createTrade()` - Save trade after execution (includes dual stop TX signatures + signalQualityScore)
- `updateTradeExit()` - Record exit with P&L
- `addPriceUpdate()` - Track price movements (called by Position Manager)
- `getTradeStats()` - Win rate, profit factor, avg win/loss
- `getLastTrade()` - Fetch most recent trade for analytics dashboard
- `createBlockedSignal()` - Save blocked signals for data-driven optimization analysis
- `getRecentBlockedSignals()` - Query recent blocked signals
- `getBlockedSignalsForAnalysis()` - Fetch signals needing price analysis (future automation)

**Important fields:**
- `signalSource` (String?) - Identifies trade origin: 'tradingview', 'manual', or NULL (old trades)
- **CRITICAL:** Manual Telegram trades are marked `signalSource='manual'` and excluded from TradingView indicator analysis
- Use filter: `WHERE ("signalSource" IS NULL OR "signalSource" != 'manual')` for indicator optimization queries
- See `docs/MANUAL_TRADE_FILTERING.md` for complete SQL filtering guide
- `signalQualityScore` (Int?) - 0-100 score for data-driven optimization
- `signalQualityVersion` (String?) 
- Tracks which scoring logic was used ('v1', 'v2', 'v3', 'v4') - v1: Original logic (price position < 5% threshold) - v2: Added volume compensation for low ADX (2025-11-07) - v3: Stricter breakdown requirements: positions < 15% require (ADX > 18 AND volume > 1.2x) OR (RSI < 35 for shorts / RSI > 60 for longs) - v4: CURRENT - Blocked signals tracking enabled for data-driven threshold optimization (2025-11-11) - All new trades tagged with current version for comparative analysis - `maxFavorableExcursion` / `maxAdverseExcursion` - Track best/worst P&L during trade lifetime - `maxFavorablePrice` / `maxAdversePrice` - Track prices at MFE/MAE points - `configSnapshot` (Json) - Stores Position Manager state for crash recovery - `atr`, `adx`, `rsi`, `volumeRatio`, `pricePosition` - Context metrics from TradingView **BlockedSignal model fields (NEW):** - Signal metrics: `atr`, `adx`, `rsi`, `volumeRatio`, `pricePosition`, `timeframe` - Quality scoring: `signalQualityScore`, `signalQualityVersion`, `scoreBreakdown` (JSON), `minScoreRequired` - Indicator provenance (Nov 28, 2025): `indicatorVersion` now stored for every blocked signal (defaults to `v5` if alert omits it). Older rows have `NULL` hereβ€”only new entries track v8/v9/v10 so quality vs indicator comparisons work going forward. - Block tracking: `blockReason` (QUALITY_SCORE_TOO_LOW, COOLDOWN_PERIOD, HOURLY_TRADE_LIMIT, etc.), `blockDetails` - Future analysis: `priceAfter1/5/15/30Min`, `wouldHitTP1/TP2/SL`, `analysisComplete` - Automatically saved by check-risk endpoint when signals are blocked - Enables data-driven optimization: collect 10-20 blocked signals β†’ analyze patterns β†’ adjust thresholds **Per-symbol functions:** - `getLastTradeTimeForSymbol(symbol)` - Get last trade time for specific coin (enables per-symbol cooldown) - Each coin (SOL/ETH/BTC) has independent cooldown timer to avoid missed opportunities ## ATR-Based Risk Management (Nov 17, 2025) **Purpose:** Regime-agnostic TP/SL system that adapts to market volatility automatically instead of using fixed percentages that work in one market regime but fail in another. **Core Concept:** ATR (Average True Range) measures actual market volatility - when volatility increases (trending markets), targets expand proportionally. When volatility decreases (choppy markets), targets tighten. This solves the "bull/bear optimization bias" problem where fixed % targets optimized in bearish markets underperform in bullish conditions. 
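To make the regime-adaptation concrete, here is a minimal TypeScript sketch (not the production implementation) that applies the documented TP1 rule (2Γ— ATR, clamped to 0.5-1.5%) to two ATR readings at a $140 entry. The 0.43 reading is the SOL median used throughout this section; the 0.90 reading is a hypothetical high-volatility value included only to show how the target widens.

```typescript
// Sketch only: TP1 percent and LONG price target derived from ATR,
// using the 2x multiplier and 0.5-1.5% clamp defined in the formula below.
function tp1Target(atrValue: number, entryPrice: number) {
  const atrPercent = (atrValue / entryPrice) * 100                       // ATR as % of price
  const targetPercent = Math.max(0.5, Math.min(1.5, atrPercent * 2.0))   // 2x ATR, clamped
  return { targetPercent, longTp1Price: entryPrice * (1 + targetPercent / 100) }
}

console.log(tp1Target(0.43, 140)) // calm market:     ~0.61% -> TP1 β‰ˆ $140.86
console.log(tp1Target(0.90, 140)) // volatile market: ~1.29% -> TP1 β‰ˆ $141.80 (hypothetical ATR)
```

The same mechanics apply to TP2 and SL with their own multipliers and bounds, as defined in the formula below.
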
**Calculation Formula:** ```typescript function calculatePercentFromAtr( atrValue: number, // Absolute ATR value (e.g., 0.43 for SOL) entryPrice: number, // Position entry price (e.g., $140) multiplier: number, // ATR multiplier (2.0, 4.0, 3.0) minPercent: number, // Safety floor (e.g., 0.5%) maxPercent: number // Safety ceiling (e.g., 1.5%) ): number { // Convert absolute ATR to percentage of price const atrPercent = (atrValue / entryPrice) * 100 // Apply multiplier (TP1=2x, TP2=4x, SL=3x) const targetPercent = atrPercent * multiplier // Clamp between min/max bounds for safety return Math.max(minPercent, Math.min(maxPercent, targetPercent)) } ``` **Example Calculation (SOL at $140 with ATR 0.43):** ```typescript // ATR as percentage: 0.43 / 140 = 0.00307 = 0.307% // TP1 (close 60%): // 0.307% Γ— 2.0 = 0.614% β†’ clamped to [0.5%, 1.5%] = 0.614% // Price target: $140 Γ— 1.00614 = $140.86 // TP2 (activate trailing): // 0.307% Γ— 4.0 = 1.228% β†’ clamped to [1.0%, 3.0%] = 1.228% // Price target: $140 Γ— 1.01228 = $141.72 // SL (emergency exit): // 0.307% Γ— 3.0 = 0.921% β†’ clamped to [0.8%, 2.0%] = 0.921% // Price target: $140 Γ— 0.99079 = $138.71 ``` **Configuration (ENV variables):** ```bash # Enable ATR-based system USE_ATR_BASED_TARGETS=true # ATR multipliers (tuned for SOL volatility) ATR_MULTIPLIER_TP1=2.0 # TP1: 2Γ— ATR (first target) ATR_MULTIPLIER_TP2=4.0 # TP2: 4Γ— ATR (trailing stop activation) ATR_MULTIPLIER_SL=3.0 # SL: 3Γ— ATR (stop loss) # Safety bounds (prevent extreme targets) MIN_TP1_PERCENT=0.5 # Don't go below 0.5% for TP1 MAX_TP1_PERCENT=1.5 # Don't go above 1.5% for TP1 MIN_TP2_PERCENT=1.0 # Don't go below 1.0% for TP2 MAX_TP2_PERCENT=3.0 # Don't go above 3.0% for TP2 MIN_SL_PERCENT=0.8 # Don't go below 0.8% for SL MAX_SL_PERCENT=2.0 # Don't go above 2.0% for SL # Legacy fallback (used when ATR unavailable) STOP_LOSS_PERCENT=-1.5 TAKE_PROFIT_1_PERCENT=0.8 TAKE_PROFIT_2_PERCENT=0.7 ``` **Data-Driven ATR Values:** - **SOL-PERP:** Median ATR 0.43 (from 162 trades, Nov 2024-Nov 2025) - Range: 0.0-1.17 (extreme outliers during high volatility) - Typical: 0.32%-0.40% of price - Used in Telegram manual trade presets - **ETH-PERP:** TBD (collect 50+ trades with ATR tracking) - **BTC-PERP:** TBD (collect 50+ trades with ATR tracking) **When ATR is Available:** - TradingView signals include `atr` field in webhook payload - Execute endpoint calculates dynamic TP/SL using ATR Γ— multipliers - Logs show: `πŸ“Š ATR-based targets: TP1 0.86%, TP2 1.72%, SL 1.29%` - Database saves `atrAtEntry` for post-trade analysis **When ATR is NOT Available:** - Falls back to fixed percentages from ENV (STOP_LOSS_PERCENT, etc.) - Logs show: `⚠️ No ATR data, using fixed percentages` - Less optimal but still functional **Regime-Agnostic Benefits:** 1. **Bull markets:** Higher volatility β†’ ATR increases β†’ targets expand automatically 2. **Bear markets:** Lower volatility β†’ ATR decreases β†’ targets tighten automatically 3. **Asset-agnostic:** SOL volatility β‰  BTC volatility, ATR adapts to each 4. 
**No re-optimization needed:** System adapts in real-time without manual tuning **Performance Analysis (Nov 17, 2025):** - **Old fixed targets:** v6 shorts captured 3% of avg +20.74% MFE moves (TP2 at +0.7%) - **New ATR targets:** TP2 at ~1.72% + 40% runner with trailing stop - **Expected improvement:** Capture 8-10% of move (3Γ— better than fixed targets) - **Real-world validation:** Awaiting 50+ trades with ATR-based exits for statistical confirmation **Code Locations:** - `config/trading.ts` - ATR multiplier fields in TradingConfig interface - `app/api/trading/execute/route.ts` - calculatePercentFromAtr() function - `telegram_command_bot.py` - MANUAL_METRICS with ATR 0.43 - `.env` - ATR_MULTIPLIER_* and MIN/MAX_*_PERCENT variables **Integration with TradingView:** Ensure alerts include ATR field: ```json { "symbol": "{{ticker}}", "direction": "{{strategy.order.action}}", "atr": {{ta.atr(14)}}, // CRITICAL: Include 14-period ATR "adx": {{ta.dmi(14, 14)}}, "rsi": {{ta.rsi(14)}}, // ... other fields } ``` **Lesson Learned (Nov 17, 2025):** Optimizing fixed % targets in one market regime (bearish Nov 2024) creates bias that fails when market shifts (bullish Dec 2024+). ATR-based targets eliminate this bias by adapting to actual volatility, not historical patterns. This is the correct long-term solution for regime-agnostic trading. ## Configuration System **Three-layer merge:** 1. `DEFAULT_TRADING_CONFIG` (config/trading.ts) 2. Environment variables (.env) via `getConfigFromEnv()` 3. Runtime overrides via `getMergedConfig(overrides)` **Always use:** `getMergedConfig()` to get final config - never read env vars directly in business logic **Per-symbol position sizing:** Use `getPositionSizeForSymbol(symbol, config)` which returns `{ size, leverage, enabled }` ```typescript const { size, leverage, enabled } = getPositionSizeForSymbol('SOL-PERP', config) if (!enabled) { return NextResponse.json({ success: false, error: 'Symbol trading disabled' }, { status: 400 }) } ``` **Symbol normalization:** TradingView sends "SOLUSDT" β†’ must convert to "SOL-PERP" for Drift ```typescript const driftSymbol = normalizeTradingViewSymbol(body.symbol) ``` **Adaptive Leverage Configuration:** - **Helper function:** `getLeverageForQualityScore(qualityScore, config)` returns leverage tier based on quality - **Quality threshold:** Configured via `QUALITY_LEVERAGE_THRESHOLD` (default: 95) - **Leverage tiers:** HIGH_QUALITY_LEVERAGE (default: 15x), LOW_QUALITY_LEVERAGE (default: 10x) - **Integration:** Pass `qualityScore` parameter to `getActualPositionSizeForSymbol(symbol, config, qualityScore?)` - **Flow:** Quality score β†’ getLeverageForQualityScore() β†’ returns 15x or 10x β†’ applied to position sizing - **Logging:** System logs adaptive leverage decisions for monitoring and validation ```typescript // Example usage in execute endpoint const qualityResult = scoreSignalQuality({ atr, adx, rsi, volumeRatio, pricePosition, timeframe }) const { size, leverage } = getActualPositionSizeForSymbol(driftSymbol, config, qualityResult.score) // leverage is now 15x for quality β‰₯95, or 10x for quality 90-94 ``` ## API Endpoints Architecture **Authentication:** All `/api/trading/*` endpoints (except `/test`) require `Authorization: Bearer API_SECRET_KEY` **Pattern:** Each endpoint follows same flow: 1. Auth check 2. Get config via `getMergedConfig()` 3. Initialize Drift service 4. Check account health 5. Execute operation 6. Save to database 7. 
Add to Position Manager if applicable **Key endpoints:** - `/api/trading/execute` - Main entry point from n8n (production, requires auth), **auto-caches market data** - `/api/trading/check-risk` - Pre-execution validation (duplicate check, quality score β‰₯91, **per-symbol cooldown**, rate limits, **symbol enabled check**, **saves blocked signals automatically**) - `/api/trading/test` - Test trades from settings UI (no auth required, **respects symbol enable/disable**) - `/api/trading/close` - Manual position closing (requires symbol normalization) - `/api/trading/sync-positions` - **Force Position Manager sync with Drift** (POST, requires auth) - restores tracking for orphaned positions - `/api/trading/cancel-orders` - **Manual order cleanup** (for stuck/ghost orders after rate limit failures) - `/api/trading/positions` - Query open positions from Drift - `/api/trading/market-data` - Webhook for TradingView market data updates (GET for debug, POST for data) - `/api/drift/account-health` - **GET account metrics** (Dec 1, 2025) - Returns { totalCollateral, freeCollateral, totalLiability, marginRatio } from Drift Protocol for real-time UI display - `/api/settings` - Get/update config (writes to .env file, **includes per-symbol settings** and **direction-specific leverage thresholds**) - `/api/analytics/last-trade` - Fetch most recent trade details for dashboard (includes quality score) - `/api/analytics/reentry-check` - **Validate manual re-entry** with fresh TradingView data + recent performance - `/api/analytics/version-comparison` - Compare performance across signal quality logic versions (v1/v2/v3/v4) - `/api/restart` - Create restart flag for watch-restart.sh script ## Critical Workflows ### Execute Trade (Production) ``` TradingView alert β†’ n8n Parse Signal Enhanced (extracts metrics + timeframe + MA crossover flags) ↓ /api/trading/check-risk [validates quality score β‰₯81, checks duplicates, per-symbol cooldown] ↓ /api/trading/execute ↓ normalize symbol (SOLUSDT β†’ SOL-PERP) ↓ getMergedConfig() ↓ scoreSignalQuality({ ..., timeframe }) [CRITICAL: calculate EARLY for ALL timeframes - line 112, Nov 26] ↓ IF timeframe !== '5': Save to BlockedSignal with quality scores β†’ return success ↓ IF timeframe === '5': Continue to execution (production trade) ↓ getPositionSizeForSymbol(qualityScore) [adaptive leverage based on quality score] ↓ openPosition() [MARKET order with adaptive leverage] ↓ calculate dual stop prices if enabled ↓ placeExitOrders() [on-chain TP1/TP2/SL orders] ↓ createTrade() [CRITICAL: save to database FIRST - see Common Pitfall #27] ↓ positionManager.addTrade() [ONLY after DB save succeeds - prevents unprotected positions] ``` **n8n Parse Signal Enhanced Workflow (Nov 27, 2025 - Updated Dec 7, 2025):** - **File:** `workflows/trading/parse_signal_enhanced.json` - **CRITICAL: Symbol Normalization Happens HERE (Dec 7, 2025 discovery):** - TradingView sends raw symbol (SOLUSDT, FARTCOIN, etc.) 
- n8n extracts symbol from message body and normalizes to Drift format (*-PERP) - Bot receives ALREADY NORMALIZED symbols (SOL-PERP, FARTCOIN-PERP) - **Bot normalization code is NOT used** - n8n does it first - **To add new symbols:** Update n8n workflow regex + mapping logic, then import to n8n - **Symbol Extraction Regex (Dec 7, 2025):**
```javascript
const symbolMatch = body.match(/\b(FARTCOIN|FART|SOL|BTC|ETH)\b/i);
// CRITICAL: FARTCOIN checked BEFORE SOL (substring match issue)
const matched = symbolMatch ? symbolMatch[1].toUpperCase() : null;
let symbol = null;
if (matched === 'FARTCOIN' || matched === 'FART') {
  symbol = 'FARTCOIN-PERP';
} else if (matched) {
  symbol = matched + '-PERP'; // SOL β†’ SOL-PERP, BTC β†’ BTC-PERP, etc.
}
```
- **Extracts from TradingView alerts:** - Standard metrics: symbol, direction, timeframe, ATR, ADX, RSI, VOL, POS, MAGAP, signalPrice, indicatorVersion - **MA Crossover Detection (NEW):** `isMACrossover`, `isDeathCross`, `isGoldenCross` flags - **Detection logic:** Searches for "crossing" keyword (case-insensitive) in alert message - `isMACrossover = true` if "crossing" found - `isDeathCross = true` if MA50 crossing below MA200 (short/sell direction) - `isGoldenCross = true` if MA50 crossing above MA200 (long/buy direction) - **Purpose:** Enables data collection for MA crossover pattern validation (ADX weakβ†’strong hypothesis) - **TradingView Alert Setup:** "MA50&200 Crossing" condition, once per bar close, 5-minute chart - **Goal:** Collect 5-10 crossover examples to validate v9's early detection pattern (signals 35 min before actual cross) **CRITICAL EXECUTION ORDER (Nov 26, 2025 - Multi-Timeframe Quality Scoring):** Quality scoring MUST happen BEFORE timeframe filtering - this is NOT arbitrary: - All timeframes (5min, 15min, 1H, 4H, Daily) need real quality scores for analysis - Data collection signals (15min+) save to BlockedSignal with full quality metadata - Enables SQL queries: `WHERE blockReason = 'DATA_COLLECTION_ONLY' AND signalQualityScore >= X` - Purpose: Compare quality-filtered win rates across timeframes to determine optimal trading interval - Old flow: Timeframe check β†’ Quality score only for 5min β†’ Data collection signals get hardcoded 0 - **New flow:** Quality score ALL signals β†’ Timeframe routing β†’ Data collection gets real scores **CRITICAL EXECUTION ORDER (Nov 24, 2025 - Adaptive Leverage):** The order of quality scoring β†’ position sizing is NOT arbitrary - it's a requirement: - Quality score MUST be calculated BEFORE position sizing - Adaptive leverage depends on quality score value - Old flow: Open position β†’ Calculate quality β†’ Save to DB (quality used for records only) - New flow: Calculate quality β†’ Determine leverage β†’ Open position with adaptive size - **Never calculate quality after position opening** - leverage must be determined first **CRITICAL EXECUTION ORDER (Nov 13, 2025 Fix):** The order of database save β†’ Position Manager add is NOT arbitrary - it's a safety requirement: - If database save fails, API returns HTTP 500 with critical warning - User sees: "CLOSE POSITION MANUALLY IMMEDIATELY" with transaction signature - Position Manager only tracks database-persisted trades - Container restarts can restore all positions from database - **Never add to Position Manager before database save** - creates unprotected positions ### Position Monitoring Loop ``` Position Manager every 2s: ↓ Verify on-chain position still exists (detect external closures) ↓ getPythPriceMonitor().getLatestPrice() ↓ Calculate current P&L and update MAE/MFE metrics ↓ Check emergency stop (-2%) β†’ closePosition(100%) ↓ 
Check SL hit β†’ closePosition(100%) ↓ Check TP1 hit β†’ closePosition(75%), cancelAllOrders(), placeExitOrders() with SL at breakeven ↓ Check profit lock trigger (+1.2%) β†’ move SL to +configured% ↓ Check TP2 hit β†’ closePosition(80% of remaining), activate runner ↓ Check trailing stop (if runner active) β†’ adjust SL dynamically based on peakPrice ↓ addPriceUpdate() [save to database every N checks] ↓ saveTradeState() [persist Position Manager state + MAE/MFE for crash recovery] ``` ### Settings Update ``` Web UI β†’ /api/settings POST ↓ Validate new settings ↓ Write to .env file using string replacement ↓ Return success ↓ User clicks "Restart Bot" β†’ /api/restart ↓ Creates /tmp/trading-bot-restart.flag ↓ watch-restart.sh detects flag ↓ Executes: docker restart trading-bot-v4 ``` ## Docker Context **Multi-stage build:** deps β†’ builder β†’ runner (Node 20 Alpine) **Critical Dockerfile steps:** 1. Install deps with `npm install --production` 2. Copy source and `npx prisma generate` (MUST happen before build) 3. `npm run build` (Next.js standalone output) 4. Runner stage copies standalone + static + node_modules + Prisma client **Container networking:** - External: `trading-bot-v4` on port 3001 - Internal: Next.js on port 3000 - Database: `trading-bot-postgres` on 172.28.0.0/16 network **DATABASE_URL caveat:** Use `trading-bot-postgres` (container name) in .env for runtime, but `localhost:5432` for Prisma CLI migrations from host ## High Availability Infrastructure (Nov 25, 2025 - PRODUCTION READY) **Status:** βœ… FULLY AUTOMATED - Zero-downtime failover validated in production **Architecture Overview:** ``` Primary Server (srvdocker02) Secondary Server (Hostinger) 95.216.52.28:3001 72.62.39.24:3001 β”œβ”€β”€ trading-bot-v4 (Docker) β”œβ”€β”€ trading-bot-v4-secondary (Docker) β”œβ”€β”€ trading-bot-postgres β”œβ”€β”€ trading-bot-postgres (replica) β”œβ”€β”€ nginx (HTTPS/SSL) β”œβ”€β”€ nginx (HTTPS/SSL) └── Source: Active deployment └── Source: Standby (real-time sync) ↓ DNS: tradervone.v4.dedyn.io (INWX automatic failover) ↓ Monitoring: dns-failover.service (systemd service on secondary) ``` **Key Components:** 1. **Database Replication (PostgreSQL Streaming)** - Type: Asynchronous streaming replication - Lag: <1 second typical - Config: `/home/icke/traderv4/docs/DEPLOY_SECONDARY_MANUAL.md` - Verify: `ssh root@72.62.39.24 'docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "SELECT state, write_lag FROM pg_stat_replication;"'` 2. **DNS Failover Monitor (Automated)** - Service: `/etc/systemd/system/dns-failover.service` - Script: `/usr/local/bin/dns-failover-monitor.py` - Check interval: 30 seconds - Failure threshold: 3 consecutive failures (90 seconds total) - Health endpoint: `http://95.216.52.28:3001/api/health` (must return valid JSON) - Logs: `/var/log/dns-failover.log` - Status: `ssh root@72.62.39.24 'systemctl status dns-failover'` 3. **Automatic Failover Sequence:** ``` Primary Failure Detected (3 Γ— 30s checks = 90s) ↓ DNS Update via INWX API (<1 second) tradervone.v4.dedyn.io: 95.216.52.28 β†’ 72.62.39.24 ↓ Secondary Takes Over (0s downtime) TradingView webhooks β†’ Secondary bot ↓ Primary Recovery Detected ↓ Automatic Failback (<1 second) tradervone.v4.dedyn.io: 72.62.39.24 β†’ 95.216.52.28 ``` 4. 
**Live Test Results (Nov 25, 2025 21:53-22:00 CET):** - **Detection Time:** 90 seconds (3 Γ— 30s health checks) - **Failover Execution:** <1 second (DNS update) - **Service Downtime:** 0 seconds (seamless takeover) - **Failback:** Automatic and immediate when primary recovered - **Total Cycle:** ~7 minutes from failure to full restoration - **Result:** βœ… Zero downtime, zero duplicate trades, zero data loss **Critical Operational Notes:** - **Primary Health Check Firewall:** pfSense rule allows Hostinger (72.62.39.24) β†’ srvdocker02:3001 for health checks - **Both Bots on Port 3001:** Reverse proxies handle HTTPS, internal port standardized for consistency - **Health Endpoint Requirements:** Must return valid JSON (not HTML 404). Monitor uses JSON validation to detect failures. - **Manual Failover (Emergency):** `ssh root@72.62.39.24 'python3 /usr/local/bin/manual-dns-switch.py secondary'` - **Update Secondary Bot:** ```bash rsync -avz --exclude 'node_modules' --exclude '.next' --exclude 'logs' \ /home/icke/traderv4/ root@72.62.39.24:/root/traderv4-secondary/ ssh root@72.62.39.24 'cd /root/traderv4-secondary && docker compose build trading-bot && docker compose up -d --force-recreate trading-bot' ``` **Documentation References:** - **Deployment Guide:** `docs/DEPLOY_SECONDARY_MANUAL.md` (689 lines) - **Roadmap:** `HA_SETUP_ROADMAP.md` (all phases complete) - **Git Commits:** - `99dc736` - Deployment guide with test results - `62c7b70` - Roadmap completion documentation **Why This Matters:** - **Financial Protection:** Trading bot stays online 24/7 even if primary server fails - **Zero Downtime:** Automatic failover ensures no missed trading signals - **Data Integrity:** Database replication prevents trade history loss - **Peace of Mind:** System handles failures autonomously while user sleeps - **Cost:** ~$20-30/month for enterprise-grade 99.9%+ uptime **When Making Changes:** - **Code Deployments:** Deploy to primary first, test, then rsync to secondary - **Database Migrations:** Run on primary only (replicates automatically) - **Container Restarts:** Primary can be restarted safely, failover protection active - **Testing:** Use `docker stop trading-bot-v4` on primary to test failover (verified working) - **Monitor Logs:** `ssh root@72.62.39.24 'tail -f /var/log/dns-failover.log'` to watch health checks ## Project-Specific Patterns ### 1. Singleton Services Never create multiple instances - always use getter functions: ```typescript const driftService = await initializeDriftService() // NOT: new DriftService() const positionManager = getPositionManager() // NOT: new PositionManager() const prisma = getPrismaClient() // NOT: new PrismaClient() ``` ### 2. Price Calculations Direction matters for long vs short: ```typescript function calculatePrice(entry: number, percent: number, direction: 'long' | 'short') { if (direction === 'long') { return entry * (1 + percent / 100) // Long: +1% = higher price } else { return entry * (1 - percent / 100) // Short: +1% = lower price } } ``` ### 3. Error Handling Database failures should not fail trades - always wrap in try/catch: ```typescript try { await createTrade(params) console.log('πŸ’Ύ Trade saved to database') } catch (dbError) { console.error('❌ Failed to save trade:', dbError) // Don't fail the trade if database save fails } ``` ### 4. Reduce-Only Orders All exit orders MUST be reduce-only (can only close, not open positions): ```typescript const orderParams = { reduceOnly: true, // CRITICAL for TP/SL orders // ... other params } ``` ### 5. 
Nextcloud Deck Roadmap Sync **Purpose:** Visual kanban board for tracking optimization roadmap progress **Key Components:** - `scripts/discover-deck-ids.sh` - Find Nextcloud Deck board/stack IDs - `scripts/sync-roadmap-to-deck.py` - Sync roadmap files to Deck cards - `docs/NEXTCLOUD_DECK_SYNC.md` - Complete documentation **Workflow:** ```bash # One-time setup (already done) bash scripts/discover-deck-ids.sh # Creates /tmp/deck-config.json # Sync roadmap to Deck (creates/updates cards) python3 scripts/sync-roadmap-to-deck.py --init # Always dry-run first to preview changes python3 scripts/sync-roadmap-to-deck.py --init --dry-run ``` **Stack Mapping:** - πŸ“₯ **Backlog:** Future phases, ideas, ML work (status: FUTURE) - πŸ“‹ **Planning:** Next phases, ready to implement (status: PENDING, NEXT) - πŸš€ **In Progress:** Currently active work (status: CURRENT, IN PROGRESS, DEPLOYED) - βœ… **Complete:** Finished phases (status: COMPLETE) **Card Structure:** - 3 high-level initiative cards (from `OPTIMIZATION_MASTER_ROADMAP.md`) - 18 detailed phase cards (from individual roadmap files) - Total: 21 cards tracking all optimization work **When to Sync:** - After completing a phase (update markdown status β†’ re-sync) - When starting new phase (move card in Deck UI) - Weekly during active development to keep visual state current **Important Notes:** - API doesn't support duplicate detection - always use `--dry-run` first - Manual card deletion required (API returns 405 on DELETE) - Code blocks auto-removed from descriptions (prevent API errors) - Card titles cleaned (no markdown, emojis removed for readability) ## Testing Commands ```bash # Local development npm run dev # Build production npm run build && npm start # Docker build and restart docker compose build trading-bot docker compose up -d --force-recreate trading-bot docker logs -f trading-bot-v4 # Database operations npx prisma generate # Generate client DATABASE_URL="postgresql://...@localhost:5432/..." npx prisma migrate dev docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "\dt" # Test trade from UI # Go to http://localhost:3001/settings # Click "Test LONG" or "Test SHORT" ``` ## SQL Analysis Queries Essential queries for monitoring signal quality and blocked signals. 
Run via: ```bash docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "YOUR_QUERY" ``` ### Phase 1: Monitor Data Collection Progress ```sql -- Check blocked signals count (target: 10-20 for Phase 2) SELECT COUNT(*) as total_blocked FROM "BlockedSignal"; -- Score distribution of blocked signals SELECT CASE WHEN signalQualityScore >= 60 THEN '60-64 (Close Call)' WHEN signalQualityScore >= 55 THEN '55-59 (Marginal)' WHEN signalQualityScore >= 50 THEN '50-54 (Weak)' ELSE '0-49 (Very Weak)' END as tier, COUNT(*) as count, ROUND(AVG(signalQualityScore)::numeric, 1) as avg_score FROM "BlockedSignal" WHERE blockReason = 'QUALITY_SCORE_TOO_LOW' GROUP BY tier ORDER BY MIN(signalQualityScore) DESC; -- Recent blocked signals with full details SELECT symbol, direction, signalQualityScore as score, ROUND(adx::numeric, 1) as adx, ROUND(atr::numeric, 2) as atr, ROUND(pricePosition::numeric, 1) as pos, ROUND(volumeRatio::numeric, 2) as vol, blockReason, TO_CHAR(createdAt, 'MM-DD HH24:MI') as time FROM "BlockedSignal" ORDER BY createdAt DESC LIMIT 10; ``` ### Phase 2: Compare Blocked vs Executed Trades ```sql -- Compare executed trades in 60-69 score range SELECT signalQualityScore as score, COUNT(*) as trades, ROUND(AVG(realizedPnL)::numeric, 2) as avg_pnl, ROUND(SUM(realizedPnL)::numeric, 2) as total_pnl, ROUND(100.0 * SUM(CASE WHEN realizedPnL > 0 THEN 1 ELSE 0 END) / COUNT(*)::numeric, 1) as win_rate FROM "Trade" WHERE exitReason IS NOT NULL AND signalQualityScore BETWEEN 60 AND 69 GROUP BY signalQualityScore ORDER BY signalQualityScore; -- Block reason breakdown SELECT blockReason, COUNT(*) as count, ROUND(AVG(signalQualityScore)::numeric, 1) as avg_score FROM "BlockedSignal" GROUP BY blockReason ORDER BY count DESC; ``` ### Analyze Specific Patterns ```sql -- Blocked signals at range extremes (price position) SELECT direction, signalQualityScore as score, ROUND(pricePosition::numeric, 1) as pos, ROUND(adx::numeric, 1) as adx, ROUND(volumeRatio::numeric, 2) as vol, symbol, TO_CHAR(createdAt, 'MM-DD HH24:MI') as time FROM "BlockedSignal" WHERE blockReason = 'QUALITY_SCORE_TOO_LOW' AND (pricePosition < 10 OR pricePosition > 90) ORDER BY signalQualityScore DESC; -- ADX distribution in blocked signals SELECT CASE WHEN adx >= 25 THEN 'Strong (25+)' WHEN adx >= 20 THEN 'Moderate (20-25)' WHEN adx >= 15 THEN 'Weak (15-20)' ELSE 'Very Weak (<15)' END as adx_tier, COUNT(*) as count, ROUND(AVG(signalQualityScore)::numeric, 1) as avg_score FROM "BlockedSignal" WHERE blockReason = 'QUALITY_SCORE_TOO_LOW' AND adx IS NOT NULL GROUP BY adx_tier ORDER BY MIN(adx) DESC; ``` **Usage Pattern:** 1. Run "Monitor Data Collection" queries weekly during Phase 1 2. Once 10+ blocked signals collected, run "Compare Blocked vs Executed" queries 3. Use "Analyze Specific Patterns" to identify optimization opportunities 4. Full query reference: `BLOCKED_SIGNALS_TRACKING.md` ## Common Pitfalls **⚠️ CRITICAL REFERENCE: See `docs/COMMON_PITFALLS.md` for complete list (73 documented issues)** This section contains the **TOP 10 MOST CRITICAL** pitfalls that every AI agent must know. For full details, category breakdowns, code examples, and historical context, see the complete documentation. ### πŸ”΄ TOP 10 CRITICAL PITFALLS **1. 
Position Manager Monitoring Stops Randomly (#73 - CRITICAL - Dec 7, 2025)** - **Symptom:** PM last update at 23:21 Dec 6, stopped for 90+ minutes, user forced to manually close - **Root Cause:** Drift state propagation delay (5+ min) β†’ 60s timeout expires β†’ false "external closure" detection β†’ `activeTrades.delete()` β†’ monitoring stops - **Financial Impact:** Real losses during unmonitored period - **THE FIX (3 Safety Layers - DEPLOYED Dec 7, 2025):** * **Layer 1:** Extended timeout from 60s β†’ 5 minutes (allows Drift state to propagate) * **Layer 2:** Double-check with 10s delay before processing external closure (catches false positives) * **Layer 3:** Verify Drift has no positions before calling stopMonitoring() (fail-safe) - **Code Locations:** * Layer 1: `lib/trading/position-manager.ts` line ~792 (timeout extension) * Layer 2: `lib/trading/position-manager.ts` line ~603 (double-check logic) * Layer 3: `lib/trading/position-manager.ts` line ~1069 (Drift verification) - **Expected Impact:** Zero unprotected positions, false positive detection eliminated - **Status:** βœ… DEPLOYED Dec 7, 2025 02:47 UTC (commit ed9e4d5) - **See:** `docs/PM_MONITORING_STOP_ROOT_CAUSE_DEC7_2025.md` for complete analysis **2. Drift SDK Memory Leak (#1)** - JavaScript heap OOM after 10+ hours - **Solution:** Smart error-based health monitoring (`lib/monitoring/drift-health-monitor.ts`) - **Detection:** `interceptWebSocketErrors()` patches console.error - **Action:** Restarts if 50+ errors in 30-second window - **Status:** Fixed Nov 15, 2025, Enhanced Nov 24, 2025 **3. Wrong RPC Provider (#2)** - Alchemy breaks Drift SDK subscriptions - **FINAL CONCLUSION:** Use Helius RPC, NEVER use Alchemy - **Root Cause:** Alchemy rate limits break Drift's burst subscription pattern - **Evidence:** 17-71 subscription errors with Alchemy vs 0 with Helius - **Status:** Investigation Complete Nov 14, 2025 **4. P&L Compounding Race Condition (#48, #49, #59, #60, #61, #67)** - **Pattern:** Multiple monitoring loops detect same closure β†’ each adds P&L - **Result:** $6 real β†’ $92 recorded (15x inflation) - **Fix:** Use `Map.delete()` atomic return as deduplication lock (Dec 2, 2025) - **Code:** `if (!this.activeTrades.delete(tradeId)) return` - first caller wins **5. Database-First Pattern (#29)** - Save DB before Position Manager - **Rule:** `createTrade()` MUST succeed before `positionManager.addTrade()` - **Why:** If DB fails, API returns 500 with "CLOSE POSITION MANUALLY" - **Impact:** Without this, positions become untracked on container restart - **Status:** Fixed Nov 13, 2025 **6. Container Deployment Verification (#31)** - **Rule:** NEVER say "fixed" without checking container timestamp - **Verification:** `docker logs trading-bot-v4 | grep "Server starting"` vs `git log -1 --format='%ai'` - **If container older than commit:** CODE NOT DEPLOYED, FIX NOT ACTIVE - **Status:** Critical lesson from Nov 13, 2025 incident **7. Position.size Tokens vs USD (#24)** - SDK returns tokens, not USD - **Bug:** Comparing 12.28 tokens to $1,950 β†’ "99.4% reduction" β†’ false TP1 - **Fix:** `positionSizeUSD = Math.abs(position.size) * currentPrice` - **Impact:** Without fix, TP1 never triggers correctly - **Status:** Fixed Nov 12, 2025 **8. 
Ghost Detection Atomic Lock (#67)** - Map.delete() as deduplication - **Pattern:** Async handlers called by multiple code paths simultaneously - **Solution:** `if (!this.activeTrades.delete(tradeId)) { return }` - atomic lock - **Why:** JavaScript Map.delete() returns true only for first caller - **Status:** Fixed Dec 2, 2025 **9. Wrong Price in usdToBase() (#81 - ROOT CAUSE - Dec 10, 2025)** - CAUSED $1,000+ LOSSES - **Bug:** Changed from `usdToBase(usd, price)` to `usdToBase(usd)` using entryPrice for ALL orders - **Impact:** Wrong token quantities = Drift rejects orders = NULL database signatures = NO risk management - **Example:** $8,000 at TP1 $141.20 needs 56.66 SOL, but code calculated 57.14 SOL (used $140 entry price) - **Fix:** Reverted to original: `usdToBase(tp1USD, options.tp1Price)` - use SPECIFIC price for each order - **Original commit:** 4cc294b (Oct 26, 2025) - "All 3 exit orders placed successfully on-chain" (100% success) - **Status:** βœ… FIXED Dec 10, 2025 14:31 CET (commit 55d780c) **10. Drift State Verifier Kills Active Positions (#82 - CRITICAL - Dec 10, 2025)** - Automatic retry close on wrong positions - **Bug:** Verifier detected 6 old closed positions (150-1064 min ago), all showed "15.45 tokens" (user's CURRENT trade!), automatically called closePosition() - **Impact:** User's manual trade HAD working SL, then Telegram alert "⚠️ Retry close attempted automatically", SL orders immediately disappeared - **Root Cause:** Lines 279-283 call closePosition() for every mismatch, no verification if Drift position is OLD (should close) vs NEW (active trade) - **Evidence:** All 6 "mismatches" identical drift size = ONE position (user's current manual trade), DB exit times 2-17 hours old - **Emergency Fix:** DISABLED automatic retry close (lines 276-298), added warning logs, requires manual orphan cleanup - **Why Bug #81 Didn't Fix This:** Bug #81 = orders never placed, Bug #82 = orders placed then REMOVED by verifier - **Status:** βœ… EMERGENCY FIX DEPLOYED Dec 10, 2025 11:06 CET (commit e5714e4) --- **REMOVED FROM TOP 10 (Still documented in full section):** **Smart Entry Wrong Price (#66, #68)** - Use Pyth price, not webhook - **Bug #66:** Symbol format mismatch ("SOLUSDT" vs "SOL-PERP") caused cache miss - **Bug #68:** Webhook `signal.price` contained percentage (70.80) not market price ($142) - **Fix:** Always use `pythClient.getPrice(symbol)` for calculations - **Status:** Fixed Dec 1-3, 2025 --- ### Quick Links by Category **P&L Calculation Errors:** #11, #41, #48, #49, #54, #57, #61 **Race Conditions:** #27, #28, #59, #60, #67 **SDK/API Issues:** #1, #2, #12, #24, #36, #45, #81 **Database Operations:** #29, #35, #37, #50, #58 **Configuration:** #55, #62 **Smart Entry:** #63, #66, #68, #70 **Deployment:** #31, #47 πŸ“š **Full Documentation:** `docs/COMMON_PITFALLS.md` (73 pitfalls with code examples, git commits, deployment dates) 75. **CRITICAL: Wrong Year in SQL Queries - ALWAYS Use Current Year (CRITICAL - Dec 8, 2025):** - **Symptom:** Query returns 247 rows spanning months when expecting 5-6 recent trades - **Root Cause:** Database stores timestamps in 2024 format, AI agent queried '2024-12-07' instead of '2025-12-07' - **Impact:** Reported -$1,616 total loss when actual recent loss was -$137.55 (12Γ— inflation) - **User Dispute:** "THE LOSS WAS NOT 63$ but 120,89$. where do you get those numbers from??" - **Root Cause Analysis:** * Database exitTime field contains dates like '2024-12-02', '2024-12-05', etc. 
* AI agent wrote query: `WHERE "exitTime" >= '2024-12-07 00:00:00'` * This matched ALL trades from Oct 2024 onwards (247 rows) * Should have written: `WHERE "exitTime" >= '2025-12-07 00:00:00'` * Current date is Dec 8, **2025**, not 2024 - **MANDATORY SQL Pattern - ALWAYS Check Year:** ```sql -- WRONG: Hardcoded 2024 when current year is 2025 SELECT * FROM "Trade" WHERE "exitTime" >= '2024-12-07 00:00:00'; -- CORRECT: Use current year 2025 SELECT * FROM "Trade" WHERE "exitTime" >= '2025-12-07 00:00:00'; -- BEST: Verify current date first SELECT NOW()::date as current_date; -- Check what year database thinks it is -- SAFEST: Use relative dates (past 3 days) SELECT * FROM "Trade" WHERE "exitTime" >= NOW() - INTERVAL '3 days'; ``` - **Verification Before Reporting Numbers:** 1. Check row count - if querying "last 3 days" returns 247 rows, year is WRONG 2. Verify date range in results: `SELECT MIN("exitTime"), MAX("exitTime") FROM ...` 3. Use `TO_CHAR("exitTime", 'YYYY-MM-DD HH24:MI')` to see full dates including year 4. Cross-reference with context: User said "emergency today" β†’ query should return TODAY's data only - **Why This Matters:** * **This is a REAL MONEY system** - wrong loss figures = incorrect financial decisions * User was EXACTLY RIGHT with $120.89 figure (actual Dec 5-8 losses) * AI agent gave wrong numbers due to year mismatch in query * Wasted user time disputing correct figures * User mandate: "drift tells the truth not you" - trust user's numbers, verify queries - **Prevention Rules:** 1. ALWAYS use `NOW()` or `CURRENT_DATE` for relative date queries 2. NEVER hardcode year without verifying current year first 3. ALWAYS check row counts before declaring results accurate 4. When user disputes numbers, re-verify query year immediately 5. Include full YYYY-MM-DD in SELECT to catch year mismatches - **Red Flags Indicating Year Mismatch:** * Query for "recent trades" returns 100+ rows * Date range spans months when expecting days * User says "that's wrong" and provides different figure * exitTime dates show 2024 but current date is 2025 - **Git commit:** [Document wrong year SQL query lesson - Dec 8, 2025] - **Status:** βœ… Documented - Future AI agents must verify year in date queries 76. **CRITICAL: Silent SL Placement Failure - placeExitOrders() Returns SUCCESS With Missing Orders (CRITICAL - Dec 8, 2025):** - **Symptom:** Position opened with TP1 and TP2 orders but NO stop loss, completely unprotected from downside - **User Report:** "when i opened the manually trade we hade a sl and tp but it was removed by the system" - **Financial Impact:** Part of $1,000+ losses - positions left open with no SL protection - **Real Incident (Dec 8, 2025 13:39:24):** * Trade: cmix773hk019gn307fjjhbikx * Symbol: SOL-PERP LONG at $138.45, size $2,003 * TP1 order EXISTS: 2QzE4q9Q... ($139.31) * TP2 order EXISTS: 5AQRiwRK... ($140.17) * SL order MISSING: NULL in database (should be $137.16) * stopLossPrice: Correctly calculated ($137.1551) and passed to placeExitOrders() * Logs: "πŸ“¨ Exit orders placed on-chain: [2 signatures]" (expected 3!) * Function returned: `{success: true, signatures: [tp1Sig, tp2Sig]}` (SL missing) - **Root Cause:** * File: `lib/drift/orders.ts` function `placeExitOrders()` (lines 252-495) * Lines 465-473: TRIGGER_MARKET SL placement code exists but never executed * No "πŸ›‘οΈ Placing SL..." 
log found in container logs * No error handling around SL placement section * Function returns SUCCESS even if signatures.length < 3 * No validation before return statement - **Why It's Silent:** * placeExitOrders() doesn't check signatures.length before returning * Execute endpoint trusts SUCCESS status without validation * No alerts, no errors, no indication of failure * Position appears protected but actually isn't - **How It Bypasses Checks:** * Size check: Position 14.47 SOL >> minOrderSize 0.1 SOL (146Γ— above threshold) * All inputs valid: stopLossPrice calculated correctly, market exists, wallet has balance * Code path exists but doesn't execute - unknown reason (rate limit? SDK bug? network?) * Function returns early or skips SL section without throwing error - **THE FIX (βœ… DEPLOYED Dec 9, 2025):** ```typescript // In lib/drift/orders.ts at end of placeExitOrders() (lines 505-520) if (signatures.length < expectedOrderCount) { const errorMsg = `MISSING EXIT ORDERS: Expected ${expectedOrderCount}, got ${signatures.length}. Position is UNPROTECTED!` console.error(`❌ ${errorMsg}`) console.error(` Expected: TP1 + TP2 + ${useDualStops ? 'Soft SL + Hard SL' : 'SL'}`) console.error(` Got ${signatures.length} signatures:`, signatures) return { success: false, error: errorMsg, signatures // Return partial signatures for debugging } } logger.log(`βœ… All ${expectedOrderCount} exit orders placed successfully`) return { success: true, signatures } ``` - **Execute Endpoint Enhancement:** * Added validation logging for missing exit orders * System will now alert immediately if SL placement fails * Returns error instead of success when orders missing - **Detection: Health Monitoring System (Dec 8, 2025):** * File: `lib/health/position-manager-health.ts` (177 lines) * Function: `checkPositionManagerHealth()` runs every 30 seconds * Check: Open positions missing SL orders β†’ CRITICAL ALERT per position * Validates: slOrderTx, softStopOrderTx, hardStopOrderTx all present * Log format: "🚨 CRITICAL: Position {id} missing SL order (symbol: {symbol}, size: ${size})" * Started automatically via `lib/startup/init-position-manager.ts` line ~78 - **Why This Matters:** * **This is a REAL MONEY system** - no SL = unlimited loss exposure * Position can drop 5%, 10%, 20% with no protection * User may be asleep, away, unavailable for hours * Silent failures are the most dangerous kind * Function says "success" but position is unprotected - **Prevention Rules:** 1. ALWAYS validate signatures.length matches expected count 2. NEVER return success without verifying all orders placed 3. ADD try/catch around ALL order placement sections 4. LOG errors explicitly, don't fail silently 5. Health monitor will detect missing orders within 30 seconds 6. Execute endpoint must validate placeExitOrders() result - **Red Flags Indicating This Bug:** * Logs show "Exit orders placed: [2 signatures]" * Database slOrderTx field is NULL * No "πŸ›‘οΈ Placing SL..." log messages * placeExitOrders() returned success: true * Position open with TP1/TP2 but no SL - **Git commit:** 63b9401 "fix: Implement critical risk management fixes for bugs #76, #77, #78, #80" (Dec 9, 2025) - **Deployment:** Dec 9, 2025 22:42 UTC (container trading-bot-v4) - **Status:** βœ… FIXED AND DEPLOYED - System will now fail loudly instead of silently 77. 
**CRITICAL: Position Manager Never Actually Monitors - Logs Say "Added" But isMonitoring Stays False (CRITICAL - Dec 8, 2025):** - **Symptom:** System logs "βœ… Trade added to position manager for monitoring" but position never monitored - **User Report:** "we have lost 1000$...... i hope with the new test system this is an issue of the past" - **Financial Impact:** $1,000+ losses because positions completely unprotected despite logs saying otherwise - **Real Incident (Dec 8, 2025):** * Trade: cmix773hk019gn307fjjhbikx created at 13:39:24 * Logs: "βœ… Trade added to position manager for monitoring" * Database: `configSnapshot.positionManagerState` = NULL (not monitoring!) * Reality: No price checks, no TP/SL monitoring, no protection whatsoever * No Pyth price monitor startup logs found * No price update logs found * No "checking conditions" logs found - **Root Cause:** * File: `lib/trading/position-manager.ts` (2027 lines) * Function: `addTrade()` (lines 257-271) - Adds to Map, calls startMonitoring() * Function: `startMonitoring()` (lines 482-518) - Calls priceMonitor.start() * Problem: startMonitoring() exists and looks correct but doesn't execute properly * No verification that monitoring actually started * No health check that isMonitoring matches activeTrades.size * Pyth price monitor never starts (no WebSocket connection logs) - **Why It's Catastrophic:** * System SAYS position is protected * User trusts the logs * Position actually has ZERO protection * No TP/SL checks, no emergency stop, no trailing stop * Position can move 10%+ with no action * Database shows NULL for positionManagerState (smoking gun) - **The Deception:** * Log message: "βœ… Trade added to position manager for monitoring" * Reality: Trade added to Map but monitoring never starts * isMonitoring flag stays false * No price monitor callbacks registered * Silent failure - no errors thrown - **Detection: Health Monitoring System (Dec 8, 2025):** * File: `lib/health/position-manager-health.ts` (177 lines) * Function: `checkPositionManagerHealth()` runs every 30 seconds * Critical Check #1: DB has open trades but PM not monitoring * Critical Check #2: PM has trades but isMonitoring = false * Critical Check #3: DB vs PM trade count mismatch * Alert format: "🚨 CRITICAL: Position Manager not monitoring! DB: {dbCount} open trades, PM: {pmCount} trades, Monitoring: {isMonitoring}" * Started automatically via `lib/startup/init-position-manager.ts` line ~78 - **Test Suite Created:** * File: `tests/integration/position-manager/monitoring-verification.test.ts` (201 lines) * Test Suite: "CRITICAL: Monitoring Actually Starts" (4 tests) - Validates startMonitoring() calls priceMonitor.start() - Validates symbols array passed correctly - Validates isMonitoring flag set to true - Validates monitoring doesn't start twice * Test Suite: "CRITICAL: Price Updates Actually Trigger Checks" (2 tests) * Test Suite: "CRITICAL: Monitoring Stops When No Trades" (2 tests) * Test Suite: "CRITICAL: Error Handling Doesnt Break Monitoring" (1 test) * Purpose: Validate Position Manager ACTUALLY monitors, not just logs "added" - **THE FIX (βœ… DEPLOYED Dec 9, 2025):** ```typescript // In lib/trading/position-manager.ts after startMonitoring() call // Added monitoring verification if (this.activeTrades.size > 0 && !this.isMonitoring) { const errorMsg = `CRITICAL: Failed to start monitoring! 
activeTrades=${this.activeTrades.size}, isMonitoring=${this.isMonitoring}` console.error(`❌ ${errorMsg}`) // Log to persistent file const { logCriticalError } = await import('../utils/persistent-logger') await logCriticalError('MONITORING_START_FAILED', { activeTradesCount: this.activeTrades.size, symbols: Array.from(this.activeTrades.values()).map(t => t.symbol) }) } ``` - **Why This Matters:** * **This is a REAL MONEY system** - no monitoring = no protection * TP/SL orders can fail, monitoring is the backup * Position Manager is the "safety net" - if it doesn't work, nothing does * User trusts logs saying "monitoring" - but it's a lie * $1,000+ losses prove this is NOT theoretical - **Prevention Rules:** 1. NEVER trust log messages about state - verify actual state 2. Health checks MUST validate isMonitoring matches activeTrades 3. Test suite MUST validate monitoring actually starts 4. Add verification after startMonitoring() calls 5. Health monitor detects failures within 30 seconds 6. If monitoring fails to start, throw error immediately - **Red Flags Indicating This Bug:** * Logs say "Trade added to position manager for monitoring" * Database configSnapshot.positionManagerState is NULL * No Pyth price monitor startup logs * No price update logs * No "checking conditions" logs * Position moves significantly with no PM action - **Git commit:** 63b9401 "fix: Implement critical risk management fixes for bugs #76, #77, #78, #80" (Dec 9, 2025) - **Deployment:** Dec 9, 2025 22:42 UTC (container trading-bot-v4) - **Status:** βœ… FIXED - System now throws error if monitoring fails to start 78. **CRITICAL: Orphan Detection Removes Active Position Orders - CancelAllOrders Affects ALL Positions On Symbol (CRITICAL - Dec 8, 2025):** - **Symptom:** User opens new position with TP/SL orders, system immediately removes them, position left unprotected - **User Report:** "when i opened the manually trade we hade a sl and tp but it was removed by the system" - **Financial Impact:** Part of $1,000+ losses - active positions stripped of protection while system tries to close old positions - **Real Incident Timeline (Dec 8, 2025):** * **06:46:23** - Old orphaned position: 14.47 SOL-PERP (DB says closed, Drift says open) * **13:39:24** - User opens NEW manual SOL-PERP LONG at $138.45, size $2,003 * **13:39:25** - placeExitOrders() places TP1 + TP2 (SL fails silently - Bug #76) * **13:39:26** - Drift state verifier detects OLD orphan (7 hours old) * **13:39:27** - System attempts to close orphan via market order * **13:39:28** - Close fails (Drift state propagation delay 5+ min) * **13:39:30** - Position Manager removeTrade() calls cancelAllOrders(symbol='SOL-PERP') * **13:39:31** - cancelAllOrders() cancels ALL SOL-PERP orders (TP1 + TP2 from NEW position) * **Result** - NEW position left open with NO TP, NO SL, NO PROTECTION - **Root Cause:** * File: `lib/trading/position-manager.ts` function `removeTrade()` (lines 275-300) * Code: `await cancelAllOrders(symbol)` - operates on SYMBOL level, not position level * Problem: Doesn't distinguish between old orphaned position and new active position * When closing orphan, cancels orders for ALL positions on that symbol * User's NEW position gets orders removed while orphan cleanup runs - **Why It's Dangerous:** * Orphan detection is GOOD (recovers lost positions) * But cleanup affects ALL positions on symbol, not just orphan * If user opens position while orphan cleanup runs, new position loses protection * Window of vulnerability: 5+ minutes (Drift state 
propagation delay) * Multiple close attempts = multiple cancelAllOrders() calls - **Code Evidence:** ```typescript // lib/trading/position-manager.ts lines ~285-300 async removeTrade(tradeId: string, reason: string) { const trade = this.activeTrades.get(tradeId) if (!trade) return try { // PROBLEM: This cancels ALL orders for the symbol // Doesn't check if other active positions exist on same symbol await cancelAllOrders(trade.symbol) console.log(`🧹 Cancelled all orders for ${trade.symbol}`) } catch (error) { console.error(`❌ Error cancelling orders:`, error) } this.activeTrades.delete(tradeId) } ``` - **Orphan Detection Context:** * File: `lib/startup/init-position-manager.ts` function `detectOrphanedPositions()` * Runs every 10 minutes via Drift state verifier * Checks: DB says closed but Drift says open β†’ orphan detected * Action: Attempts to close orphan position * Side effect: Calls removeTrade() β†’ cancelAllOrders() β†’ affects ALL positions - **THE FIX (βœ… DEPLOYED Dec 9, 2025):** ```typescript // In lib/trading/position-manager.ts removeTrade() function async removeTrade(tradeId: string): Promise { const trade = this.activeTrades.get(tradeId) if (trade) { logger.log(`πŸ—‘οΈ Removing trade: ${trade.symbol}`) // BUG #78 FIX: Check Drift position size before canceling orders // If Drift shows an open position, DON'T cancel orders (may belong to active position) try { const driftService = getDriftService() const marketConfig = getMarketConfig(trade.symbol) // Query Drift for current position const driftPosition = await driftService.getPosition(marketConfig.driftMarketIndex) if (driftPosition && Math.abs(driftPosition.size) >= 0.01) { // Position still open on Drift - DO NOT cancel orders console.warn(`⚠️ SAFETY CHECK: ${trade.symbol} position still open on Drift (size: ${driftPosition.size})`) console.warn(` Skipping order cancellation to avoid removing active position protection`) console.warn(` Removing from tracking only`) // Just remove from map, don't cancel orders this.activeTrades.delete(tradeId) return } } catch (error) { console.error(`❌ Error checking Drift position:`, error) } } } ``` - **Detection: Health Monitoring System:** * File: `lib/health/position-manager-health.ts` * Check: Open positions missing TP1/TP2 orders β†’ WARNING * Check: Open positions missing SL orders β†’ CRITICAL ALERT * Detects orders removed within 30 seconds * Logs: "🚨 CRITICAL: Position {id} missing SL order" - **Why This Matters:** * **This is a REAL MONEY system** - removed orders = lost protection * Orphan detection is necessary (recovers stuck positions) * But must not affect active positions on same symbol * User opens position expecting protection, system removes it * Silent removal - no notification, no alert - **Prevention Rules:** 1. NEVER cancel orders without verifying position actually closed 2. Check Drift position size = 0 before cancelAllOrders() 3. Store order IDs per trade, cancel specific orders only 4. Health monitor detects missing orders within 30 seconds 5. Add grace period for new positions (skip orphan checks <5 min old) 6. 
Log CRITICAL alert when orders removed from active position - **Red Flags Indicating This Bug:** * Position initially has TP/SL orders * Orders disappear shortly after opening * Orphan detection logs around same time * Multiple close attempts on old position * cancelAllOrders() logs for symbol * New position left with no orders - **Git commit:** 63b9401 "fix: Implement critical risk management fixes for bugs #76, #77, #78, #80" (Dec 9, 2025) - **Deployment:** Dec 9, 2025 22:42 UTC (container trading-bot-v4) - **Status:** βœ… FIXED - Active position orders now protected from orphan cleanup 79. **CRITICAL: Smart Validation Queue Never Monitors - In-Memory Queue Lost on Container Restart (CRITICAL - Dec 9, 2025):** - **Symptom:** Quality 50-89 signals blocked and saved to database, but validation queue never monitors them for price confirmation - **User Report:** "the smart validation system should have entered the trade as it shot up shouldnt it?" - **Financial Impact:** Missed +$18.56 manual entry (SOL-PERP LONG quality 85, price moved +1.21% in 1 minute = 4Γ— the +0.3% confirmation threshold) - **Real Incident (Dec 9, 2025 15:40):** * Signal: cmiyqy6uf03tcn30722n02lnk * Quality: 85/90 (blocked correctly per thresholds) * Entry Price: $134.94 * Price after 1min: $136.57 (+1.21%) * Confirmation threshold: +0.3% * System should have: Queued β†’ Monitored β†’ Entered at confirmation * What happened: Signal saved to database, queue NEVER monitored it * User had to manually enter β†’ +$18.56 profit - **Root Cause #1 - In-Memory Queue Lost on Restart:** * File: `lib/trading/smart-validation-queue.ts` * Queue uses `Map` in-memory storage * BlockedSignal records saved to PostgreSQL βœ… * But queue Map is empty after container restart ❌ * startSmartValidation() just created empty singleton, never loaded from database - **Root Cause #2 - Production Logger Silencing:** * logger.log() calls silenced when NODE_ENV=production * File: `lib/utils/logger.ts` - logger.log() only works in dev mode * Startup messages never appeared in container logs * Silent failure - no errors, no indication queue was empty - **THE FIX (Dec 9, 2025 - DEPLOYED):** ```typescript // In lib/trading/smart-validation-queue.ts startSmartValidation() export async function startSmartValidation(): Promise { const queue = getSmartValidationQueue() // Query BlockedSignal table for signals within 30-minute entry window const thirtyMinutesAgo = new Date(Date.now() - 30 * 60 * 1000) const recentBlocked = await prisma.blockedSignal.findMany({ where: { blockReason: 'QUALITY_SCORE_TOO_LOW', signalQualityScore: { gte: 50, lt: 90 }, // Marginal quality range createdAt: { gte: thirtyMinutesAgo }, }, }) console.log(`πŸ”„ Restoring ${recentBlocked.length} pending signals from database`) // Re-queue each signal with original parameters for (const signal of recentBlocked) { await queue.addSignal({ /* signal params */ }) } console.log(`βœ… Smart validation restored ${recentBlocked.length} signals, monitoring started`) } ``` - **Why Both Fixes Were Needed:** 1. Database restoration: Load pending signals from PostgreSQL on startup 2. console.log(): Replace logger.log() calls with console.log() for production visibility 3. Without #1: Queue always empty after restart 4. 
Without #2: Couldn't debug why queue was empty (no logs) - **Expected Behavior After Fix:** * Container restart: Queries database for signals within 30-minute window * Signals found: Re-queued with original entry price, quality score, metrics * Monitoring starts: 30-second price checks begin immediately * Logs show: "πŸ”„ Restoring N pending signals from database" * Confirmation: "πŸ‘οΈ Smart validation monitoring started (checks every 30s)" - **Verification Commands:** ```bash # Check startup logs docker logs trading-bot-v4 2>&1 | grep -E "(Smart|validation|Restor)" # Expected output: # 🧠 Starting smart entry validation system... # πŸ”„ Restoring N pending signals from database # βœ… Smart validation restored N signals, monitoring started # πŸ‘οΈ Smart validation monitoring started (checks every 30s) # If N=0, check database for recent signals docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c \ "SELECT COUNT(*) FROM \"BlockedSignal\" WHERE \"blockReason\" = 'QUALITY_SCORE_TOO_LOW' AND \"signalQualityScore\" BETWEEN 50 AND 89 AND \"createdAt\" > NOW() - INTERVAL '30 minutes';" ``` - **Why This Matters:** * **This is a REAL MONEY system** - validation queue is designed to catch marginal signals that confirm * Quality 50-89 signals = borderline setups that need price confirmation * Without validation: Miss profitable confirmed moves (like +$18.56 opportunity) * System appeared to work (signals blocked correctly, saved to database) * But critical validation step never executed (queue empty, monitoring off) - **Prevention Rules:** 1. NEVER use in-memory-only data structures for critical financial logic 2. ALWAYS restore state from database on startup for trading systems 3. ALWAYS use console.log() for critical startup messages (not logger.log()) 4. ALWAYS verify monitoring actually started (check logs for confirmation) 5. Add database restoration for ANY system that queues/monitors signals 6. Test container restart scenarios to catch in-memory state loss - **Red Flags Indicating This Bug:** * BlockedSignal records exist in database but no queue monitoring logs * Price moves meet confirmation threshold but no execution * User manually enters trades that validation queue should have handled * No "πŸ‘οΈ Smart validation check: N pending signals" logs every 30 seconds * Telegram shows "⏰ SIGNAL QUEUED FOR VALIDATION" but nothing after - **Files Changed:** * lib/trading/smart-validation-queue.ts (Lines 456-500, 137-175, 117-127) - **Git commit:** 2a1badf "critical: Fix Smart Validation Queue - restore signals from database on startup" (Dec 9, 2025) - **Deploy Status:** βœ… DEPLOYED Dec 9, 2025 17:07 CET - **Status:** βœ… Fixed - Queue now restores pending signals on startup, production logging enabled 80. **CRITICAL: 1-Minute Market Data Webhook Action Mismatch - Fresh ATR Data Never Arriving (CRITICAL - Dec 9, 2025):** - **Symptom:** Telegram bot timing out waiting for fresh ATR data, falling back to stale preset (0.43) - **User Report:** "for some reason we are not getting fresh atr data from the 1 minute data feed" - **Financial Impact:** Manual trades executing with stale volatility metrics instead of fresh real-time data - **Real Incident (Dec 9, 2025 19:00):** * User sent "long sol" via Telegram * Bot response: "⏳ Waiting for next 1-minute datapoint... Will execute with fresh ATR (max 60s)" * After 60s: "⚠️ Timeout waiting for fresh data. 
Using preset ATR: 0.43" * Cache inspection: Only contained "manual" timeframe data (97 seconds old), no fresh 1-minute data * Logs: No "Received market data webhook" entries - **Root Cause - Webhook Action Validation Mismatch:** * File: `app/api/trading/market-data/route.ts` lines 64-71 * Endpoint validated: `if (body.action !== 'market_data')` (exact string match) * TradingView alert sends: `"action": "market_data_1min"` (line 54 in 1min_market_data_feed.pinescript) * Result: Webhook returned 400 Bad Request, data never cached * Smart entry timer polled empty cache, timed out after 60 seconds - **THE FIX (Dec 9, 2025 - DEPLOYED):** ```typescript // BEFORE (lines 64-71): if (body.action !== 'market_data') { return NextResponse.json( { error: 'Invalid action - expected "market_data"' }, { status: 400 } ) } // AFTER: const validActions = ['market_data', 'market_data_1min'] if (!validActions.includes(body.action)) { return NextResponse.json( { error: `Invalid action - expected one of: ${validActions.join(', ')}` }, { status: 400 } ) } ``` - **Why This Fix:** * Endpoint now accepts BOTH action variants * TradingView 1-minute alerts use "market_data_1min" to distinguish from 5-minute signals * Higher timeframe alerts (15min, 1H, 4H, Daily) use "market_data" * Single endpoint serves both data collection systems * Error message updated to show all valid options - **Build Challenges:** * Initial build used cached layers, fix not included in compiled code * Required `docker compose build trading-bot --no-cache` to force TypeScript recompilation * Verification: `docker exec trading-bot-v4 grep "validActions" /app/.next/server/app/api/trading/market-data/route.js` - **Verification Complete (Dec 9, 2025 19:18 CET):** * Manual test: `curl -X POST /api/trading/market-data -d '{"action": "market_data_1min", ...}'` * Response: `{"success": true, "symbol": "SOL-PERP", "message": "Market data cached and stored successfully"}` * Cache inspection: Fresh data with ATR 0.55, ADX 28.5, RSI 62, timeframe "1", age 9 seconds * Logs: "πŸ“‘ Received market data webhook: { action: 'market_data_1min', symbol: 'SOLUSDT', atr: 0.55 }" * Logs: "βœ… Market data cached for SOL-PERP" - **Expected Behavior After Fix:** * TradingView 1-minute alert fires β†’ webhook accepted β†’ data cached * Telegram "long sol" command β†’ waits for next datapoint β†’ receives fresh data within 60s * Bot shows: "βœ… Fresh data received | ATR: 0.55 | ADX: 28.5 | RSI: 62.0" * Trade executes with real-time ATR-based TP/SL targets (not stale preset) - **Why This Matters:** * **This is a REAL MONEY system** - stale volatility metrics = wrong position sizing * ATR changes with market conditions (0.43 preset vs 0.55 actual = 28% difference) * TP/SL targets calculated from ATR multipliers (2.0Γ—, 4.0Γ—, 3.0Γ—) * Wrong ATR = targets too tight (missed profits) or too wide (unnecessary risk) * User's manual trades require fresh data for optimal execution - **Prevention Rules:** 1. NEVER use exact string match for webhook action validation (use array inclusion) 2. ALWAYS accept multiple action variants when endpoints serve similar purposes 3. ALWAYS verify Docker build includes TypeScript changes (check compiled JS) 4. ALWAYS test webhook endpoints with curl before declaring fix working 5. Add monitoring alerts when cache shows only stale "manual" timeframe data 6. 
Log webhook rejections with 400 errors for debugging - **Red Flags Indicating This Bug:** * Telegram bot times out waiting for fresh data every time * Cache only contains "manual" timeframe (not "1" for 1-minute data) * No "Received market data webhook" logs in container output * TradingView alerts configured but endpoint returns 400 errors * Bot always falls back to preset metrics (ATR 0.43, ADX 32, RSI 58/42) - **Files Changed:** * app/api/trading/market-data/route.ts (Lines 64-71 - webhook validation) - **TradingView Alert Setup:** * Alert name: "1-Minute Market Data Feed" * Chart: SOL-PERP 1-minute timeframe * Condition: "1min Market Data" indicator, "Once Per Bar Close" * Webhook URL: n8n or direct bot endpoint * Alert message: Auto-generated JSON with `"action": "market_data_1min"` * Expected rate: 1 alert per minute (60/hour per symbol) - **Troubleshooting Commands:** ```bash # Check if webhook firing docker logs trading-bot-v4 2>&1 | grep "Received market data webhook" | tail -5 # Check cache contents curl -s http://localhost:3001/api/trading/market-data | jq '.cache."SOL-PERP"' # Check data age (should be < 60 seconds) curl -s http://localhost:3001/api/trading/market-data | jq '.cache."SOL-PERP".ageSeconds' # Monitor webhook hits in real-time docker logs -f trading-bot-v4 2>&1 | grep "market data webhook" ``` - **Git commit:** 9668349 "fix: Accept market_data_1min action in webhook endpoint" (Dec 9, 2025) - **Deploy Status:** βœ… DEPLOYED Dec 9, 2025 19:18 CET (--no-cache build) - **Status:** βœ… FIXED - Endpoint accepts both action variants, fresh data flow operational - **Documentation:** `docs/1MIN_ALERT_SETUP_INSTRUCTIONS.md` - Complete setup guide for TradingView alerts - **Note:** This bug was unrelated to the main $1,000 loss incident but fixed during same session 81. **CRITICAL: Wrong Price in usdToBase() Causes Order Rejection - ROOT CAUSE OF $1,000+ LOSSES (CRITICAL - Dec 10, 2025):** - **Symptom:** Positions repeatedly created without stop loss orders, database shows NULL signatures despite placeExitOrders() being called - **User Report:** "i had to close the most recent position again due to a removal of risk management by the system again. 
man we had this working perfectly in the past" - **Financial Impact:** $1,000+ losses from positions without stop loss protection - **Real Incidents (Dec 6-10, 2025):** * Position cmizjcvaa0152r007q99au751 (Dec 10, 04:55): ALL NULL risk management fields * Position cmiznlvpz000ap407281f9vxo (Dec 10, 06:54): Database NULL signatures despite orders visible in Drift UI * User had to manually close multiple positions due to missing risk management - **Root Cause Discovery (Dec 10, 2025):** * **Original working implementation (commit 4cc294b, Oct 26, 2025):** ```typescript // Used SPECIFIC price for each order const usdToBase = (usd: number, price: number) => { const base = usd / price // TP1 price, TP2 price, or SL price return Math.floor(base * 1e9) } // TP1: Used TP1 price const baseAmount = usdToBase(tp1USD, options.tp1Price) // TP2: Used TP2 price const baseAmount = usdToBase(tp2USD, options.tp2Price) // SL: Used SL price const slBaseAmount = usdToBase(slUSD, options.stopLossPrice) ``` * Original commit message: "All 3 exit orders placed successfully on-chain" * 100% success rate when deployed * **Broken implementation (current before fix):** ```typescript // Used entryPrice for ALL orders const usdToBase = (usd: number) => { const base = usd / options.entryPrice // WRONG - always entry price return Math.floor(base * 1e9) } // TP1: Uses entryPrice instead of TP1 price ❌ const baseAmount = usdToBase(tp1USD) // TP2: Uses entryPrice instead of TP2 price ❌ const baseAmount = usdToBase(tp2USD) // SL: Uses entryPrice instead of SL price ❌ const slBaseAmount = usdToBase(slUSD) ``` - **Why This Breaks:** * To close 60% at TP1 price $141.20, need DIFFERENT token quantity than at entry price $140.00 * Example: Entry $140, TP1 $141.20 (0.86% higher) - Correct: $8,000 / $141.20 = 56.66 SOL - Wrong: $8,000 / $140.00 = 57.14 SOL (0.48 SOL more = 0.85% error) * Using wrong price = wrong token quantity = Drift rejects order OR creates wrong size * Orders may fail silently, signatures not returned, database records NULL - **THE FIX (βœ… DEPLOYED Dec 10, 2025 14:31 CET):** ```typescript // Reverted to original working implementation const usdToBase = (usd: number, price: number) => { const base = usd / price // Use the specific order price return Math.floor(base * 1e9) } // TP1: Now uses options.tp1Price (CORRECT) const baseAmount = usdToBase(tp1USD, options.tp1Price) // TP2: Now uses options.tp2Price (CORRECT) const baseAmount = usdToBase(tp2USD, options.tp2Price) // SL: Now uses options.stopLossPrice (CORRECT) const slBaseAmount = usdToBase(slUSD, options.stopLossPrice) ``` - **Why This Matters:** * **This is a REAL MONEY system** - wrong order sizes = orders rejected = no protection * System "worked perfectly in the past" because original implementation was correct * Complexity added over time broke the core calculation * Positions left unprotected despite placeExitOrders() being called * User lost $1,000+ from this single calculation error - **Prevention Rules:** 1. NEVER use entryPrice for TP/SL order size calculations 2. ALWAYS use the specific order price (TP1 price, TP2 price, SL price) 3. Order size = USD amount / order price (not entry price) 4. When reverting to simpler implementations, verify token quantity calculations 5. 
Original working implementations are working for a reason - preserve core logic - **Red Flags Indicating This Bug:** * placeExitOrders() called but database signatures are NULL * Drift UI shows orders but database doesn't record them * Orders fail silently without errors * Token quantity calculations use entryPrice for all orders * Position opened successfully but no TP/SL orders placed - **Files Changed:** * lib/drift/orders.ts: Lines 275-282 (usdToBase function) * lib/drift/orders.ts: Line 301 (TP1 call site) * lib/drift/orders.ts: Line 325 (TP2 call site) * lib/drift/orders.ts: Line 354 (SL call site) - **Git commit:** 55d780c "critical: Fix usdToBase() to use specific prices (TP1/TP2/SL) not entryPrice" (Dec 10, 2025) - **Deployment:** Dec 10, 2025 14:31 CET (container trading-bot-v4) - **Status:** βœ… FIXED AND DEPLOYED - Restored original working implementation with 100% success rate - **Lesson Learned:** When system "worked perfectly in the past," find the original implementation and restore its core logic. Added complexity broke what was already proven to work. 82. **CRITICAL: Drift State Verifier Kills Active Position SL Orders - Automatic Retry Close on Wrong Positions (CRITICAL - Dec 10, 2025):** - **Symptom:** User had manual trade OPEN with working SL orders, then Telegram alert from Drift State Verifier about "6 position(s) that should be closed but are still open on Drift" with "⚠️ Retry close attempted automatically", SL orders immediately disappeared - **User Report:** "FOR FUCK SAKES. STILL THE FUCKING SAME. THE SYSTEM KILLED MY SL AFTER THIS MESSAGE!!!!" - **Financial Impact:** Part of $1,000+ loss series - active positions left completely unprotected when verifier incorrectly closes them - **Real Incident (Dec 10, 2025 ~14:50 CET):** * User opened manual SOL-PERP trade with SL working correctly * Drift State Verifier detected 6 old closed DB records (closed 150-1064 minutes ago = 2.5 to 17+ hours) * All 6 showed "15.45 tokens open on Drift" (EXACT SAME NUMBER - suspicious!) 
* That 15.45 SOL position was user's CURRENT manual trade, not 6 old ghosts * Verifier sent Telegram alert: "⚠️ Retry close attempted automatically" * Called `closePosition()` with 100% close for each "mismatch" * Closed user's ACTIVE position, removing SL orders * User's position left completely unprotected - **Root Cause:** * File: `lib/monitoring/drift-state-verifier.ts` * Lines 93-103: Queries trades marked as closed in last **24 hours** * Lines 116-131: For each closed trade, checks if Drift still shows position open * Lines 279-283: **Automatically calls `closePosition()` with 100% close for EVERY mismatch** * **CRITICAL FLAW:** No verification if "open" Drift position is the SAME position or a NEW position at same symbol * **No position ID matching:** Can't distinguish: - Scenario A: OLD position still open on Drift (should retry close) βœ“ - Scenario B: NEW position opened at same symbol (should NOT touch) βœ— - **Why This Bug Persisted After Bug #81 Fix:** * Bug #81 (FIXED Dec 10, 14:31): Orders never placed initially (wrong token quantities) * Bug #82 (Dec 10, 14:50): Orders placed and working, then REMOVED by verifier's automatic close * Different root causes, same symptom (no SL protection) * Bug #81 fix did NOT address verifier closing active positions - **Evidence This Is Bug #82:** * User's position HAD working SL before Telegram alert * Telegram alert timing matches exactly when SL disappeared * All 6 "mismatches" show identical drift size (15.45 tokens) = ONE position (user's current trade) * DB records show exit times 150-1064 minutes ago = way too old to be same position * Verifier doesn't check: Is this position newer than DB exit time? - **THE EMERGENCY FIX (Dec 10, 2025 11:06 CET - DEPLOYED):** * Disabled automatic retry close completely * Protected active positions from false closure * Git commit: e5714e4 - **THE COMPREHENSIVE FIX (βœ… DEPLOYED Dec 10, 2025 11:25 CET):** ```typescript // In lib/monitoring/drift-state-verifier.ts // COMPREHENSIVE 6-STEP VERIFICATION PROCESS: async retryClose(mismatch) { // STEP 1: Cooldown enforcement (prevents retry spam) const cooldown = await this.checkCooldown(mismatch.dbTradeId, mismatch.symbol) if (!cooldown.canRetry) { return } // STEP 2: Load full trade context from database const trade = await prisma.trade.findUnique({ where: { id: mismatch.dbTradeId } }) // STEP 3: Verify Drift position still exists const driftPosition = await driftService.getPosition(marketIndex) // STEP 4: CRITICAL - verifyPositionIdentity() runs 6 safety checks: const verification = await this.verifyPositionIdentity(trade, driftPosition, currentPrice) if (!verification.isOldGhost) { // PROTECTED: Log skip reason and return await this.logProtectedPosition(trade, driftPosition, verification) return } // STEP 5: All checks passed β†’ proceed with close + full logging const result = await closePosition({ symbol, percentToClose: 100 }) await this.updateCooldown(trade.id, trade.symbol) } ``` - **Verification Logic - verifyPositionIdentity():** * **Grace Period:** 10-minute wait after DB exit (allows Drift state propagation) * **Direction Match:** Drift position side must match DB direction (long/short) * **Size Match:** Position size within 85-115% tolerance (allows partial fills, funding impacts) * **Entry Price Match:** Drift entry price within 2% of DB entry price (large difference = new position) * **Newer Trade Detection:** Query database for trades created after exitTime (confirms new position exists) * **Cooldown:** 5-minute per-symbol retry prevention 
(in-memory + database backup) - **Safety Guarantees:** * Fail-open bias: When uncertain β†’ skip and alert * Protection logging: All skipped closes recorded in database with full details * Comprehensive decision logs: Every verification result logged with evidence * No false closes: Multiple independent verification methods must all agree - **Test Coverage (420 lines):** * CRITICAL: Active Position Protection (5 tests) - Should NOT close when newer trade exists - Should NOT close when entry price differs >2% - Should NOT close when size differs >15% - Should NOT close when direction differs - Should NOT close within 10-minute grace period * CRITICAL: Verified Ghost Closure (2 tests) - Should close when all verification checks pass - Should enforce 5-minute cooldown between attempts * CRITICAL: Edge Case Handling (4 tests) * CRITICAL: Fail-Open Bias (1 test) - **Files Changed:** * lib/monitoring/drift-state-verifier.ts (276 lines added - comprehensive verification) * tests/integration/drift-state-verifier/position-verification.test.ts (420 lines - full test coverage) - **Git commits:** * e5714e4 "critical: Bug #82 EMERGENCY FIX" (Dec 10, 11:06 CET) * 9e78761 "critical: Bug #82 LONG-TERM FIX" (Dec 10, 11:25 CET) - **Deployment:** Dec 10, 2025 11:25 CET (container trading-bot-v4) - **Status:** βœ… COMPREHENSIVE FIX DEPLOYED - Intelligent verification active, automatic orphan cleanup RE-ENABLED safely - **Red Flags Indicating This Bug:** * Position initially has TP/SL orders * Telegram alert: "X position(s) that should be closed but are still open on Drift" * Alert message: "⚠️ Retry close attempted automatically" * Orders disappear shortly after alert * Multiple "mismatches" all showing SAME drift size * DB exit times are hours/days old (way older than current position) - **Why This Matters:** * **This is a REAL MONEY system** - removed orders = lost protection = unlimited risk * Verifier is designed to clean up "ghosts" but has no safety checks * Bug affects WORKING trades, not just initial order placement * Even with Bug #81 fixed, this bug was actively killing SL orders * User's frustration: "STILL THE FUCKING SAME" - thought Bug #81 fix would solve everything 73. **CRITICAL: MFE Data Unit Mismatch - ALWAYS Filter by Date (CRITICAL - Dec 5, 2025):** - **Symptom:** SQL analysis shows "20%+ average MFE" but TP1 (0.6% target) never hits - **Root Cause:** Old Trade records stored MFE/MAE in DOLLARS, new records store PERCENTAGES - **Data Corruption Examples:** * Entry $126.51, Peak $128.21 = 1.35% actual move * But stored as maxFavorableExcursion = 90.73 (dollars, not percent) * SQL AVG() returns meaningless mix: (1.35 + 90.73 + 0.85 + 87.22) / 4 = 45.04 - **Incident (Dec 5, 2025):** * Agent analyzed blocked vs executed signals * SQL showed executed signals: 20.15% avg MFE (appeared AMAZING) * Implemented "optimizations": tighter targets, higher TP1 close, 5Γ— leverage * User questioned: "tp1 barely hits that has nothing to do with our software monitoring does it?" * Investigation revealed: Only 2/11 trades reached TP1 price * TRUE MFE after filtering: 0.76% (long), 1.20% (short) - NOT 20%! 
* 26Γ— inflation due to unit mismatch in old data - **MANDATORY SQL Pattern:** ```sql -- WRONG: Includes corrupted old data SELECT AVG("maxFavorableExcursion") FROM "Trade" WHERE "signalQualityScore" >= 90; -- CORRECT: Filter to after Nov 23, 2025 fix SELECT AVG("maxFavorableExcursion") FROM "Trade" WHERE "signalQualityScore" >= 90 AND "createdAt" >= '2025-11-23'; -- After MFE fix -- OR: Recalculate from prices (always correct) SELECT AVG( CASE WHEN direction = 'long' THEN (("maxFavorablePrice" - "entryPrice") / "entryPrice") * 100 ELSE (("entryPrice" - "maxFavorablePrice") / "entryPrice") * 100 END ) FROM "Trade" WHERE "signalQualityScore" >= 90; ``` - **Why This Matters:** * **This is a REAL MONEY system** - wrong analysis = wrong trades = financial losses * MFE/MAE used for exit timing optimization, trade analysis, quality validation * Agent made "data-driven" decisions based on 26Γ— inflated numbers * Optimizations REVERTED via commits f65aae5 and a67a338 (Dec 5, 2025) - **Verification Before Any MFE/MAE Analysis:** ```sql -- Check if data is percentages or dollars SELECT "entryPrice", "maxFavorablePrice", "maxFavorableExcursion" as stored, CASE WHEN direction = 'long' THEN (("maxFavorablePrice" - "entryPrice") / "entryPrice") * 100 ELSE (("entryPrice" - "maxFavorablePrice") / "entryPrice") * 100 END as calculated_pct FROM "Trade" WHERE "exitReason" IS NOT NULL ORDER BY "createdAt" DESC LIMIT 5; -- If stored β‰  calculated_pct β†’ OLD DATA, use date filter ``` - **See Also:** * Common Pitfall #54 - MFE/MAE stored as dollars (supposedly fixed Nov 23) * Revert commits: a15f17f "revert: Undo exit strategy optimization based on corrupted MFE data" * Original bug commits: a67a338 (code), f65aae5 (docs) - **Git commits:** a15f17f (revert), a67a338 (incorrect optimization), f65aae5 (incorrect docs) - **Status:** βœ… Fixed - Analysis methodology documented, incorrect changes reverted 73. **CRITICAL SECURITY: .env file tracked in git (CRITICAL - Fixed Dec 5, 2025 - PR #3):** - **Symptom:** Sensitive credentials exposed in git repository history - **Credentials exposed:** * Database connection strings (PostgreSQL) * Drift Protocol private keys (wallet access) * Telegram bot tokens * API keys and secrets * RPC endpoints - **Root Cause:** `.env` file was tracked in git from initial commit, exposing all secrets to anyone with repository access - **Files modified:** * `.gitignore` - Added `.env`, `.env.local`, `.env.*.local` patterns * `.env` - Removed from git tracking (kept locally) * `.env.telegram-bot` - Removed from git tracking (contains bot token) - **Fix Process (Dec 5, 2025):** ```bash # 1. Update .gitignore first (add these lines if not present) # .env # .env.local # .env.*.local # 2. Remove from git tracking (keeps local file) git rm --cached .env git rm --cached .env.telegram-bot # 3. Commit the fix git commit -m "security: Remove .env from git tracking" ``` - **Impact:** * βœ… Future commits will NOT include .env files * βœ… Local development unaffected (files still exist locally) * ⚠️ Historical commits still contain secrets (until git history rewrite) - **POST-FIX ACTIONS REQUIRED:** 1. **Rotate all credentials immediately:** - Database passwords - Telegram bot token (create new bot if needed) - Drift Protocol keys (if exposed to public) - Any API keys in .env 2. **Verify .env.example exists** - Template for new developers 3. 
**Consider git history cleanup** - Use BFG Repo-Cleaner if secrets were public - **Prevention:** * Always add `.env` to `.gitignore` BEFORE first commit * Use `.env.example` with placeholder values * CI/CD should fail if .env detected in commit * Regular security audits with `git log -p | grep -i password` - **Why This Matters for Trading Bot:** * **Private keys = wallet access** - Could drain trading account * **Database = trade history** - Could manipulate records * **Telegram = notifications** - Could send fake alerts * This is a **real money system** managing $540 capital - **Verification:** ```bash # Confirm .env is ignored git check-ignore .env # Should output: .env # Confirm .env not tracked git ls-files | grep "\.env" # Should output: only .env.example ``` - **Git commit:** PR #3 on branch `copilot/remove-env-from-git-tracking` - **Status:** βœ… Fixed - .env removed from tracking, .gitignore updated 73. **CRITICAL: Service Initialization Never Ran - $1,000 Lost (CRITICAL - Dec 5, 2025):** - **Symptom:** 4 critical services coded correctly but never started for 16 days - **Financial Impact:** $700-1,400 in missed opportunities (user estimate: $1,000) - **Duration:** Nov 19 - Dec 5, 2025 (16 days) - **Root Cause:** Services initialized AFTER validation function with early return - **Code Flow (BROKEN):** ```typescript // lib/startup/init-position-manager.ts await validateOpenTrades() // Line 43 // validateOpenTrades() returns early if no trades (line 111) // SERVICE INITIALIZATION (Lines 59-72) - NEVER REACHED startDataCleanup() startBlockedSignalTracking() await startStopHuntTracking() await startSmartValidation() ``` - **Affected Services:** 1. **Stop Hunt Revenge Tracker** (Nov 20) - Never attempted revenge on quality 85+ stop-outs 2. **Smart Entry Validation** (Nov 30) - Manual Telegram trades used stale data instead of fresh TradingView metrics 3. **Blocked Signal Price Tracker** (Nov 19) - No data collected for threshold optimization 4. **Data Cleanup Service** (Dec 2) - Database bloat, no 28-day retention enforcement - **Why It Went Undetected:** * **Silent failure:** No errors thrown, services simply never initialized * **Logger silencing:** Production logger (`logger.log`) silenced by `NODE_ENV=production` * **Split logging:** Some logs appeared (from service functions), others didn't (from init function) * **Common trigger:** Bug only occurred when `openTrades.length === 0` (frequent in production) - **Financial Breakdown:** * Stop hunt revenge: $300-600 lost (missed reversal opportunities) * Smart validation: $200-400 lost (stale data caused bad entries) * Blocked signals: $200-400 lost (suboptimal quality thresholds) * Total: $700-1,400 over 16 days - **Fix (Dec 5, 2025):** ```typescript // CORRECT ORDER: // 1. Start services FIRST (lines 34-50) startDataCleanup() startBlockedSignalTracking() await startStopHuntTracking() await startSmartValidation() // 2. THEN validate (line 56) - can return early safely await validateAllOpenTrades() await validateOpenTrades() // Early return OK now // 3. Finally init Position Manager const manager = await getInitializedPositionManager() ``` - **Logging Fix:** Changed `logger.log()` to `console.log()` for production visibility - **Verification:** ```bash $ docker logs trading-bot-v4 | grep -E "🧹|πŸ”¬|🎯|🧠|πŸ“Š" 🧹 Starting data cleanup service... πŸ”¬ Starting blocked signal price tracker... 🎯 Starting stop hunt revenge tracker... πŸ“Š No active stop hunts - tracker will start when needed 🧠 Starting smart entry validation system... 
``` - **Prevention Measures:** 1. **Test suite (PR #2):** 113 tests covering Position Manager - add service initialization tests 2. **CI/CD pipeline (PR #5):** Automated quality gates - add service startup validation 3. **Startup health check:** Verify all expected services initialized, throw error if missing 4. **Production logging standard:** Critical operations use `console.log()`, not `logger.log()` - **Lessons Learned:** * Service initialization order matters - never place critical services after functions with early returns * Silent failures are dangerous - add explicit verification that services started * Production logging must be visible - logger utilities that silence logs = debugging nightmare * Test real-world conditions - bug only occurred with `NODE_ENV=production` + `openTrades.length === 0` - **Timeline:** * Nov 19: Blocked Signal Tracker deployed (never ran) * Nov 20: Stop Hunt Revenge deployed (never ran) * Nov 30: Smart Validation deployed (never ran) * Dec 2: Data Cleanup deployed (never ran) * Dec 5: Bug discovered and fixed * Result: **16 days of development with 0 production execution** - **Git commits:** 51b63f4 (service order fix), f6c9a7b (console.log fix), 35c2d7f (stop hunt logs fix) - **Full documentation:** `docs/CRITICAL_SERVICE_INITIALIZATION_BUG_DEC5_2025.md` - **Status:** βœ… Fixed - All services now start on every container restart, verified in production logs ## File Conventions - **API routes:** `app/api/[feature]/[action]/route.ts` (Next.js 15 App Router) - **Services:** `lib/[service]/[module].ts` (drift, pyth, trading, database) - **Config:** Single source in `config/trading.ts` with env merging - **Types:** Define interfaces in same file as implementation (not separate types directory) - **Console logs:** Use emojis for visual scanning: 🎯 πŸš€ βœ… ❌ πŸ’° πŸ“Š πŸ›‘οΈ ## Re-Entry Analytics System (Phase 1) **Purpose:** Validate manual Telegram trades using fresh TradingView data + recent performance analysis **Components:** 1. **Market Data Cache** (`lib/trading/market-data-cache.ts`) - Singleton service storing TradingView metrics - 5-minute expiry on cached data - Tracks: ATR, ADX, RSI, volume ratio, price position, timeframe 2. **Market Data Webhook** (`app/api/trading/market-data/route.ts`) - Receives TradingView alerts every 1-5 minutes - POST: Updates cache with fresh metrics - GET: View cached data (debugging) 3. **Re-Entry Check Endpoint** (`app/api/analytics/reentry-check/route.ts`) - Validates manual trade requests - Uses fresh TradingView data if available (<5min old) - Falls back to historical metrics from last trade - Scores signal quality + applies performance modifiers: - **-20 points** if last 3 trades lost money (avgPnL < -5%) - **+10 points** if last 3 trades won (avgPnL > +5%, WR >= 66%) - **-5 points** for stale data, **-10 points** for no data - Minimum score: 55 (vs 60 for new signals) 4. **Auto-Caching** (`app/api/trading/execute/route.ts`) - Every trade signal from TradingView auto-caches metrics - Ensures fresh data available for manual re-entries 5. **Telegram Integration** (`telegram_command_bot.py`) - Calls `/api/analytics/reentry-check` before executing manual trades - Shows data freshness ("βœ… FRESH 23s old" vs "⚠️ Historical") - Blocks low-quality re-entries unless `--force` flag used - Fail-open: Proceeds if analytics check fails **User Flow:** ``` User: "long sol" ↓ Check cache for SOL-PERP ↓ Fresh data? β†’ Use real TradingView metrics ↓ Stale/missing? 
β†’ Use historical + penalty ↓ Score quality + recent performance ↓ Score >= 55? β†’ Execute ↓ Score < 55? β†’ Block (unless --force) ``` **TradingView Setup:** Create alerts that fire every 1-5 minutes with this webhook message: ```json { "action": "market_data", "symbol": "{{ticker}}", "timeframe": "{{interval}}", "atr": {{ta.atr(14)}}, "adx": {{ta.dmi(14, 14)}}, "rsi": {{ta.rsi(14)}}, "volumeRatio": {{volume / ta.sma(volume, 20)}}, "pricePosition": {{(close - ta.lowest(low, 100)) / (ta.highest(high, 100) - ta.lowest(low, 100)) * 100}}, "currentPrice": {{close}} } ``` Webhook URL: `https://your-domain.com/api/trading/market-data` ## Per-Symbol Trading Controls **Purpose:** Independent enable/disable toggles and position sizing for SOL and ETH to support different trading strategies (e.g., ETH for data collection at minimal size, SOL for profit generation). **Configuration Priority:** 1. **Per-symbol ENV vars** (highest priority) - `SOLANA_ENABLED`, `SOLANA_POSITION_SIZE`, `SOLANA_LEVERAGE` - `ETHEREUM_ENABLED`, `ETHEREUM_POSITION_SIZE`, `ETHEREUM_LEVERAGE` 2. **Market-specific config** (from `MARKET_CONFIGS` in config/trading.ts) 3. **Global ENV vars** (fallback for BTC and other symbols) - `MAX_POSITION_SIZE_USD`, `LEVERAGE` 4. **Default config** (lowest priority) **Settings UI:** `app/settings/page.tsx` has dedicated sections: - πŸ’Ž Solana section: Toggle + position size + leverage + risk calculator - ⚑ Ethereum section: Toggle + position size + leverage + risk calculator - πŸ’° Global fallback: For BTC-PERP and future symbols **Example usage:** ```typescript // In execute/test endpoints const { size, leverage, enabled } = getPositionSizeForSymbol(driftSymbol, config) if (!enabled) { return NextResponse.json({ success: false, error: 'Symbol trading disabled' }, { status: 400 }) } ``` **Test buttons:** Settings UI has symbol-specific test buttons: - πŸ’Ž Test SOL LONG/SHORT (disabled when `SOLANA_ENABLED=false`) - ⚑ Test ETH LONG/SHORT (disabled when `ETHEREUM_ENABLED=false`) ## When Making Changes 1. **Adding new config:** Update DEFAULT_TRADING_CONFIG + getConfigFromEnv() + .env file 2. **Adding database fields:** Update prisma/schema.prisma β†’ `npx prisma migrate dev` β†’ `npx prisma generate` β†’ rebuild Docker 3. **Changing order logic:** Test with DRY_RUN=true first, use small position sizes ($10) 4. **API endpoint changes:** Update both endpoint + corresponding n8n workflow JSON (Check Risk and Execute Trade nodes) 5. **Docker changes:** Rebuild with `docker compose build trading-bot` then restart container 6. **Modifying quality score logic:** Update BOTH `/api/trading/check-risk` and `/api/trading/execute` endpoints, ensure timeframe-aware thresholds are synchronized 7. **Exit strategy changes:** Modify Position Manager logic + update on-chain order placement in `placeExitOrders()` 8. **TradingView alert changes:** - Ensure alerts pass `timeframe` field (e.g., `"timeframe": "5"`) to enable proper signal quality scoring - **CRITICAL:** Include `atr` field for ATR-based TP/SL system: `"atr": {{ta.atr(14)}}` - Without ATR, system falls back to less optimal fixed percentages 9. 
**ATR-based risk management changes:** - Update multipliers or bounds in `.env` (ATR_MULTIPLIER_TP1/TP2/SL, MIN/MAX_*_PERCENT) - Test with known ATR values to verify calculation (e.g., SOL ATR 0.43) - Log shows: `πŸ“Š ATR-based targets: TP1 X.XX%, TP2 Y.YY%, SL Z.ZZ%` - Verify targets fall within safety bounds (TP1: 0.5-1.5%, TP2: 1.0-3.0%, SL: 0.8-2.0%) - Update Telegram manual trade presets if median ATR changes (currently 0.43 for SOL) 10. **Position Manager changes:** ALWAYS run tests BEFORE deployment, then validate in production - **CRITICAL (Dec 8, 2025):** Health monitoring system detects PM failures within 30 seconds - Health checks: `docker logs -f trading-bot-v4 | grep "πŸ₯"` - Expected: "πŸ₯ Starting Position Manager health monitor (every 30 sec)..." - If issues: "🚨 CRITICAL: Position Manager not monitoring!" or "🚨 CRITICAL: Position {id} missing SL order" - **STEP 1 - Run tests locally (MANDATORY):** ```bash npm test # Run all 113 tests (takes ~30 seconds) # OR run specific test file: npm test tests/integration/position-manager/tp1-detection.test.ts ``` - **Why mandatory:** Tests catch bugs (tokens vs USD, TP1 false detection, wrong SL price) BEFORE they cost real money - **If tests fail:** Fix the issue or update tests - DO NOT deploy broken code - **STEP 2 - Deploy and validate with test trade:** - Use `/api/trading/test` endpoint or Telegram `long sol --force` - Monitor `docker logs -f trading-bot-v4` for full cycle - Verify TP1 hit β†’ 75% close β†’ SL moved to breakeven - SQL: Check `tp1Hit`, `slMovedToBreakeven`, `currentSize` in Trade table - Compare: Position Manager logs vs actual Drift position size - **Phase 7.3 Adaptive trailing stop verification (Nov 27, 2025+):** * Watch for "πŸ“Š 1-min ADX update: Entry X β†’ Current Y (Β±Z change)" every 60 seconds * Verify ADX acceleration bonus: "πŸš€ ADX acceleration (+X points)" * Verify ADX deceleration penalty: "⚠️ ADX deceleration (-X points)" * Check final calculation: "πŸ“Š Adaptive trailing: ATR X (Y%) Γ— ZΓ— = W%" * Confirm multiplier adjusts dynamically (not static like old system) * Example: ADX 22.5β†’29.5 should show multiplier increase from 1.5Γ— to 2.4Γ—+ 11. **Trailing stop changes:** - **CRITICAL (Nov 27, 2025):** Phase 7.3 uses REAL-TIME 1-minute ADX, not entry-time ADX - Code location: `lib/trading/position-manager.ts` lines 1356-1450 - Queries `getMarketDataCache()` for fresh ADX every monitoring loop (2-second interval) - Adaptive multipliers: Base 1.5Γ— + ADX strength tier (1.0Γ—-1.5Γ—) + acceleration (1.3Γ—) + deceleration (0.7Γ—) + profit (1.3Γ—) - Test with known ADX progression: Entry 22.5 β†’ Current 29.5 = expect acceleration bonus - Fallback: Uses `trade.adxAtEntry` if cache unavailable (backward compatible) - Log shows: "πŸ“Š Adaptive trailing: ATR 0.43 (0.31%) Γ— 3.16Γ— = 0.99%" - Expected: Trail width changes dynamically as ADX changes (captures acceleration, protects on deceleration) 12. **Calculation changes:** Add verbose logging and verify with SQL - Log every intermediate step, especially unit conversions - Never assume SDK data format - log raw values to verify - SQL query with manual calculation to compare results - Test boundary cases: 0%, 100%, min/max values 13. 
**Adaptive leverage changes:** When modifying quality-based leverage tiers - Quality score MUST be calculated BEFORE position sizing (execute endpoint line ~172) - Update `getLeverageForQualityScore()` helper in config/trading.ts - Test with known quality scores to verify tier selection (quality 95+ = 10x, borderline = 5x per the current adaptive leverage config) - Log shows: `πŸ“Š Adaptive leverage: Quality X β†’ Yx leverage (threshold: 95)` - Update ENV variables: USE_ADAPTIVE_LEVERAGE, HIGH_QUALITY_LEVERAGE, LOW_QUALITY_LEVERAGE, QUALITY_LEVERAGE_THRESHOLD_LONG/SHORT (QUALITY_LEVERAGE_THRESHOLD as backward-compatibility fallback) - Monitor first 10-20 trades to verify correct leverage applied 14. **DEPLOYMENT VERIFICATION (MANDATORY):** Before declaring ANY fix working: - Check container start time vs commit timestamp - If container older than commit: CODE NOT DEPLOYED - Restart container and verify new code is running - Never say "fixed" or "protected" without deployment confirmation - This is a REAL MONEY system - unverified fixes cause losses 15. **GIT COMMIT AND PUSH (MANDATORY):** After completing ANY feature, fix, or significant change: - ALWAYS commit changes with descriptive message - ALWAYS push to remote repository - User should NOT have to ask for this - it's part of completion - **DUAL REMOTE SETUP (Dec 5, 2025):** * **origin**: Production Gitea (ssh://git@127.0.0.1:222/root/trading_bot_v4.git) * **github**: Copilot PR workflow (https://github.com/mindesbunister/trading_bot_v4.git) * Post-commit hook automatically pushes to github after every commit * Manual push to origin required: `git push origin master` * Verify sync status: `git log --oneline -1 && git remote -v && git branch -vv` - Commit message format: ```bash git add -A git commit -m "type: brief description - Bullet point details - Files changed - Why the change was needed " # Hook auto-pushes to github git push origin master # Manual push to production ``` - Types: * `feat:` (new feature) * `fix:` (bug fix) * `docs:` (documentation) * `refactor:` (code restructure) * `critical:` (financial/safety critical fixes) - This is NOT optional - code exists only when committed and pushed - **Automation Setup:** * File: `.git/hooks/post-commit` (executable) * Purpose: Auto-sync commits to GitHub for Copilot PR workflow * Status: Active and verified (Dec 5, 2025) * Testing: Commits auto-appear on github/master * Manual setup: Copy hook script if cloning fresh repository - **Recent examples:** * `test: Verify GitHub auto-sync hook` (de77cfe, Dec 5, 2025) - Verified post-commit hook working correctly - All remotes synced (origin/master, github/master) * `fix: Implement Associated Token Account for USDC withdrawals` (c37a9a3, Nov 19, 2025) - Fixed PublicKey undefined, ATA resolution, excluded archive - Successfully tested $6.58 withdrawal with on-chain confirmation * `fix: Correct MIN_QUALITY_SCORE to MIN_SIGNAL_QUALITY_SCORE` (Nov 19, 2025) - Settings UI using wrong ENV variable name - Quality score changes now take effect * `critical: Fix withdrawal statistics to use actual Drift deposits` (8d53c4b, Nov 19, 2025) - Query cumulativeDeposits from Drift ($1,440.61 vs hardcoded $546) - Created /api/drift/account-summary endpoint 16. 
**DOCKER MAINTENANCE (AFTER BUILDS):** Clean up accumulated cache to prevent disk full: ```bash # Remove dangling images (old builds) docker image prune -f # Remove build cache (biggest space hog - 40+ GB typical) docker builder prune -f # Optional: Remove dangling volumes (if no important data) docker volume prune -f # Check space saved docker system df ``` - **When to run:** After successful deployments, weekly if building frequently, when disk warnings appear - **Space freed:** Dangling images (2-5 GB), Build cache (40-50 GB), Dangling volumes (0.5-1 GB) - **Safe to delete:** `` tagged images, build cache (recreated on next build), dangling volumes - **Keep:** Named volumes (`trading-bot-postgres`), active containers, tagged images in use - **Why critical:** Docker builds create 1.3+ GB per build, cache accumulates to 40-50 GB without cleanup 17. **NEXTCLOUD DECK SYNC (MANDATORY):** After completing phases or making significant roadmap progress: - Update roadmap markdown files with new status (πŸ”„ IN PROGRESS, βœ… COMPLETE, πŸ”œ NEXT) - Run sync to update Deck cards: `python3 scripts/sync-roadmap-to-deck.py --init` - Move cards between stacks in Nextcloud Deck UI to reflect progress visually - Backlog (πŸ“₯) β†’ Planning (πŸ“‹) β†’ In Progress (πŸš€) β†’ Complete (βœ…) - Keep Deck in sync with actual work - it's the visual roadmap tracker - Documentation: `docs/NEXTCLOUD_DECK_SYNC.md` 18. **UPDATE COPILOT-INSTRUCTIONS.MD (MANDATORY):** After implementing ANY significant feature or system change: - Document new database fields and their purpose - Add filtering requirements (e.g., manual vs TradingView trades) - Update "Important fields" sections with new schema changes - Add new API endpoints to the architecture overview - Document data integrity requirements (what must be excluded from analysis) - Add SQL query patterns for common operations - Update "When Making Changes" section with new patterns learned - Create reference docs in `docs/` for complex features (e.g., `MANUAL_TRADE_FILTERING.md`) - **WHY:** Future AI agents need complete context to maintain data integrity and avoid breaking analysis - **EXAMPLES:** signalSource field for filtering, MAE/MFE tracking, phantom trade detection 19. **MULTI-TIMEFRAME DATA COLLECTION CHANGES (Nov 26, 2025):** When modifying signal processing for different timeframes: - Quality scoring MUST happen BEFORE timeframe filtering (execute endpoint line 112) - All timeframes need real quality scores for analysis (not hardcoded 0) - Data collection signals (15min/1H/4H/Daily) save to BlockedSignal with full quality metadata - BlockedSignal fields to populate: signalQualityScore, signalQualityVersion, minScoreRequired, scoreBreakdown - Enables SQL: `WHERE blockReason = 'DATA_COLLECTION_ONLY' AND signalQualityScore >= X` - Purpose: Compare quality-filtered win rates across timeframes to determine optimal trading interval - Update Multi-Timeframe section in copilot-instructions.md when changing flow ## Development Roadmap **Current Status (Nov 14, 2025):** - **168 trades executed** with quality scores and MAE/MFE tracking - **Capital:** $97.55 USDC at 100% health (zero debt, all USDC collateral) - **Leverage:** 15x SOL (reduced from 20x for safer liquidation cushion) - **Three active optimization initiatives** in data collection phase: 1. **Signal Quality:** 0/20 blocked signals collected β†’ need 10-20 for analysis 2. **Position Scaling:** 161 v5 trades, collecting v6 data β†’ need 50+ v6 trades 3. 
**ATR-based TP:** 1/50 trades with ATR data β†’ need 50 for validation - **Expected combined impact:** 35-40% P&L improvement when all three optimizations complete - **Master roadmap:** See `OPTIMIZATION_MASTER_ROADMAP.md` for consolidated view See `SIGNAL_QUALITY_OPTIMIZATION_ROADMAP.md` for systematic signal quality improvements: - **Phase 1 (πŸ”„ IN PROGRESS):** Collect 10-20 blocked signals with quality scores (1-2 weeks) - **Phase 2 (πŸ”œ NEXT):** Analyze patterns and make data-driven threshold decisions - **Phase 3 (🎯 FUTURE):** Implement dual-threshold system or other optimizations based on data - **Phase 4 (πŸ€– FUTURE):** Automated price analysis for blocked signals - **Phase 5 (🧠 DISTANT):** ML-based scoring weight optimization See `POSITION_SCALING_ROADMAP.md` for planned position management optimizations: - **Phase 1 (βœ… COMPLETE):** Collect data with quality scores (20-50 trades needed) - **Phase 2:** ATR-based dynamic targets (adapt to volatility) - **Phase 3:** Signal quality-based scaling (high quality = larger runners) - **Phase 4:** Direction-based optimization (shorts vs longs have different performance) - **Phase 5 (βœ… COMPLETE):** TP2-as-runner system implemented - configurable runner (default 25%, adjustable via TAKE_PROFIT_1_SIZE_PERCENT) with ATR-based trailing stop - **Phase 6:** ML-based exit prediction (future) **Recent Implementation:** TP2-as-runner system provides 5x larger runner (default 25% vs old 5%) for better profit capture on extended moves. When TP2 price is hit, trailing stop activates on full remaining position instead of closing partial amount. Runner size is configurable (100% - TP1 close %). **Blocked Signals Tracking (Nov 11, 2025):** System now automatically saves all blocked signals to database for data-driven optimization. See `BLOCKED_SIGNALS_TRACKING.md` for SQL queries and analysis workflows. **Multi-Timeframe Data Collection (Nov 18-19, 2025):** Execute endpoint now supports parallel data collection across timeframes: - **5min signals:** Execute trades (production) - **15min/1H/4H/Daily signals:** Save to BlockedSignal table with `blockReason='DATA_COLLECTION_ONLY'` - Enables cross-timeframe performance comparison (which timeframe has best win rate?) - Zero financial risk - non-5min signals just collect data for future analysis - TradingView alerts on multiple timeframes β†’ n8n passes `timeframe` field β†’ bot routes accordingly - After 50+ trades: SQL analysis to determine optimal timeframe for live trading - Implementation: `app/api/trading/execute/route.ts` lines 106-145 - **n8n Parse Signal Enhanced (Nov 19):** Supports multiple timeframe formats: - `"buy 5"` β†’ `"5"` (5 minutes) - `"buy 15"` β†’ `"15"` (15 minutes) - `"buy 60"` or `"buy 1h"` β†’ `"60"` (1 hour) - `"buy 240"` or `"buy 4h"` β†’ `"240"` (4 hours) - `"buy D"` or `"buy 1d"` β†’ `"D"` (daily) - Extracts indicator version from `IND:v8` format **Data-driven approach:** Each phase requires validation through SQL analysis before implementation. No premature optimization. 
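**Cross-timeframe comparison query (illustrative sketch):** A minimal starting point for the SQL analysis referenced above, once enough data-collection signals exist. The documented fields (`blockReason`, `signalQualityScore`) are used as-is; the `timeframe` and `createdAt` columns on `BlockedSignal` are assumptions here - confirm the exact column names in `prisma/schema.prisma` before drawing conclusions.
```sql
-- Sketch only: signal volume and quality per timeframe from data-collection rows.
-- Assumes BlockedSignal has "timeframe" and "createdAt" columns (verify in schema).
SELECT
  "timeframe",
  COUNT(*) AS signals_collected,
  ROUND(AVG("signalQualityScore")::numeric, 1) AS avg_quality,
  COUNT(*) FILTER (WHERE "signalQualityScore" >= 90) AS quality_90_plus
FROM "BlockedSignal"
WHERE "blockReason" = 'DATA_COLLECTION_ONLY'
  AND "createdAt" >= '2025-11-19'  -- multi-timeframe collection went live Nov 18-19
GROUP BY "timeframe"
ORDER BY signals_collected DESC;
```
Volume and quality distribution are only the first step; the actual win-rate comparison additionally needs the blocked-signal price analysis (would TP1/TP2/SL have hit?) covered in `BLOCKED_SIGNALS_TRACKING.md`.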
**Signal Quality Version Tracking:** Database tracks `signalQualityVersion` field to compare algorithm performance: - Analytics dashboard shows version comparison: trades, win rate, P&L, extreme position stats - v4 (current) includes blocked signals tracking for data-driven optimization - Focus on extreme positions (< 15% range) - v3 aimed to reduce losses from weak ADX entries - SQL queries in `docs/analysis/SIGNAL_QUALITY_VERSION_ANALYSIS.sql` for deep-dive analysis - Need 20+ trades per version before meaningful comparison **Indicator Version Tracking (Nov 18-28, 2025):** Database tracks `indicatorVersion` field for TradingView strategy comparison: - **v9:** Money Line with Momentum-Based SHORT Filter (Nov 26+) - **PRODUCTION SYSTEM** - Built on v8 foundation (0.6% flip threshold, momentum confirmation, anti-whipsaw) - **MA Gap Analysis:** +5 to +15 quality points based on MA50-MA200 convergence - **Momentum-Based SHORT Filter (Nov 26, 2025 - CRITICAL ENHANCEMENT):** * **REMOVED:** RSI filter for SHORTs (data showed RSI 50+ has BEST 68.2% WR) * **ADDED:** ADX β‰₯23 requirement (filters weak chop like ADX 20.7 failure) * **ADDED:** Price Position β‰₯60% (catches tops) OR ≀40% with Vol β‰₯2.0x (capitulation) * **Rationale:** v8 shorted oversold (RSI 25-35), v9 shorts momentum at tops * **Blocks:** Weak chop at range bottom * **Catches:** Massive downtrends from top of range - **Data Evidence (95 SHORT trades analyzed):** * RSI < 35: 37.5% WR, -$655.23 (4 biggest disasters) * RSI 50+: 68.2% WR, +$29.88 (BEST performance!) * Winners: ADX 23.7-26.9, Price Pos 19-64% * Losers: ADX 21.8-25.4, Price Pos 13.6% - **Quality threshold (Nov 28, 2025):** LONG β‰₯90, SHORT β‰₯80 - File: `workflows/trading/moneyline_v9_ma_gap.pinescript` - **v8:** Money Line Sticky Trend (Nov 18-26) - ARCHIVED - 8 trades completed (57.1% WR, +$262.70) - **Failure pattern:** 5 oversold SHORT disasters (RSI 25-35), 1 weak chop (ADX 20.7) - Purpose: Baseline for v9 momentum improvements - **ARCHIVED (historical baseline for comparison):** - **v5:** Buy/Sell Signal strategy (pre-Nov 12) - 36.4% WR, +$25.47 - **v6:** HalfTrend + BarColor (Nov 12-18) - 48% WR, -$47.70 - **v7:** v6 with toggles (deprecated - minimal data, no improvements) - **Purpose:** v9 is production, archived versions provide baseline for future enhancements - **Analytics UI:** v9 highlighted, archived versions greyed out but kept for statistical reference **Financial Roadmap Integration:** All technical improvements must align with current phase objectives (see top of document): - **Phase 1 (CURRENT):** Prove system works, compound aggressively, 60%+ win rate mandatory - **Phase 2-3:** Transition to sustainable growth while funding withdrawals - **Phase 4+:** Scale capital while reducing risk progressively - See `TRADING_GOALS.md` for complete 8-phase plan ($106 β†’ $1M+) - SQL queries in `docs/analysis/SIGNAL_QUALITY_VERSION_ANALYSIS.sql` for deep-dive analysis - Need 20+ trades per version before meaningful comparison **Blocked Signals Analysis:** See `BLOCKED_SIGNALS_TRACKING.md` for: - SQL queries to analyze blocked signal patterns - Score distribution and metric analysis - Comparison with executed trades at similar quality levels - Future automation of price tracking (would TP1/TP2/SL have hit?) 
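**Version comparison query (illustrative sketch):** A compact template for the per-version breakdown above, built only from columns already documented in this file (`indicatorVersion`, `signalQualityScore`, `exitReason`, `maxFavorableExcursion`, `createdAt`); the canonical queries remain in `docs/analysis/SIGNAL_QUALITY_VERSION_ANALYSIS.sql`. Add win-rate / P&L columns only after confirming their names in `prisma/schema.prisma`, and exclude manual Telegram trades via the `signalSource` field as required elsewhere in this document.
```sql
-- Sketch: per-indicator-version snapshot from closed trades.
-- MFE is averaged only for rows after the Nov 23, 2025 unit fix (see MFE pitfall above).
SELECT
  "indicatorVersion",
  COUNT(*) AS closed_trades,
  ROUND(AVG("signalQualityScore")::numeric, 1) AS avg_quality,
  ROUND((AVG("maxFavorableExcursion") FILTER (WHERE "createdAt" >= '2025-11-23'))::numeric, 2) AS avg_mfe_pct
FROM "Trade"
WHERE "exitReason" IS NOT NULL
GROUP BY "indicatorVersion"
HAVING COUNT(*) >= 20  -- doc guideline: 20+ trades per version before meaningful comparison
ORDER BY "indicatorVersion";
```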
## Telegram Notifications (Nov 16, 2025 - Enhanced Nov 20, 2025) **Position Closure Notifications:** System sends direct Telegram messages for all position closures via `lib/notifications/telegram.ts` **Implemented for:** - **TP1 partial closes (NEW Nov 20, 2025):** Immediate notification when TP1 hits (60% closed) - **Runner exits:** Full close notifications when remaining position exits (TP2/SL/trailing) - Stop loss triggers (SL, soft SL, hard SL, emergency) - Manual closures (via API or settings UI) - Ghost position cleanups (external closure detection) **Notification format:** ``` 🎯 POSITION CLOSED πŸ“ˆ SOL-PERP LONG πŸ’° P&L: $12.45 (+2.34%) πŸ“Š Size: $48.75 πŸ“ Entry: $168.50 🎯 Exit: $172.45 ⏱ Hold Time: 1h 23m πŸ”š Exit: TP1 (60% closed, 40% runner remaining) πŸ“ˆ Max Gain: +3.12% πŸ“‰ Max Drawdown: -0.45% ``` **Key Features (Nov 20, 2025):** - **Immediate TP1 feedback:** User sees profit as soon as TP1 hits, doesn't wait for runner to close - **Partial close details:** Exit reason shows percentage split (e.g., "TP1 (60% closed, 40% runner remaining)") - **Separate notifications:** TP1 close gets one notification, runner close gets another - **Complete P&L tracking:** Each notification shows its portion of realized P&L **Configuration:** Requires `TELEGRAM_BOT_TOKEN` and `TELEGRAM_CHAT_ID` in .env **Code location:** - `lib/notifications/telegram.ts` - sendPositionClosedNotification() - `lib/trading/position-manager.ts` - Integrated in executeExit() (both partial and full closes) and handleExternalClosure() **Commits:** - b1ca454 "feat: Add Telegram notifications for position closures" (Nov 16, 2025) - 79e7ffe "feat: Add Telegram notification for TP1 partial closes" (Nov 20, 2025) ## Stop Hunt Revenge System (Nov 20, 2025) **Purpose:** Automatically re-enters positions after high-quality signals (score 85+) get stopped out, when price reverses back through original entry. Captures the reversal with same position size as original. **Architecture:** - **4-Hour Revenge Window:** Monitors for price reversal within 4 hours of stop-out - **Quality Threshold:** Only quality score 85+ signals eligible (top-tier setups) - **Position Size:** 1.0Γ— original size (same as original - user at 100% allocation) - **One Revenge Per Stop Hunt:** Maximum 1 revenge trade per stop-out event - **Monitoring Interval:** 30-second price checks for active stop hunts - **Database:** StopHunt table (20 fields, 4 indexes) tracks all stop hunt events **Revenge Conditions:** ```typescript // LONG stopped above entry β†’ Revenge when price drops back below entry if (direction === 'long' && currentPrice < originalEntryPrice - (0.005 * originalEntryPrice)) { // Price dropped 0.5% below entry β†’ Stop hunt reversal confirmed executeRevengeTrade() } // SHORT stopped below entry β†’ Revenge when price rises back above entry if (direction === 'short' && currentPrice > originalEntryPrice + (0.005 * originalEntryPrice)) { // Price rose 0.5% above entry β†’ Stop hunt reversal confirmed executeRevengeTrade() } ``` **How It Works:** 1. **Recording:** Position Manager detects SL close with `signalQualityScore >= 85` 2. **Database:** Creates StopHunt record with entry price, quality score, ADX, ATR 3. **Monitoring:** Background job checks every 30 seconds for price reversals 4. **Trigger:** Price crosses back through entry + 0.5% buffer within 4 hours 5. **Execution:** Calls `/api/trading/execute` with same position size, same direction 6. **Telegram:** Sends "πŸ”₯ REVENGE TRADE ACTIVATED" notification 7. 
**Completion:** Updates database with revenge trade ID, marks revengeExecuted=true **Database Schema (StopHunt table):** - **Original Trade:** `originalTradeId`, `symbol`, `direction`, `stopHuntPrice`, `originalEntryPrice` - **Quality Metrics:** `originalQualityScore` (85+), `originalADX`, `originalATR` - **Financial:** `stopLossAmount` (how much user lost), `revengeEntryPrice` - **Timing:** `stopHuntTime`, `revengeTime`, `revengeExpiresAt` (4 hours after stop) - **Tracking:** `revengeTradeId`, `revengeExecuted`, `revengeWindowExpired` - **Price Extremes:** `highestPriceAfterStop`, `lowestPriceAfterStop` (for analysis) - **Indexes:** symbol, revengeExecuted, revengeWindowExpired, stopHuntTime **Code Components:** ```typescript // lib/trading/stop-hunt-tracker.ts (293 lines) class StopHuntTracker { recordStopHunt() // Save stop hunt to database startMonitoring() // Begin 30-second checks checkRevengeOpportunities() // Find active stop hunts needing revenge shouldExecuteRevenge() // Validate price reversal conditions executeRevengeTrade() // Call execute API with same size as original (1.0Γ—) } // lib/startup/init-position-manager.ts (integration) await startStopHuntTracking() // Initialize on server startup // lib/trading/position-manager.ts (recording - ready for next deployment) if (reason === 'SL' && trade.signalQualityScore >= 85) { const tracker = getStopHuntTracker() await tracker.recordStopHunt({ /* trade details */ }) } ``` **Telegram Notification Format:** ``` πŸ”₯ REVENGE TRADE ACTIVATED πŸ”₯ Original Trade: πŸ“ Entry: $142.48 SHORT ❌ Stopped Out: -$138.35 🎯 Quality Score: 90 (ADX 26) Revenge Trade: πŸ“ Re-Entry: $138.20 SHORT πŸ’ͺ Size: Same as original ($8,350) 🎯 Targets: TP1 +0.86%, TP2 +1.72% Stop Hunt Reversal Confirmed βœ“ Time to get our money back! ``` **Singleton Pattern:** ```typescript // CORRECT: Use getter function const tracker = getStopHuntTracker() await tracker.recordStopHunt({ /* params */ }) // WRONG: Direct instantiation creates multiple instances const tracker = new StopHuntTracker() // ❌ Don't do this ``` **Startup Behavior:** - Container starts β†’ Checks database for active stop hunts (not expired, not executed) - If activeCount > 0: Starts monitoring immediately, logs count - If activeCount = 0: Logs "No active stop hunts - tracker will start when needed" - Monitoring auto-starts when Position Manager records new stop hunt **Common Pitfalls:** 1. **Database query hanging:** Fixed with try-catch error handling (Nov 20, 2025) 2. **Import path errors:** Use `'../database/trades'` not `'../database/client'` 3. **Multiple instances:** Always use `getStopHuntTracker()` singleton getter 4. **Quality threshold:** Only 85+ eligible, don't lower without user approval 5. **Position size math:** 1.0Γ— means execute with `originalSize`, same as original trade 6. **Revenge window:** 4 hours from stop-out, not from signal generation 7. 
**One revenge limit:** Check `revengeExecuted` flag before executing again **Real-World Use Case (Nov 20, 2025 motivation):** - User had v8 signal: Quality 90, ADX 26, called exact top at $141.37 - Stopped at $142.48 for -$138.35 loss - Price then dropped to $131.32 (8.8% move) - Missed +$490 potential profit if not stopped - Revenge system would've re-entered SHORT at ~$141.50 with same size, captured full reversal move **Revenge Timing Enhancement - 90s Confirmation (Nov 26, 2025):** - **Problem Identified:** Immediate entry at reversal price caused retest stop-outs - **Real Incident (Nov 26, 14:51 CET):** * LONG stopped at $138.00, quality 105 * Price dropped to $136.32 (would trigger immediate revenge) * Retest bounce to $137.50 (would stop out again at $137.96) * Actual move: $136 β†’ $144.50 (+$530 opportunity MISSED) - **Root Cause:** Entry at candle close = top of move, natural 1-1.5% pullbacks common - **OLD System:** * LONG: Enter immediately when price < entry * SHORT: Enter immediately when price > entry * Result: Retest wicks stop out before real move - **NEW System (Option 2 - 90s Confirmation):** * **LONG:** Require price below entry for 90 seconds (1.5 minutes) before entry * **SHORT:** Require price above entry for 90 seconds (1.5 minutes) before entry * Tracks `firstCrossTime`, resets if price leaves zone * Logs progress: "⏱️ LONG/SHORT revenge: X.Xmin in zone (need 1.5min)" * **Rationale:** Fast enough to catch moves (not full 5min candle), slow enough to filter retest wicks - **Implementation Details:** ```typescript // lib/trading/stop-hunt-tracker.ts (lines 254-310) // LONG revenge: if (timeInZone >= 90000) { // 90 seconds = 1.5 minutes console.log(`βœ… LONG revenge: Price held below entry for ${(timeInZone/60000).toFixed(1)}min, confirmed!`) return true } // SHORT revenge: if (timeInZone >= 90000) { // 90 seconds = 1.5 minutes console.log(`βœ… SHORT revenge: Price held above entry for ${(timeInZone/60000).toFixed(1)}min, confirmed!`) return true } ``` - **User Insight:** "i think atr bands are no good for this kind of stuff" - ATR measures volatility, not support/resistance - **Future Consideration:** TradingView signals every 1 minute for better granularity (pending validation) - **Git Commit:** 40ddac5 "feat: Revenge timing Option 2 - 90s confirmation (DEPLOYED)" - **Deployed:** Nov 26, 2025 20:52:55 CET - **Status:** βœ… DEPLOYED and VERIFIED in production **Deployment Status:** - βœ… Database schema created (StopHunt table with indexes) - βœ… Tracker service implemented (293 lines, 8 methods) - βœ… Startup integration active (initializes on container start) - βœ… Error handling added (try-catch for database operations) - βœ… Clean production logs (DEBUG logs removed) - ⏳ Position Manager recording (code ready, deploys on next Position Manager change) - ⏳ Real-world validation (waiting for first quality 85+ stop-out) **Git Commits:** - 702e027 "feat: Stop Hunt Revenge System - DEPLOYED (Nov 20, 2025)" - Fixed import paths, added error handling, removed debug logs - Full system operational, monitoring active ## v9 Parameter Optimization & Backtesting (Nov 28-29, 2025) **Purpose:** Comprehensive parameter sweep to optimize v9 Money Line indicator for maximum profitability while maintaining quality standards. **Background - v10 Removal (Nov 28, 2025):** - **v10 Status:** FULLY REMOVED - discovered to be "garbage" during initial backtest analysis - **v10 Problems Discovered:** 1. 
**Parameter insensitivity:** 72 different configurations produced identical $498.12 P&L 2. **Bug in penalty logic:** Price position penalty incorrectly applied to 18.9% position (should only apply to 40-60% chop zone) 3. **No edge over v9:** Despite added complexity, no performance improvement - **Removal Actions (Nov 28, 2025):** * Removed moneyline_v10_adaptive_position_scoring.pinescript * Removed v10-specific code from backtester modules * Updated all documentation to remove v10 references * Docker rebuild completed successfully * Git commit: 5f77024 "remove: Complete v10 indicator removal - proven garbage" - **Lesson:** Parameter insensitivity = no real edge, just noise. Simpler is better. **v9 Baseline Performance:** - **Data:** Nov 2024 - Nov 2025 SOLUSDT 5-minute OHLCV (139,678 rows) - **Default Parameters:** flip_threshold=0.6, ma_gap=0.35, momentum_adx=23, long_pos=70, short_pos=25, cooldown_bars=2, momentum_spacing=3, momentum_cooldown=2 - **Results:** $405.88 PnL, 569 trades, 60.98% WR, 1.022 PF, -$1,360.58 max DD - **Baseline established:** Nov 28, 2025 **Adaptive Leverage Implementation (Nov 28, 2025 - Updated Dec 1, 2025):** - **Purpose:** Increase profit potential while maintaining risk management - **CURRENT Configuration (Dec 1, 2025):** ```bash USE_ADAPTIVE_LEVERAGE=true HIGH_QUALITY_LEVERAGE=10 # 10x for high-quality signals LOW_QUALITY_LEVERAGE=5 # 5x for borderline signals QUALITY_LEVERAGE_THRESHOLD_LONG=95 # LONG quality threshold (configurable via UI) QUALITY_LEVERAGE_THRESHOLD_SHORT=90 # SHORT quality threshold (configurable via UI) QUALITY_LEVERAGE_THRESHOLD=95 # Backward compatibility fallback ``` - **Settings UI (Dec 1, 2025 - FULLY IMPLEMENTED):** * Web interface at http://localhost:3001/settings * **Adaptive Leverage Section** with 5 configurable fields: - Enable/Disable toggle (USE_ADAPTIVE_LEVERAGE) - High Quality Leverage (10x default) - Low Quality Leverage (5x default) - **LONG Quality Threshold (95 default)** - independent control - **SHORT Quality Threshold (90 default)** - independent control * **Dynamic Collateral Display:** Fetches real-time balance from Drift account * **Position Size Calculator:** Shows notional positions for each leverage tier * **API Endpoint:** GET /api/drift/account-health returns { totalCollateral, freeCollateral, totalLiability, marginRatio } * **Real-time Updates:** Collateral fetched on page load via React useEffect * **Fallback:** Uses $560 if Drift API unavailable - **Direction-Specific Thresholds:** * LONGs: Quality β‰₯95 β†’ 10x, Quality 90-94 β†’ 5x * SHORTs: Quality β‰₯90 β†’ 10x, Quality 80-89 β†’ 5x * Lower quality than thresholds β†’ blocked by execute endpoint - **Expected Impact:** 10Γ— profit on high-quality signals, 5Γ— on borderline (2Γ— better than Nov 28 config) - **Status:** βœ… ACTIVE in production with full UI control (Dec 1, 2025) - **Commits:** * 2e511ce - Config update to 10x/5x (Dec 1 morning) * 21c13b9 - Initial adaptive leverage UI (Dec 1 afternoon) * a294f44 - Docker env vars for UI controls (Dec 1 afternoon) * 67ef5b1 - Direction-specific thresholds + dynamic collateral (Dec 1 evening) - **See:** `ADAPTIVE_LEVERAGE_SYSTEM.md` for implementation details **Parameter Sweep Strategy:** - **8 Parameters to Optimize:** 1. **flip_threshold:** 0.4, 0.5, 0.6, 0.7 (4 values) - EMA flip confirmation threshold 2. **ma_gap:** 0.20, 0.30, 0.40, 0.50 (4 values) - MA50-MA200 convergence bonus 3. **momentum_adx:** 18, 21, 24, 27 (4 values) - ADX requirement for momentum filter 4. 
**momentum_long_pos:** 60, 65, 70, 75 (4 values) - Price position for LONG momentum entry 5. **momentum_short_pos:** 20, 25, 30, 35 (4 values) - Price position for SHORT momentum entry 6. **cooldown_bars:** 1, 2, 3, 4 (4 values) - Bars between signals 7. **momentum_spacing:** 2, 3, 4, 5 (4 values) - Bars between momentum confirmations 8. **momentum_cooldown:** 1, 2, 3, 4 (4 values) - Momentum-specific cooldown - **Total Combinations:** 4^8 = 65,536 exhaustive search - **Grid Design:** 4 values per parameter = balanced between granularity and computation time **Sweep Results - Narrow Grid (27 combinations):** - **Date:** Nov 28, 2025 (killed early to port to EPYC) - **Top Result:** $496.41 PnL (22% improvement over baseline) - **Key Finding:** Parameter insensitivity observed again * Multiple different configurations produced identical results * Suggests v9 edge comes from core EMA logic, not parameter tuning * Similar pattern to v10 (but v9 has proven baseline edge) - **Decision:** Proceed with exhaustive 65,536 combo search on EPYC to confirm pattern **EPYC Server Exhaustive Sweep (Nov 28-29, 2025):** - **Hardware:** AMD EPYC 7282 16-Core Processor, Debian 12 Bookworm - **Configuration:** 24 workers, 1.60s per combo (4Γ— faster than local 6 workers) - **Total Combinations:** 65,536 (full 4^8 grid) - **Duration:** ~29 hours estimated - **Output:** Top 100 results saved to sweep_v9_exhaustive_epyc.csv - **Setup:** * Package: backtest_v9_sweep.tar.gz (1.1MB compressed) * Contents: data/solusdt_5m.csv (1.9MB), backtester modules, sweep scripts * Python env: 3.11.2 with pandas 2.3.3, numpy 2.3.5 * Virtual environment: /home/backtest/.venv/ - **Status:** βœ… RUNNING (started Nov 28, 2025 ~17:00 UTC, ~17h remaining as of Nov 29) - **Critical Fixes Applied:** 1. Added `source .venv/bin/activate` to run script (fixes ModuleNotFoundError) 2. Kept `--top 100` limit (tests all 65,536, saves top 100 to CSV) 3. Proper output naming: sweep_v9_exhaustive_epyc.csv **Backtesting Infrastructure:** - **Location:** `/home/icke/traderv4/backtester/` and `/home/backtest/` (EPYC) - **Modules:** * `backtester_core.py` - Core backtesting engine with ATR-based TP/SL * `v9_moneyline_ma_gap.py` - v9 indicator logic implementation * `moneyline_core.py` - Shared EMA/signal detection logic - **Data:** `data/solusdt_5m.csv` - Nov 2024 to Nov 2025 OHLCV (139,678 5-min bars) - **Sweep Script:** `scripts/run_backtest_sweep.py` - Multiprocessing parameter grid search * Progress bar shows hours/minutes (not seconds) for long-running sweeps * Supports --top N to limit output file size * Uses multiprocessing.Pool for parallel execution - **Python Environments:** * Local: Python 3.7.3 with .venv (pandas/numpy) * EPYC: Python 3.11.2 with .venv (pandas 2.3.3, numpy 2.3.5) - **Setup Scripts:** * `setup_epyc.sh` - Installs python3-venv, creates .venv, installs pandas/numpy * `run_sweep_epyc.sh` - Executes parameter sweep with proper venv activation **Expected Outcomes:** 1. **If parameter insensitivity persists:** v9 edge is in core EMA logic, not tuning - Action: Use baseline parameters in production - Conclusion: v9 works because of momentum filter logic, not specific values 2. **If clear winners emerge:** Optimize production parameters - Action: Update .pinescript with optimal values - Validation: Confirm via forward testing (50-100 trades) 3. 
**If quality thresholds need adjustment:** - SHORT threshold 80 may be too strict (could be missing profitable setups) - Analyze win rate distribution around thresholds **Post-Sweep Analysis Plan:** 1. Review top 100 results for parameter clustering 2. Check if top performers share common characteristics 3. Identify "stability zones" (parameters that consistently perform well) 4. Compare exhaustive results to baseline ($405.88) and narrow sweep ($496.41) 5. Make production parameter recommendations 6. Consider if SHORT quality threshold (80) needs lowering based on blocked signals analysis **Key Files:** - `workflows/trading/moneyline_v9_ma_gap.pinescript` - Production v9 indicator - `backtester/v9_moneyline_ma_gap.py` - Python implementation for backtesting - `scripts/run_backtest_sweep.py` - Parameter sweep orchestration - `run_sweep_epyc.sh` - EPYC execution script (24 workers, venv activation) - `ADAPTIVE_LEVERAGE_SYSTEM.md` - Adaptive leverage implementation docs - `INDICATOR_V9_MA_GAP_ROADMAP.md` - v9 development roadmap **Current Production State (Nov 28-29, 2025):** - **Indicator:** v9 Money Line with MA Gap + Momentum SHORT Filter - **Quality Thresholds:** LONG β‰₯90, SHORT β‰₯80 - **Adaptive Leverage:** ACTIVE (5x high quality, 1x borderline) - **Capital:** $540 USDC at 100% health - **Expected Profit Boost:** 5Γ— on high-quality signals with adaptive leverage - **Backtesting:** Exhaustive parameter sweep in progress (17h remaining) **Lessons Learned:** 1. **Parameter insensitivity indicates overfitting:** When many configs give identical results, the edge isn't in parameters 2. **Simpler is better:** v10 added complexity but no edge β†’ removed completely 3. **Quality-based leverage scales winners:** 5x on Q95+ signals amplifies edge without increasing borderline risk 4. **Exhaustive search validates findings:** 65,536 combos confirm if pattern is real or sampling artifact 5. **Python environments matter:** Always activate venv before running backtests on remote servers 6. 
## Cluster Status Detection: Database-First Architecture (Nov 30, 2025)

**Purpose:** Distributed parameter sweep cluster monitoring system with database-driven status detection

**Critical Problem Discovered (Nov 30, 2025):**
- **Symptom:** Web dashboard showed "IDLE" status with 0 active workers despite 22+ worker processes running on the EPYC cluster
- **Root Cause:** SSH-based status detection timing out due to network latency β†’ catch blocks returning "offline" β†’ false negative cluster status
- **Impact:** System appeared idle when actually processing 4,000 parameter combinations across 2 active chunks
- **Financial Risk:** In a production trading system, a false idle status could prevent monitoring of critical distributed processes

**Solution: Database-First Status Detection**

**Architectural Principle:** **Database is the source of truth for business logic, NOT infrastructure availability**

**Implementation (app/api/cluster/status/route.ts):**
```typescript
export async function GET(request: NextRequest) {
  try {
    // CRITICAL FIX (Nov 30, 2025): Check database FIRST before SSH detection
    // Database shows actual work state, SSH just provides supplementary metrics
    const explorationData = await getExplorationData()
    const hasRunningChunks = explorationData.chunks.running > 0
    console.log(`πŸ“Š Database status: ${explorationData.chunks.running} running chunks`)

    // Get SSH status for supplementary metrics (CPU, load, process count)
    const [worker1Status, worker2Status] = await Promise.all([
      getWorkerStatus('worker1', WORKERS.worker1.host, WORKERS.worker1.port),
      getWorkerStatus('worker2', WORKERS.worker2.host, WORKERS.worker2.port, { proxyJump: WORKERS.worker1.host })
    ])

    // DATABASE-FIRST: Override SSH "offline" status if database shows running chunks
    const workers = [worker1Status, worker2Status].map(w => {
      if (hasRunningChunks && w.status === 'offline') {
        console.log(`βœ… ${w.name}: Database shows running chunks - overriding SSH offline to active`)
        return { ...w, status: 'active' as const, activeProcesses: w.activeProcesses || 1 }
      }
      return w
    })

    // DATABASE-FIRST cluster status
    let clusterStatus: 'active' | 'idle' = 'idle'
    if (hasRunningChunks) {
      clusterStatus = 'active'
      console.log('βœ… Cluster status: ACTIVE (database shows running chunks)')
    } else if (workers.some(w => w.status === 'active')) {
      clusterStatus = 'active'
      console.log('βœ… Cluster status: ACTIVE (workers detected via SSH)')
    }

    return NextResponse.json({
      cluster: {
        status: clusterStatus,
        activeWorkers: workers.filter(w => w.status === 'active').length,
        totalStrategiesExplored: explorationData.strategies.explored,
        totalStrategiesToExplore: explorationData.strategies.total,
      },
      workers,
      chunks: {
        pending: explorationData.chunks.pending,
        running: explorationData.chunks.running,
        completed: explorationData.chunks.completed,
        total: explorationData.chunks.total,
      },
    })
  } catch (error) {
    console.error('❌ Error getting cluster status:', error)
    return NextResponse.json({ error: 'Failed to get cluster status' }, { status: 500 })
  }
}
```

**Why This Approach:**
1. **Database persistence:** SQLite exploration.db records chunk assignments with status='running'
2. **Business logic integrity:** Work state exists in the database regardless of SSH availability
3. **SSH supplementary only:** Process counts and CPU metrics are nice-to-have, not critical
4. **Network resilience:** SSH timeouts don't cause false negative status
5. **Single source of truth:** All cluster control operations write to the database first
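The same database-first rule can be expressed outside the Next.js route. Below is a minimal Python sketch (not part of the codebase; the function name and return values are illustrative) that derives cluster status purely from the `chunks` table in `cluster/exploration.db` and treats any SSH-derived signal as optional.

```python
import sqlite3

def cluster_status(db_path="cluster/exploration.db", ssh_reports_active=None) -> str:
    """Database-first status check: running chunks decide, SSH only supplements."""
    # If the database is unreachable, this raises instead of guessing (fail-closed).
    with sqlite3.connect(db_path) as conn:
        running = conn.execute(
            "SELECT COUNT(*) FROM chunks WHERE status = 'running'"
        ).fetchone()[0]

    if running > 0:
        return "active"      # fail-open: the database says work is in flight
    if ssh_reports_active:
        return "active"      # supplementary SSH signal when no chunks are running
    return "idle"

if __name__ == "__main__":
    print(cluster_status())
```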
**Verification Methodology (Nov 30, 2025):**

**Before Fix:**
```bash
curl -s http://localhost:3001/api/cluster/status | jq '.cluster'
{
  "status": "idle",
  "activeWorkers": 0,
  "totalStrategiesExplored": 0,
  "totalStrategiesToExplore": 4096
}
```

**After Fix:**
```bash
curl -s http://localhost:3001/api/cluster/status | jq '.cluster'
{
  "status": "active",
  "activeWorkers": 2,
  "totalStrategiesExplored": 0,
  "totalStrategiesToExplore": 4096
}
```

**Container Logs Showing Fix Working:**
```
πŸ“Š Database status: 2 running chunks
βœ… worker1: Database shows running chunks - overriding SSH offline to active
βœ… worker2: Database shows running chunks - overriding SSH offline to active
βœ… Cluster status: ACTIVE (database shows running chunks)
```

**Database State Verification:**
```bash
sqlite3 cluster/exploration.db "SELECT id, start_combo, end_combo, status, assigned_worker FROM chunks WHERE status='running';"
v9_chunk_000000|0|2000|running|worker1
v9_chunk_000001|2000|4000|running|worker2
```

**SSH Process Verification (Manual):**
```bash
ssh root@10.10.254.106 "ps aux | grep [p]ython | grep backtest | wc -l"
22  # 22 worker processes actively running

ssh root@10.10.254.106 "ssh root@10.20.254.100 'ps aux | grep [p]ython | grep backtest | wc -l'"
18  # 18 worker processes on worker2 via hop
```

**Cluster Control System:**

**Start Button (app/cluster/page.tsx):**
```tsx
{/* Illustrative sketch – handleControl() stands in for the page's actual handler,
    which calls /api/cluster/control with the desired action. */}
{status.cluster.status === 'idle' ? (
  <button onClick={() => handleControl('start')}>Start Cluster</button>
) : (
  <button onClick={() => handleControl('stop')}>Stop Cluster</button>
)}
```

**Control API (app/api/cluster/control/route.ts):**
- **start:** Runs distributed_coordinator.py β†’ creates chunks in the database β†’ starts workers via SSH
- **stop:** Kills the coordinator process β†’ workers auto-stop when their chunks complete β†’ database cleanup
- **status:** Returns coordinator process status (supplementary to the database status)

**Database Schema (exploration.db):**
```sql
CREATE TABLE chunks (
  id TEXT PRIMARY KEY,            -- v9_chunk_000000, v9_chunk_000001, etc.
  start_combo INTEGER NOT NULL,   -- Starting combination index (0, 2000, 4000, etc.)
  end_combo INTEGER NOT NULL,     -- Ending combination index (exclusive)
  total_combos INTEGER NOT NULL,  -- Total combinations in chunk (2000)
  status TEXT NOT NULL,           -- 'pending', 'running', 'completed', 'failed'
  assigned_worker TEXT,           -- 'worker1', 'worker2', NULL for pending
  started_at INTEGER,             -- Unix timestamp when work started
  completed_at INTEGER,           -- Unix timestamp when work completed
  created_at INTEGER DEFAULT (strftime('%s', 'now'))
);

CREATE TABLE strategies (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  chunk_id TEXT NOT NULL,
  params TEXT NOT NULL,           -- JSON of parameter values
  pnl REAL NOT NULL,
  win_rate REAL NOT NULL,
  profit_factor REAL NOT NULL,
  max_drawdown REAL NOT NULL,
  total_trades INTEGER NOT NULL,
  created_at INTEGER DEFAULT (strftime('%s', 'now')),
  FOREIGN KEY (chunk_id) REFERENCES chunks(id)
);
```
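To connect the Control API's "creates chunks in database" step to the schema above, here is a hedged Python sketch of how a coordinator could seed the `chunks` table. It is not `distributed_coordinator.py` itself; the function name and chunk-size constant are illustrative, but the chunk IDs, ranges, and `pending` status mirror the rows shown in the verification output.

```python
import sqlite3

def seed_chunks(db_path="cluster/exploration.db", total_combos=4096, chunk_size=2000) -> None:
    """Split the combination space into chunks and insert them as 'pending' rows."""
    with sqlite3.connect(db_path) as conn:
        for i, start in enumerate(range(0, total_combos, chunk_size)):
            end = min(start + chunk_size, total_combos)   # last chunk may be short (e.g. 4000-4096)
            conn.execute(
                """INSERT OR IGNORE INTO chunks (id, start_combo, end_combo, total_combos, status)
                   VALUES (?, ?, ?, ?, 'pending')""",
                (f"v9_chunk_{i:06d}", start, end, end - start),
            )
        conn.commit()

# seed_chunks() would yield v9_chunk_000000 (0-2000), v9_chunk_000001 (2000-4000),
# and v9_chunk_000002 (4000-4096), matching the 96 remaining combinations noted below.
```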
**Deployment Details:**
- **Container:** trading-bot-v4 on port 3001
- **Build Time:** Nov 30 21:12 UTC (TypeScript compilation 77.4s)
- **Restart Time:** Nov 30 21:18 UTC with `--force-recreate`
- **Volume Mount:** `./cluster:/app/cluster` (database persistence)
- **Git Commits:**
  * cc56b72 "fix: Database-first cluster status detection"
  * c5a8f5e "docs: Add comprehensive cluster status fix documentation"

**Telegram Notifications (Dec 2, 2025):**
- **Purpose:** Alert the user when the parameter sweep completes or stops prematurely
- **Implementation:** Added to `v9_advanced_coordinator.py` (197 lines)
- **Credentials:**
  * Bot Token: `8240234365:AAEm6hg_XOm54x8ctnwpNYreFKRAEvWU3uY`
  * Chat ID: `579304651`
  * Source: `/home/icke/traderv4/.env`
- **Notifications Sent:**
  1. **Startup:** When the coordinator starts (includes worker count, total combos, start time)
  2. **Completion:** When all chunks finish (includes duration stats, completion time)
  3. **Premature Stop:** When the coordinator receives SIGINT/SIGTERM (crash/manual kill)
- **Technical Details:**
  * Uses `urllib.request` for HTTP POST to the Telegram Bot API
  * Signal handlers registered for graceful shutdown detection
  * Messages formatted with HTML parse mode for bold/structure
  * 10-second timeout on HTTP requests
  * Errors logged but don't crash the coordinator
- **Code Locations:**
  * `send_telegram_message()` function: Lines ~25-45
  * `signal_handler()` function: Lines ~47-55
  * Startup notification: In `main()` after the banner
  * Completion notification: When `pending == 0 and running == 0`
- **Deployment:** Dec 2, 2025 08:08:24 (coordinator PID 1477050)
- **User Benefit:** "works through entire dataset without having to check all the time"
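A minimal sketch of the notification helper described above, based only on the details listed (urllib.request POST, HTML parse mode, 10-second timeout, errors logged rather than raised, signal handlers for premature stops). The real `send_telegram_message()` in `v9_advanced_coordinator.py` may differ in signature and message content; the token and chat ID are placeholders.

```python
import json
import logging
import signal
import urllib.request

BOT_TOKEN = "..."   # loaded from /home/icke/traderv4/.env in the real coordinator
CHAT_ID = "..."

def send_telegram_message(text: str) -> None:
    """POST a message to the Telegram Bot API; failures are logged, never raised."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    payload = json.dumps({"chat_id": CHAT_ID, "text": text, "parse_mode": "HTML"}).encode()
    req = urllib.request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=10)   # 10-second timeout, as documented above
    except Exception as exc:                      # errors must not crash the coordinator
        logging.warning("Telegram notification failed: %s", exc)

def signal_handler(signum, frame):
    """Notify on SIGINT/SIGTERM so a premature stop is visible immediately."""
    send_telegram_message(f"⚠️ <b>Coordinator stopped prematurely</b> (signal {signum})")
    raise SystemExit(1)

signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
```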
**Lessons Learned:**
1. **Infrastructure availability β‰  business logic state**
   - SSH timeouts are infrastructure failures
   - Running chunks in the database are business state
   - Never let infrastructure failures dictate false business states
2. **Database as source of truth**
   - All state-changing operations write to the database first
   - Status detection reads from the database first
   - External checks (SSH, API calls) are supplementary metrics only
3. **Fail-open vs fail-closed**
   - SSH timeout β†’ assume active if the database says so (fail-open)
   - Database unavailable β†’ hard error, don't guess (fail-closed)
   - Business logic requires an authoritative data source
4. **Verification before declaration**
   - curl test confirmed the API response changed
   - Log analysis confirmed the database-first logic executing
   - Manual SSH verification confirmed workers actually running
   - NEVER say "fixed" without testing the deployed container
5. **Conditional UI rendering**
   - The Stop button already existed in the codebase
   - It is shown conditionally based on cluster status
   - The status detection fix made the Stop button visible automatically
   - Search the codebase before claiming features are "missing"

**Documentation References:**
- **Full technical details:** `cluster/STATUS_DETECTION_FIX_COMPLETE.md`
- **Database queries:** `cluster/lib/db.ts` - getExplorationData()
- **Worker management:** `cluster/distributed_coordinator.py` - chunk creation and assignment
- **Status API:** `app/api/cluster/status/route.ts` - database-first implementation

**Current Operational State (Nov 30, 2025):**
- **Cluster:** ACTIVE with 2 workers processing 4,000 combinations
- **Database:** 2 chunks with status='running' (0-2000 on worker1, 2000-4000 on worker2)
- **Remaining:** 96 combinations (4000-4096) will be assigned after the current chunks complete
- **Dashboard:** Shows accurate "active" status with 2 active workers
- **SSH Status:** May show "offline" due to latency, but the database override ensures accurate cluster status

## Integration Points

- **n8n:** Expects the exact response format from `/api/trading/execute` (see n8n-complete-workflow.json)
- **Drift Protocol:** Uses SDK v2.75.0 - check docs at docs.drift.trade for API changes
- **Pyth Network:** WebSocket + HTTP fallback for price feeds (handles reconnection)
- **PostgreSQL:** Version 16-alpine, must be running before the bot starts
- **EPYC Cluster:** Database-first status detection via SQLite exploration.db (SSH supplementary)

---

**Key Mental Model:** Think of this as two parallel systems (on-chain orders + software monitoring) working together. The Position Manager is the "backup brain" that constantly watches and acts if on-chain orders fail. Both write to the same database for a complete trade history.

**Cluster Mental Model:** The database is the authoritative source of cluster state; SSH detection provides supplementary metrics. If the database shows running chunks, the cluster is active regardless of SSH availability. Infrastructure failures don't change business logic state.