Analytics System Status & Next Steps

Date: November 8, 2025

📊 Current Status

What's Already Working

1. Re-Entry Analytics System (Phase 1) - IMPLEMENTED

  • Market data cache service (lib/trading/market-data-cache.ts)
  • /api/trading/market-data webhook endpoint (GET/POST)
  • /api/analytics/reentry-check validation endpoint
  • Telegram bot integration with analytics pre-check
  • Auto-caching of metrics from TradingView signals
  • --force flag override capability
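
A minimal sketch of how the pre-check endpoint listed above could be exercised directly. The route path and port come from this document; the request body fields and the response shape are assumptions for illustration, not the verified API contract:

// Illustrative only -- payload and response fields are assumed, not the verified contract
async function checkReentry(symbol: string, direction: 'long' | 'short') {
  const res = await fetch('http://localhost:3001/api/analytics/reentry-check', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ symbol, direction }),
  });
  return res.json(); // e.g. { allowed: boolean, reason?: string } -- assumed shape
}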

2. Data Collection - IN PROGRESS

  • 122 total completed trades
  • 59 trades with signal quality scores (48%)
  • 67 trades with MAE/MFE data (55%)
  • Good data split: 32 shorts (avg score 73.9), 27 longs (avg score 70.4)

3. Code Infrastructure - READY

  • Signal quality scoring system with timeframe awareness
  • MAE/MFE tracking in Position Manager
  • Database schema with all necessary fields
  • Analytics endpoints ready for expansion
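
For context on the MAE/MFE bullet above: the tracking boils down to recording the best and worst unrealized excursion (in %) seen while a position is open. A simplified sketch of the idea; the names below are illustrative, not the actual Position Manager code:

// Illustrative only: update running MFE/MAE on each price update
interface ExcursionState {
  maxFavorableExcursion: number; // best move in our favor, in %
  maxAdverseExcursion: number;   // worst move against us, in % (negative)
}

function updateExcursions(
  state: ExcursionState,
  entryPrice: number,
  currentPrice: number,
  direction: 'long' | 'short'
): ExcursionState {
  const sign = direction === 'long' ? 1 : -1;
  const movePct = (sign * (currentPrice - entryPrice) / entryPrice) * 100;
  return {
    maxFavorableExcursion: Math.max(state.maxFavorableExcursion, movePct),
    maxAdverseExcursion: Math.min(state.maxAdverseExcursion, movePct),
  };
}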

⚠️ What's NOT Yet Configured

1. TradingView Market Data Alerts - MISSING

  • No alerts are firing every 1-5 minutes to update the cache
  • This is why the market data cache is empty: {"availableSymbols":[],"count":0,"cache":{}}
  • CRITICAL: Without this, manual Telegram trades use stale/historical data

2. Optimal SL/TP Analytics - NOT IMPLEMENTED

  • Have 59 trades with quality scores (need 70-100 trades for Phase 2)
  • Have MAE/MFE data showing:
    • Shorts: Avg MFE +3.63%, MAE -4.52%
    • Longs: Avg MFE +4.01%, MAE -2.59%
  • Need SQL analysis to determine optimal exit levels
  • Need to implement ATR-based dynamic targets
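
For orientation on the last bullet: ATR-based targets scale the exit distance with current volatility instead of using fixed percentages. A minimal sketch, assuming ATR is expressed as a percent of price (consistent with the ATR buckets in the analysis query further below); the multipliers are placeholders, not recommendations:

// Illustrative ATR-scaled exits; atrPct = ATR as a percent of price
function atrTargets(entryPrice: number, atrPct: number, direction: 'long' | 'short') {
  const sign = direction === 'long' ? 1 : -1;
  return {
    tp1: entryPrice * (1 + sign * 1.0 * atrPct / 100), // 1.0 x ATR -- placeholder multiplier
    tp2: entryPrice * (1 + sign * 2.0 * atrPct / 100), // 2.0 x ATR -- placeholder multiplier
    sl:  entryPrice * (1 - sign * 1.5 * atrPct / 100), // 1.5 x ATR -- placeholder multiplier
  };
}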

3. Entry Quality Analytics - PARTIALLY IMPLEMENTED ⚙️

  • Signal quality scoring: Working
  • Re-entry validation: Working (but no fresh data)
  • Performance-based modifiers: Working
  • Missing: Fresh TradingView data due to missing alerts

🎯 Immediate Action Plan

Priority 1: Set Up TradingView Market Data Alerts (30 mins)

This will enable fresh data for manual Telegram trades!

For Each Symbol (SOL, ETH, BTC):

Step 1: Open TradingView chart

  • Symbol: SOLUSDT (or ETHUSDT, BTCUSDT)
  • Timeframe: 5-minute chart

Step 2: Create Alert

  • Click Alert icon (🔔)
  • Condition: ta.change(time("1")) (fires every bar close)
  • Alert Name: Market Data - SOL 5min

Step 3: Webhook Configuration

  • URL: https://YOUR-DOMAIN.COM/api/trading/market-data
    • Example: https://flow.egonetix.de/api/trading/market-data (if bot is on same domain)
    • Or: http://YOUR-SERVER-IP:3001/api/trading/market-data (if direct access)

Step 4: Alert Message (JSON)

{
  "action": "market_data",
  "symbol": "{{ticker}}",
  "timeframe": "{{interval}}",
  "atr": {{ta.atr(14)}},
  "adx": {{ta.dmi(14, 14)}},
  "rsi": {{ta.rsi(14)}},
  "volumeRatio": {{volume / ta.sma(volume, 20)}},
  "pricePosition": {{(close - ta.lowest(low, 100)) / (ta.highest(high, 100) - ta.lowest(low, 100)) * 100}},
  "currentPrice": {{close}}
}

Step 5: Settings

  • Frequency: Once Per Bar Close (fires every 5 minutes)
  • Expires: Never
  • Send Webhook: Enabled

Step 6: Verify

# Wait 5 minutes, then check cache
curl http://localhost:3001/api/trading/market-data

# Should see:
# {"success":true,"availableSymbols":["SOL-PERP"],"count":1,"cache":{...}}

Step 7: Test Telegram

You: "long sol"

# Should now show:
# ✅ Data: tradingview_real (23s old)  ← Fresh data!

Priority 2: Run SQL Analysis for Optimal SL/TP (1 hour)

Goal: Determine data-driven optimal exit levels

Analysis Queries to Run:

1. MFE/MAE Distribution Analysis

-- See where trades actually move (not where we exit)
SELECT 
  direction,
  ROUND(AVG("maxFavorableExcursion")::numeric, 2) as avg_best_profit,
  ROUND(PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY "maxFavorableExcursion")::numeric, 2) as q25_mfe,
  ROUND(PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY "maxFavorableExcursion")::numeric, 2) as median_mfe,
  ROUND(PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY "maxFavorableExcursion")::numeric, 2) as q75_mfe,
  ROUND(AVG("maxAdverseExcursion")::numeric, 2) as avg_worst_loss,
  ROUND(PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY "maxAdverseExcursion")::numeric, 2) as q25_mae
FROM "Trade"
WHERE "exitReason" IS NOT NULL AND "maxFavorableExcursion" IS NOT NULL
GROUP BY direction;

2. Quality Score vs Exit Performance

-- Do high quality signals really move further?
SELECT 
  CASE 
    WHEN "signalQualityScore" >= 80 THEN 'High (80-100)'
    WHEN "signalQualityScore" >= 70 THEN 'Medium (70-79)'
    ELSE 'Low (60-69)'
  END as quality_tier,
  COUNT(*) as trades,
  ROUND(AVG("realizedPnL")::numeric, 2) as avg_pnl,
  ROUND(AVG("maxFavorableExcursion")::numeric, 2) as avg_mfe,
  ROUND(100.0 * SUM(CASE WHEN "realizedPnL" > 0 THEN 1 ELSE 0 END) / COUNT(*)::numeric, 1) as win_rate,
  -- How many went beyond current TP2 (+0.7%)?
  ROUND(100.0 * SUM(CASE WHEN "maxFavorableExcursion" > 0.7 THEN 1 ELSE 0 END) / COUNT(*)::numeric, 1) as pct_exceeded_tp2
FROM "Trade"
WHERE "signalQualityScore" IS NOT NULL AND "exitReason" IS NOT NULL
GROUP BY quality_tier
ORDER BY quality_tier;

3. Runner Potential Analysis

-- How often do trades move 2%+ (runner territory)?
SELECT 
  direction,
  "exitReason",
  COUNT(*) as count,
  ROUND(AVG("maxFavorableExcursion")::numeric, 2) as avg_mfe,
  SUM(CASE WHEN "maxFavorableExcursion" > 2.0 THEN 1 ELSE 0 END) as moved_beyond_2pct,
  SUM(CASE WHEN "maxFavorableExcursion" > 3.0 THEN 1 ELSE 0 END) as moved_beyond_3pct,
  SUM(CASE WHEN "maxFavorableExcursion" > 5.0 THEN 1 ELSE 0 END) as moved_beyond_5pct
FROM "Trade"
WHERE "exitReason" IS NOT NULL AND "maxFavorableExcursion" IS NOT NULL
GROUP BY direction, "exitReason"
ORDER BY direction, count DESC;

4. ATR Correlation

-- Does higher ATR = bigger moves?
SELECT 
  CASE 
    WHEN atr < 0.3 THEN 'Low (<0.3%)'
    WHEN atr < 0.6 THEN 'Medium (0.3-0.6%)'
    ELSE 'High (>0.6%)'
  END as atr_bucket,
  COUNT(*) as trades,
  ROUND(AVG("maxFavorableExcursion")::numeric, 2) as avg_mfe,
  ROUND(AVG("maxAdverseExcursion")::numeric, 2) as avg_mae,
  ROUND(AVG(atr)::numeric, 3) as avg_atr
FROM "Trade"
WHERE atr IS NOT NULL AND "exitReason" IS NOT NULL
GROUP BY atr_bucket
ORDER BY avg_atr;

Expected Insights:

After running these queries, you'll know:

  • Where to set TP1/TP2: Based on median MFE (not averages, which are skewed by outliers)
  • Runner viability: What % of trades actually move 3%+ (current runner territory)
  • Quality-based strategy: Should high-score signals use different exits?
  • ATR effectiveness: Does ATR predict movement range?
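
As a concrete illustration of how the first insight could translate into numbers: take the per-direction median MFE from Query 1 and place TP1/TP2 at fractions of it. The fractions below are arbitrary placeholders, not recommendations:

// Illustrative only: derive candidate TP levels (in %) from the median MFE returned by Query 1
function suggestTargets(medianMfePct: number) {
  return {
    tp1: 0.4 * medianMfePct, // bank the bulk of the position well inside the typical move
    tp2: 0.8 * medianMfePct, // aim the remainder near the typical full move
  };
}
// Example: a median MFE of 3.0% would give TP1 = +1.2%, TP2 = +2.4%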

Priority 3: Implement Optimal Exit Strategy (2-3 hours)

ONLY AFTER Priority 2 analysis shows clear improvements!

Based on preliminary data (shorts: +3.63% MFE, longs: +4.01% MFE):

Option A: Conservative (Take What Market Gives)

// If median MFE is around 2-3%, don't chase runners
TP1: +0.4%   Close 75%  (current)
TP2: +0.7%   Close 25%  (no runner)
SL: -1.5%   (current)

Option B: Runner-Friendly (If >50% trades exceed +2%)

TP1: +0.4%   Close 75%
TP2: +1.0%   Activate trailing stop on 25%
Runner: 25% with ATR-based trailing (current)
SL: -1.5%

Option C: Quality-Based Tiers (If score correlation is strong)

High Quality (80-100):
  TP1: +0.5%  Close 50%
  TP2: +1.5%  Close 25%
  Runner: 25% with 1.0% trailing

Medium Quality (70-79):
  TP1: +0.4%  Close 75%
  TP2: +0.8%  Close 25%
  
Low Quality (60-69):
  TP1: +0.3%  Close 100% (quick exit)

Implementation Files to Modify:

  1. config/trading.ts - Add tier configs if using Option C
  2. lib/drift/orders.ts - Update placeExitOrders() with new logic
  3. lib/trading/position-manager.ts - Update monitoring logic
  4. app/api/trading/execute/route.ts - Pass quality score to order placement
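
If Option C wins, the tiers could live in config/trading.ts as plain data that placeExitOrders() looks up by score. A hedged sketch of one possible shape; the type, field names, and lookup helper are invented for illustration, and only the tier values mirror Option C above:

// Hypothetical tier config for config/trading.ts -- values mirror Option C above
interface ExitTier {
  minScore: number;
  tp1Pct: number; tp1ClosePct: number;
  tp2Pct?: number; tp2ClosePct?: number;
  runnerTrailPct?: number;
}

export const EXIT_TIERS: ExitTier[] = [
  { minScore: 80, tp1Pct: 0.5, tp1ClosePct: 50, tp2Pct: 1.5, tp2ClosePct: 25, runnerTrailPct: 1.0 },
  { minScore: 70, tp1Pct: 0.4, tp1ClosePct: 75, tp2Pct: 0.8, tp2ClosePct: 25 },
  { minScore: 60, tp1Pct: 0.3, tp1ClosePct: 100 }, // quick exit for low-quality signals
];

export const tierForScore = (score: number): ExitTier =>
  EXIT_TIERS.find(t => score >= t.minScore) ?? EXIT_TIERS[EXIT_TIERS.length - 1];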

🔍 Current System Gaps

1. TradingView → n8n Integration

Status: Mostly working (59 trades have quality scores, which means n8n is calling the execute endpoint)

Check: Do you have these n8n workflows?

  • Money_Machine.json - Main trading workflow
  • parse_signal_enhanced.json - Signal parser with metrics extraction

Verify n8n is extracting metrics:

  • Open n8n workflow
  • Check "Parse Signal Enhanced" node
  • Should extract: atr, adx, rsi, volumeRatio, pricePosition, timeframe
  • These get passed to /api/trading/execute → auto-cached
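
For reference, those metrics simply ride along in the execute request. A hypothetical example of such a body; the values, and any field names beyond the metrics listed above, are made up for illustration:

// Hypothetical body for POST /api/trading/execute as assembled by n8n (values are illustrative)
const executePayload = {
  symbol: 'SOL-PERP',
  direction: 'long',   // assumed field name
  timeframe: '5',
  atr: 0.45,
  adx: 27.1,
  rsi: 58.3,
  volumeRatio: 1.4,
  pricePosition: 62.5,
};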

2. Market Data Webhook Flow

Status: ⚠️ Endpoint exists, but no alerts are feeding it

TradingView Alert (every 5min)
   ↓ POST /api/trading/market-data
Market Data Cache
   ↓ Used by
Manual Telegram Trades ("long sol")

Currently missing: The TradingView alerts (Priority 1 above)
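
For reference, the receiving side of this flow is small: the POST handler parses the TradingView payload and stores it keyed by symbol with a timestamp, so the Telegram path can check freshness. A rough sketch of that idea, not the actual code in app/api/trading/market-data or lib/trading/market-data-cache.ts:

// Sketch only: cache incoming TradingView metrics keyed by symbol, with a timestamp for expiry checks
type MarketMetrics = {
  symbol: string; timeframe: string; atr: number; adx: number;
  rsi: number; volumeRatio: number; pricePosition: number; currentPrice: number;
};

const cache = new Map<string, { metrics: MarketMetrics; updatedAt: number }>();

export async function POST(req: Request) {
  const metrics = (await req.json()) as MarketMetrics;
  cache.set(metrics.symbol, { metrics, updatedAt: Date.now() });
  return Response.json({ success: true });
}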


📈 Success Metrics

Phase 1 Completion Checklist:

  • Market data alerts active for SOL, ETH, BTC
  • Market data cache shows fresh data (<5min old)
  • Manual Telegram trades show "tradingview_real" data source
  • 70+ trades with signal quality scores collected
  • SQL analysis completed with clear exit level recommendations

Phase 2 Readiness:

  • Clear correlation between quality score and MFE proven
  • ATR correlation with move size demonstrated
  • Runner viability confirmed (>40% of trades move 2%+)
  • New exit strategy implemented and tested
  • 10 test trades with the new strategy show improvement

🚦 What to Do RIGHT NOW

1. Set Up TradingView Market Data Alerts (30 mins)

  • Follow Priority 1 steps above
  • Create 3 alerts: SOL, ETH, BTC on 5min charts
  • Verify cache populates after 5 minutes

2. Test Telegram with Fresh Data (5 mins)

You: "long sol"

# Should see:
✅ Data: tradingview_real (X seconds old)
Score: XX/100

3. Run SQL Analysis (1 hour)

  • Execute all 4 queries from Priority 2
  • Save results to a file
  • Look for patterns: MFE distribution, quality correlation, runner potential

4. Make Go/No-Go Decision

  • IF analysis shows clear improvements → Implement new strategy (Priority 3)
  • IF data is unclear → Collect 20 more trades, re-analyze
  • IF current strategy is optimal → Document findings, skip changes

5. Optional: n8n Workflow Check

  • Verify Money_Machine.json includes metric extraction
  • Confirm /api/trading/check-risk is being called
  • Test manually with TradingView alert

📚 Reference Files

Setup Guides:

  • docs/guides/REENTRY_ANALYTICS_QUICKSTART.md - Complete market data setup
  • docs/guides/N8N_WORKFLOW_GUIDE.md - n8n workflow configuration
  • POSITION_SCALING_ROADMAP.md - Full Phase 1-6 roadmap

Analysis Queries:

  • docs/analysis/SIGNAL_QUALITY_VERSION_ANALYSIS.sql - Quality score deep-dive

API Endpoints:

  • GET /api/trading/market-data - View cache status
  • POST /api/trading/market-data - Update cache (from TradingView)
  • POST /api/analytics/reentry-check - Validate manual trades

Key Files:

  • lib/trading/market-data-cache.ts - Cache service (5min expiry)
  • app/api/analytics/reentry-check/route.ts - Re-entry validation
  • telegram_command_bot.py - Manual trade execution

Questions to Answer

For Priority 1 (TradingView Setup):

  • What's your TradingView webhook URL? (bot domain + port 3001)
  • Do you want 1min or 5min bar closes? (recommend 5min to conserve your alert quota)
  • Are webhooks enabled on your TradingView plan?

For Priority 2 (Analysis):

  • What's your target win rate vs R:R trade-off preference?
  • Do you prefer quick exits or letting runners develop?
  • What's acceptable MAE before you want emergency exit?

For Priority 3 (Implementation):

  • Should we implement quality-based tiers or one universal strategy?
  • Keep current TP2-as-runner (25%) or go back to partial close?
  • Test with DRY_RUN first or go live immediately?

Bottom Line: You're 80% done! Just need TradingView alerts configured (Priority 1) and then run the SQL analysis (Priority 2) to determine optimal exits. The infrastructure is solid and ready.