Commit Graph

9 Commits

Author SHA1 Message Date
mindesbunister
93dd950821 critical: Fix ghost detection P&L compounding - delete from Map BEFORE check
Bug: Multiple monitoring loops detect ghost simultaneously
- Loop 1: has(tradeId) → true → proceeds
- Loop 2: has(tradeId) → true → ALSO proceeds (race condition)
- Both send Telegram notifications with compounding P&L

Real incident (Dec 2, 2025):
- Manual SHORT at $138.84
- 23 duplicate notifications
- P&L compounded: -$47.96 → -$1,129.24 (23× accumulation)
- Database shows single trade with final compounded value

Fix: Map.delete() returns true if key existed, false if already removed
- Call delete() FIRST
- Check return value
- Only the loop that gets true → proceeds
- All other loops get false → skip immediately
- Atomic operation prevents race condition

Pattern: This is a variant of Common Pitfalls #48, #49, #59, #60, #61
- All had "check then delete" pattern
- All vulnerable to async timing issues
- Solution: "delete then check" pattern
- Map.delete() is synchronous and atomic
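A minimal sketch of the delete-then-check guard, assuming a hypothetical Map and handler names (activeGhostChecks, claimGhostTrade); these are not the identifiers in position-manager.ts:

```typescript
// Hypothetical Map of in-flight ghost checks keyed by trade ID.
const activeGhostChecks = new Map<string, { symbol: string }>();

function claimGhostTrade(tradeId: string): boolean {
  // Map.delete() runs synchronously before any await can yield control,
  // so exactly one loop gets `true`; every later loop gets `false`.
  return activeGhostChecks.delete(tradeId);
}

async function onGhostDetected(tradeId: string): Promise<void> {
  if (!claimGhostTrade(tradeId)) {
    return; // another monitoring loop already claimed this trade: skip
  }
  // Only the claiming loop reaches here, so the Telegram notification
  // and P&L calculation run exactly once instead of 23 times.
  console.log(`handling ghost close once for ${tradeId}`);
}
```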

Files changed:
- lib/trading/position-manager.ts lines 390-410

Related: the 'DUPLICATE PREVENTED' message was working, but fired too late to stop the duplicates
2025-12-02 18:25:56 +01:00
mindesbunister
79ab30782c fix: MarketData storage now working in execute endpoint
- Added debug logging to trace execution
- Confirmed 1-minute signals being stored continuously
- Database accumulating rows every 1-3 minutes
- All indicators (ATR, ADX, RSI, volume, price position) storing correctly
- 1-year retention active (365 days)
- Foundation ready for 8-hour blocked signal tracking
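For reference, a hypothetical shape of one stored 1-minute row covering the indicators listed above; field names are illustrative, not the actual schema:

```typescript
// Illustrative row shape only; the real MarketData table may name fields differently.
interface MarketDataRow {
  symbol: string;        // e.g. 'SOL-PERP'
  timestamp: Date;       // 1-minute interval close
  price: number;
  atr: number;           // Average True Range
  adx: number;           // Average Directional Index
  rsi: number;           // Relative Strength Index
  volumeRatio: number;   // volume relative to its recent average
  pricePosition: number; // price position within the recent range (0-100)
}
```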
2025-12-02 12:43:35 +01:00
mindesbunister
5773d7d36d feat: Extend 1-minute data retention from 4 weeks to 1 year
- Updated lib/maintenance/data-cleanup.ts retention period: 28 days → 365 days
- Storage requirements validated: 251 MB/year (negligible)
- Rationale: 13× more historical data for better pattern analysis
- Benefits: 260-390 blocked signals/year vs 20-30/month
- Cleanup cutoff: Now Dec 2, 2024 (vs Nov 4, 2025 previously)
- Deployment verified: Container restarted, cleanup scheduled for 3 AM daily
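The cutoff arithmetic, sketched with an assumed constant name (RETENTION_DAYS); the actual data-cleanup.ts code may differ:

```typescript
const RETENTION_DAYS = 365; // previously 28

function retentionCutoff(now: Date = new Date()): Date {
  return new Date(now.getTime() - RETENTION_DAYS * 24 * 60 * 60 * 1000);
}

// With now = Dec 2, 2025 the cutoff lands on Dec 2, 2024;
// under the old 28-day window it would be Nov 4, 2025.
console.log(retentionCutoff(new Date('2025-12-02T03:00:00Z')).toISOString());
```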
2025-12-02 11:55:36 +01:00
mindesbunister
6cec2e8e71 critical: Fix Smart Entry Validation Queue wrong price display
- Bug: Validation queue used TradingView symbol format (SOLUSDT) to lookup market data cache
- Cache uses normalized Drift format (SOL-PERP)
- Result: Cache lookup failed, wrong/stale price shown in Telegram abandonment notifications
- Real incident: Signal at $126.00 showed $98.18 abandonment price (-22.08% impossible drop)
- Fix: Added normalizeTradingViewSymbol() call in check-risk endpoint before passing to validation queue
- Files changed: app/api/trading/check-risk/route.ts (import + symbol normalization)
- Impact: Validation queue now correctly retrieves current price from market data cache
- Deployed: Dec 1, 2025
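An illustrative version of the normalization step; the real normalizeTradingViewSymbol() may handle more quote suffixes:

```typescript
// Maps TradingView spot-style symbols to Drift perp format, e.g. 'SOLUSDT' -> 'SOL-PERP'.
function normalizeTradingViewSymbol(tvSymbol: string): string {
  const base = tvSymbol.toUpperCase().replace(/(USDT|USDC|USD)$/, '');
  return `${base}-PERP`;
}

console.log(normalizeTradingViewSymbol('SOLUSDT')); // 'SOL-PERP' matches the cache key
```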
2025-12-01 23:45:21 +01:00
mindesbunister
e748cf709d fix: Correct SSH hop for EPYC worker2 connectivity
- ProxyJump (-J) doesn't work from the Docker container
- Changed to nested SSH: hop -> target
- Proper command escaping for nested SSH
- Worker2 (srv-bd-host01) only accessible via worker1 (pve-nu-monitor01)
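A rough sketch of the ssh-inside-ssh construction, with simplified escaping; the host aliases come from the commit, everything else is illustrative:

```typescript
const worker1 = 'pve-nu-monitor01'; // reachable hop
const worker2 = 'srv-bd-host01';    // only reachable via worker1

// Instead of `ssh -J worker1 worker2 <cmd>` (ProxyJump fails from the container),
// run a second ssh on worker1 that targets worker2.
function nestedSshCommand(innerCmd: string): string {
  const escaped = innerCmd.replace(/"/g, '\\"'); // simplified; real escaping is stricter
  return `ssh ${worker1} 'ssh ${worker2} "${escaped}"'`;
}

console.log(nestedSshCommand('uptime'));
// ssh pve-nu-monitor01 'ssh srv-bd-host01 "uptime"'
```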
2025-12-01 19:42:08 +01:00
mindesbunister
2993bc8895 feat: Update v9 with optimal parameters from exhaustive sweep + consolidate files
Parameter updates (from 4,096 config sweep analysis):
- flipThreshold: 0.6 → 0.5 (optimal for reversal confirmation)
- adxMin: 18 → 21 (stronger trend filter)
- longPosMax: 85 → 75 (prevent chasing tops)
- shortPosMin: 15 → 20 (catch momentum shorts)
- volMin: 0.7 → 1.0 (stronger conviction requirement)
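For quick reference, the updated defaults as a config object; the authoritative values are the Pine Script inputs, this representation is illustrative only:

```typescript
const V9_OPTIMAL_PARAMS = {
  flipThreshold: 0.5, // was 0.6 (reversal confirmation)
  adxMin: 21,         // was 18 (stronger trend filter)
  longPosMax: 75,     // was 85 (prevent chasing tops)
  shortPosMin: 20,    // was 15 (catch momentum shorts)
  volMin: 1.0,        // was 0.7 (stronger conviction requirement)
} as const;
```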

File consolidation:
- Archived moneyline_v9_ma_gap_clean.pinescript (suboptimal defaults)
- Archived moneyline_v9_test.pinescript (suboptimal defaults, missing MA gap)
- Kept moneyline_v9_ma_gap.pinescript as canonical v9 (optimal + MA gap analysis)

Result: Single v9 file with optimal defaults producing 19.44% returns
over 4 months (194.4% annualized) from sweep validation.
2025-12-01 16:04:42 +01:00
mindesbunister
1f83a7d7c4 feat: Add coordinator log viewer to cluster UI
- Created /api/cluster/logs endpoint to read coordinator.log
- Added real-time log display in cluster UI (updates every 3s)
- Shows last 100 lines of coordinator.log in terminal-style display
- Includes manual refresh button
- Improves debugging experience - no need to SSH for logs
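A hypothetical sketch of the /api/cluster/logs handler; the log path and response shape are assumptions, not the actual route code:

```typescript
import { promises as fs } from 'fs';

// GET /api/cluster/logs: return the last 100 lines of coordinator.log.
export async function GET(): Promise<Response> {
  try {
    const raw = await fs.readFile('/app/logs/coordinator.log', 'utf-8'); // assumed path
    const tail = raw.trimEnd().split('\n').slice(-100);
    return Response.json({ lines: tail });
  } catch {
    return Response.json({ lines: [], error: 'coordinator.log not readable' }, { status: 500 });
  }
}
```

The cluster page would poll this endpoint on a 3-second interval and render the returned lines in the terminal-style panel.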

User feedback: 'why dont we add the output of the log at the bottom of the page so i know whats going on'

This addresses poor visibility into coordinator errors and failures.
Next step: Fix SSH timeout issue blocking worker execution.
2025-12-01 11:49:23 +01:00
mindesbunister
67ef5b1ac6 feat: Add direction-specific quality thresholds and dynamic collateral display
- Split QUALITY_LEVERAGE_THRESHOLD into separate LONG and SHORT variants
- Added /api/drift/account-health endpoint for real-time collateral data
- Updated settings UI to show separate controls for LONG/SHORT thresholds
- Position size calculations now use dynamic collateral from Drift account
- Updated .env and docker-compose.yml with new environment variables
- LONG threshold: 95, SHORT threshold: 90 (configurable independently)
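A sketch of the direction-specific lookup with the fallbacks described above; the env var names come from the commit, the helper itself is illustrative:

```typescript
type Direction = 'LONG' | 'SHORT';

// Reads QUALITY_LEVERAGE_THRESHOLD_LONG / _SHORT, falling back to 95 / 90.
function qualityLeverageThreshold(direction: Direction): number {
  const raw = direction === 'LONG'
    ? process.env.QUALITY_LEVERAGE_THRESHOLD_LONG
    : process.env.QUALITY_LEVERAGE_THRESHOLD_SHORT;
  const fallback = direction === 'LONG' ? 95 : 90;
  const parsed = Number(raw);
  return raw && Number.isFinite(parsed) ? parsed : fallback;
}
```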

Files changed:
- app/api/drift/account-health/route.ts (NEW) - Account health API endpoint
- app/settings/page.tsx - Added collateral state, separate threshold inputs
- app/api/settings/route.ts - GET/POST handlers for LONG/SHORT thresholds
- .env - Added QUALITY_LEVERAGE_THRESHOLD_LONG/SHORT variables
- docker-compose.yml - Added new env vars with fallback defaults

Impact:
- Users can now configure quality thresholds independently for LONG vs SHORT signals
- Position size display dynamically updates based on actual Drift account collateral
- More flexible risk management with direction-specific leverage tiers
2025-12-01 09:09:30 +01:00
mindesbunister
cc56b72df2 fix: Database-first cluster status detection + Stop button clarification
CRITICAL FIX (Nov 30, 2025):
- Dashboard showed 'idle' despite 22+ worker processes running
- Root cause: SSH-based worker detection timing out
- Solution: Check database for running chunks FIRST

Changes:
1. app/api/cluster/status/route.ts:
   - Query exploration database before SSH detection
   - If running chunks exist, mark workers 'active' even if SSH fails
   - Override worker status: 'offline' → 'active' when chunks running
   - Log: 'Cluster status: ACTIVE (database shows running chunks)'
   - Database is source of truth, SSH only for supplementary metrics (see the sketch below)

2. app/cluster/page.tsx:
   - Stop button ALREADY EXISTS (conditionally shown)
   - Shows Start when status='idle', Stop when status='active'
   - No code changes needed - fixed by status detection
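A minimal sketch of the database-first logic described in item 1, with hypothetical query and probe callbacks; the real route.ts code will differ:

```typescript
type WorkerStatus = 'active' | 'offline';

async function clusterStatus(
  countRunningChunks: () => Promise<number>,      // exploration DB query (source of truth)
  sshProbeWorkers: () => Promise<WorkerStatus[]>, // supplementary, may time out
): Promise<{ status: 'active' | 'idle'; workers: WorkerStatus[] }> {
  const runningChunks = await countRunningChunks();

  let workers: WorkerStatus[] = [];
  try {
    workers = await sshProbeWorkers();
  } catch {
    // SSH failure must not make a busy cluster look idle.
  }

  if (runningChunks > 0) {
    // Override any 'offline' verdicts from SSH: chunks are running, so workers are active.
    return { status: 'active', workers: workers.map(() => 'active' as const) };
  }
  return { status: 'idle', workers };
}
```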

Result:
- Dashboard now shows 'ACTIVE' with 2 workers (correct)
- Workers show 'active' status (was 'offline')
- Stop button automatically visible when cluster active
- System resilient to SSH timeouts/network issues

Verified:
- Container restarted: Nov 30 21:18 UTC
- API tested: Returns status='active', activeWorkers=2
- Logs confirm: Database-first logic working
- Workers confirmed running: 22+ processes on worker1, workers on worker2
2025-11-30 22:23:01 +01:00