Commit Graph

18 Commits

mindesbunister
79ab30782c fix: MarketData storage now working in execute endpoint
- Added debug logging to trace execution
- Confirmed 1-minute signals being stored continuously
- Database accumulating rows every 1-3 minutes
- All indicators (ATR, ADX, RSI, volume, price position) storing correctly
- 1-year retention active (365 days)
- Foundation ready for 8-hour blocked signal tracking
2025-12-02 12:43:35 +01:00
mindesbunister
5773d7d36d feat: Extend 1-minute data retention from 4 weeks to 1 year
- Updated lib/maintenance/data-cleanup.ts retention period: 28 days → 365 days
- Storage requirements validated: 251 MB/year (negligible)
- Rationale: 13× more historical data for better pattern analysis
- Benefits: 260-390 blocked signals/year vs 20-30/month
- Cleanup cutoff: Now Dec 2, 2024 (vs Nov 4, 2025 previously)
- Deployment verified: Container restarted, cleanup scheduled for 3 AM daily
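As a rough illustration of the retention change (the real cleanup lives in lib/maintenance/data-cleanup.ts; this Python sketch assumes an SQLite store and a market_data table with an ISO-format timestamp column, which are not confirmed by the commit):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # was 28 before this change

def cleanup_old_market_data(db_path: str) -> int:
    """Delete 1-minute MarketData rows older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            # table/column names are assumptions for illustration
            "DELETE FROM market_data WHERE timestamp < ?",
            (cutoff.isoformat(),),
        )
        return cur.rowcount  # number of rows removed by this run
```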
2025-12-02 11:55:36 +01:00
mindesbunister
6cec2e8e71 critical: Fix Smart Entry Validation Queue wrong price display
- Bug: Validation queue used TradingView symbol format (SOLUSDT) to lookup market data cache
- Cache uses normalized Drift format (SOL-PERP)
- Result: Cache lookup failed, wrong/stale price shown in Telegram abandonment notifications
- Real incident: Signal at $126.00 showed $98.18 abandonment price (-22.08% impossible drop)
- Fix: Added normalizeTradingViewSymbol() call in check-risk endpoint before passing to validation queue
- Files changed: app/api/trading/check-risk/route.ts (import + symbol normalization)
- Impact: Validation queue now correctly retrieves current price from market data cache
- Deployed: Dec 1, 2025
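The core of the fix is normalizing the symbol before it is used as a cache key. A hypothetical Python equivalent of what normalizeTradingViewSymbol() does (the real helper is TypeScript in the app; the exact mapping rules here are assumed):

```python
def normalize_tradingview_symbol(symbol: str) -> str:
    """Map a TradingView ticker like 'SOLUSDT' to the Drift market key 'SOL-PERP'.

    Illustrative re-implementation only; the production helper is
    normalizeTradingViewSymbol() in the TypeScript codebase.
    """
    base = (
        symbol.upper()
        .removesuffix("USDT")
        .removesuffix("USD")
        .removesuffix("PERP")
        .rstrip("-")
    )
    return f"{base}-PERP"

# Before the fix the raw TradingView ticker was used as the cache key,
# so the market-data lookup missed and a stale price was reported.
assert normalize_tradingview_symbol("SOLUSDT") == "SOL-PERP"
```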
2025-12-01 23:45:21 +01:00
mindesbunister
4fb301328d docs: Document 70% CPU deployment and Python buffering fix
- CRITICAL FIX: Python output buffering caused silent failure
- Solution: python3 -u flag for unbuffered output
- 70% CPU optimization: int(cpu_count() * 0.7) = 22-24 cores per server
- Current state: 47 workers, load ~22 per server, 16.3 hour timeline
- System operational since Dec 1 22:50:32
- Expected completion: Dec 2 15:15
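A minimal sketch of the two mechanics described above, assuming a multiprocessing-based worker (function and file names are illustrative):

```python
import os
from multiprocessing import Pool

# 70% CPU cap: int(32 * 0.7) = 22 processes on a 32-thread EPYC.
WORKERS = max(1, int((os.cpu_count() or 1) * 0.7))

def run_backtest(config_id: int) -> int:
    # Placeholder for one backtest; returns the config id it processed.
    return config_id

if __name__ == "__main__":
    # Invoke with `python3 -u worker.py` (or set PYTHONUNBUFFERED=1) so print()
    # output reaches the log immediately instead of sitting in a buffer until
    # the process exits, which is the "silent failure" described above.
    with Pool(processes=WORKERS) as pool:
        for result in pool.imap_unordered(run_backtest, range(1000)):
            print(f"done {result}", flush=True)  # flush as a belt-and-braces fallback
```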
2025-12-01 23:27:17 +01:00
mindesbunister
e748cf709d fix: Correct SSH hop for EPYC worker2 connectivity
- ProxyJump (-J) doesn't work from Docker container
- Changed to nested SSH: hop -> target
- Proper command escaping for nested SSH
- Worker2 (srv-bd-host01) only accessible via worker1 (pve-nu-monitor01)
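A hedged sketch of the nested-SSH pattern (host addresses are taken from the cluster setup commit below; the shlex-based quoting wrapper is my addition, not the coordinator's actual code):

```python
import shlex
import subprocess

HOP = "root@10.10.254.106"      # worker1 (pve-nu-monitor01), reachable directly
TARGET = "root@10.20.254.100"   # worker2 (srv-bd-host01), only reachable via worker1

def run_on_worker2(command: str, timeout: int = 60) -> subprocess.CompletedProcess:
    """Run `command` on worker2 by nesting SSH through worker1.

    ProxyJump (-J) is not usable from inside the Docker container, so the
    inner ssh invocation is passed as a single quoted argument to the outer one.
    """
    inner = f"ssh {TARGET} {shlex.quote(command)}"
    return subprocess.run(["ssh", HOP, inner], capture_output=True, text=True, timeout=timeout)

# Example: run_on_worker2("uptime")
```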
2025-12-01 19:42:08 +01:00
mindesbunister
7e1fe1cc30 feat: V9 advanced parameter sweep with MA gap filter (810K configs)
Parameter space expansion:
- Original 15 params: 101K configurations
- NEW: MA gap filter (3 dimensions) = 18× expansion
- Total: ~810,000 configurations across 4 time profiles
- Chunk size: 1,000 configs/chunk = ~810 chunks

MA Gap Filter parameters:
- use_ma_gap: True/False (2 values)
- ma_gap_min_long: -5.0%, 0%, +5.0% (3 values)
- ma_gap_min_short: -5.0%, 0%, +5.0% (3 values)

Implementation:
- money_line_v9.py: Full v9 indicator with MA gap logic
- v9_advanced_worker.py: Chunk processor (1,000 configs)
- v9_advanced_coordinator.py: Work distributor (2 EPYC workers)
- run_v9_advanced_sweep.sh: Startup script (generates + launches)

Infrastructure:
- Uses existing EPYC cluster (64 cores total)
- Worker1: bd-epyc-02 (32 threads)
- Worker2: bd-host01 (32 threads via SSH hop)
- Expected runtime: 70-80 hours
- Database: SQLite (chunk tracking + results)

Goal: Find optimal MA gap thresholds for filtering false breakouts
during MA whipsaw zones while preserving trend entries.
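A minimal sketch of how the MA gap dimensions multiply the grid and how the result is cut into 1,000-config chunks (parameter names follow the commit; the base grid below is a one-entry stand-in for the real 15-parameter space):

```python
from itertools import product

# MA gap filter dimensions from the commit: 2 x 3 x 3 = 18x expansion.
MA_GAP_GRID = {
    "use_ma_gap": [True, False],
    "ma_gap_min_long": [-5.0, 0.0, 5.0],    # percent
    "ma_gap_min_short": [-5.0, 0.0, 5.0],   # percent
}

def expand(base_configs, gap_grid=MA_GAP_GRID):
    """Cross every base config with every MA-gap combination."""
    keys = list(gap_grid)
    for base in base_configs:
        for values in product(*(gap_grid[k] for k in keys)):
            yield {**base, **dict(zip(keys, values))}

def chunk(configs, size=1_000):
    """Yield lists of `size` configs for distribution to the workers."""
    batch = []
    for cfg in configs:
        batch.append(cfg)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

base = [{"flip_threshold": 0.5}]        # stand-in for one base configuration
print(sum(1 for _ in expand(base)))     # 18 variants per base config
# With chunks of 1,000 the full ~810,000-config sweep comes to ~810 chunks.
```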
2025-12-01 18:11:47 +01:00
mindesbunister
2993bc8895 feat: Update v9 with optimal parameters from exhaustive sweep + consolidate files
Parameter updates (from 4,096 config sweep analysis):
- flipThreshold: 0.6 → 0.5 (optimal for reversal confirmation)
- adxMin: 18 → 21 (stronger trend filter)
- longPosMax: 85 → 75 (prevent chasing tops)
- shortPosMin: 15 → 20 (catch momentum shorts)
- volMin: 0.7 → 1.0 (stronger conviction requirement)

File consolidation:
- Archived moneyline_v9_ma_gap_clean.pinescript (suboptimal defaults)
- Archived moneyline_v9_test.pinescript (suboptimal defaults, missing MA gap)
- Kept moneyline_v9_ma_gap.pinescript as canonical v9 (optimal + MA gap analysis)

Result: Single v9 file with optimal defaults producing 19.44% returns
over 4 months (194.4% annualized) from sweep validation.
2025-12-01 16:04:42 +01:00
mindesbunister
11a0ea324b critical: Fix distributed worker quality_filter - dict to lambda function
Root cause: Passing a dict {'min_adx': 15, 'min_volume_ratio': vol_min} where
simulate_money_line() expects a callable filter function.

Bug caused ALL 2,096 backtests to fail with 'dict' object is not callable.

Fix: Changed to lambda function matching comprehensive_sweep.py pattern:
  quality_filter = lambda s: s.adx >= 15 and s.volume_ratio >= vol_min

Verified fix working: Workers running at 100% CPU, no errors after 2+ minutes.
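A self-contained illustration of the bug and the fix (the Signal type and the simulate_money_line() body below are toy stand-ins; the real backtester only shares the requirement that quality_filter be callable):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    adx: float
    volume_ratio: float

def simulate_money_line(signals, quality_filter):
    """Toy stand-in: the real function calls quality_filter(signal) per bar."""
    return [s for s in signals if quality_filter(s)]

signals = [Signal(adx=18, volume_ratio=1.2), Signal(adx=12, volume_ratio=0.5)]
vol_min = 1.0

# Broken: a dict is not callable, so every backtest raised
# TypeError: 'dict' object is not callable.
broken_filter = {"min_adx": 15, "min_volume_ratio": vol_min}

# Fixed: a callable predicate, matching the comprehensive_sweep.py pattern.
quality_filter = lambda s: s.adx >= 15 and s.volume_ratio >= vol_min

print(simulate_money_line(signals, quality_filter))  # keeps only the first signal
```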
2025-12-01 14:59:08 +01:00
mindesbunister
a886555d44 docs: Complete SSH timeout + resumption logic fix documentation
**Comprehensive documentation including:**
- Root cause analysis for both bugs
- Manual test procedures that validated fixes
- Code changes with before/after comparisons
- Verification results (24 worker processes running)
- Lessons learned for future debugging
- Current cluster state and next steps

Files: cluster/SSH_TIMEOUT_FIX_COMPLETE.md (288 lines)
2025-12-01 12:58:03 +01:00
mindesbunister
323ef03f5f critical: Fix SSH timeout + resumption logic bugs
**SSH Command Fix:**
- CRITICAL: Removed && after background command (&)
- Pattern: 'cmd & echo Started' works, 'cmd && echo' waits forever
- Manually tested: Works perfectly on direct SSH
- Result: Chunk 0 now starts successfully on worker1 (24 processes running)

**Resumption Logic Fix:**
- CRITICAL: Only count completed/running chunks, not pending
- Query: Added 'AND status IN (completed, running)' filter
- Result: Starts from chunk 0 when no chunks complete (was skipping to chunk 3)

**Database Cleanup:**
- CRITICAL: Delete pending/failed chunks on coordinator start
- Prevents UNIQUE constraint errors on retry
- Result: Clean slate allows coordinator to assign chunks fresh

**Verification:**
- ✅ Chunk v9_chunk_000000: status='running', assigned_worker='worker1'
- ✅ Worker1: 24 Python processes running backtester
- ✅ Database: Cleaned 3 pending chunks, created 1 running chunk
- ⚠️ Worker2: SSH hop still timing out (separate infrastructure issue)

Files changed:
- cluster/distributed_coordinator.py (3 critical fixes: lines 388-401, 514-533, 507-514)
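Hedged Python sketches of the three fixes (the real code is in cluster/distributed_coordinator.py; the chunks table name, the nohup/redirection details, and the exact query shape are assumptions):

```python
import sqlite3
import subprocess

def start_chunk_remote(host: str, cmd: str) -> None:
    # Background-command fix: a single '&' detaches the worker, then echo returns
    # immediately; 'cmd && echo' would make the remote shell wait on the
    # long-running worker before echoing, so the SSH call never came back.
    remote = f"nohup {cmd} > worker.log 2>&1 & echo Started"
    subprocess.run(["ssh", host, remote], timeout=60, check=True)

def chunks_already_processed(conn: sqlite3.Connection) -> int:
    # Resumption fix: only completed/running chunks count toward progress;
    # counting pending rows made the coordinator skip ahead to chunk 3.
    (done,) = conn.execute(
        "SELECT COUNT(*) FROM chunks WHERE status IN ('completed', 'running')"
    ).fetchone()
    return done

def reset_stale_chunks(conn: sqlite3.Connection) -> None:
    # Cleanup fix: drop pending/failed rows on coordinator start so re-inserting
    # a chunk id does not hit the UNIQUE constraint on retry.
    conn.execute("DELETE FROM chunks WHERE status IN ('pending', 'failed')")
    conn.commit()
```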
2025-12-01 12:56:35 +01:00
mindesbunister
1f83a7d7c4 feat: Add coordinator log viewer to cluster UI
- Created /api/cluster/logs endpoint to read coordinator.log
- Added real-time log display in cluster UI (updates every 3s)
- Shows last 100 lines of coordinator.log in terminal-style display
- Includes manual refresh button
- Improves debugging experience - no need to SSH for logs

User feedback: 'why dont we add the output of the log at the bottom of the page so i know whats going on'

This addresses poor visibility into coordinator errors and failures.
Next step: Fix SSH timeout issue blocking worker execution.
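The endpoint itself is a Next.js route; its core is just "return the last 100 lines of coordinator.log", sketched here in Python for illustration (the log path is assumed from the commit):

```python
from collections import deque
from pathlib import Path

LOG_PATH = Path("cluster/coordinator.log")  # path assumed

def tail_log(lines: int = 100) -> str:
    """Return the last `lines` lines of the coordinator log (empty if missing)."""
    if not LOG_PATH.exists():
        return ""
    with LOG_PATH.open("r", errors="replace") as fh:
        return "".join(deque(fh, maxlen=lines))
```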
2025-12-01 11:49:23 +01:00
mindesbunister
ef371a19b9 fix: EPYC cluster SSH timeout - increase timeout 30s→60s + add SSH options
CRITICAL FIX (Dec 1, 2025): Cluster start was failing with 'operation failed'

Problem:
- SSH commands timing out after 30s (too short for 2-hop SSH to worker2)
- Missing SSH options caused prompts/delays
- Result: Coordinator failed to start worker processes

Solution:
- Increased timeout from 30s to 60s for nested SSH hops
- Added SSH options: -o StrictHostKeyChecking=no -o ConnectTimeout=10
- Applied options to both ssh_command() and worker startup commands

Verification (Dec 1, 09:40):
- Worker1: 23 processes running (chunk 0-2000)
- Worker2: 24 processes running (chunk 2000-4000)
- Cluster status: ACTIVE with 2 workers
- Both chunks processing successfully

Files changed:
- cluster/distributed_coordinator.py (lines 302-314, 388-414)
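A sketch of the hardened ssh_command() call described above (the flag set and 60s timeout come from the commit; the subprocess wrapper itself is an assumption):

```python
import subprocess

SSH_OPTS = ["-o", "StrictHostKeyChecking=no", "-o", "ConnectTimeout=10"]

def ssh_command(host: str, command: str, timeout: int = 60) -> str:
    """Run a remote command; 60s covers the slower 2-hop route to worker2."""
    result = subprocess.run(
        ["ssh", *SSH_OPTS, host, command],
        capture_output=True, text=True, timeout=timeout,
    )
    result.check_returncode()
    return result.stdout
```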
2025-12-01 09:41:42 +01:00
mindesbunister
67ef5b1ac6 feat: Add direction-specific quality thresholds and dynamic collateral display
- Split QUALITY_LEVERAGE_THRESHOLD into separate LONG and SHORT variants
- Added /api/drift/account-health endpoint for real-time collateral data
- Updated settings UI to show separate controls for LONG/SHORT thresholds
- Position size calculations now use dynamic collateral from Drift account
- Updated .env and docker-compose.yml with new environment variables
- LONG threshold: 95, SHORT threshold: 90 (configurable independently)

Files changed:
- app/api/drift/account-health/route.ts (NEW) - Account health API endpoint
- app/settings/page.tsx - Added collateral state, separate threshold inputs
- app/api/settings/route.ts - GET/POST handlers for LONG/SHORT thresholds
- .env - Added QUALITY_LEVERAGE_THRESHOLD_LONG/SHORT variables
- docker-compose.yml - Added new env vars with fallback defaults

Impact:
- Users can now configure quality thresholds independently for LONG vs SHORT signals
- Position size display dynamically updates based on actual Drift account collateral
- More flexible risk management with direction-specific leverage tiers
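A hedged sketch of reading direction-specific thresholds with fallback defaults (variable names from the commit; falling back to the legacy single threshold, and the default of 90, are assumptions rather than documented behaviour):

```python
import os

def quality_threshold(direction: str) -> float:
    """Return the quality/leverage threshold for 'LONG' or 'SHORT' signals."""
    legacy = os.environ.get("QUALITY_LEVERAGE_THRESHOLD", "90")  # assumed fallback
    key = f"QUALITY_LEVERAGE_THRESHOLD_{direction.upper()}"
    return float(os.environ.get(key, legacy))

# With the values from the commit: LONG -> 95, SHORT -> 90.
```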
2025-12-01 09:09:30 +01:00
mindesbunister
c5a8f5e32d docs: Add comprehensive status detection fix documentation 2025-11-30 22:27:08 +01:00
mindesbunister
cc56b72df2 fix: Database-first cluster status detection + Stop button clarification
CRITICAL FIX (Nov 30, 2025):
- Dashboard showed 'idle' despite 22+ worker processes running
- Root cause: SSH-based worker detection timing out
- Solution: Check database for running chunks FIRST

Changes:
1. app/api/cluster/status/route.ts:
   - Query exploration database before SSH detection
   - If running chunks exist, mark workers 'active' even if SSH fails
   - Override worker status: 'offline' → 'active' when chunks running
   - Log: 'Cluster status: ACTIVE (database shows running chunks)'
   - Database is source of truth, SSH only for supplementary metrics

2. app/cluster/page.tsx:
   - Stop button ALREADY EXISTS (conditionally shown)
   - Shows Start when status='idle', Stop when status='active'
   - No code changes needed - fixed by status detection

Result:
- Dashboard now shows 'ACTIVE' with 2 workers (correct)
- Workers show 'active' status (was 'offline')
- Stop button automatically visible when cluster active
- System resilient to SSH timeouts/network issues

Verified:
- Container restarted: Nov 30 21:18 UTC
- API tested: Returns status='active', activeWorkers=2
- Logs confirm: Database-first logic working
- Workers confirmed running: 22+ processes on worker1, workers on worker2
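The database-first check, sketched in Python against the SQLite exploration database (the production route is TypeScript; the chunks table name is assumed):

```python
import sqlite3

def cluster_is_active(db_path: str) -> bool:
    """Database is the source of truth: if any chunk is 'running', the cluster
    is active, regardless of whether the supplementary SSH probes time out."""
    with sqlite3.connect(db_path) as conn:
        (running,) = conn.execute(
            "SELECT COUNT(*) FROM chunks WHERE status = 'running'"
        ).fetchone()
    return running > 0

# Status route logic: if cluster_is_active(...), mark workers 'active' even
# when the SSH-based detection reports them offline.
```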
2025-11-30 22:23:01 +01:00
mindesbunister
83b4915d98 fix: Reduce coordinator chunk_size from 10k to 2k for small explorations
- Changed default chunk_size from 10,000 to 2,000
- Fixes bug where coordinator exited immediately for 4,096 combo exploration
- Coordinator was calculating: chunk 1 starts at 10,000 > 4,096 total = 'all done'
- Now creates 2-3 appropriately-sized chunks for distribution
- Verified: Workers now start and process assigned chunks
- Status: ✅ Docker rebuilt and deployed to port 3001
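For context, a sketch of straightforward chunk creation (not the coordinator's actual code): with the old 10,000 default the whole 4,096-combo space should have been one chunk, but the start-offset check saw chunk 1 starting at 10,000 > 4,096 and exited; 2,000 gives three well-sized chunks.

```python
import math

def make_chunks(total_combos: int, chunk_size: int = 2_000):
    """Split the combination space into (start, end) offsets."""
    count = math.ceil(total_combos / chunk_size)
    return [(i * chunk_size, min((i + 1) * chunk_size, total_combos)) for i in range(count)]

print(make_chunks(4_096, 10_000))  # old default: [(0, 4096)], a single oversized chunk
print(make_chunks(4_096, 2_000))   # new default: [(0, 2000), (2000, 4000), (4000, 4096)]
```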
2025-11-30 22:07:59 +01:00
mindesbunister
b77282b560 feat: Add EPYC cluster distributed sweep with web UI
New Features:
- Distributed coordinator orchestrates 2x AMD EPYC 16-core servers (32 threads each)
- 64 total threads processing 12M parameter combinations (70% CPU limit)
- Worker1 (pve-nu-monitor01): Direct SSH access at 10.10.254.106
- Worker2 (bd-host01): 2-hop SSH through worker1 (10.20.254.100)
- Web UI at /cluster shows real-time status and AI recommendations
- API endpoint /api/cluster/status serves cluster metrics
- Auto-refresh every 30s with top strategies and actionable insights

Files Added:
- cluster/distributed_coordinator.py (510 lines) - Main orchestrator
- cluster/distributed_worker.py (271 lines) - Worker1 script
- cluster/distributed_worker_bd_clean.py (275 lines) - Worker2 script
- cluster/monitor_bd_host01.sh - Monitoring script
- app/api/cluster/status/route.ts (274 lines) - API endpoint
- app/cluster/page.tsx (258 lines) - Web UI
- cluster/CLUSTER_SETUP.md - Complete setup and access documentation

Technical Details:
- SQLite database tracks chunk assignments
- 10,000 combinations per chunk (1,195 total chunks)
- Multiprocessing.Pool with 70% CPU limit (22 cores per EPYC)
- SSH/SCP for deployment and result collection
- Handles 2-hop SSH for bd-host01 access
- Results in CSV format with top strategies ranked

Access Documentation:
- Worker1: ssh root@10.10.254.106
- Worker2: ssh root@10.10.254.106 "ssh root@10.20.254.100"
- Web UI: http://localhost:3001/cluster
- See CLUSTER_SETUP.md for complete guide

Status: Deployed and operational
2025-11-30 13:02:18 +01:00
mindesbunister
2a8e04fe57 feat: Continuous optimization cluster for 2 EPYC servers
- Master controller with job queue and result aggregation
- Worker scripts for parallel backtesting (22 workers per server)
- SQLite database for strategy ranking and performance tracking
- File-based job queue (simple, robust, survives crashes)
- Auto-setup script for both EPYC servers
- Status dashboard for monitoring progress
- Comprehensive deployment guide

Architecture:
- Master: Job generation, worker coordination, result collection
- Worker 1 (pve-nu-monitor01): AMD EPYC 7282, 22 parallel jobs
- Worker 2 (srv-bd-host01): AMD EPYC 7302, 22 parallel jobs
- Total capacity: ~49,000 backtests/day (44 cores @ 70%)

Initial focus: v9 parameter refinement (27 configurations)
Target: Find strategies with P&L above the current baseline of 92/1k

Files:
- cluster/master.py: Main controller (570 lines)
- cluster/worker.py: Worker execution script (220 lines)
- cluster/setup_cluster.sh: Automated deployment
- cluster/status.py: Real-time status dashboard
- cluster/README.md: Operational documentation
- cluster/DEPLOYMENT.md: Step-by-step deployment guide
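A minimal sketch of the file-based job-queue idea (claim-by-rename keeps it crash-safe); the directory layout and file naming below are assumptions for illustration, not the actual master.py implementation:

```python
import json
import os
from pathlib import Path

QUEUE = Path("queue/pending")    # master drops one JSON file per job here (assumed layout)
CLAIMED = Path("queue/claimed")  # workers move claimed jobs here

def claim_next_job(worker_id: str):
    """Atomically claim one pending job by renaming its file.

    os.rename is atomic on the same filesystem, so two workers cannot claim
    the same job, and a crash leaves every job file intact on disk.
    """
    QUEUE.mkdir(parents=True, exist_ok=True)
    CLAIMED.mkdir(parents=True, exist_ok=True)
    for job_file in sorted(QUEUE.glob("*.json")):
        target = CLAIMED / f"{worker_id}__{job_file.name}"
        try:
            os.rename(job_file, target)
        except FileNotFoundError:
            continue  # another worker grabbed it first
        return json.loads(target.read_text())
    return None  # queue empty
```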
2025-11-29 22:34:52 +01:00