**Documentation Structure:**
- Created docs/ subdirectory organization (analysis/, architecture/, bugs/, cluster/, deployments/, roadmaps/, setup/, archived/)
- Moved 68 root markdown files to appropriate categories
- Root directory now clean (only README.md remains)
- Total: 83 markdown files now organized by purpose

**New Content:**
- Added comprehensive Environment Variable Reference to copilot-instructions.md
- 100+ ENV variables documented with types, defaults, purpose, notes
- Organized by category: Required (Drift/RPC/Pyth), Trading Config (quality/leverage/sizing), ATR System, Runner System, Risk Limits, Notifications, etc.
- Includes usage examples (correct vs wrong patterns)

**File Distribution:**
- docs/analysis/ - Performance analyses, blocked signals, profit projections
- docs/architecture/ - Adaptive leverage, ATR trailing, indicator tracking
- docs/bugs/ - CRITICAL_*.md, FIXES_*.md bug reports (7 files)
- docs/cluster/ - EPYC setup, distributed computing docs (3 files)
- docs/deployments/ - *_COMPLETE.md, DEPLOYMENT_*.md status (12 files)
- docs/roadmaps/ - All *ROADMAP*.md strategic planning files (7 files)
- docs/setup/ - TradingView guides, signal quality, n8n setup (8 files)
- docs/archived/2025_pre_nov/ - Obsolete verification checklist (1 file)

**Key Improvements:**
- ENV variable reference: Single source of truth for all configuration
- Common Pitfalls #68-71: Already complete, verified during audit
- Better findability: Category-based navigation vs 68 files in root
- Preserves history: All files git mv (rename), not copy/delete
- Zero broken functionality: Only documentation moved, no code changes

**Verification:**
- 83 markdown files now in docs/ subdirectories
- Root directory cleaned: 68 files → 0 files (except README.md)
- Git history preserved for all moved files
- Container running: trading-bot-v4 (no restart needed)

**Next Steps:**
- Create README.md files in each docs subdirectory
- Add navigation index
- Update main README.md with new structure
- Consolidate duplicate deployment docs
- Archive truly obsolete files (old SQL backups)

See: docs/analysis/CLEANUP_PLAN.md for complete reorganization strategy
# Running Comprehensive Sweep on EPYC Server

## Transfer Package to EPYC

```bash
# From your local machine
scp comprehensive_sweep_package.tar.gz root@72.62.39.24:/root/
```
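
Optionally, confirm the archive arrived intact before extracting it. This is just a standard `sha256sum` comparison run on both machines; matching hashes mean the transfer was clean:

```bash
# On the local machine, before or after transfer
sha256sum comprehensive_sweep_package.tar.gz

# On the EPYC server -- the hash should match the local one exactly
ssh root@72.62.39.24 'sha256sum /root/comprehensive_sweep_package.tar.gz'
```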

## Setup on EPYC

```bash
# SSH to EPYC
ssh root@72.62.39.24

# Extract package
cd /root
tar -xzf comprehensive_sweep_package.tar.gz
cd comprehensive_sweep

# Set up Python environment
python3 -m venv .venv
source .venv/bin/activate
pip install pandas numpy

# Create logs directory
mkdir -p backtester/logs

# Make scripts executable
chmod +x run_comprehensive_sweep.sh
chmod +x backtester/scripts/comprehensive_sweep.py
```
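
Before launching anything, a quick sanity check that the dependencies installed and the sweep script is where the wrapper expects it (with the venv from above still active; paths are the ones from the extracted package):

```bash
# Confirm the virtualenv has the required libraries
python3 -c "import pandas, numpy; print('pandas', pandas.__version__, '| numpy', numpy.__version__)"

# Confirm the sweep script and logs directory exist
ls -l backtester/scripts/comprehensive_sweep.py backtester/logs
```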

## Run the Sweep

```bash
# Start the sweep in background
./run_comprehensive_sweep.sh

# Or manually with more control:
cd /root/comprehensive_sweep
source .venv/bin/activate
nohup python3 backtester/scripts/comprehensive_sweep.py > sweep.log 2>&1 &

# Get the PID
echo $! > sweep.pid
```
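
The wrapper script itself isn't reproduced in this guide; below is a minimal sketch of what `run_comprehensive_sweep.sh` is assumed to do, inferred from the manual commands above. Treat it as an illustration, not the shipped script:

```bash
#!/usr/bin/env bash
# Assumed wrapper: activate the venv, launch the sweep detached, record the PID.
set -euo pipefail

cd /root/comprehensive_sweep
source .venv/bin/activate

mkdir -p backtester/logs
nohup python3 backtester/scripts/comprehensive_sweep.py > sweep.log 2>&1 &
echo $! > sweep.pid
echo "Sweep started with PID $(cat sweep.pid)"
```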

## Monitor Progress

```bash
# Watch live progress (updates every 100 configs)
tail -f backtester/logs/sweep_comprehensive_*.log

# Or if using the manual method:
tail -f sweep.log

# See current best result
grep 'Best so far' backtester/logs/sweep_comprehensive_*.log | tail -5

# Check if still running
ps aux | grep comprehensive_sweep

# Check CPU usage
htop
```
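
If you prefer a periodic summary over a live tail, the one-liner below re-runs the "best so far" check every minute (Ctrl+C to stop). It assumes only the log lines shown above; adjust the pattern if the format differs:

```bash
# Show the three most recent 'Best so far' lines, refreshed every 60 seconds
watch -n 60 "grep 'Best so far' backtester/logs/sweep_comprehensive_*.log | tail -3"
```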

## Stop if Needed

```bash
# Using PID file:
kill $(cat sweep.pid)

# Or by name:
pkill -f comprehensive_sweep
```
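
To confirm nothing is still running afterwards (for example, a worker process that outlived the parent), a quick check:

```bash
# Should report no processes once the sweep has fully stopped
pgrep -af comprehensive_sweep || echo "No sweep processes running"
```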

## EPYC Performance Estimate

- **Your EPYC:** 16 cores/32 threads
- **Local Server:** 6 cores
- **Speedup:** ~5-6× faster on EPYC

**Total combinations:** 14,929,920

**Estimated times:**
- Local (6 cores): ~30-40 hours
- EPYC (16 cores): ~6-8 hours 🚀
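
These runtimes follow directly from the combination count and the assumed ~5-6× speedup. The quick arithmetic below just makes the implied throughput explicit; the hours are the midpoints of the estimates above, not measurements:

```bash
# Implied throughput at the quoted runtimes (midpoints used for illustration)
awk 'BEGIN {
  total = 14929920                       # total parameter combinations
  printf "Local (6 cores, ~35 h): %.0f configs/hour\n", total / 35
  printf "EPYC (16 cores, ~7 h):  %.0f configs/hour\n", total / 7
  printf "Implied speedup:         %.1fx\n", 35 / 7
}'
```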

## Retrieve Results

```bash
# After completion, download results
scp root@72.62.39.24:/root/comprehensive_sweep/sweep_comprehensive.csv .

# Check top results on server first (header + top 20 rows):
head -21 /root/comprehensive_sweep/sweep_comprehensive.csv
```

## Results Format

CSV columns:
- rank
- trades
- win_rate
- total_pnl
- pnl_per_1k (most important - profitability per $1000)
- flip_threshold
- ma_gap
- adx_min
- long_pos_max
- short_pos_min
- cooldown
- position_size
- tp1_mult
- tp2_mult
- sl_mult
- tp1_close_pct
- trailing_mult
- vol_min
- max_bars
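
For a quick look at the downloaded file, standard shell tools are enough. The snippet below assumes the column order listed above (`pnl_per_1k` as the fifth field) and that the file is already ranked, so the sort is only a cross-check:

```bash
# Pretty-print the header plus the top 20 ranked rows
head -21 sweep_comprehensive.csv | column -s, -t

# Cross-check: re-sort by pnl_per_1k (field 5) and show the best 10 rows
tail -n +2 sweep_comprehensive.csv | sort -t, -k5,5 -nr | head -10
```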

## Quick Test

Before running the full sweep, test that everything works:

```bash
cd /root/comprehensive_sweep
source .venv/bin/activate

# Quick smoke test with a single configuration
python3 -c "
from pathlib import Path
from backtester.data_loader import load_csv
from backtester.simulator import simulate_money_line, TradeConfig
from backtester.indicators.money_line import MoneyLineInputs

data_slice = load_csv(Path('backtester/data/solusdt_5m_aug_nov.csv'), 'SOL-PERP', '5m')
print(f'Loaded {len(data_slice.data)} candles')

inputs = MoneyLineInputs(flip_threshold_percent=0.6)
config = TradeConfig(position_size=210.0)
results = simulate_money_line(data_slice.data, 'SOL-PERP', inputs, config)
print(f'Test: {len(results.trades)} trades, {results.win_rate*100:.1f}% WR, \${results.total_pnl:.2f} P&L')
print('✅ Everything working!')
"
```

If the test passes, run the full sweep!