**Problem 1: Rate Limit Cascade**
- Position Manager tried to close positions repeatedly, overwhelming the Helius RPC (10 req/s sustained limit)
- Retry backoff was too aggressive: 2s base delay (2s → 4s → 8s)
- No graceful handling when 429 (rate limit) errors occurred
**Problem 2: Orphaned Positions After Restart**
- Container restarts lost Position Manager state
- Positions marked 'closed' in the DB but still open on Drift (failed close transactions)
- No cross-validation between database and actual Drift positions
**Solutions Implemented:**
1. **Increased retry delays (orders.ts)**:
- Base delay: 2s → 5s (progression now 5s → 10s → 20s)
- Reduces RPC pressure when rate limits are being hit
- Gives Helius time to recover between retries
- Documented Helius limits: 100 req/s burst, 10 req/s sustained (free tier)
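
A minimal sketch of the retry shape described above; `sendWithRetry`, `BASE_DELAY_MS`, and the string-based 429 check are illustrative stand-ins, not the actual identifiers in orders.ts:

```typescript
const BASE_DELAY_MS = 5_000; // was 2_000; retry delays are now 5s -> 10s -> 20s
const MAX_RETRIES = 3;       // retries after the initial attempt

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sendWithRetry<T>(sendFn: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
    try {
      return await sendFn();
    } catch (err) {
      lastError = err;
      if (attempt === MAX_RETRIES) break; // retries exhausted
      const is429 = err instanceof Error && err.message.includes("429");
      // Exponential backoff: 5s, 10s, 20s. The longer base delay gives Helius
      // (10 req/s sustained on the free tier) time to recover between attempts.
      const delay = BASE_DELAY_MS * 2 ** attempt;
      console.warn(
        `send attempt ${attempt + 1} failed${is429 ? " (rate limited)" : ""}; retrying in ${delay / 1000}s`
      );
      await sleep(delay);
    }
  }
  throw lastError;
}
```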
2. **Startup position validation (init-position-manager.ts)**:
- Cross-checks last 24h of 'closed' trades against actual Drift positions
- If the DB says closed but Drift still shows the position open → reopens the trade in the DB to restore tracking
- Prevents positions from going unmonitored after container restarts
- Logs detailed mismatch info for debugging
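
A rough sketch of that cross-check, assuming hypothetical helpers (`getClosedTradesSince`, `getOpenDriftMarketIndexes`, `reopenTrade`) injected as dependencies; the real init-position-manager.ts code will differ in names and data access:

```typescript
interface TradeRecord {
  id: string;
  marketIndex: number;
}

interface PositionValidatorDeps {
  // Trades marked 'closed' in the DB within the lookback window.
  getClosedTradesSince(since: Date): Promise<TradeRecord[]>;
  // Market indexes that still have an open position on Drift.
  getOpenDriftMarketIndexes(): Promise<Set<number>>;
  // Flip a trade back to 'open' in the DB so it is monitored again.
  reopenTrade(id: string): Promise<void>;
}

async function validatePositionsOnStartup(deps: PositionValidatorDeps): Promise<void> {
  const since = new Date(Date.now() - 24 * 60 * 60 * 1000); // last 24h
  const closedTrades = await deps.getClosedTradesSince(since);
  const openOnDrift = await deps.getOpenDriftMarketIndexes();

  for (const trade of closedTrades) {
    if (openOnDrift.has(trade.marketIndex)) {
      // DB says closed but Drift still shows the position: the close
      // transaction most likely failed. Reopen in the DB so the Position
      // Manager resumes monitoring instead of leaving the position orphaned.
      console.warn(
        `Position mismatch: trade ${trade.id} (market ${trade.marketIndex}) closed in DB but open on Drift; reopening in DB`
      );
      await deps.reopenTrade(trade.id);
    }
  }
}
```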
3. **Rate limit-aware exit handling (position-manager.ts)**:
- Detects 429 errors during position close
- Keeps trade in monitoring instead of removing it
- Retries naturally on the next price update (instead of an aggressive 2s loop)
- Prevents marking a position as closed when the close transaction actually failed
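
A sketch of the 429-aware close path; `handleCloseSignal`, the monitoring map, and the two callbacks are stand-ins for whatever position-manager.ts actually uses:

```typescript
interface MonitoredTrade {
  marketIndex: number;
}

async function handleCloseSignal(
  tradeId: string,
  monitored: Map<string, MonitoredTrade>,
  closeOnDrift: (marketIndex: number) => Promise<string>,        // returns tx signature
  markClosedInDb: (tradeId: string, txSig: string) => Promise<void>
): Promise<void> {
  const trade = monitored.get(tradeId);
  if (!trade) return;

  try {
    const txSig = await closeOnDrift(trade.marketIndex);
    await markClosedInDb(tradeId, txSig); // mark closed only after a successful close
    monitored.delete(tradeId);            // safe to stop monitoring now
  } catch (err) {
    if (err instanceof Error && err.message.includes("429")) {
      // Rate limited: keep the trade in the monitoring map. The next price
      // update triggers another close attempt, which spaces retries out far
      // more than a tight 2s loop would.
      console.warn(`Close for trade ${tradeId} hit a 429; keeping it in monitoring for the next tick`);
      return;
    }
    throw err; // non-rate-limit failures still propagate
  }
}
```

Only a confirmed close removes the trade from the map, so a failed transaction can never leave an unmonitored position behind.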
**Impact:**
- Eliminates orphaned positions after restarts
- Reduces retry pressure on the RPC by 2.5x (5s vs 2s base delay)
- Graceful degradation under rate limits
- Position Manager continues monitoring even during temporary RPC issues
**Testing needed:**
- Monitor next container restart to verify position restoration works
- Check rate limit analytics after next close attempt
- Verify no more phantom 'closed' positions while Drift still shows them open