CRITICAL FIX (Nov 30, 2025):
- Dashboard showed 'idle' despite 22+ worker processes running
- Root cause: SSH-based worker detection timing out
- Solution: Check database for running chunks FIRST
Changes:
1. app/api/cluster/status/route.ts:
- Query exploration database before SSH detection
- If running chunks exist, mark workers 'active' even if SSH fails
- Override worker status: 'offline' → 'active' when chunks running
- Log: '✅ Cluster status: ACTIVE (database shows running chunks)'
- Database is the source of truth; SSH is used only for supplementary metrics (see the first sketch after this list)
2. app/cluster/page.tsx:
- Stop button ALREADY EXISTS (conditionally shown)
- Shows Start when status='idle', Stop when status='active'
- No code changes needed; the fix comes from the status detection above (second sketch after this list)
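A minimal sketch of the database-first logic in the status route, assuming hypothetical helper names (getRunningChunks, probeWorkersOverSsh) and a simplified Worker shape; this is not the actual route.ts, only an illustration of the ordering described in item 1:

import { NextResponse } from "next/server";

type Worker = { host: string; status: "active" | "offline" };

// Hypothetical stand-ins for the project's real database query and SSH probe.
async function getRunningChunks(): Promise<number> {
  return 0; // placeholder: count of chunks marked 'running' in the exploration database
}
async function probeWorkersOverSsh(): Promise<Worker[]> {
  return []; // placeholder: may be slow or throw when SSH times out
}

export async function GET() {
  // Database first: running chunks are the source of truth for cluster status.
  const runningChunks = await getRunningChunks();

  // SSH is only supplementary; a timeout must never flip the cluster back to 'idle'.
  let workers: Worker[] = [];
  try {
    workers = await probeWorkersOverSsh();
  } catch {
    // Ignore SSH failures and fall through to the database-derived status.
  }

  if (runningChunks > 0) {
    // Override: anything SSH reported as 'offline' is treated as 'active'.
    workers = workers.map((w) => ({ ...w, status: "active" as const }));
    console.log("✅ Cluster status: ACTIVE (database shows running chunks)");
    return NextResponse.json({ status: "active", activeWorkers: workers.length, workers });
  }

  return NextResponse.json({ status: "idle", activeWorkers: 0, workers });
}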
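And a similarly simplified sketch of the conditional Start/Stop control already present in the cluster page (component and handler names are assumptions, not the real page.tsx markup):

function ClusterControls(props: {
  status: "active" | "idle";
  onStart: () => void;
  onStop: () => void;
}) {
  // Stop appears automatically once the status API reports 'active';
  // Start is shown while the cluster is 'idle'.
  return props.status === "active"
    ? <button onClick={props.onStop}>Stop</button>
    : <button onClick={props.onStart}>Start</button>;
}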
Result:
- Dashboard now shows 'ACTIVE' with 2 workers (correct)
- Workers show 'active' status (was 'offline')
- Stop button automatically visible when cluster active
- System resilient to SSH timeouts/network issues
Verified:
- Container restarted: Nov 30 21:18 UTC
- API tested: Returns status='active', activeWorkers=2
- Logs confirm: Database-first logic working
- Workers confirmed running: 22+ processes on worker1, additional worker processes on worker2
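A quick way to repeat the API check, assuming the route is served at /api/cluster/status (derived from the route.ts path) and the default dev port; the field names match the verified response above:

async function checkClusterStatus() {
  const res = await fetch("http://localhost:3000/api/cluster/status");
  const { status, activeWorkers } = await res.json();
  console.log(status, activeWorkers); // expected: 'active' 2 while chunks are running
}
checkClusterStatus();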
Test script for coordinator startup (Python, 20 lines, 642 B):
#!/usr/bin/env python3
"""Test coordinator startup with detailed logging"""

import sys

print("STEP 1: Importing coordinator...", flush=True)
from distributed_coordinator import DistributedCoordinator
print("✅ Import successful", flush=True)

print("STEP 2: Creating coordinator instance...", flush=True)
coord = DistributedCoordinator()
print("✅ Coordinator created", flush=True)

print("STEP 3: Starting comprehensive exploration...", flush=True)
print(" (This is where it might hang)", flush=True)
sys.stdout.flush()

coord.start_comprehensive_exploration(chunk_size=10000)

print("✅ EXPLORATION STARTED SUCCESSFULLY", flush=True)