AI-Powered Trading Bot Dashboard
This is a Next.js 15 App Router application with TypeScript, Tailwind CSS, and API routes. It's a production-ready trading bot with AI analysis, automated screenshot capture, and real-time trading execution via Drift Protocol and Jupiter DEX.
Prerequisites:
- Docker and Docker Compose v2 (uses the `docker compose` command syntax)
- All development must be done inside Docker containers for browser automation compatibility
Core Architecture
Dual-Session Screenshot Automation
- AI Layout (`Z1TzpUrf`): RSI (top), EMAs, MACD (bottom)
- DIY Layout (`vWVvjLhP`): Stochastic RSI (top), VWAP, OBV (bottom)
- Parallel browser sessions for multi-layout capture in `lib/enhanced-screenshot.ts`
- TradingView automation with session persistence in `lib/tradingview-automation.ts`
- Session data stored in the `.tradingview-session/` volume mount to avoid captchas
AI-Driven Dynamic Leverage System ✅
Complete AI leverage calculator with intelligent position sizing:
- `lib/ai-leverage-calculator.ts`: Core AI leverage calculation engine with risk management
- Account-based strategies: <$1k uses 100% balance (aggressive), >$1k uses 50% balance (conservative)
- Safety mechanisms: 10% buffer between liquidation price and stop loss
- Platform integration: Drift Protocol with maximum 20x leverage constraints
- Integration: Enhanced `lib/automation-service-simple.ts` uses AI-calculated leverage for all positions
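The account-based sizing rule above can be sketched as follows. This is a minimal illustration of the documented thresholds, not the actual `lib/ai-leverage-calculator.ts` API; the `calculatePositionSize` name and return shape are assumptions.

```typescript
// Illustrative sketch of the account-based leverage strategy described above.
// Thresholds come from the docs; the function name and shape are hypothetical.

interface SizingResult {
  balanceUsed: number;  // USD allocated to the position
  maxLeverage: number;  // capped at Drift's 20x platform limit
  stopBuffer: number;   // fraction kept between liquidation price and stop loss
}

function calculatePositionSize(accountBalance: number, aiLeverage: number): SizingResult {
  // <$1k accounts trade aggressively with 100% of balance; larger accounts use 50%
  const allocation = accountBalance < 1000 ? 1.0 : 0.5;
  return {
    balanceUsed: accountBalance * allocation,
    maxLeverage: Math.min(aiLeverage, 20), // Drift Protocol maximum leverage constraint
    stopBuffer: 0.10,                      // 10% safety buffer between liquidation and SL
  };
}
```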
AI-Driven DCA (Dollar Cost Averaging) System ✅
Revolutionary position scaling that maximizes profits while managing risk:
- `lib/ai-dca-manager.ts`: AI-powered DCA analysis engine with reversal detection
- Multi-factor Analysis: Price movements, 24h trends, RSI levels, support/resistance
- Smart Scaling: Adds to positions when AI detects reversal potential (50%+ confidence threshold)
- Risk Management: Respects leverage limits, adjusts stop loss/take profit for new average price
- Account Integration: Uses available balance strategically (up to 50% for DCA operations)
- Real Example: SOL position at $185.98 entry, $183.87 current → AI recommends 1.08 SOL DCA for 5.2:1 R/R improvement
DCA Decision Factors:
- Price movement against position (1-10% optimal range)
- 24h market sentiment alignment with DCA direction
- Technical indicators (RSI oversold/overbought zones)
- Proximity to support/resistance levels
- Available balance and current leverage headroom
- Liquidation distance and safety buffers
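The decision factors above could be combined into a confidence score along these lines. The weights and the `scoreDcaOpportunity` name are illustrative assumptions, not the real `lib/ai-dca-manager.ts` logic; only the individual thresholds (1-10% range, RSI 35/65, 50% trigger) come from the docs.

```typescript
// Hypothetical scoring of the DCA decision factors listed above.
// Weights are illustrative; thresholds match the documented rules.

interface DcaInputs {
  adverseMovePct: number;  // % move against the position (positive number)
  rsi: number;             // current RSI reading
  isLong: boolean;
  trendAligned: boolean;   // 24h market sentiment agrees with DCA direction
  nearKeyLevel: boolean;   // close to support (longs) / resistance (shorts)
}

function scoreDcaOpportunity(i: DcaInputs): number {
  let confidence = 0;
  // 1-10% against the position is the optimal DCA window
  if (i.adverseMovePct >= 1 && i.adverseMovePct <= 10) confidence += 0.3;
  // RSI oversold (<35) for longs, overbought (>65) for shorts
  if ((i.isLong && i.rsi < 35) || (!i.isLong && i.rsi > 65)) confidence += 0.3;
  if (i.trendAligned) confidence += 0.2;
  if (i.nearKeyLevel) confidence += 0.2;
  return confidence; // DCA triggers at the 50%+ confidence threshold
}
```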
Integration Points:
- `lib/automation-service-simple.ts`: Automated DCA monitoring in the main trading cycle
- `prisma/schema.prisma`: DCARecord model for tracking all scaling operations
- Database tracking of DCA count, total amount, and performance metrics
Trading Integration
- Drift Protocol: Perpetual futures trading via `@drift-labs/sdk`
- Jupiter DEX: Spot trading on Solana
- Position management and P&L tracking in `lib/drift-trading-final.ts`
- Real-time account balance and collateral monitoring
Critical Development Patterns
Automation System Development Wisdom
Key lessons from building and debugging the automation system:
AI Risk Management vs Manual Controls
- NEVER mix manual TP/SL inputs with AI automation - causes conflicts and unpredictable behavior
- When implementing AI-driven automation, remove all manual percentage inputs from the UI
- AI should calculate dynamic stop losses and take profits based on market conditions, not user-defined percentages
- Always validate that UI selections (timeframes, strategies) are properly passed to backend services
Balance and P&L Calculation Critical Rules
- ALWAYS use Drift SDK's built-in calculation methods instead of manual calculations
- Use `driftClient.getUser().getTotalCollateral()` for accurate collateral values
- Use `driftClient.getUser().getUnrealizedPNL()` for accurate P&L calculations
- NEVER use hardcoded prices (like $195 for SOL) - always get current market data
- NEVER use empirical precision factors - use official SDK precision handling
- Test balance calculations against actual Drift interface values for validation
- Unrealized P&L should match position-level P&L calculations
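One simple way to act on the last two rules is a tolerance check comparing bot-side values against the Drift interface. This is a sketch under assumptions; `pnlWithinTolerance` and the 1% default are illustrative, not existing project code.

```typescript
// Illustrative validation helper: bot-calculated P&L should match the
// platform's value within a small relative tolerance before being trusted.
function pnlWithinTolerance(botPnl: number, platformPnl: number, relTol = 0.01): boolean {
  // Guard against division by zero when the platform P&L is ~0
  const scale = Math.max(Math.abs(platformPnl), 1e-9);
  return Math.abs(botPnl - platformPnl) / scale <= relTol;
}
```

In practice this would run in a test script against live values from `/api/balance`, flagging any drift between SDK-derived numbers and the interface.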
Timeframe Handling Best Practices
- Always use minute values first in timeframe mapping to avoid TradingView confusion
- Example: `'4h': ['240', '240m', '4h', '4H']` (240 minutes FIRST, then alternatives)
- Validate that UI timeframe selections reach the automation service correctly
- Log timeframe values at every step to catch hardcoded overrides
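The minute-first mapping can be expressed as a lookup that fails loudly on unknown values, which also helps catch hardcoded overrides. The `timeframeCandidates` helper name is an assumption for illustration; the candidate arrays match the documented mapping.

```typescript
// Minute-first timeframe mapping, as recommended above. Unknown values
// throw immediately instead of silently falling back to a default.
const TIMEFRAME_CANDIDATES: Record<string, string[]> = {
  '15m': ['15', '15m'],
  '1h': ['60', '60m', '1h', '1H'],   // 60 minutes FIRST
  '4h': ['240', '240m', '4h', '4H'], // 240 minutes FIRST
};

function timeframeCandidates(tf: string): string[] {
  const candidates = TIMEFRAME_CANDIDATES[tf];
  if (!candidates) {
    // Surfacing this early catches hardcoded overrides in the call chain
    throw new Error(`Unknown timeframe from UI: ${tf}`);
  }
  return candidates;
}
```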
System Integration Debugging
- Always validate data flow from UI → API → Service → Trading execution
- Check for hardcoded values that override user selections (especially timeframes)
- Verify correct protocol usage (Drift vs Jupiter) in trading execution
- Test cleanup systems regularly - memory leaks kill automation reliability
- Implement comprehensive logging for multi-step processes
Analysis Timer Implementation
- Store `nextScheduled` timestamps in the database for persistence across restarts
- Calculate countdown dynamically based on current time vs scheduled time
- Update timer fields in automation status responses for real-time UI updates
- Format countdown as "XhYm" or "Xm Ys" for better user experience
Docker Container Development (Required)
All development happens inside Docker containers using Docker Compose v2. Browser automation requires specific system dependencies that are only available in the containerized environment:
IMPORTANT: Use Docker Compose v2 syntax - All commands use docker compose (with space) instead of docker-compose (with hyphen).
# Development environment - Docker Compose v2 dev setup
npm run docker:dev # Port 9001:3000, hot reload, debug mode
# Direct v2 command: docker compose -f docker-compose.dev.yml up --build
# Production environment
npm run docker:up # Port 9000:3000, optimized build
# Direct v2 command: docker compose -f docker-compose.prod.yml up --build
# Debugging commands
npm run docker:logs # View container logs
# Direct v2 command: docker compose -f docker-compose.dev.yml logs -f
npm run docker:exec # Shell access for debugging inside container
# Direct v2 command: docker compose -f docker-compose.dev.yml exec app bash
Port Configuration:
- Development: External port `9001` → internal port `3000` (http://localhost:9001)
- Production: External port `9000` → internal port `3000` (http://localhost:9000)
Docker Volume Mount Troubleshooting & Direct Container Development
Common Issue: File edits not reflecting in container due to volume mount sync issues.
Container-First Development Workflow: For immediate results and faster iteration, edit files directly inside the running container first, then rebuild for persistence:
# 1. Access running container for immediate edits
docker compose -f docker-compose.dev.yml exec app bash
# 2. Edit files directly in container (immediate effect)
# Use nano, vi, or echo for quick changes
nano /app/lib/enhanced-screenshot.ts
echo "console.log('Debug: immediate test');" >> /app/debug.js
# 3. Test changes immediately (no rebuild needed)
# Changes take effect instantly for hot reload
# 4. Once everything works, copy changes back to host
docker cp container_name:/app/modified-file.js ./modified-file.js
# 5. Commit successful changes to git BEFORE rebuilding
git add .
git commit -m "feat: implement working solution for [specific feature]"
git push origin development
# 6. Rebuild container for persistence
docker compose -f docker-compose.dev.yml down
docker compose -f docker-compose.dev.yml up --build -d
# 7. Final validation and commit completion
# Test that changes persist after rebuild
curl http://localhost:9001 # Verify functionality
git add . && git commit -m "chore: confirm container persistence" && git push
Alternative Solutions:
- Fresh Implementation Approach: When modifying existing files fails, create new files (e.g. `page-v2.js`) instead of editing corrupted files
- Container Restart: `docker compose -f docker-compose.dev.yml restart app`
- Full Rebuild: `docker compose -f docker-compose.dev.yml down && docker compose -f docker-compose.dev.yml up --build`
- Manual Copy: Use `docker cp` to copy files directly into the container for immediate testing
- Avoid sed/awk: Direct text manipulation commands often corrupt JSX syntax - prefer file replacement
Volume Mount Verification:
# Test volume mount sync
echo "test-$(date)" > test-volume-mount.txt
docker compose -f docker-compose.dev.yml exec app cat test-volume-mount.txt
Container Development Best Practices:
- Speed: Direct container edits = immediate testing
- Persistence: Always rebuild container after successful tests
- Backup: Use `docker cp` to extract working changes before rebuild
- Debugging: Use the container shell for real-time log inspection and debugging
Multi-Timeframe Feature Copy Pattern
When copying multi-timeframe functionality between pages:
Step 1: Identify Source Implementation
# Search for existing timeframe patterns
grep -r "timeframes.*=.*\[" app/ --include="*.js" --include="*.jsx"
grep -r "selectedTimeframes" app/ --include="*.js" --include="*.jsx"
Step 2: Copy Core State Management
// Always include these state hooks
const [selectedTimeframes, setSelectedTimeframes] = useState(['1h', '4h']);
const [balance, setBalance] = useState({ balance: 0, collateral: 0 });
// Essential toggle function
const toggleTimeframe = (tf) => {
setSelectedTimeframes(prev =>
prev.includes(tf) ? prev.filter(t => t !== tf) : [...prev, tf]
);
};
Step 3: Copy UI Components
- Timeframe checkbox grid
- Preset buttons (Scalping, Day Trading, Swing Trading)
- Auto-sizing position calculator
- Formatted balance display
Step 4: Avoid Docker Issues
- Create new file instead of editing existing if volume mount issues persist
- Use a fresh filename like `page-v2.js` or `automation-v2/page.js`
- Test in container before committing
API Route Structure
All core functionality exposed via Next.js API routes:
// Enhanced screenshot with progress tracking and robust cleanup
POST /api/enhanced-screenshot
{
symbol: "SOLUSD",
timeframe: "240",
layouts: ["ai", "diy"],
analyze: true
}
// Returns: { screenshots, analysis, sessionId }
// Note: Includes automatic Chromium process cleanup via finally blocks
// Drift trading endpoints
GET /api/balance # Account balance/collateral
POST /api/trading # Execute trades
GET /api/status # Trading status
API Development Tips:
- All browser automation APIs include guaranteed cleanup via finally blocks
- Use session tracking for long-running operations
- Test API endpoints directly with curl before UI integration
- Monitor Chromium processes during API testing: `pgrep -f chrome | wc -l`
Progress Tracking System
Real-time operation tracking for long-running tasks:
- `lib/progress-tracker.ts` manages EventEmitter-based progress
- SessionId-based tracking for multi-step operations
- Steps: init → auth → navigation → loading → capture → analysis
- Stream endpoint: `/api/progress/[sessionId]/stream`
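The EventEmitter-based, sessionId-keyed design can be sketched as below. This is an assumed shape for illustration; the actual `lib/progress-tracker.ts` API and step names beyond those listed above may differ.

```typescript
import { EventEmitter } from 'events';

// Minimal sketch of sessionId-based progress tracking. An SSE stream
// endpoint would subscribe to the per-session event and forward updates.
type Step = 'init' | 'auth' | 'navigation' | 'loading' | 'capture' | 'analysis';

class ProgressTracker extends EventEmitter {
  private sessions = new Map<string, Step[]>();

  update(sessionId: string, step: Step): void {
    const steps = this.sessions.get(sessionId) ?? [];
    steps.push(step);
    this.sessions.set(sessionId, steps);
    // One event channel per session keeps concurrent operations isolated
    this.emit(`progress:${sessionId}`, step);
  }

  history(sessionId: string): Step[] {
    return this.sessions.get(sessionId) ?? [];
  }
}
```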
Browser Process Management & Cleanup System
Critical Issue: Chromium processes accumulate during automated trading, consuming system resources over time.
Robust Cleanup Implementation: The trading bot includes a comprehensive cleanup system to prevent Chromium process accumulation:
Core Cleanup Components:
1. Enhanced Screenshot Service (`lib/enhanced-screenshot-robust.ts`)
   - Guaranteed cleanup via `finally` blocks in all browser operations
   - Active session tracking to prevent orphaned browsers
   - Session cleanup tasks array for systematic teardown
2. Automated Cleanup Service (`lib/automated-cleanup-service.ts`)
   - Background monitoring service for orphaned processes
   - Multiple kill strategies: graceful → force → system cleanup
   - Periodic cleanup of temporary files and browser data
3. Aggressive Cleanup Utilities (`lib/aggressive-cleanup.ts`)
   - System-level process killing for stubborn Chromium processes
   - Port cleanup and temporary directory management
   - Emergency cleanup functions for resource recovery
Implementation Patterns:
// Always use finally blocks for guaranteed cleanup
try {
const browser = await puppeteer.launch(options);
// ... browser operations
} finally {
// Guaranteed cleanup regardless of success/failure
await ensureBrowserCleanup(browser, sessionId);
await cleanupSessionTasks(sessionId);
}
// Background monitoring for long-running operations
const cleanupService = new AutomatedCleanupService();
cleanupService.startPeriodicCleanup(); // Every 10 minutes
Cleanup Testing:
# Test cleanup system functionality
node test-cleanup-system.js
# Monitor Chromium processes during automation
watch 'pgrep -f "chrome|chromium" | wc -l'
# Manual cleanup if needed
node -e "require('./lib/aggressive-cleanup.ts').performAggressiveCleanup()"
Prevention Strategies:
- Use session tracking for all browser instances
- Implement timeout protection for long-running operations
- Monitor resource usage during extended automation cycles
- Restart containers periodically for a fresh environment

Critical timeframe handling to avoid TradingView confusion:
// ALWAYS use minute values first, then alternatives
'4h': ['240', '240m', '4h', '4H'] // 240 minutes FIRST
'1h': ['60', '60m', '1h', '1H'] // 60 minutes FIRST
'15m': ['15', '15m']
Layout URL mappings for direct navigation:
const LAYOUT_URLS = {
'ai': 'Z1TzpUrf', // RSI + EMAs + MACD
'diy': 'vWVvjLhP' // Stochastic RSI + VWAP + OBV
}
Component Architecture
- `app/layout.js`: Root layout with gradient styling and navigation
- `components/Navigation.tsx`: Multi-page navigation system
- `components/AIAnalysisPanel.tsx`: Multi-timeframe analysis interface
- `components/Dashboard.tsx`: Main trading dashboard with real Drift positions
- `components/AdvancedTradingPanel.tsx`: Drift Protocol trading interface
Cleanup System Architecture
Critical Production Issue: Chromium processes accumulate during automated trading, leading to resource exhaustion after several hours of operation.
Solution Components:
1. Enhanced Screenshot Service (`lib/enhanced-screenshot-robust.ts`)
   - Replaces the original screenshot service with guaranteed cleanup
   - Uses `finally` blocks to ensure browser cleanup regardless of success/failure
   - Active session tracking with cleanup task arrays
   - Force-kill functionality for stubborn processes
2. Automated Cleanup Service (`lib/automated-cleanup-service.ts`)
   - Background monitoring service that runs every 10 minutes
   - Multiple cleanup strategies: graceful → force → system-level cleanup
   - Temporary file cleanup and browser data directory management
   - Orphaned process detection and elimination
3. Aggressive Cleanup Utilities (`lib/aggressive-cleanup.ts`)
   - Emergency cleanup functions for critical resource recovery
   - System-level process management with multiple kill strategies
   - Port cleanup and zombie process elimination
   - Used by both the automated service and manual intervention
Integration Pattern:
// In API routes - always use finally blocks
app/api/enhanced-screenshot/route.js:
try {
const result = await enhancedScreenshot.captureAndAnalyze(...);
return NextResponse.json(result);
} finally {
// Guaranteed cleanup execution
await enhancedScreenshot.cleanup();
}
// Background monitoring
lib/automated-cleanup-service.ts:
setInterval(async () => {
await this.performCleanup();
}, 10 * 60 * 1000); // Every 10 minutes
Testing Cleanup System:
# Monitor process count during operation
watch 'pgrep -f "chrome|chromium" | wc -l'
# Test cleanup functionality
node test-cleanup-system.js
# Manual cleanup if needed
docker compose exec app node -e "require('./lib/aggressive-cleanup.ts').forceKillAllChromium()"
Page Structure & Multi-Timeframe Implementation
- `app/analysis/page.js`: Original analysis page with multi-timeframe functionality
- `app/automation/page.js`: Original automation page (legacy, may have issues)
- `app/automation-v2/page.js`: NEW clean automation page with full multi-timeframe support
- `app/automation/page-v2.js`: Alternative implementation, same functionality as automation-v2
Multi-Timeframe Architecture Pattern:
// Standard timeframes array - use this exact format
const timeframes = ['5m', '15m', '30m', '1h', '2h', '4h', '1d'];
// State management for multi-timeframe selection
const [selectedTimeframes, setSelectedTimeframes] = useState(['1h', '4h']);
// Toggle function with proper array handling
const toggleTimeframe = (tf) => {
setSelectedTimeframes(prev =>
prev.includes(tf)
? prev.filter(t => t !== tf) // Remove if selected
: [...prev, tf] // Add if not selected
);
};
// Preset configurations for trading styles
const presets = {
scalping: ['5m', '15m', '1h'],
day: ['1h', '4h', '1d'],
swing: ['4h', '1d']
};
UI Pattern for Timeframe Selection:
// Checkbox grid layout with visual feedback
<div className="grid grid-cols-4 gap-2 mb-4">
{timeframes.map(tf => (
<button
key={tf}
onClick={() => toggleTimeframe(tf)}
className={`p-2 rounded border transition-all ${
selectedTimeframes.includes(tf)
? 'bg-blue-600 border-blue-500 text-white'
: 'bg-gray-700 border-gray-600 text-gray-300 hover:bg-gray-600'
}`}
>
{tf}
</button>
))}
</div>
// Preset buttons for quick selection
<div className="flex gap-2 mb-4">
{Object.entries(presets).map(([name, tfs]) => (
<button
key={name}
onClick={() => setSelectedTimeframes(tfs)}
className="px-3 py-1 bg-purple-600 hover:bg-purple-700 rounded text-sm"
>
{name.charAt(0).toUpperCase() + name.slice(1)}
</button>
))}
</div>
Environment Variables
# AI Analysis (Required)
OPENAI_API_KEY=sk-... # OpenAI API key for chart analysis
# TradingView Automation (Required)
TRADINGVIEW_EMAIL= # TradingView account email
TRADINGVIEW_PASSWORD= # TradingView account password
# Trading Integration (Optional)
SOLANA_RPC_URL=https://api.mainnet-beta.solana.com
DRIFT_PRIVATE_KEY= # Base58 encoded Solana private key
SOLANA_PRIVATE_KEY= # JSON array format for Jupiter DEX
# Docker Environment Detection
DOCKER_ENV=true # Auto-set in containers
PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium
Testing & Debugging Workflow
Test files follow specific patterns - use them to validate changes:
# Test dual-session screenshot capture
node test-enhanced-screenshot.js
# Test robust cleanup system
node test-cleanup-system.js
# Test Docker environment (requires Docker Compose v2)
./test-docker-comprehensive.sh
# Test API endpoints directly
node test-analysis-api.js
# Test Drift trading integration
node test-drift-trading.js
# Monitor resource usage during automation
watch 'pgrep -f "chrome|chromium" | wc -l'
Container-First Development Workflow:
# 1. Start development container
npm run docker:dev # Port 9001
# 2. For immediate testing, edit directly in container
docker compose -f docker-compose.dev.yml exec app bash
# Edit files in /app/ directory for instant results
# 3. Test changes in real-time (hot reload active)
curl http://localhost:9001/api/enhanced-screenshot
# 4. Once working, commit progress to git
git add .
git commit -m "feat: implement [feature] - tested and working"
git push origin development
# 5. Rebuild container for persistence
docker compose -f docker-compose.dev.yml down
docker compose -f docker-compose.dev.yml up --build -d
# 6. Validate persistent changes and final commit
curl http://localhost:9001 # Should show updated functionality
git add . && git commit -m "chore: confirm persistence after container rebuild" && git push
Browser automation debugging:
- Screenshots automatically saved to `screenshots/` with timestamps
- Debug screenshots: `takeDebugScreenshot('prefix')`
- Session persistence prevents repeated logins/captchas
- Use `npm run docker:logs` to view real-time automation logs
- All Docker commands use v2 syntax: `docker compose` (not `docker-compose`)
- Monitor Chromium processes: `docker compose exec app pgrep -f chrome`
Code Style & Architecture Patterns
- Client Components: Use `"use client"` for state/effects; server components by default
- Styling: Tailwind with gradient backgrounds (`bg-gradient-to-br from-gray-900 via-blue-900 to-purple-900`)
- Error Handling: Detailed logging for browser automation with fallbacks
- File Structure: Mixed `.js`/`.tsx`; components in TypeScript, API routes in JavaScript
- Database: Prisma with SQLite (`DATABASE_URL=file:./prisma/dev.db`)
Key Integration Points
- Session Persistence: `.tradingview-session/` directory volume-mounted
- Screenshots: `screenshots/` directory for chart captures
- Progress Tracking: EventEmitter-based real-time updates via SSE
- Multi-Stage Docker: Development vs production builds with browser optimization
- CAPTCHA Handling: Manual CAPTCHA mode with X11 forwarding (`ALLOW_MANUAL_CAPTCHA=true`)
- Process Management: Robust cleanup system prevents Chromium accumulation
- Container Development: Direct in-container editing for immediate testing, rebuild for persistence
Development vs Production Modes
- Development: Port 9001:3000, hot reload, debug logging, headless: false option
- Production: Port 9000:3000, optimized build, minimal logging, always headless
Development Container Features:
- Hot Reload: File changes reflect immediately (when volume mounts work)
- Process Monitoring: Real-time Chromium process tracking
- Debug Access: Shell access via `docker compose exec app bash`
- Immediate Testing: Edit files in the `/app/` directory for instant results
- Resource Cleanup: Automated cleanup services running in background
Git Branch Strategy (Required)
Primary development workflow:
- `development` branch: Use for all active development and feature work
- `main` branch: Stable, production-ready code only
- Workflow: Develop on `development` → test thoroughly → commit progress → merge to `main` when stable
# Standard development workflow with frequent commits
git checkout development # Always start here
git pull origin development # Get latest changes
# Make your changes and test in container...
# Commit working progress BEFORE rebuilding container
git add .
git commit -m "feat: [specific achievement] - tested and working"
git push origin development
# After successful container rebuild and validation
git add .
git commit -m "chore: confirm [feature] persistence after rebuild"
git push origin development
# Only merge to main when features are stable and tested and you have asked the user to merge to main
git checkout main
git merge development # When ready for production
git push origin main
Git Commit Best Practices:
- Commit Early: Save working progress before container rebuilds
- Commit Often: After each successful test or implementation step
- Descriptive Messages: Include what was accomplished and tested
- Final Commits: Always commit after confirming container persistence
Container Persistence & Git Strategy:
- Git changes only persist after container rebuild with the `--build` flag
- CRITICAL: Commit working changes BEFORE rebuilding container
- Test changes in container first, then commit and rebuild
- Use descriptive commit messages for cleanup system improvements
- Example commits from the robust cleanup implementation:
  - `feat: implement robust cleanup system with finally blocks - tested in container`
  - `fix: restore automation-v2 page with balance slider - confirmed working`
  - `chore: confirm cleanup system persistence after container rebuild`
  - `docs: update instructions with container development workflow`
When working with this codebase, prioritize Docker consistency, understand the dual-session architecture, leverage the comprehensive test suite to validate changes, and always implement proper cleanup patterns for browser automation to prevent resource exhaustion.
Common Issues & Troubleshooting
Chromium Process Accumulation
Symptoms: System becomes slow after hours of automation, high CPU/memory usage, many chrome processes running
Diagnosis: pgrep -f "chrome|chromium" | wc -l shows increasing process count
Solutions:
- Ensure all browser automation uses finally blocks for cleanup
- Restart automated cleanup service: `docker compose exec app node -e "require('./lib/automated-cleanup-service.ts').startPeriodicCleanup()"`
- Manual cleanup: `docker compose exec app node test-cleanup-system.js`
- Container restart: `docker compose restart app`
Volume Mount Sync Issues
Symptoms: File changes not reflecting in the running container
Diagnosis: Edit a test file and check whether it is visible in the container
Solutions:
- Quick Fix: Edit directly in container for immediate testing
- Commit Progress: Save working changes to git before rebuilding
- Persistence: Always rebuild container after confirming changes work
- Final Commit: Validate and commit after successful rebuild
- Manual Copy: Use `docker cp` to transfer working files
- Fresh Start: Create new files instead of editing problematic ones
Automation Page Restoration
Symptoms: Automation page shows an old version after container rebuild
Issue: Git history not properly maintained in the container
Solutions:
- Check current branch: `git branch`
- Restore from git: `git checkout HEAD -- app/automation-v2/page.js`
- Verify features: Check for balance slider and multi-timeframe selection
- Commit restoration: `git add . && git commit -m "fix: restore automation-v2 functionality" && git push`
- Rebuild container to persist the restoration
Testing and Validation Patterns (Critical)
Essential validation steps learned from complex automation debugging:
API Response Validation
- Always test API responses directly with curl before debugging UI issues
- Compare calculated values against actual trading platform values
- Example: `curl -s http://localhost:9001/api/drift/balance | jq '.unrealizedPnl'`
- Validate that the API returns realistic values (2-5% targets, not 500% gains)
Multi-Component System Testing
- Test data flow end-to-end: UI selection → API endpoint → Service logic → Database storage
- Use browser dev tools to verify API calls match expected parameters
- Check database updates after automation cycles complete
- Validate that timer calculations match expected intervals
Trading Integration Validation
- Never assume trading calculations are correct - always validate against platform
- Test with small amounts first when implementing new trading logic
- Compare bot-calculated P&L with actual platform P&L values
- Verify protocol selection (Drift vs Jupiter) matches intended trading method
AI Analysis Output Validation
- Always check AI responses for realistic values before using in trading
- AI can return absolute prices when percentages are expected - validate data types
- Log AI analysis results to catch unrealistic take profit targets (>50% gains)
- Implement bounds checking on AI-generated trading parameters
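A bounds check following the rules above might look like this. The `validateAiParams` name, the parameter shape, and the heuristics are illustrative assumptions, not the production validation code; the ">50% gains" limit and the absolute-price failure mode come from the docs.

```typescript
// Illustrative bounds checking on AI-generated trading parameters.
// Returns a list of problems so callers can log and reject bad analyses.

interface AiTradeParams {
  takeProfitPct: number; // expected as a percentage, e.g. 3 for a 3% target
  stopLossPct: number;   // expected as a positive percentage
}

function validateAiParams(p: AiTradeParams, currentPrice: number): string[] {
  const issues: string[] = [];
  // AI can return an absolute price where a percentage is expected;
  // a "percentage" anywhere near the asset price is a strong signal of that
  if (p.takeProfitPct > currentPrice * 0.5) {
    issues.push('takeProfitPct looks like an absolute price, not a percentage');
  }
  // Realistic targets are low single digits, not >50% gains
  if (p.takeProfitPct > 50) issues.push('unrealistic take profit target (>50%)');
  if (p.stopLossPct <= 0) issues.push('stop loss percentage must be positive');
  return issues;
}
```

Logging the returned issues before any trade executes gives a paper trail when the AI output was rejected.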
Cleanup System Monitoring
- Test cleanup functionality after every automation cycle
- Monitor memory usage patterns to catch cleanup failures early
- Verify that cleanup triggers properly after analysis completion
- Check for zombie browser processes that indicate cleanup issues
Successful Implementation Workflow
After completing any feature or fix:
# 1. Test functionality thoroughly
curl http://localhost:9001/api/test-endpoint
# 2. Commit successful implementation
git add .
git commit -m "feat: [specific achievement] - fully tested and working"
git push origin development
# 3. Rebuild container for persistence
docker compose down && docker compose up --build -d
# 4. Final validation and completion commit
curl http://localhost:9001 # Verify persistent functionality
git add . && git commit -m "chore: confirm [feature] persistence - implementation complete" && git push
# 5. Consider merge to main if ready for production
# (Ask user first before merging to main branch)
API Endpoint Not Responding
Symptoms: API calls time out or return errors
Diagnosis: Check container logs: docker compose logs -f app
Solutions:
- Verify the container is running: `docker ps`
- Check port mapping: Development=9001:3000, Production=9000:3000
- Test direct access: `curl localhost:9001/api/status`
- Restart if needed: `docker compose restart app`