GitHub Copilot Instructions for Trading Bot Development

🎯 Project Context & Architecture

This is an AI-powered trading automation system with advanced learning capabilities, built with the Next.js 15 App Router, TypeScript, and Tailwind CSS, and integrated with Drift Protocol and Jupiter DEX for automated trading execution.

Core System Components

  1. Superior Parallel Screenshot System - 60% faster than sequential capture (71s vs 180s)
  2. AI Learning System - Adapts trading decisions based on outcomes with pattern recognition
  3. Orphaned Order Cleanup - Automatic cleanup when positions close via position monitor
  4. Position Monitoring - Frequent checks with integrated cleanup triggers
  5. Dual-Session Screenshot Automation - AI and DIY layouts with session persistence
  6. Robust Cleanup System - Prevents Chromium process accumulation

Critical File Relationships

app/api/automation/position-monitor/route.js → Monitors positions + triggers cleanup
lib/simplified-stop-loss-learner.js → AI learning core with pattern recognition
lib/superior-screenshot-service.ts → Parallel screenshot capture system
lib/enhanced-autonomous-risk-manager.js → Risk management with AI integration
lib/enhanced-screenshot-robust.ts → Guaranteed cleanup with finally blocks
lib/automated-cleanup-service.ts → Background process monitoring

🚀 Development Environment (Critical)

Docker Container Development (Required)

All development happens inside Docker containers using Docker Compose v2. Browser automation requires specific system dependencies that are only available in the containerized environment.

IMPORTANT: Use Docker Compose v2 syntax - All commands use docker compose (with space) instead of docker-compose (with hyphen).

# Development environment - Docker Compose v2 dev setup
npm run docker:dev        # Port 9001:3000, hot reload, debug mode
# Direct v2 command: docker compose -f docker-compose.dev.yml up --build

# Production environment  
npm run docker:up         # Port 9000:3000, optimized build
# Direct v2 command: docker compose -f docker-compose.prod.yml up --build

# Debugging commands
npm run docker:logs       # View container logs
npm run docker:exec       # Shell access for debugging inside container

Port Configuration:

  • Development: host port 9001 → container port 3000 (npm run docker:dev)
  • Production: host port 9000 → container port 3000 (npm run docker:up)

Container-First Development Workflow

Common Issue: File edits are not reflected in the container due to volume mount sync issues.

Solution - Container Development Workflow:

# 1. Access running container for immediate edits
docker compose -f docker-compose.dev.yml exec app bash

# 2. Edit files directly in container (immediate effect)
nano /app/lib/enhanced-screenshot.ts
echo "console.log('Debug: immediate test');" >> /app/debug.js

# 3. Test changes immediately (no rebuild needed)
# Changes take effect instantly for hot reload

# 4. Once everything works, copy changes back to host
docker cp container_name:/app/modified-file.js ./modified-file.js

# 5. Commit successful changes to git BEFORE rebuilding
git add .
git commit -m "feat: implement working solution for [specific feature]"
git push origin development

# 6. Rebuild container for persistence
docker compose -f docker-compose.dev.yml down
docker compose -f docker-compose.dev.yml up --build -d

# 7. Final validation and commit completion
curl http://localhost:9001  # Verify functionality
git add . && git commit -m "chore: confirm container persistence" && git push

Git Branch Strategy (Required)

Primary development workflow:

  • development branch: Use for all active development and feature work
  • main branch: Stable, production-ready code only
  • Workflow: Develop on development → test thoroughly → commit progress → merge to main when stable

# Standard development workflow with frequent commits
git checkout development        # Always start here
git pull origin development     # Get latest changes

# Make your changes and test in container...

# Commit working progress BEFORE rebuilding container
git add .
git commit -m "feat: [specific achievement] - tested and working"
git push origin development

# After successful container rebuild and validation
git add .
git commit -m "chore: confirm [feature] persistence after rebuild"
git push origin development

# Only merge to main when features are stable and tested
git checkout main
git merge development          # When ready for production
git push origin main

🏗️ System Architecture

Dual-Session Screenshot Automation

  • AI Layout: Z1TzpUrf - RSI (top), EMAs, MACD (bottom)
  • DIY Layout: vWVvjLhP - Stochastic RSI (top), VWAP, OBV (bottom)
  • Parallel browser sessions for multi-layout capture in lib/enhanced-screenshot.ts
  • TradingView automation with session persistence in lib/tradingview-automation.ts
  • Session data stored in .tradingview-session/ volume mount to avoid captchas
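
A minimal sketch of the parallel dual-session pattern (the CaptureFn type and captureBothLayouts name are illustrative; the real session handling lives in lib/enhanced-screenshot.ts):

// Sketch only - TradingView navigation and session reuse are implemented in lib/enhanced-screenshot.ts
type CaptureFn = (opts: {
  symbol: string;
  timeframe: string;
  layout: string;
  layoutId: string;
}) => Promise<string>;

const LAYOUT_URLS = { ai: 'Z1TzpUrf', diy: 'vWVvjLhP' } as const;

async function captureBothLayouts(symbol: string, timeframe: string, captureLayout: CaptureFn) {
  const entries = Object.entries(LAYOUT_URLS);

  // One browser session per layout, launched in parallel so total time ≈ the slowest layout
  const results = await Promise.allSettled(
    entries.map(([layout, layoutId]) => captureLayout({ symbol, timeframe, layout, layoutId }))
  );

  // Error isolation: one failed layout must not break the other
  return results.map((result, i) => ({
    layout: entries[i][0],
    ...(result.status === 'fulfilled' ? { screenshot: result.value } : { error: String(result.reason) })
  }));
}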

AI Analysis Pipeline

  • OpenAI GPT-4o mini for cost-effective chart analysis (~$0.006 per analysis)
  • Multi-layout comparison and consensus detection in lib/ai-analysis.ts
  • Professional trading setups with exact entry/exit levels and risk management
  • Layout-specific indicator analysis (RSI vs Stochastic RSI, MACD vs OBV)
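
A hedged sketch of a single-layout analysis call using the openai Node SDK (the analyzeChart name and prompt text are illustrative; actual prompts and consensus logic live in lib/ai-analysis.ts):

// Sketch only - multi-layout comparison and consensus detection are handled in lib/ai-analysis.ts
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function analyzeChart(screenshotBase64: string, layout: 'ai' | 'diy') {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // cost-effective vision model (~$0.006 per analysis)
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: `Analyze this ${layout}-layout chart and propose entry, stop loss, and take profit levels.` },
        { type: 'image_url', image_url: { url: `data:image/png;base64,${screenshotBase64}` } }
      ]
    }]
  });
  return completion.choices[0].message.content;
}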

Trading Integration

  • Drift Protocol: Perpetual futures trading via @drift-labs/sdk
  • Jupiter DEX: Spot trading on Solana
  • Position management and P&L tracking in lib/drift-trading-final.ts
  • Real-time account balance and collateral monitoring
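
A minimal client-side sketch that drives trading through the project's own API routes (see API Route Structure below) rather than the SDKs directly; the POST body fields are assumptions, not a documented contract:

// Sketch only - check the /api/trading route handler for the real request shape
const baseUrl = process.env.BASE_URL || 'http://localhost:9001';

async function checkBalanceThenTrade() {
  // Account balance / collateral before sizing a position
  const balance = await (await fetch(`${baseUrl}/api/balance`)).json();
  console.log('Balance response:', balance);

  // Execute a trade via the trading endpoint (fields below are hypothetical)
  const tradeResponse = await fetch(`${baseUrl}/api/trading`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ symbol: 'SOLUSD', side: 'long', size: 1 })
  });
  return tradeResponse.json();
}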

Browser Process Management & Cleanup System

Critical Issue: Chromium processes accumulate during automated trading, consuming system resources over time.

Robust Cleanup Implementation:

  1. Enhanced Screenshot Service (lib/enhanced-screenshot-robust.ts)

    • Guaranteed cleanup via finally blocks in all browser operations
    • Active session tracking to prevent orphaned browsers
    • Session cleanup tasks array for systematic teardown
  2. Automated Cleanup Service (lib/automated-cleanup-service.ts)

    • Background monitoring service for orphaned processes
    • Multiple kill strategies: graceful → force → system cleanup
    • Periodic cleanup of temporary files and browser data
  3. Aggressive Cleanup Utilities (lib/aggressive-cleanup.ts)

    • System-level process killing for stubborn Chromium processes
    • Port cleanup and temporary directory management
    • Emergency cleanup functions for resource recovery

Implementation Patterns:

// Always use finally blocks for guaranteed cleanup
try {
  const browser = await puppeteer.launch(options);
  // ... browser operations
} finally {
  // Guaranteed cleanup regardless of success/failure
  await ensureBrowserCleanup(browser, sessionId);
  await cleanupSessionTasks(sessionId);
}

// Background monitoring for long-running operations
const cleanupService = new AutomatedCleanupService();
cleanupService.startPeriodicCleanup(); // Every 10 minutes

API Route Structure

All core functionality is exposed via Next.js API routes:

// Enhanced screenshot with progress tracking and robust cleanup
POST /api/enhanced-screenshot
{
  symbol: "SOLUSD", 
  timeframe: "240", 
  layouts: ["ai", "diy"],
  analyze: true
}
// Returns: { screenshots, analysis, sessionId }
// Note: Includes automatic Chromium process cleanup via finally blocks

// Drift trading endpoints
GET /api/balance          # Account balance/collateral
POST /api/trading         # Execute trades
GET /api/status          # Trading status
GET /api/automation/position-monitor  # Position monitoring with orphaned cleanup
POST /api/drift/cleanup-orders        # Manual order cleanup

Progress Tracking System

Real-time operation tracking for long-running tasks:

  • lib/progress-tracker.ts manages EventEmitter-based progress
  • SessionId-based tracking for multi-step operations
  • Steps: init → auth → navigation → loading → capture → analysis
  • Stream endpoint: /api/progress/[sessionId]/stream
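
A minimal sketch of the EventEmitter-based tracker with per-session state keyed by sessionId (class and method names are illustrative; the real implementation is lib/progress-tracker.ts):

// Sketch only - see lib/progress-tracker.ts for the real implementation
import { EventEmitter } from 'events';

const STEPS = ['init', 'auth', 'navigation', 'loading', 'capture', 'analysis'] as const;
type Step = (typeof STEPS)[number];

class ProgressTracker extends EventEmitter {
  private sessions = new Map<string, { step: Step; percent: number }>();

  update(sessionId: string, step: Step) {
    const percent = Math.round(((STEPS.indexOf(step) + 1) / STEPS.length) * 100);
    this.sessions.set(sessionId, { step, percent });
    // Listeners (e.g. the /api/progress/[sessionId]/stream route) receive each update
    this.emit(sessionId, { step, percent });
  }

  get(sessionId: string) {
    return this.sessions.get(sessionId);
  }
}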

Page Structure & Multi-Timeframe Implementation

  • app/analysis/page.js - Original analysis page with multi-timeframe functionality
  • app/automation/page.js - Original automation page (legacy, may have issues)
  • app/automation-v2/page.js - NEW: Clean automation page with full multi-timeframe support
  • app/automation/page-v2.js - Alternative implementation, same functionality as automation-v2

Multi-Timeframe Architecture Pattern:

// Standard timeframes array - use this exact format
const timeframes = ['5m', '15m', '30m', '1h', '2h', '4h', '1d'];

// State management for multi-timeframe selection
const [selectedTimeframes, setSelectedTimeframes] = useState(['1h', '4h']);

// Toggle function with proper array handling
const toggleTimeframe = (tf) => {
  setSelectedTimeframes(prev => 
    prev.includes(tf) 
      ? prev.filter(t => t !== tf)  // Remove if selected
      : [...prev, tf]                // Add if not selected
  );
};

// Preset configurations for trading styles - must match TRADING_PRESETS in the UI/UX section exactly
const presets = {
  scalp: ['5m', '15m', '30m'],
  day: ['1h', '2h'],
  swing: ['4h', '1D']
};

Component Architecture

  • app/layout.js - Root layout with gradient styling and navigation
  • components/Navigation.tsx - Multi-page navigation system
  • components/AIAnalysisPanel.tsx - Multi-timeframe analysis interface
  • components/Dashboard.tsx - Main trading dashboard with real Drift positions
  • components/AdvancedTradingPanel.tsx - Drift Protocol trading interface

Critical timeframe handling to avoid TradingView confusion:

// ALWAYS try minute values first, then alternatives (candidate arrays per timeframe)
const TIMEFRAME_CANDIDATES = {
  '4h': ['240', '240m', '4h', '4H'],  // 240 minutes FIRST
  '1h': ['60', '60m', '1h', '1H'],    // 60 minutes FIRST
  '15m': ['15', '15m']
};

Layout URL mappings for direct navigation:

const LAYOUT_URLS = {
  'ai': 'Z1TzpUrf',    // RSI + EMAs + MACD
  'diy': 'vWVvjLhP'    // Stochastic RSI + VWAP + OBV
}

🧠 AI Learning System Patterns

Always Include These Functions in Learning Classes:

async generateLearningReport() {
  // Return comprehensive learning status
  return {
    summary: { totalDecisions, systemConfidence, successRate },
    insights: { thresholds, confidenceLevel },
    recommendations: []
  };
}

async getSmartRecommendation(requestData) {
  // Analyze patterns and return AI recommendation
  const { distanceFromSL, symbol, marketConditions } = requestData;
  // Return: { action, confidence, reasoning }
}

async recordDecision(decisionData) {
  // Log decision for learning with unique ID
  const id = `decision_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;
  // Store in database for pattern analysis
}

async assessDecisionOutcome(outcomeData) {
  // Update decision with actual result for learning
  // Calculate if decision was correct based on outcome
}

Database Operations Best Practices:

// ALWAYS provide unique IDs for Prisma records
await prisma.ai_learning_data.create({
  data: {
    id: `${prefix}_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`,
    // ... other fields
  }
});

// Use correct import path
const { getDB } = require('./db');  // NOT './database-util'

🔧 Error Handling Patterns

Function Existence Checks:

// Always check if functions exist before calling
if (typeof this.learner.generateLearningReport === 'function') {
  const report = await this.learner.generateLearningReport();
} else {
  // Fallback to alternative method
  const status = await this.learner.getLearningStatus();
}

Comprehensive Try-Catch:

try {
  const result = await aiFunction();
  return result;
} catch (error) {
  await this.log(`❌ AI function error: ${error.message}`);
  return fallbackResult(); // Always provide fallback
}

📊 Integration Patterns

Position Monitor Integration:

// When no position detected, check for orphaned orders
if (!result.hasPosition) {
  console.log('📋 No active positions detected - checking for orphaned orders...');
  
  try {
    const ordersResponse = await fetch(`${baseUrl}/api/drift/orders`);
    if (ordersResponse.ok) {
      const ordersData = await ordersResponse.json();
      if (ordersData.orders?.length > 0) {
        // Trigger cleanup
        const cleanupResponse = await fetch(`${baseUrl}/api/drift/cleanup-orders`, {
          method: 'POST'
        });
        // Handle cleanup result
      }
    }
  } catch (error) {
    // Handle error gracefully
  }
}

Parallel Processing for Screenshots:

// Use Promise.allSettled for parallel processing
const promises = timeframes.map(timeframe => 
  captureTimeframe(timeframe, symbol, layoutType)
);
const results = await Promise.allSettled(promises);

// Process results with error isolation
results.forEach((result, index) => {
  if (result.status === 'fulfilled') {
    // Handle success
  } else {
    // Handle individual failure without breaking others
  }
});

🎯 Performance Optimization Rules

Screenshot Capture:

  • Always use parallel processing for multiple timeframes
  • Reuse browser sessions to avoid login/captcha
  • Isolate errors so one failure doesn't break others
  • Prefer Promise.allSettled over Promise.all

Database Queries:

  • Use indexed fields for frequent searches (symbol, createdAt)
  • Batch operations when possible
  • Include proper error handling for connection issues
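
A hedged Prisma sketch of these rules (the function name and 24-hour window are illustrative; in this project, prefer the getDB helper noted under Database Operations Best Practices):

// Sketch only - field names mirror the ai_learning_data example above
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function loadRecentDecisions(symbol: string, newRecords: any[] = []) {
  try {
    // Query on indexed fields (symbol, createdAt) so frequent lookups stay cheap
    const recent = await prisma.ai_learning_data.findMany({
      where: { symbol, createdAt: { gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } },
      orderBy: { createdAt: 'desc' }
    });

    // Batch inserts instead of one create() call per record
    if (newRecords.length > 0) {
      await prisma.ai_learning_data.createMany({ data: newRecords });
    }
    return recent;
  } catch (error) {
    // Connection problems should degrade gracefully, not crash the automation loop
    console.error('DB operation failed:', error instanceof Error ? error.message : error);
    return [];
  }
}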

Container Optimization:

  • Check syntax before deployment: node -c filename.js
  • Use health checks for monitoring
  • Implement graceful shutdown handling
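
A minimal graceful-shutdown sketch, assuming the cleanup service exposes a stop method as the counterpart to startPeriodicCleanup() (the export and stop method names are assumptions):

// Sketch only - the import path exists in this repo, but the export and stop method are assumed
import { AutomatedCleanupService } from './lib/automated-cleanup-service';

const cleanupService = new AutomatedCleanupService();
cleanupService.startPeriodicCleanup();

// Graceful shutdown: stop background work and release browsers before the container is killed
process.on('SIGTERM', async () => {
  console.log('SIGTERM received - shutting down gracefully...');
  try {
    await cleanupService.stopPeriodicCleanup(); // assumed counterpart to startPeriodicCleanup()
  } finally {
    process.exit(0);
  }
});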

🧪 Testing Requirements

Always Include These Tests:

// Test AI learning functions
const learner = new SimplifiedStopLossLearner();
const report = await learner.generateLearningReport();
console.log('Learning report:', report.summary);

// Test API endpoints
const response = await fetch('/api/automation/position-monitor');
const result = await response.json();
console.log('Position monitor working:', result.success);

// Test error scenarios
try {
  await riskyFunction();
} catch (error) {
  console.log('Error handling working:', error.message);
}

🎨 UI/UX Patterns

Preset Configuration:

// Frontend presets MUST match backend exactly
const TRADING_PRESETS = {
  scalp: ['5m', '15m', '30m'],    // NOT ['5m', '15m', '1h']
  day: ['1h', '2h'],              // NOT ['1h', '4h', '1d']  
  swing: ['4h', '1D'],
  extended: ['1m', '3m', '5m', '15m', '30m', '1h', '4h', '1D']
};

Status Display:

// Always provide detailed feedback
return {
  success: true,
  monitor: {
    hasPosition: false,
    orphanedOrderCleanup: {
      triggered: true,
      success: true,
      message: 'Cleaned up 2 orphaned orders',
      summary: { totalCanceled: 2 }
    }
  }
};

🔍 Debugging Strategies

Container Issues:

# Check for syntax errors
find . -name "*.js" -exec node -c {} \;

# Monitor logs for patterns
docker logs trader_dev --since="1m" | grep -E "(Error|unhandled|crash)"

# Test specific components
node test-learning-system.js

Integration Issues:

# Test API endpoints individually
curl -s http://localhost:9001/api/automation/position-monitor | jq .

# Verify database connectivity
node -e "const {getDB} = require('./lib/db'); getDB().then(() => console.log('DB OK'));"

🚨 Critical Anti-Patterns to Avoid

Don't Do This:

// Missing error handling
const report = await this.learner.generateLearningReport(); // Will crash if function missing

// Redundant polling
setInterval(checkOrders, 60000); // When position monitor already runs frequently

// Frontend/backend preset mismatch  
backend: ['5m', '15m', '1h']
frontend: ['5m', '15m', '30m'] // Will cause confusion

// Missing unique IDs
await prisma.create({ data: { symbol, timeframe } }); // Will fail validation

Do This Instead:

// Defensive programming
if (typeof this.learner.generateLearningReport === 'function') {
  try {
    const report = await this.learner.generateLearningReport();
  } catch (error) {
    await this.log(`Report generation failed: ${error.message}`);
  }
}

// Leverage existing infrastructure
// Add cleanup to existing position monitor instead of new polling

// Ensure consistency
const PRESETS = { scalp: ['5m', '15m', '30m'] }; // Same in frontend and backend

// Always provide unique IDs
const id = `${type}_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;

🎯 Configuration Standards

Environment Variables:

// Always provide fallbacks
const apiKey = process.env.OPENAI_API_KEY || '';
if (!apiKey) {
  throw new Error('OPENAI_API_KEY is required');
}

Next.js Configuration:

// Use the new Next.js 15 option names, not the deprecated experimental ones
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  serverExternalPackages: ['puppeteer-core'], // NOT experimental.serverComponentsExternalPackages
  transpilePackages: ['next-font'],
  typescript: { ignoreBuildErrors: true },
  eslint: { ignoreDuringBuilds: true }
};

export default nextConfig;

📈 Enhancement Guidelines

When adding new features:

  1. Check Existing Infrastructure - Can it be integrated vs creating new?
  2. Add Comprehensive Error Handling - Assume functions may not exist
  3. Include Fallback Mechanisms - System should work without AI/learning
  4. Test in Isolation - Create test scripts for new components
  5. Document Integration Points - How does it connect to existing systems?
  6. Maintain Consistency - Frontend and backend must match exactly
  7. Use Defensive Programming - Check before calling, handle gracefully

📚 Documentation References

Technical Documentation

  • ADVANCED_SYSTEM_KNOWLEDGE.md - Deep technical architecture, session management, cleanup systems
  • README.md - Main project overview with current feature status and setup
  • AI_LEARNING_EXPLAINED.md - AI learning system implementation details
  • DRIFT_FEEDBACK_LOOP_COMPLETE.md - Complete Drift trading integration
  • ROBUST_CLEANUP_IMPLEMENTATION.md - Browser process cleanup system details

Implementation Guides

  • MULTI_LAYOUT_IMPLEMENTATION.md - Dual-session screenshot system
  • SESSION_PERSISTENCE.md - TradingView session management
  • DOCKER_AUTOMATION.md - Container development workflow
  • DEVELOPMENT_GUIDE.md - Complete development setup instructions

Analysis & Troubleshooting

  • MULTI_LAYOUT_TROUBLESHOOTING.md - Screenshot automation debugging
  • CLEANUP_IMPROVEMENTS.md - Process management enhancements
  • SCREENSHOT_PATH_FIXES.md - Screenshot capture issue resolution

Follow these patterns to maintain system stability and avoid the complex debugging issues that were resolved in this session.