Trading Bot Optimization Execution Plan
Generated: December 4, 2025
Based On: Comprehensive system analysis (8 data collection commands)
Status: Ready for execution
Duration: 3 months (3 phases)
Quick Reference
Top 3 Priorities:
- 🔴 Console.log Gating (4h, 90% impact, CRITICAL)
- 🔴 Docker Image Size (3h, 50% reduction, HIGH)
- 🟡 Position Manager Refactor (11d, 59% complexity reduction, MEDIUM)
Current System Health: ✅ EXCELLENT
- CPU: 10.88% (stable)
- Memory: 179.7MiB (8.77% of 2GB)
- Database: 20MB for 170+ trades (efficient)
- Trading: $540 capital, 57.1% WR, +$262.70 (v8)
Phase 1: Quick Wins (1-2 weeks)
Task 1.1: Console.log Production Gating 🔴 CRITICAL
Problem: 731 unguarded console statements causing production overhead
Files Affected: 18 files across lib/
lib/trading/position-manager.ts: 244 statements
lib/drift/orders.ts: 89 statements
lib/database/trades.ts: 63 statements
lib/trading/smart-entry-timer.ts: 58 statements
lib/analysis/blocked-signal-tracker.ts: 54 statements
lib/trading/stop-hunt-tracker.ts: 50 statements
lib/drift/client.ts: 41 statements
lib/startup/init-position-manager.ts: 38 statements
lib/trading/smart-validation-queue.ts: 36 statements
lib/trading/signal-quality.ts: 28 statements
lib/pyth/price-monitor.ts: 13 statements
lib/notifications/telegram.ts: 7 statements
lib/trading/market-data-cache.ts: 4 statements
lib/monitoring/drift-health-monitor.ts: 2 statements
lib/trading/revenge-system.ts: 2 statements
lib/utils/persistent-logger.ts: 1 statement
lib/database/client.ts: 1 statement
lib/trading/ghost-detection.ts: 0 statements
Solution: Environment-Gated Logging
Step 1: Create Logger Utility (15 minutes)
// lib/utils/logger.ts
const isDev = process.env.NODE_ENV !== 'production'
const isDebug = process.env.DEBUG_LOGS === 'true'

export const logger = {
  log: (...args: any[]) => {
    if (isDev || isDebug) console.log(...args)
  },
  error: (...args: any[]) => {
    // Errors always logged
    console.error(...args)
  },
  warn: (...args: any[]) => {
    if (isDev || isDebug) console.warn(...args)
  },
  debug: (...args: any[]) => {
    if (isDebug) console.log('[DEBUG]', ...args)
  }
}
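A typical call site after migration (symbol and price values are illustrative):
import { logger } from '@/lib/utils/logger'

const symbol = 'SOL-PERP'
const price = 142.35 // illustrative value

// Routine flow logging: silent in production unless DEBUG_LOGS=true
logger.log(`Price update: ${symbol} @ ${price}`)

// Extra diagnostic detail: printed only when DEBUG_LOGS=true
logger.debug('tick', { symbol, price })

// Failures are always emitted, in every environment
logger.error(`Order placement failed for ${symbol}`)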
Step 2: Automated Replacement (3 hours)
# Use codemod script (create scripts/replace-console-logs.js)
# Find all console.log → logger.log
# Find all console.warn → logger.warn
# Keep all console.error → logger.error (always show)
# Add import { logger } from '@/lib/utils/logger'
cd /home/icke/traderv4
node scripts/replace-console-logs.js
# Manual review high-priority files:
# - position-manager.ts (244 statements)
# - orders.ts (89 statements)
# - trades.ts (63 statements)
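The actual scripts/replace-console-logs.js is created as part of this task; below is a minimal sketch of the idea (shown in TypeScript, while the real script is plain Node JS). It is regex-based for brevity; an AST codemod via ts-morph or jscodeshift would be safer against strings that happen to contain "console.log".
// Minimal codemod sketch: rewrite console.log/warn to logger.log/warn,
// leave console.error alone, and prepend the logger import if missing.
import { readdirSync, readFileSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'

const IMPORT_LINE = "import { logger } from '@/lib/utils/logger'"

function walk(dir: string): string[] {
  return readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const path = join(dir, entry.name)
    if (entry.isDirectory()) return walk(path)
    return path.endsWith('.ts') ? [path] : []
  })
}

for (const file of walk('lib')) {
  if (file.endsWith('utils/logger.ts')) continue // the logger may use console itself

  const source = readFileSync(file, 'utf8')
  const rewritten = source
    .replace(/\bconsole\.log\(/g, 'logger.log(')
    .replace(/\bconsole\.warn\(/g, 'logger.warn(')
  // console.error is left untouched: errors always log

  if (rewritten === source) continue
  const output = rewritten.includes(IMPORT_LINE) ? rewritten : `${IMPORT_LINE}\n${rewritten}`
  writeFileSync(file, output)
  console.log(`rewrote ${file}`)
}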
Step 3: ENV Configuration (5 minutes)
# .env additions
NODE_ENV=production
DEBUG_LOGS=false # Toggle for troubleshooting
Step 4: Docker Rebuild (10 minutes)
docker compose build trading-bot
docker compose up -d --force-recreate trading-bot
docker logs --tail 100 trading-bot-v4 # Verify gating works (low volume, errors only)
Success Criteria:
- ✅ Production logs: <10 entries per minute (was >100)
- ✅ 90% reduction in log volume
- ✅ DEBUG_LOGS=true restores full logging
- ✅ All trading functionality preserved
Effort: 4 hours
Risk: LOW (fallback: revert git commit)
Priority: 🔴 CRITICAL
Task 1.2: TypeScript Type-Only Imports ⚡ QUICK WIN
Problem: 49 imports without type keyword causing compilation overhead
Solution: ESLint + Auto-Fix
Step 1: ESLint Rule (10 minutes)
// .eslintrc.json additions
{
  "rules": {
    "@typescript-eslint/consistent-type-imports": [
      "error",
      {
        "prefer": "type-imports",
        "fixStyle": "separate-type-imports"
      }
    ]
  }
}
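For illustration, here is what the auto-fix changes (Trade and realizedPnL follow the Prisma examples used elsewhere in this plan). With per-file transpilers such as SWC, which Next.js uses, a plain value import of something used only as a type cannot be safely elided, so marking it with type lets it be dropped entirely:
// BEFORE: a value import even though Trade is only used as a type;
// per-file transpilation must keep it in the emitted JS:
//   import { Trade } from '@prisma/client'

// AFTER: what eslint --fix produces with fixStyle 'separate-type-imports';
// erased completely at compile time:
import type { Trade } from '@prisma/client'

export function isWinner(trade: Trade): boolean {
  return (trade.realizedPnL ?? 0) > 0
}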
Step 2: Automated Fix (20 minutes)
cd /home/icke/traderv4
npx eslint lib/ --fix --ext .ts
npm run build # Verify no compilation errors
git add -A
git commit -m "optimize: Add type-only imports for TypeScript compilation speedup"
git push
Success Criteria:
- ✅ 0 missing type imports (was 49)
- ✅ Build time: 52-53s (5-10% faster from 54.74s)
- ✅ No runtime behavior changes
Effort: 30 minutes
Risk: NONE (purely compilation optimization)
Priority: 🔴 HIGH
Task 1.3: Docker Image Size Investigation 🔍
Problem: 1.32GB image (5× larger than postgres at 275MB)
Investigation Steps (3 hours)
Step 1: Layer Analysis (1 hour)
# Analyze layer sizes
docker history trading-bot-v4 --human --no-trunc | head -20
# Use dive tool for interactive inspection
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
wagoodman/dive:latest trading-bot-v4
# Look for:
# - node_modules in multiple layers (duplication)
# - Dev dependencies in production
# - Large Solana/Drift SDK files
# - Unused build artifacts
Step 2: Dockerfile Optimization (1.5 hours)
# Potential changes based on findings:
# Multi-stage: Ensure dev dependencies NOT in final image
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
# Builder stage: Keep build deps isolated
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci # Include dev deps for build
COPY . .
RUN npm run build
# Final stage: Minimal runtime
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
# ... rest of files
Step 3: Build and Measure (30 minutes)
docker compose build trading-bot
docker images | grep trading-bot
# Target: 600-800MB (50% reduction from 1.32GB)
# If not achieved, investigate further:
# - npm dedupe to remove duplicates
# - Replace heavy dependencies
# - Use .dockerignore more aggressively
Success Criteria:
- ✅ Image size: 600-800MB (roughly 40-55% reduction)
- ✅ All functionality preserved
- ✅ Container starts successfully
- ✅ Test trade executes correctly
Effort: 3 hours
Risk: LOW (can revert Dockerfile)
Priority: 🔴 HIGH
Task 1.4: Export Tree-Shaking Audit 🌳
Problem: 93 exports, potential unused code in bundles
Solution: Automated Detection
Step 1: Install Tool (5 minutes)
cd /home/icke/traderv4
npm install --save-dev ts-prune
Step 2: Run Analysis (30 minutes)
npx ts-prune | tee docs/analysis/unused-exports.txt
# Review output, identify safe removals
# Focus on:
# - Unused helper functions
# - Legacy code exports
# - Over-exported types
# Manual cleanup of confirmed unused exports
# Test after each removal: npm run build
Step 3: Verification (15 minutes)
npm run build
# Check bundle sizes: should be 5-10% smaller
ls -lh .next/static/chunks/app/*.js
Success Criteria:
- ✅ 5-10% bundle size reduction
- ✅ No broken imports
- ✅ Build successful
Effort: 1 hour
Risk: LOW (TypeScript catches broken imports)
Priority: 🟡 MEDIUM
Phase 1 Summary
Duration: 1-2 weeks
Total Effort: 8.5 hours
Expected Results:
- 90% log volume reduction
- 45-53% Docker image reduction
- 5-10% build time improvement
- 5-10% bundle size reduction
- 100% type import compliance
Deployment Checklist:
- All changes committed to git
- Docker rebuilt with new optimizations
- Container restarted successfully
- Test trade executed (verify no regressions)
- Logs monitored for 24 hours
- Update OPTIMIZATION_MASTER_ROADMAP.md
Phase 2: Medium Initiatives (2-4 weeks)
Task 2.1: Database Query Batching 📊
Problem: 32 trade queries (51.6% of all queries) concentrated in trades.ts
Solution: Prisma Aggregation + Query Batching
Step 1: Audit Current Queries (1 hour)
# Identify N+1 patterns
grep -n "prisma.trade" lib/database/trades.ts
# Common patterns needing batching:
# - getTradeStats() with multiple findMany
# - Individual trade fetches in loops
# - Separate queries for related data
Step 2: Implement Batching (2 hours)
// Example: getTradeStats collapsed into a single aggregated round trip
export async function getTradeStats(filters?: TradeFilters) {
  const where = buildTradeWhere(filters) // hypothetical helper: maps filters to a Prisma where clause

  // BEFORE: Multiple queries
  // const trades = await prisma.trade.findMany({ where })
  // const winningTrades = await prisma.trade.count({ where: { ...where, realizedPnL: { gt: 0 } } })
  // const losingTrades = await prisma.trade.count({ where: { ...where, realizedPnL: { lt: 0 } } })

  // AFTER: Single round trip with aggregation
  const [stats, trades] = await Promise.all([
    prisma.trade.aggregate({
      where,
      _count: true,
      _sum: { realizedPnL: true },
      _avg: { realizedPnL: true }
    }),
    prisma.trade.findMany({
      where,
      select: { realizedPnL: true, exitReason: true }
    })
  ])

  // Calculate derived stats from the single result set
  const winningTrades = trades.filter(t => (t.realizedPnL ?? 0) > 0).length
  const losingTrades = trades.filter(t => (t.realizedPnL ?? 0) < 0).length
  // ...
}
Step 3: Testing (30 minutes)
# Run analytics queries, verify results match
curl http://localhost:3001/api/analytics/last-trade
curl http://localhost:3001/api/withdrawals/stats
# Monitor query performance
docker logs trading-bot-v4 | grep -i "prisma" | head -20
Success Criteria:
- ✅ Trade queries: 15-20 (38-53% reduction from 32)
- ✅ Same analytics results (correctness preserved)
- ✅ Response time: <100ms for dashboard
Effort: 3.5 hours
Risk: LOW (compare old vs new results)
Priority: 🔴 HIGH
Task 2.2: Database Indexing Audit 🔍
Problem: No systematic index audit, potential slow queries
Solution: Strategic Index Creation
Step 1: Query Pattern Analysis (2 hours)
-- Connect to database
docker exec -it trading-bot-postgres psql -U postgres -d trading_bot_v4
-- Analyze slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the columns are total_exec_time / mean_exec_time)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
-- Common filter patterns in codebase:
-- WHERE exitReason IS NULL (open positions)
-- WHERE symbol = 'SOL-PERP' (per-symbol queries)
-- WHERE signalQualityScore >= X (quality filtering)
-- WHERE createdAt > NOW() - INTERVAL '24 hours' (recent trades)
-- WHERE indicatorVersion = 'v8' (version comparison)
Step 2: Index Creation (2 hours)
-- Prisma migration file: prisma/migrations/YYYYMMDD_add_performance_indexes/migration.sql
-- Index for open positions (frequent query)
CREATE INDEX idx_trade_open_positions ON "Trade"("exitReason")
WHERE "exitReason" IS NULL;
-- Index for symbol filtering
CREATE INDEX idx_trade_symbol ON "Trade"("symbol");
-- Composite index for quality analysis
CREATE INDEX idx_trade_quality_version ON "Trade"("signalQualityScore", "indicatorVersion");
-- Index for time-based queries
CREATE INDEX idx_trade_created_at ON "Trade"("createdAt" DESC);
-- Index for stop hunt tracking
CREATE INDEX idx_stophunt_active ON "StopHunt"("revengeExecuted", "revengeWindowExpired")
WHERE "revengeExecuted" = false AND "revengeWindowExpired" = false;
Step 3: Migration and Verification (1 hour)
# Create migration
npx prisma migrate dev --name add_performance_indexes
# Apply to production
docker exec trading-bot-v4 npx prisma migrate deploy
# Verify indexes created
docker exec -it trading-bot-postgres psql -U postgres -d trading_bot_v4 -c "\d+ \"Trade\""
# Benchmark queries before/after
# Should see 2-5× speedup on filtered queries
Success Criteria:
- ✅ Query time: 2-5× faster for common filters
- ✅ All migrations applied successfully
- ✅ No performance regressions
Effort: 5 hours
Risk: LOW (indexes don't change data)
Priority: 🟡 MEDIUM
Task 2.3: Timer/Interval Consolidation ⏱️
Problem: 20 separate polling calls causing RPC overhead
Solution: Event-Driven Architecture
Step 1: Audit Polling Patterns (4 hours)
# Find all setInterval/setTimeout calls
grep -rn "setInterval\|setTimeout" lib/ --include="*.ts"
# Document:
# - position-manager.ts: 2s price monitoring
# - stop-hunt-tracker.ts: 30s revenge checks
# - blocked-signal-tracker.ts: 5min price tracking
# - drift-health-monitor.ts: 2min health checks
# - smart-validation-queue.ts: 30s validation
Step 2: Implement Event Bus (8 hours)
// lib/events/event-bus.ts
import { EventEmitter } from 'events'

export class TradingEventBus extends EventEmitter {
  private static instance: TradingEventBus

  static getInstance() {
    if (!this.instance) {
      this.instance = new TradingEventBus()
    }
    return this.instance
  }

  // Events:
  // - 'price:update' - Pyth WebSocket price changes
  // - 'trade:opened' - New position opened
  // - 'trade:tp1' - TP1 hit
  // - 'trade:closed' - Position closed
}

// Example usage in position-manager.ts:
// Instead of 2s polling, listen to price updates
const eventBus = TradingEventBus.getInstance()
eventBus.on('price:update', ({ symbol, price }) => {
  const trade = this.activeTrades.get(symbol)
  if (trade) {
    this.checkTradeConditions(trade, price)
  }
})
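The snippet above is the consumer side; the producer side would bridge the existing Pyth monitor onto the bus. A sketch, assuming the monitor emits 'price' events as in the PriceMonitor extraction later in this plan (verify against lib/pyth/price-monitor.ts):
// lib/startup/wire-events.ts (sketch): publish Pyth updates onto the bus
import { getPythPriceMonitor } from '@/lib/pyth/price-monitor'
import { TradingEventBus } from '@/lib/events/event-bus'

export function wirePriceEvents() {
  const eventBus = TradingEventBus.getInstance()
  getPythPriceMonitor().on('price', ({ symbol, price }: { symbol: string; price: number }) => {
    eventBus.emit('price:update', { symbol, price })
  })
}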
Step 3: Adaptive Polling Fallback (4 hours)
// For systems that can't be fully event-driven
class AdaptivePoller {
  private interval: NodeJS.Timeout | null = null
  private currentRate: number = 30000 // Start slow

  constructor(private tick: () => void) {}

  adjustRate(activity: 'idle' | 'low' | 'high') {
    const rates = {
      idle: 30000, // 30s when no trades
      low: 10000,  // 10s with 1-2 trades
      high: 2000   // 2s with 3+ trades
    }
    this.currentRate = rates[activity]
    this.restart()
  }

  private restart() {
    if (this.interval) clearInterval(this.interval)
    this.interval = setInterval(this.tick, this.currentRate)
  }
}
Step 4: Testing (4 hours)
# Shadow testing: Run old and new side-by-side
# Compare: Do same trades get detected?
# Measure: RPC call reduction (should be 50-70%)
# Monitor: CPU usage should drop 18-27%
Success Criteria:
- ✅ RPC calls: 50-70% reduction
- ✅ CPU usage: 8-9% (from 10.88%)
- ✅ Same trade detection accuracy
Effort: 2 days
Risk: MEDIUM (core monitoring changes)
Priority: 🟡 MEDIUM
Task 2.4: Node Modules Audit 📦
Problem: 620MB node_modules (47.7% of disk)
Solution: Dependency Optimization
Step 1: Analyze Dependencies (2 hours)
# Size breakdown: identify the largest packages
du -sh node_modules/* | sort -rh | head -20
# Common culprits:
# - @drift-labs/sdk (Solana deps)
# - @solana/web3.js
# - @coral-xyz/anchor
# - next (framework)
Step 2: Optimization Opportunities (2 hours)
// package.json changes:

// 1. Remove unused dependencies
//    Run: npx depcheck, then remove packages not imported anywhere

// 2. Replace heavy dependencies
//    Example: moment → date-fns (smaller bundle)
//    Example: lodash → native JS methods

// 3. Move dev deps correctly: every @types/* package, eslint, prettier etc.
//    belongs in devDependencies, listed individually (wildcard package names
//    like "@types/*" are not valid in package.json), e.g.:
"devDependencies": {
  "@types/node": "*",
  "eslint": "*",
  "prettier": "*"
}

// 4. Use npm ci for reproducible builds
//    Already in Dockerfile, but verify
Step 3: Rebuild and Test (30 minutes)
rm -rf node_modules package-lock.json
npm install
npm run build
docker compose build trading-bot
# Verify size reduction
du -sh node_modules
Success Criteria:
- ✅ Node modules: 480-500MB (20-23% reduction)
- ✅ All functionality preserved
- ✅ Build successful
Effort: 4.5 hours
Risk: MEDIUM (dependency changes)
Priority: 🟡 MEDIUM
Task 2.5: RPC Call Pattern Optimization 🌐
Problem: 20.5GB received (high RPC volume)
Solution: Caching + Batching
Step 1: Oracle Price Caching (4 hours)
// lib/drift/price-cache.ts
export class OraclePriceCache {
  private cache = new Map<string, { price: number, timestamp: number }>()
  private TTL = 2000 // 2 second cache

  async getPrice(marketIndex: number): Promise<number> {
    const cached = this.cache.get(marketIndex.toString())
    const now = Date.now()

    if (cached && (now - cached.timestamp) < this.TTL) {
      return cached.price
    }

    // Fetch from Drift only if cache expired
    const price = await driftService.getOraclePrice(marketIndex)
    this.cache.set(marketIndex.toString(), { price, timestamp: now })
    return price
  }
}
Step 2: RPC Request Batching (4 hours)
// Batch multiple getOraclePrice calls into a single RPC request
export class BatchedRpcClient {
  private queue: Array<{ marketIndex: number, resolve: (price: number) => void }> = []
  private timeout: NodeJS.Timeout | null = null

  getPrice(marketIndex: number): Promise<number> {
    return new Promise((resolve) => {
      this.queue.push({ marketIndex, resolve })
      if (!this.timeout) {
        this.timeout = setTimeout(() => this.flush(), 100) // 100ms batch window
      }
    })
  }

  private async flush() {
    const batch = [...this.queue]
    this.queue = []
    this.timeout = null

    // Single RPC call for all prices (fetchMultiplePrices: to be implemented
    // against the Drift SDK's batch read, verified during Step 3)
    const prices = await this.fetchMultiplePrices(batch.map(b => b.marketIndex))
    batch.forEach((item, i) => item.resolve(prices[i]))
  }
}
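The two layers compose if the cache's miss path calls the batching client instead of driftService directly (a one-line change to the cache sketch above). A hypothetical call site:
// Hypothetical call site once both layers are wired together.
// Reads within the 2s TTL never leave the process; cache misses for
// different markets landing in the same 100ms window share one RPC call.
async function logOraclePrices(cache: OraclePriceCache) {
  const [sol, btc] = await Promise.all([
    cache.getPrice(0), // SOL-PERP market index on Drift
    cache.getPrice(1), // BTC-PERP (index shown for illustration)
  ])
  console.log({ sol, btc })
}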
Step 3: WebSocket Investigation (4 hours)
// Investigate if WebSocket subscriptions can replace polling
// Drift SDK may support WebSocket price feeds
// If yes, migrate from HTTP polling to WebSocket push
Step 4: Monitoring (4 hours)
# Track RPC call reduction
docker stats trading-bot-v4 --no-stream
# Network I/O should reduce by 30-50%
# Verify no accuracy loss
# Price updates should still be timely (within 2s)
Success Criteria:
- ✅ RPC calls: 30-50% reduction
- ✅ Network received: <15GB/day (from 20.5GB)
- ✅ Price accuracy preserved (±0.01% tolerance)
Effort: 2 days
Risk: LOW (caching is conservative)
Priority: 🟡 MEDIUM
Phase 2 Summary
Duration: 2-4 weeks
Total Effort: 6 days
Expected Results:
- 38-53% database query reduction
- 2-5× query speed improvement
- 50-70% RPC call reduction
- 20-23% node_modules size reduction
- 18-27% CPU usage reduction
Deployment Checklist:
- Database migrations applied
- Shadow testing completed (old vs new behavior)
- Performance benchmarks documented
- Rollback plan prepared
- Gradual rollout: 10% → 50% → 100% over 2 weeks
Phase 3: Long-Term Projects (1-3 months)
Task 3.1: Winston Structured Logging 📝
Problem: Console.log doesn't provide queryable logs for production analysis
Solution: Professional Logging Framework
Step 1: Install Winston (15 minutes)
cd /home/icke/traderv4
npm install winston winston-daily-rotate-file
Step 2: Create Logger Service (3 hours)
// lib/utils/winston-logger.ts
import winston from 'winston'
import DailyRotateFile from 'winston-daily-rotate-file'

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: { service: 'trading-bot' },
  transports: [
    // Console for Docker logs
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.simple()
      )
    }),
    // File rotation for persistent logs
    new DailyRotateFile({
      filename: '/app/logs/trading-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
      maxSize: '20m',
      maxFiles: '14d',
      level: 'info'
    }),
    // Separate error log
    new DailyRotateFile({
      filename: '/app/logs/error-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
      maxSize: '20m',
      maxFiles: '30d',
      level: 'error'
    })
  ]
})

// Structured logging helpers
export const log = {
  trade: (action: string, data: any) =>
    logger.info('TRADE', { action, ...data }),
  position: (action: string, data: any) =>
    logger.info('POSITION', { action, ...data }),
  error: (context: string, error: Error, data?: any) =>
    logger.error('ERROR', { context, error: error.message, stack: error.stack, ...data })
}
Step 3: Replace Logger Import (4 hours)
// Update all files to use Winston instead of simple logger
// Find: import { logger } from '@/lib/utils/logger'
// Replace: import { log } from '@/lib/utils/winston-logger'
// Example conversions:
// logger.log('Trade opened')
// → log.trade('opened', { symbol, entryPrice, size })
// logger.error('Failed to close position')
// → log.error('position-close', error, { symbol, positionId })
Step 4: Log Analysis Setup (1 hour)
# Query logs with jq
docker exec trading-bot-v4 cat /app/logs/trading-2025-12-04.log | jq '.action, .symbol, .realizedPnL'
# Aggregate stats
cat logs/trading-*.log | jq -s 'group_by(.action) | map({action: .[0].action, count: length})'
Success Criteria:
- ✅ 100% console.log removed
- ✅ Queryable JSON logs
- ✅ 14-day retention working
- ✅ Error logs isolated
Effort: 1 day
Risk: MEDIUM (logging changes)
Priority: 🟡 MEDIUM
Task 3.2: Position Manager Refactor 🔧
Problem: 1,945 lines causing maintainability issues
Solution: Modular Architecture
Target Structure:
lib/trading/position-manager/
├── index.ts (200 lines) - Core orchestration
├── price-monitor.ts (300 lines) - Price tracking & WebSocket
├── trade-lifecycle.ts (400 lines) - State management
├── exit-strategy.ts (500 lines) - TP/SL/trailing logic
├── position-validator.ts (300 lines) - Ghost detection, external closure
└── types.ts (100 lines) - Shared interfaces
Migration Strategy (11 days total):
Week 1: Planning & Setup (2 days)
- Day 1: Document current architecture (call graph, state flow)
- Day 2: Design module interfaces, define contracts
Week 2: Module Extraction (5 days)
- Day 3-4: Extract price-monitor.ts (Pyth WebSocket, caching)
- Day 5: Extract position-validator.ts (ghost detection, external closure)
- Day 6-7: Extract exit-strategy.ts (TP1/TP2/trailing stop logic)
Week 3: Integration & Testing (4 days)
- Day 8-9: Extract trade-lifecycle.ts (state transitions, DB updates)
- Day 10: Refactor index.ts as thin orchestrator
- Day 11: Integration testing, shadow deployment
Implementation Details:
Step 1: Extract Price Monitor (2 days)
// lib/trading/position-manager/price-monitor.ts
import { getPythPriceMonitor, type PythPriceMonitor } from '@/lib/pyth/price-monitor'

export class PriceMonitor {
  private pythMonitor: PythPriceMonitor
  private subscriptions = new Map<string, (price: number) => void>()

  constructor() {
    this.pythMonitor = getPythPriceMonitor()
    this.startMonitoring()
  }

  subscribe(symbol: string, callback: (price: number) => void) {
    this.subscriptions.set(symbol, callback)
  }

  unsubscribe(symbol: string) {
    this.subscriptions.delete(symbol)
  }

  private startMonitoring() {
    // WebSocket price updates trigger callbacks
    this.pythMonitor.on('price', ({ symbol, price }) => {
      const callback = this.subscriptions.get(symbol)
      if (callback) callback(price)
    })
  }
}
Step 2: Extract Exit Strategy (2 days)
// lib/trading/position-manager/exit-strategy.ts
export class ExitStrategy {
  shouldTakeProfit1(price: number, trade: ActiveTrade): boolean {
    const profitPercent = this.calculateProfitPercent(trade.entryPrice, price, trade.direction)
    return !trade.tp1Hit && profitPercent >= trade.tp1Percent
  }

  shouldTakeProfit2(price: number, trade: ActiveTrade): boolean {
    const profitPercent = this.calculateProfitPercent(trade.entryPrice, price, trade.direction)
    return trade.tp1Hit && !trade.tp2Hit && profitPercent >= trade.tp2Percent
  }

  shouldStopLoss(price: number, trade: ActiveTrade): boolean {
    const profitPercent = this.calculateProfitPercent(trade.entryPrice, price, trade.direction)
    return profitPercent <= trade.stopLossPercent
  }

  calculateTrailingStop(trade: ActiveTrade): number {
    // ATR-based trailing stop logic
    const atrPercent = (trade.atrAtEntry / trade.entryPrice) * 100
    const multiplier = this.getTrailingMultiplier(trade)
    return atrPercent * multiplier
  }

  // calculateProfitPercent() and getTrailingMultiplier() are lifted from the
  // existing position-manager.ts during extraction (not shown here)
}
Step 3: Extract Position Validator (1 day)
// lib/trading/position-manager/position-validator.ts
export class PositionValidator {
  async detectGhostPosition(trade: ActiveTrade): Promise<boolean> {
    const position = await this.getDriftPosition(trade.symbol)
    if (!position || Math.abs(position.size) < 0.01) {
      // Trade in memory but not on Drift = ghost
      return true
    }
    return false
  }

  async detectExternalClosure(trade: ActiveTrade): Promise<boolean> {
    const position = await this.getDriftPosition(trade.symbol)
    if (!position && Date.now() - trade.lastUpdateTime > 30000) {
      // Position gone and not recent = external closure
      return true
    }
    return false
  }

  // getDriftPosition() wraps the existing Drift client lookup (not shown here)
}
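trade-lifecycle.ts (Days 8-9) is not sketched above, so here is the minimal surface the index.ts sketch below relies on. Method names, the prisma export from lib/database/client, and the exitPrice column are assumptions to confirm during the Day 1-2 design pass:
// lib/trading/position-manager/trade-lifecycle.ts (minimal sketch)
import { prisma } from '@/lib/database/client'
import type { ActiveTrade } from './types'

export class TradeLifecycle {
  private trades = new Map<string, ActiveTrade>()

  add(trade: ActiveTrade) {
    this.trades.set(trade.symbol, trade)
  }

  get(symbol: string): ActiveTrade | undefined {
    return this.trades.get(symbol)
  }

  // State transition + persistence in one place, so the exit-strategy and
  // validator modules never touch the database directly
  async markTp1Hit(trade: ActiveTrade) {
    trade.tp1Hit = true
    await prisma.trade.update({
      where: { id: trade.id },
      data: { tp1Hit: true }
    })
  }

  async close(trade: ActiveTrade, exitReason: string, exitPrice: number) {
    this.trades.delete(trade.symbol)
    await prisma.trade.update({
      where: { id: trade.id },
      data: { exitReason, exitPrice }
    })
  }
}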
Step 4: Refactor Core Index (2 days)
// lib/trading/position-manager/index.ts
export class PositionManager {
  private priceMonitor: PriceMonitor
  private exitStrategy: ExitStrategy
  private validator: PositionValidator
  private lifecycle: TradeLifecycle

  constructor(config: TradingConfig) {
    this.priceMonitor = new PriceMonitor()
    this.exitStrategy = new ExitStrategy(config)
    this.validator = new PositionValidator()
    this.lifecycle = new TradeLifecycle()
  }

  async addTrade(trade: ActiveTrade) {
    this.lifecycle.add(trade)
    this.priceMonitor.subscribe(trade.symbol, (price) => {
      this.handlePriceUpdate(trade, price)
    })
  }

  private async handlePriceUpdate(trade: ActiveTrade, price: number) {
    // Ghost detection
    if (await this.validator.detectGhostPosition(trade)) {
      return this.handleGhostDetection(trade)
    }

    // Exit conditions
    if (this.exitStrategy.shouldStopLoss(price, trade)) {
      return this.executeExit(trade, 100, 'SL', price)
    }
    if (this.exitStrategy.shouldTakeProfit1(price, trade)) {
      return this.executeExit(trade, 60, 'TP1', price)
    }
    // ... more conditions
  }
}
Step 5: Shadow Testing (2 days)
// Run both old and new implementations side-by-side
// Compare: Do they detect same exit conditions?
// Measure: Performance differences
// Validate: No missed signals or false triggers
Step 6: Gradual Rollout (2 days)
// Feature flag for phased migration
if (process.env.USE_REFACTORED_POSITION_MANAGER === 'true') {
  return new RefactoredPositionManager(config)
} else {
  return new LegacyPositionManager(config)
}

// Rollout plan:
// Week 1: 10% of trades (1-2 trades)
// Week 2: 50% of trades (monitor closely)
// Week 3: 100% (full migration)
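The week-by-week percentages need a deterministic router so a given position is opened and closed by the same implementation. A sketch: the ROLLOUT_PERCENT variable and the symbol+day bucketing are assumptions, not part of the plan above:
import { createHash } from 'node:crypto'

// Deterministic bucketing: the same symbol on the same day always routes to
// the same implementation, so a trade is never opened by one manager and
// closed by the other. ROLLOUT_PERCENT is a hypothetical env var (0-100).
function useRefactoredManager(symbol: string): boolean {
  const rolloutPercent = Number(process.env.ROLLOUT_PERCENT ?? '0')
  const day = new Date().toISOString().slice(0, 10)
  const digest = createHash('sha256').update(`${symbol}:${day}`).digest()
  const bucket = digest.readUInt16BE(0) % 100
  return bucket < rolloutPercent
}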
Success Criteria:
- ✅ 1,945-line monolith split into six focused modules, largest ~500 lines (≈59% complexity reduction)
- ✅ 100% test coverage on new modules
- ✅ No missed trades or false exits
- ✅ Same P&L results as legacy version
Effort: 11 days
Risk: HIGH (core trading logic)
Priority: 🟡 MEDIUM
Task 3.3: Circular Dependency Resolution 🔄
Problem: 5 singleton patterns may have circular dependencies
Solution: Dependency Injection
Step 1: Detect Circular Dependencies (2 hours)
npm install --save-dev madge
npx madge --circular lib/
# Expected output:
# trades.ts → position-manager.ts → drift/client.ts → trades.ts
# signal-quality.ts → trades.ts → signal-quality.ts
Step 2: Refactor Singletons (1 day)
// BEFORE: Direct getInstance calls create circular deps
// drift/client.ts
export function getDriftService() {
  if (!instance) {
    const trades = require('../database/trades') // Circular!
    instance = new DriftService(trades)
  }
  return instance
}

// AFTER: Dependency injection
// drift/client.ts
export function createDriftService(dependencies: {
  tradesRepo: TradesRepository
}) {
  return new DriftService(dependencies.tradesRepo)
}

// lib/startup/services.ts (central initialization)
export async function initializeServices() {
  const tradesRepo = new TradesRepository(prisma)
  const driftService = createDriftService({ tradesRepo })
  const positionManager = createPositionManager({ driftService, tradesRepo })
  return { driftService, positionManager, tradesRepo }
}
Step 3: Update Call Sites (4 hours)
// BEFORE:
const driftService = getDriftService()
// AFTER:
// In API routes, get from request context
const { driftService } = await getServices()
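getServices() as used above is not defined in this plan; a lazy-memoized wrapper over initializeServices() would fit. A sketch:
// lib/startup/services.ts (continued): sketch of the getServices() accessor
// used at the call sites above. Initialize once, reuse the same promise after.
let servicesPromise: ReturnType<typeof initializeServices> | null = null

export function getServices() {
  if (!servicesPromise) {
    servicesPromise = initializeServices()
  }
  return servicesPromise
}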
Step 4: Verification (2 hours)
npx madge --circular lib/
# Should show 0 circular dependencies
npm run build
# Should compile without issues
Success Criteria:
- ✅ 0 circular dependencies
- ✅ All services initialized correctly
- ✅ No runtime errors
Effort: 2 days
Risk: MEDIUM (architectural change)
Priority: 🟢 LOW
Task 3.4: Build Time Optimization 🚀
Problem: 54.74s build time could be faster
Solution: Incremental Builds + Caching
Step 1: Enable Incremental TypeScript (30 minutes)
// tsconfig.json
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": ".tsbuildinfo"
  }
}

// .gitignore
.tsbuildinfo
Step 2: Parallel Build Processing (1 hour)
// next.config.js
module.exports = {
  // SWC minification is the default on recent Next.js releases; on older
  // versions enable it via the top-level swcMinify: true flag
  experimental: {
    // Parallel build workers
    workerThreads: true,
    cpus: Math.max(1, require('os').cpus().length - 1)
  }
}
Step 3: Turborepo Caching (2 hours)
# Install Turborepo
npm install -D turbo

# Create turbo.json ("pipeline" is the Turborepo 1.x key; it was renamed
# to "tasks" in Turborepo 2.x)
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**"],
      "cache": true
    }
  }
}

# Update package.json scripts
"scripts": {
  "build": "turbo run build"
}
Step 4: Docker Layer Caching (1 hour)
# Dockerfile optimization
# Cache node_modules separately
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
# This layer is cached unless package.json changes

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# This layer rebuilds only when source changes
Step 5: Benchmarking (30 minutes)
# Cold build (no cache)
rm -rf .next .tsbuildinfo node_modules/.cache
time npm run build
# Warm build (with cache)
touch lib/trading/position-manager.ts
time npm run build
# Target: 25-30s (50% reduction from 54.74s)
Success Criteria:
- ✅ Cold build: <30s (from 54.74s)
- ✅ Warm build: <10s (incremental)
- ✅ Docker build: Layer caching working
Effort: 5 hours
Risk: LOW (build tooling)
Priority: 🟢 LOW
Phase 3 Summary
Duration: 1-3 months
Total Effort: 19.5 days
Expected Results:
- 100% console.log removal (Winston only)
- 59% position manager complexity reduction
- 0 circular dependencies
- 45-54% build time reduction
- Queryable structured logs
Deployment Checklist:
- Winston logging tested in staging
- Position Manager shadow testing completed (1-2 weeks)
- Gradual rollout: 10% → 50% → 100%
- Rollback plan prepared
- Performance regression testing
- Update all documentation
Risk Mitigation
Trading System Constraints
- ✅ Real-money trading: $540 capital
- ✅ Win rate: Must maintain ≥60%
- ✅ Dual-layer redundancy: Preserve Position Manager + on-chain orders
- ✅ Database integrity: 170+ trades, critical for analytics
- ✅ Zero downtime: HA infrastructure must stay operational
Mitigation Strategies
1. Shadow Testing (All High-Risk Changes)
// Run old and new code side-by-side
import { isDeepStrictEqual } from 'node:util' // replaces the undefined deepEqual helper

if (process.env.SHADOW_MODE === 'true') {
  const oldResult = await legacyFunction()
  const newResult = await optimizedFunction()

  if (!isDeepStrictEqual(oldResult, newResult)) {
    log.error('shadow-test', new Error('Shadow test failed'), { old: oldResult, new: newResult })
  }
  return oldResult // Use old result in production
}
2. Feature Flags (Runtime Toggles)
// Environment-based toggles
const config = {
  useEventDrivenMonitoring: process.env.USE_EVENT_DRIVEN === 'true',
  useRefactoredPositionManager: process.env.USE_REFACTORED_PM === 'true',
  useBatchedQueries: process.env.USE_BATCHED_QUERIES === 'true'
}
// Easy rollback without deployment
// Easy rollback without deployment
3. Rollback Plan
# Git tags for each phase
git tag -a phase1-console-gating -m "Phase 1: Console.log gating"
git tag -a phase2-db-optimization -m "Phase 2: Database optimization"
git tag -a phase3-refactor -m "Phase 3: Position Manager refactor"
# Docker image snapshots
docker tag trading-bot-v4 trading-bot-v4:phase1-backup
docker tag trading-bot-v4 trading-bot-v4:phase2-backup
# Rollback procedure
git checkout phase1-console-gating
docker compose build trading-bot
docker compose up -d --force-recreate trading-bot
4. Comprehensive Testing
# Unit tests (target: 90%+ coverage)
npm test -- --coverage
# Integration tests
npm run test:integration
# Load testing (simulate 50-100 trades)
npm run test:load
# Manual testing checklist:
# - Open position
# - Hit TP1 (verify 60% closes)
# - Monitor runner (verify trailing stop)
# - Hit SL (verify full close)
# - Database queries (verify correct results)
5. Gradual Rollout
| Week | Rollout % | Monitoring |
|---|---|---|
| 1 | 10% | Watch every trade closely |
| 2 | 25% | Monitor daily aggregates |
| 3 | 50% | Compare old vs new metrics |
| 4 | 75% | Confidence growing |
| 5 | 100% | Full migration |
6. Monitoring Alerts
// Set up alerts for regressions. alert() is a placeholder notifier here;
// see the Telegram wiring sketch below for a concrete option.
if (buildTime > previousBuildTime * 1.2) {
  alert('Build time regression: ' + buildTime)
}
if (queryTime > previousQueryTime * 1.5) {
  alert('Query performance regression: ' + queryTime)
}
if (memoryUsage > 250 * 1024 * 1024) { // 250MB
  alert('Memory usage spike: ' + memoryUsage)
}
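Since the codebase already ships lib/notifications/telegram.ts, a thin adapter could route these regression alerts through it. A sketch: sendTelegramMessage is a hypothetical export, so check that module's actual API before wiring this up:
// Sketch: route regression alerts through the existing Telegram notifier.
// sendTelegramMessage is a hypothetical export; verify the real API surface
// of lib/notifications/telegram.ts first.
import { sendTelegramMessage } from '@/lib/notifications/telegram'

const MEMORY_LIMIT_BYTES = 250 * 1024 * 1024 // 250MB threshold from above

export async function checkMemoryUsage() {
  const { rss } = process.memoryUsage()
  if (rss > MEMORY_LIMIT_BYTES) {
    await sendTelegramMessage(`⚠️ Memory usage spike: ${(rss / 1024 / 1024).toFixed(0)}MiB`)
  }
}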
Success Metrics Tracking
Baseline (Before Optimization)
| Metric | Current | Target After Phase 1 | Target After Phase 2 | Target After Phase 3 |
|---|---|---|---|---|
| Console.log | 731 | 73 (90% gated) | 73 | 0 (Winston only) |
| Build Time | 54.74s | 52-53s | 52-53s | 25-30s |
| Docker Image | 1.32GB | 600-700MB | 600-700MB | 600-700MB |
| Node Modules | 620MB | 620MB | 480-500MB | 480-500MB |
| DB Queries (Trade) | 32 | 32 | 15-20 | 15-20 |
| Position Manager Lines | 1,945 | 1,945 | 1,945 | ~800 |
| Type Imports Missing | 49 | 0 | 0 | 0 |
| CPU Usage | 10.88% | 10.88% | 8-9% | 8-9% |
| Memory Usage | 179.7MiB | 175MiB | 150-160MiB | 140-150MiB |
Measurement Commands
# Console.log count
grep -r "console\.\(log\|error\|warn\)" --include="*.ts" lib/ | wc -l
# Build time
time npm run build 2>&1 | grep "Compiled successfully"
# Docker image size
docker images | grep trading-bot-v4
# Node modules size
du -sh node_modules
# Database query count
grep -rn "prisma.trade" lib/database/trades.ts | wc -l
# File lines
wc -l lib/trading/position-manager.ts
# Type imports
grep -r "import.*{.*}" --include="*.ts" lib/ | grep -v "type {" | wc -l
# Runtime metrics
docker stats trading-bot-v4 --no-stream
Integration with Existing Roadmaps
OPTIMIZATION_MASTER_ROADMAP.md (Trading Strategy)
Focus: Signal quality, position scaling, ATR-based TP
Status:
- ✅ Signal Quality v8 complete (57.1% WR, +$262.70)
- 🔄 Data collection ongoing (8/20 blocked signals, 8/50 ATR trades)
- 📋 v9 development planned (directional filter, time-of-day)
This Plan (Infrastructure/Code Quality):
Focus: Console.log, Docker size, Position Manager complexity, database queries
Relationship: Complementary (run in parallel, no conflicts)
Synergies:
- Console.log gating reduces noise during signal quality analysis
- Database indexing speeds backtesting queries for position scaling
- Position Manager refactor makes exit strategies easier to implement
- Structured logging provides better data for trading performance analysis
No Conflicts:
- Infrastructure optimizations don't touch trading logic
- Quality thresholds unchanged (91 for v8)
- Position sizing strategies unaffected
- Data collection systems continue running
Timeline Overview
December 2025
Week 1-2: Phase 1 (Quick Wins)
├── Console.log gating (4h) ✓
├── Type imports (30m) ✓
├── Docker investigation (3h) ✓
└── Export tree-shaking (1h) ✓
Week 3-6: Phase 2 (Medium Initiatives)
├── Database batching (3.5h)
├── Database indexing (5h)
├── Timer consolidation (2d)
├── Node modules audit (4.5h)
└── RPC optimization (2d)
January-March 2026: Phase 3 (Long-Term)
├── Winston logging (1d)
├── Position Manager refactor (11d)
│ └── Shadow testing (1-2 weeks)
├── Circular dependencies (2d)
└── Build optimization (5h)
Execution Checklist
Pre-Phase 1
- Backup database: pg_dump trading_bot_v4 > backup_pre_optimization.sql
- Tag git: git tag -a pre-optimization -m "Before optimization plan"
- Document baseline metrics (run measurement commands above)
- Create Nextcloud Deck cards for Phase 1 tasks
- Schedule maintenance window (if needed for risky changes)
During Each Phase
- Create feature branch: git checkout -b optimize/phase-X-taskname
- Implement changes
- Run tests: npm test
- Build: npm run build
- Measure improvement (document in git commit)
- Deploy to staging (if available)
- Shadow test (if high risk)
- Deploy to production with feature flag
- Monitor for 24-48 hours
- Commit: git commit -m "optimize: [description]"
- Push: git push origin optimize/phase-X-taskname
- Update Nextcloud Deck card status
Post-Phase
- Document actual vs expected results
- Update success metrics table
- Tag git: git tag -a phase-X-complete
- Update OPTIMIZATION_MASTER_ROADMAP.md
- Retrospective: What worked? What didn't?
- Adjust remaining phases based on learnings
Quick Reference Commands
# Start Phase 1
cd /home/icke/traderv4
git checkout -b optimize/phase1-console-gating
# ... implement changes
npm run build
docker compose build trading-bot
docker compose up -d --force-recreate trading-bot
git add -A
git commit -m "optimize: Console.log production gating (90% reduction)"
git push
git checkout main
git merge optimize/phase1-console-gating
git tag -a phase1-complete
# Measure improvements
grep -r "console\.\(log\|error\|warn\)" --include="*.ts" lib/ | wc -l
docker images | grep trading-bot-v4
docker stats trading-bot-v4 --no-stream
# Rollback if needed: check out the Phase 1 git tag and rebuild,
# or redeploy the trading-bot-v4:phase1-backup Docker image
git checkout phase1-console-gating
docker compose build trading-bot
docker compose up -d --force-recreate trading-bot
Documentation Updates
After each phase, update:
- This file: Mark tasks as complete, update success metrics
- OPTIMIZATION_MASTER_ROADMAP.md: Add infrastructure notes
- README.md: Update system requirements if changed
- .github/copilot-instructions.md: Document new patterns learned
- Nextcloud Deck: Move cards to "Complete" stack
Contact & Support
For Questions:
- Review comprehensive analysis: /home/icke/traderv4/docs/analysis/COMPREHENSIVE_IMPROVEMENT_PLAN_DEC2025.md
- Check existing roadmap: OPTIMIZATION_MASTER_ROADMAP.md
- System architecture: .github/copilot-instructions.md
Best Practices:
- Always test in shadow mode first for high-risk changes
- Document baseline before starting each task
- Use feature flags for easy rollback
- Measure twice, optimize once
- Trading system stability > optimization gains
Status: ✅ READY FOR EXECUTION
Next Action: User reviews plan and approves Phase 1 start
Estimated Total Duration: 3 months
Expected Total Impact: 40-60% improvement across all metrics