
# Orderbook Shadow Logging System
**Status**: ✅ DEPLOYED (Dec 19, 2025)
**Purpose**: Track orderbook metrics for all trades to prepare for real orderbook integration (Phase 2)
## Overview
Phase 1 implementation: Oracle-based estimates with a 2 bps spread assumption. Orderbook snapshots are captured **at trade execution only** (not periodically). The Phase 1 goal is to collect 50-100 trades with orderbook data before moving to real orderbook integration in Phase 2.
**Critical Scope Note**: Orderbook data is captured ONLY when trades execute via `/api/trading/execute`, NOT during 1-minute market data feed updates (`/api/trading/market-data`). The 1-minute feed captures price/ATR/ADX/RSI metrics but does NOT capture orderbook data. This is intentional to minimize API overhead: orderbook state is only needed at the moment of trade execution.
## Database Schema
**Table**: `Trade`
**New Columns** (all nullable `DOUBLE PRECISION`):
| Column | Description | Expected Range |
|--------|-------------|----------------|
| `spreadBps` | Bid-ask spread in basis points | 2-100 bps typical |
| `imbalanceRatio` | Order flow imbalance (-1 to 1) | Negative=sell pressure, Positive=buy |
| `depthBid1Usd` | Bid depth USD (1% from mid) | >0 |
| `depthAsk1Usd` | Ask depth USD (1% from mid) | >0 |
| `priceImpact1Usd` | Price impact for $1 trade | Small decimal |
| `bidWall` | Largest bid wall detected | NULL or >0 |
| `askWall` | Largest ask wall detected | NULL or >0 |
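For reference, a hedged sketch of how these columns might appear as optional fields on the `CreateTradeParams` interface in `types/trading.ts` (the interface name comes from this doc; exact naming and ordering in the real file may differ):
```typescript
// Illustrative shape only: all seven fields are nullable in the database,
// so they are optional here and simply omitted when logging is disabled.
interface OrderbookTradeFields {
  spreadBps?: number;        // bid-ask spread in basis points
  imbalanceRatio?: number;   // -1 (all sell pressure) .. +1 (all buy pressure)
  depthBid1Usd?: number;     // USD bid depth within 1% of mid
  depthAsk1Usd?: number;     // USD ask depth within 1% of mid
  priceImpact1Usd?: number;  // estimated price impact for a $1 trade
  bidWall?: number | null;   // largest detected bid wall, if any
  askWall?: number | null;   // largest detected ask wall, if any
}
```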
## ENV Configuration
```bash
# Enable/disable orderbook shadow logging
ENABLE_ORDERBOOK_LOGGING=true # Default: true (unset or omitted also means enabled)
```
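The default-on behavior can be expressed with a simple negative check; a minimal sketch (the real flag lives in `config/trading.ts` as `enableOrderbookLogging` and may be parsed differently):
```typescript
// Anything other than the literal string "false" keeps logging enabled,
// so an unset or omitted variable defaults to true as documented above.
export const enableOrderbookLogging =
  process.env.ENABLE_ORDERBOOK_LOGGING !== "false";
```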
**How to toggle**:
```bash
# Disable logging
echo "ENABLE_ORDERBOOK_LOGGING=false" >> .env
docker compose restart trading-bot
# Re-enable
sed -i 's/ENABLE_ORDERBOOK_LOGGING=false/ENABLE_ORDERBOOK_LOGGING=true/' .env
docker compose restart trading-bot
```
## Implementation Details
**Files Modified**:
- `prisma/schema.prisma` - Added 7 orderbook fields to Trade model
- `config/trading.ts` - Added `enableOrderbookLogging` boolean flag (line 127)
- `types/trading.ts` - Added orderbook fields to CreateTradeParams interface
- `lib/database/trades.ts` - Updated createTrade() to save orderbook data
- `app/api/trading/execute/route.ts` - Added ENV flag wrapper (lines 1037-1053)
**Data Source (Phase 1)**: Oracle prices with 2bps spread estimates
- No real orderbook API integration yet
- Estimates sufficient for pattern analysis
- Phase 2 will integrate Hyperliquid/Jupiter for real data
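To make the Phase 1 estimate concrete, here is a minimal sketch of deriving the seven metrics from a single oracle price under the 2 bps spread assumption. The function name and the depth/impact placeholders are illustrative, not the bot's actual formulas:
```typescript
const ASSUMED_SPREAD_BPS = 2; // Phase 1 assumption documented above

// Hypothetical estimator: builds orderbook-style metrics from one oracle price.
function estimateOrderbookFromOracle(oraclePrice: number, tradeNotionalUsd: number) {
  const halfSpread = (oraclePrice * ASSUMED_SPREAD_BPS) / 2 / 10_000;
  const bid = oraclePrice - halfSpread;
  const ask = oraclePrice + halfSpread;
  return {
    spreadBps: ((ask - bid) / oraclePrice) * 10_000, // ~2 bps by construction
    imbalanceRatio: 0,                 // no flow data in Phase 1 -> neutral
    depthBid1Usd: tradeNotionalUsd,    // placeholder: assume our own size is available
    depthAsk1Usd: tradeNotionalUsd,
    priceImpact1Usd: halfSpread / oraclePrice, // crude proxy: half-spread as a fraction
    bidWall: null as number | null,    // wall detection needs real L2 data (Phase 2)
    askWall: null as number | null,
  };
}
```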
**CRITICAL: Trade Execution Only**:
- Orderbook capture happens in `/api/trading/execute` endpoint (lines 1037-1053)
- Triggered ONLY when trade signal executes (not periodic)
- `/api/trading/market-data` endpoint (1-minute feed) does NOT capture orderbook data
- 1-minute feed captures: ATR, ADX, RSI, volume, price position (stored in MarketData table)
- Orderbook data stored in: Trade table (7 columns added via ALTER TABLE)
- Frequency: 3-5 orderbook snapshots per day (when trades execute) vs. 1,440 market data updates per day
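A minimal sketch of the gate around capture (the real wrapper sits at lines 1037-1053 of `app/api/trading/execute/route.ts`; `attachOrderbookFields` and its parameters are illustrative names, not the bot's actual identifiers):
```typescript
// Called only from the execute path; the 1-minute market-data route never
// invokes this, which is why orderbook columns appear only on Trade rows.
function attachOrderbookFields<T extends object, S extends object>(
  tradeParams: T,
  snapshot: S,
  enabled: boolean, // config.enableOrderbookLogging (ENABLE_ORDERBOOK_LOGGING)
): T | (T & S) {
  return enabled ? { ...tradeParams, ...snapshot } : tradeParams;
}
```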
## Verification Queries
### Check Latest Trade with Orderbook Data
```sql
SELECT
  id,
  symbol,
  direction,
  spreadBps,
  imbalanceRatio,
  depthBid1Usd,
  depthAsk1Usd,
  priceImpact1Usd,
  bidWall,
  askWall,
  "createdAt"
FROM "Trade"
WHERE spreadBps IS NOT NULL
ORDER BY "createdAt" DESC
LIMIT 1;
```
### Count Trades with Orderbook Data
```sql
SELECT
  COUNT(*) as total_trades,
  COUNT(spreadBps) as trades_with_orderbook,
  ROUND(100.0 * COUNT(spreadBps) / COUNT(*), 1) as pct_coverage
FROM "Trade";
```
### Average Orderbook Metrics by Symbol
```sql
SELECT
  symbol,
  COUNT(*) as trades,
  ROUND(AVG(spreadBps)::numeric, 2) as avg_spread_bps,
  ROUND(AVG(imbalanceRatio)::numeric, 3) as avg_imbalance,
  ROUND(AVG(depthBid1Usd)::numeric, 2) as avg_bid_depth,
  ROUND(AVG(depthAsk1Usd)::numeric, 2) as avg_ask_depth
FROM "Trade"
WHERE spreadBps IS NOT NULL
GROUP BY symbol
ORDER BY trades DESC;
```
### Spread vs Win Rate Analysis
```sql
SELECT
  CASE
    WHEN spreadBps < 5 THEN '0-5 bps (tight)'
    WHEN spreadBps < 10 THEN '5-10 bps (normal)'
    WHEN spreadBps < 20 THEN '10-20 bps (wide)'
    ELSE '20+ bps (very wide)'
  END as spread_bucket,
  COUNT(*) as trades,
  ROUND(100.0 * COUNT(CASE WHEN "realizedPnL" > 0 THEN 1 END) / COUNT(*), 1) as win_rate,
  ROUND(AVG("realizedPnL")::numeric, 2) as avg_pnl
FROM "Trade"
WHERE spreadBps IS NOT NULL
  AND "exitReason" IS NOT NULL
GROUP BY spread_bucket
ORDER BY MIN(spreadBps);
```
## Deployment Timeline
| Date | Action | Status |
|------|--------|--------|
| Dec 18, 2025 | Task 1-3: Code implementation | ✅ COMPLETE |
| Dec 18, 2025 | Task 4: ENV flag + syntax fix | ✅ COMPLETE |
| Dec 18, 2025 | Task 5: Database migration via direct SQL | ✅ COMPLETE |
| Dec 18, 2025 | Container rebuild and restart | ✅ COMPLETE |
| Dec 18, 2025+ | Production validation (await next trade) | ⏳ PENDING |
## Production Validation Checklist
After next trade executes, verify:
- [ ] Query returns non-NULL orderbook values
- [ ] spreadBps in reasonable range (2-100 bps)
- [ ] imbalanceRatio between -1 and 1
- [ ] depthBid1Usd and depthAsk1Usd > 0
- [ ] priceImpact1Usd is small positive decimal
- [ ] bidWall/askWall NULL or positive (if detected)
**Manual Test** (Optional):
```bash
# Trigger test trade via Telegram
# Send: "long sol --force" or "short sol --force"
# Then query database for orderbook data
```
## Phase 2 Roadmap (Future Enhancement)
**Real Orderbook Integration**:
- Hyperliquid API for SOL/ETH orderbook data
- Jupiter aggregator for DEX liquidity depth
- Replace oracle estimates with actual L2 orderbook snapshots
- Add real-time spread/imbalance monitoring
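As a starting point for Phase 2, a hedged sketch of fetching a real L2 snapshot from Hyperliquid's public info endpoint; the request body and response shape are assumptions based on Hyperliquid's documented `l2Book` request type and should be verified against the current API docs before use:
```typescript
interface L2Level { px: string; sz: string; n: number }

// Assumed: POST /info with { type: "l2Book", coin } returns
// { levels: [bids[], asks[]] } with price/size encoded as strings.
async function fetchL2Book(coin: string): Promise<{ bids: L2Level[]; asks: L2Level[] }> {
  const res = await fetch("https://api.hyperliquid.xyz/info", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type: "l2Book", coin }),
  });
  if (!res.ok) throw new Error(`Hyperliquid info request failed: ${res.status}`);
  const book = await res.json();
  return { bids: book.levels[0], asks: book.levels[1] };
}

// Example: real top-of-book spread in bps, replacing the 2 bps assumption.
async function realSpreadBps(coin: string): Promise<number> {
  const { bids, asks } = await fetchL2Book(coin);
  const bestBid = Number(bids[0].px);
  const bestAsk = Number(asks[0].px);
  const mid = (bestBid + bestAsk) / 2;
  return ((bestAsk - bestBid) / mid) * 10_000;
}
```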
**Potential Phase 1.5 (User Declined Dec 19, 2025)**:
- Periodic orderbook capture via 1-minute market data feed
- Would store orderbook snapshots every 1 minute in separate table (not Trade table)
- Use case: Track spread/imbalance trends over time independent of trades
- Tradeoffs: More API calls (1,440/day vs. 3-5/day), more storage, minimal benefit for Phase 1 validation
- Decision: User confirmed trade-time capture sufficient for now, periodic monitoring not needed yet
## Troubleshooting
**Problem**: Next trade has NULL orderbook fields
**Solution**: Confirm `ENABLE_ORDERBOOK_LOGGING` is not set to `false`, restart the container, and verify the deployed code includes the capture wrapper
**Problem**: Values look wrong (negative depths, spreads above 100 bps)
**Solution**: Oracle estimates may be off; this is acceptable for Phase 1 and will be replaced with real data in Phase 2
**Problem**: Need to disable logging temporarily
**Solution**: Set `ENABLE_ORDERBOOK_LOGGING=false` in `.env` and restart the container
## Database Maintenance
**Migration Method**: Direct SQL (not Prisma migrate)
- Used due to migration history drift
- Zero data loss approach
- See git commit for ALTER TABLE statement
**Prisma Sync**:
```bash
DATABASE_URL="postgresql://postgres:postgres@localhost:55432/trading_bot_v4?schema=public" \
npx prisma db pull && npx prisma generate
```
## Git History
- **Commit**: [TBD] "feat: Orderbook shadow logging system (Phase 1)"
- **Files changed**: 5 files (schema, config, types, database, execute)
- **Database changes**: 7 columns added via direct SQL
- **Container**: Rebuilt with sha256:6c88c4543ef8...
## Notes
- **Phase 1**: Oracle-based estimates (2bps spread assumption)
- **Phase 2**: Real orderbook integration (Hyperliquid/Jupiter APIs)
- **Data safety**: Preserved production data during migration (avoided destructive reset)
- **Defaults**: Logging enabled by default (ENABLE_ORDERBOOK_LOGGING=true)
- **Performance**: Minimal overhead (estimates calculated at trade execution only)
---
**Last Updated**: Dec 18, 2025
**System Status**: ✅ PRODUCTION READY (awaiting first trade validation)