docs: Add multi-timeframe tracking and Docker maintenance to copilot instructions
- Multi-Timeframe Price Tracking System section (Nov 19, 2025)
  - Purpose and architecture overview
  - BlockedSignalTracker background job details
  - Database schema with price tracking fields
  - API endpoints and integration points
  - How it works (step-by-step flow)
  - Analysis queries for cross-timeframe comparison
  - Decision-making criteria for timeframe optimization
  - Current status: 2 signals tracked, pending 4H/Daily alerts
- Docker Maintenance section (Item #13 in When Making Changes)
  - Cleanup commands: image prune, builder prune, volume prune
  - When to run: after builds, weekly, on disk warnings
  - Space savings: 2-5 GB images, 40-50 GB cache, 0.5-1 GB volumes
  - Safety guidelines: what to delete vs keep
  - Why critical: prevent disk full from build cache accumulation

User requested: 'document in the copilot instructions with the latest updates'
Both sections provide complete reference for future AI agents
.github/copilot-instructions.md
@@ -414,6 +414,104 @@ docker system df

---

## Multi-Timeframe Price Tracking System (Nov 19, 2025)

**Purpose:** Automated data collection and analysis for signals across multiple timeframes (5min, 15min, 1H, 4H, Daily) to determine which timeframe produces the best trading results.

**Architecture:**

- **5min signals:** Execute trades (production)
- **15min/1H/4H/Daily signals:** Save to BlockedSignal table with `blockReason='DATA_COLLECTION_ONLY'`
- **Background tracker:** Runs every 5 minutes, monitors price movements for 30 minutes
- **Analysis:** After 50+ signals per timeframe, compare win rates and profit potential
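
A minimal TypeScript sketch of this routing rule, for orientation only. The `Timeframe` codes, the interfaces, and the `routeSignal` helper are illustrative assumptions; only the "5min executes, every other timeframe is saved with `blockReason='DATA_COLLECTION_ONLY'`" behavior comes from the system described here.

```typescript
// Hypothetical sketch of the timeframe routing decision. Names and types
// are assumptions; only the branching rule mirrors the documented behavior.

type Timeframe = '5' | '15' | '60' | '240' | 'D'; // assumed timeframe codes

interface IncomingSignal {
  symbol: string;
  direction: 'long' | 'short';
  timeframe: Timeframe;
}

type Routing =
  | { action: 'EXECUTE_TRADE' } // 5min: production path
  | { action: 'SAVE_BLOCKED_SIGNAL'; blockReason: 'DATA_COLLECTION_ONLY' };

function routeSignal(signal: IncomingSignal): Routing {
  // Only 5min signals trade real money; every other timeframe is recorded
  // in BlockedSignal for multi-timeframe analysis and never risks capital.
  if (signal.timeframe !== '5') {
    return { action: 'SAVE_BLOCKED_SIGNAL', blockReason: 'DATA_COLLECTION_ONLY' };
  }
  return { action: 'EXECUTE_TRADE' };
}

// Example: a 1H signal is parked for data collection, not traded.
console.log(routeSignal({ symbol: 'BTCUSD', direction: 'long', timeframe: '60' }));
```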

**Components:**

1. **BlockedSignalTracker** (`lib/analysis/blocked-signal-tracker.ts`)
   - Background job running every 5 minutes
   - Tracks price at 1min, 5min, 15min, 30min intervals
   - Detects if TP1/TP2/SL would have been hit using ATR-based targets
   - Records max favorable/adverse excursion (MFE/MAE)
   - Auto-completes after 30 minutes (`analysisComplete=true`)
   - Singleton pattern: Use `getBlockedSignalTracker()` or `startBlockedSignalTracking()`

2. **Database Schema** (BlockedSignal table)

   ```sql
   entryPrice            FLOAT    -- Price at signal time (baseline)
   priceAfter1Min        FLOAT?   -- Price 1 minute after
   priceAfter5Min        FLOAT?   -- Price 5 minutes after
   priceAfter15Min       FLOAT?   -- Price 15 minutes after
   priceAfter30Min       FLOAT?   -- Price 30 minutes after
   wouldHitTP1           BOOLEAN? -- Would TP1 have been hit?
   wouldHitTP2           BOOLEAN? -- Would TP2 have been hit?
   wouldHitSL            BOOLEAN? -- Would SL have been hit?
   maxFavorablePrice     FLOAT?   -- Price at max profit
   maxAdversePrice       FLOAT?   -- Price at max loss
   maxFavorableExcursion FLOAT?   -- Best profit % during 30min
   maxAdverseExcursion   FLOAT?   -- Worst loss % during 30min
   analysisComplete      BOOLEAN  -- Tracking finished (30min elapsed)
   ```

3. **API Endpoints**
   - `GET /api/analytics/signal-tracking` - View tracking status, metrics, recent signals
   - `POST /api/analytics/signal-tracking` - Manually trigger tracking update (auth required)

4. **Integration Points**
   - Execute endpoint: Captures entry price when saving DATA_COLLECTION_ONLY signals
   - Startup: Auto-starts tracker via `initializePositionManagerOnStartup()`
   - Check-risk endpoint: Bypasses quality checks for non-5min signals (lines 147-159)
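
The singleton helpers named under component 1 can be pictured with the sketch below. Only the exported helper names and the 5-minute cadence come from this section; the `start`/`stop`/`isRunning` methods, the factory, and the polling body are assumptions. Once started, tracking status can be checked at any time with a plain `GET` to `/api/analytics/signal-tracking`.

```typescript
// Minimal singleton sketch (assumed internals) mirroring the helper names
// getBlockedSignalTracker() / startBlockedSignalTracking() mentioned above.

interface BlockedSignalTrackerLike {
  start(): void;
  stop(): void;
  isRunning(): boolean;
}

function createTracker(): BlockedSignalTrackerLike {
  let timer: ReturnType<typeof setInterval> | null = null;

  const runTrackingPass = () => {
    // Real implementation: load open BlockedSignal rows, fetch current prices,
    // fill priceAfter*Min fields, update MFE/MAE and TP/SL flags, and mark
    // analysisComplete after 30 minutes. Here it only logs the pass.
    console.log('blocked-signal tracking pass at', new Date().toISOString());
  };

  return {
    start() {
      if (timer) return;                                    // already running
      runTrackingPass();                                    // run once immediately
      timer = setInterval(runTrackingPass, 5 * 60 * 1000);  // then every 5 minutes
    },
    stop() {
      if (timer) clearInterval(timer);
      timer = null;
    },
    isRunning() {
      return timer !== null;
    },
  };
}

// One shared instance per process, as the singleton pattern above implies.
let instance: BlockedSignalTrackerLike | null = null;

export function getBlockedSignalTracker(): BlockedSignalTrackerLike {
  return (instance ??= createTracker());
}

export function startBlockedSignalTracking(): void {
  getBlockedSignalTracker().start();
}
```

In production the tracker is auto-started by `initializePositionManagerOnStartup()`; the sketch exists only to make the pattern concrete.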

**How It Works:**

1. TradingView sends 15min/1H/4H/Daily signal → n8n → `/api/trading/execute`
2. Execute endpoint detects `timeframe !== '5'`
3. Gets current price from Pyth, saves to BlockedSignal with `entryPrice`
4. Background tracker wakes every 5 minutes
5. Queries current price, calculates profit % based on direction
6. Checks if TP1 (~0.86%), TP2 (~1.72%), or SL (~1.29%) would have hit
7. Updates price fields at appropriate intervals (1/5/15/30 min)
8. Tracks MFE/MAE throughout the 30-minute window
9. After 30 minutes, marks `analysisComplete=true`
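
Steps 5-8 reduce to a few percentage comparisons per tracking pass. A sketch follows, assuming the ATR-based targets are approximated by the fixed percentages quoted in step 6; the real tracker derives targets from ATR, and the field handling here is illustrative only.

```typescript
// Illustrative per-pass math for steps 5-8. The percentage constants are the
// approximations quoted above, not the ATR-derived values used in production.

interface TrackedSignal {
  direction: 'long' | 'short';
  entryPrice: number;
  maxFavorableExcursion: number; // best profit % seen so far
  maxAdverseExcursion: number;   // worst loss % seen so far (negative)
  wouldHitTP1: boolean;
  wouldHitTP2: boolean;
  wouldHitSL: boolean;
}

const TP1_PCT = 0.86;
const TP2_PCT = 1.72;
const SL_PCT = 1.29;

function updateTracking(signal: TrackedSignal, currentPrice: number): void {
  // Step 5: profit % relative to entry, sign-adjusted for direction.
  const rawMovePct = ((currentPrice - signal.entryPrice) / signal.entryPrice) * 100;
  const profitPct = signal.direction === 'long' ? rawMovePct : -rawMovePct;

  // Step 6: would TP1/TP2/SL have been hit at this price?
  if (profitPct >= TP1_PCT) signal.wouldHitTP1 = true;
  if (profitPct >= TP2_PCT) signal.wouldHitTP2 = true;
  if (profitPct <= -SL_PCT) signal.wouldHitSL = true;

  // Step 8: track max favorable/adverse excursion across the 30-minute window.
  signal.maxFavorableExcursion = Math.max(signal.maxFavorableExcursion, profitPct);
  signal.maxAdverseExcursion = Math.min(signal.maxAdverseExcursion, profitPct);
}
```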

**Analysis Queries (After 50+ signals per timeframe):**

```sql
-- Compare win rates across timeframes
SELECT
  timeframe,
  COUNT(*) AS total_signals,
  COUNT(CASE WHEN "wouldHitTP1" = true THEN 1 END) AS tp1_wins,
  COUNT(CASE WHEN "wouldHitSL" = true THEN 1 END) AS sl_losses,
  ROUND(100.0 * COUNT(CASE WHEN "wouldHitTP1" = true THEN 1 END) / COUNT(*), 1) AS win_rate,
  ROUND(AVG("maxFavorableExcursion")::numeric, 2) AS avg_mfe,
  ROUND(AVG("maxAdverseExcursion")::numeric, 2) AS avg_mae
FROM "BlockedSignal"
WHERE "analysisComplete" = true
  AND "blockReason" = 'DATA_COLLECTION_ONLY'
GROUP BY timeframe
ORDER BY win_rate DESC;
```

**Decision Making:**

After sufficient data is collected:

- Compare: 5min vs 15min vs 1H vs 4H vs Daily win rates
- Evaluate: Signal frequency (trades/day) vs win rate trade-off
- Identify: Which timeframe has the best TP1 hit rate with acceptable MAE
- Action: Switch the production timeframe if a higher timeframe shows significantly better results
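
One illustrative way to weigh these criteria, not part of the deployed system: combine win rate, average MAE, and signal frequency into a rough expected value per day. The `TimeframeStats` shape, the thresholds, and the formula below are assumptions; the inputs would come from the analytics endpoint or the SQL query above.

```typescript
// Hypothetical ranking helper. It deliberately treats every non-TP1 signal as
// a full stop-out, which overstates losses (a conservative simplification).

interface TimeframeStats {
  timeframe: string;
  totalSignals: number;
  winRate: number;       // % of signals where TP1 would have hit
  avgMae: number;        // average max adverse excursion, in % (negative)
  signalsPerDay: number;
}

const TP1_PCT = 0.86; // assumed % target, as quoted earlier
const SL_PCT = 1.29;  // assumed % stop

// Rough expected value per signal, scaled by daily signal frequency.
function expectedValuePerDay(s: TimeframeStats): number {
  const winProb = s.winRate / 100;
  const evPerSignal = winProb * TP1_PCT - (1 - winProb) * SL_PCT;
  return evPerSignal * s.signalsPerDay;
}

function rankTimeframes(stats: TimeframeStats[]): TimeframeStats[] {
  return stats
    .filter((s) => s.totalSignals >= 50)  // honor the 50-signal minimum
    .filter((s) => s.avgMae > -SL_PCT)    // average drawdown stays inside the stop
    .sort((a, b) => expectedValuePerDay(b) - expectedValuePerDay(a));
}
```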

**Key Features:**

- **Autonomous:** No manual work needed, runs in background
- **Accurate:** Uses same TP/SL calculations as live trades (ATR-based)
- **Risk-free:** Data collection only, no money at risk
- **Comprehensive:** Tracks best/worst case scenarios (MFE/MAE)
- **API accessible:** Check status anytime via `/api/analytics/signal-tracking`

**Current Status (Nov 19, 2025):**

- ✅ System deployed and running
- ✅ 2 signals tracked (15min and 1H from earlier today)
- ✅ TradingView alerts configured for 15min and 1H
- 📋 Pending: Set up 4H and Daily alerts
- 📊 Target: Collect 50+ signals per timeframe for statistical analysis

---

## Critical Components

### 1. Phantom Trade Auto-Closure System

@@ -2882,14 +2980,33 @@ if (!enabled) {
    ```
    - Types: `feat:` (feature), `fix:` (bug fix), `docs:` (documentation), `refactor:` (code restructure)
    - This is NOT optional - code exists only when committed and pushed
13. **DOCKER MAINTENANCE (AFTER BUILDS):** Clean up accumulated cache to prevent disk full:
    ```bash
    # Remove dangling images (old builds)
    docker image prune -f

    # Remove build cache (biggest space hog - 40+ GB typical)
    docker builder prune -f

    # Optional: Remove dangling volumes (if no important data)
    docker volume prune -f

    # Check space saved
    docker system df
    ```
    - **When to run:** After successful deployments, weekly if building frequently, when disk warnings appear
    - **Space freed:** Dangling images (2-5 GB), build cache (40-50 GB), dangling volumes (0.5-1 GB)
    - **Safe to delete:** `<none>` tagged images, build cache (recreated on next build), dangling volumes
    - **Keep:** Named volumes (`trading-bot-postgres`), active containers, tagged images in use
    - **Why critical:** Docker builds create 1.3+ GB per build; cache accumulates to 40-50 GB without cleanup
14. **NEXTCLOUD DECK SYNC (MANDATORY):** After completing phases or making significant roadmap progress:
    - Update roadmap markdown files with new status (🔄 IN PROGRESS, ✅ COMPLETE, 🔜 NEXT)
    - Run sync to update Deck cards: `python3 scripts/sync-roadmap-to-deck.py --init`
    - Move cards between stacks in Nextcloud Deck UI to reflect progress visually
    - Backlog (📥) → Planning (📋) → In Progress (🚀) → Complete (✅)
    - Keep Deck in sync with actual work - it's the visual roadmap tracker
    - Documentation: docs/NEXTCLOUD_DECK_SYNC.md
15. **UPDATE COPILOT-INSTRUCTIONS.MD (MANDATORY):** After implementing ANY significant feature or system change:
    - Document new database fields and their purpose
    - Add filtering requirements (e.g., manual vs TradingView trades)
    - Update "Important fields" sections with new schema changes