docs: Major documentation reorganization + ENV variable reference
**Documentation Structure:**
- Created docs/ subdirectory organization (analysis/, architecture/, bugs/, cluster/, deployments/, roadmaps/, setup/, archived/)
- Moved 68 root markdown files to appropriate categories
- Root directory now clean (only README.md remains)
- Total: 83 markdown files now organized by purpose

**New Content:**
- Added comprehensive Environment Variable Reference to copilot-instructions.md
- 100+ ENV variables documented with types, defaults, purpose, notes
- Organized by category: Required (Drift/RPC/Pyth), Trading Config (quality/leverage/sizing), ATR System, Runner System, Risk Limits, Notifications, etc.
- Includes usage examples (correct vs wrong patterns)

**File Distribution:**
- docs/analysis/ - Performance analyses, blocked signals, profit projections
- docs/architecture/ - Adaptive leverage, ATR trailing, indicator tracking
- docs/bugs/ - CRITICAL_*.md, FIXES_*.md bug reports (7 files)
- docs/cluster/ - EPYC setup, distributed computing docs (3 files)
- docs/deployments/ - *_COMPLETE.md, DEPLOYMENT_*.md status (12 files)
- docs/roadmaps/ - All *ROADMAP*.md strategic planning files (7 files)
- docs/setup/ - TradingView guides, signal quality, n8n setup (8 files)
- docs/archived/2025_pre_nov/ - Obsolete verification checklist (1 file)

**Key Improvements:**
- ENV variable reference: Single source of truth for all configuration
- Common Pitfalls #68-71: Already complete, verified during audit
- Better findability: Category-based navigation vs 68 files in root
- Preserves history: All files git mv (rename), not copy/delete
- Zero broken functionality: Only documentation moved, no code changes

**Verification:**
- 83 markdown files now in docs/ subdirectories
- Root directory cleaned: 68 files → 0 files (except README.md)
- Git history preserved for all moved files
- Container running: trading-bot-v4 (no restart needed)

**Next Steps:**
- Create README.md files in each docs subdirectory
- Add navigation index
- Update main README.md with new structure
- Consolidate duplicate deployment docs
- Archive truly obsolete files (old SQL backups)

See: docs/analysis/CLEANUP_PLAN.md for complete reorganization strategy
docs/deployments/ONE_YEAR_RETENTION_DEPLOYMENT.md (new file, 297 lines):
# 1-Year Retention Deployment - Dec 2, 2025

## ✅ DEPLOYMENT COMPLETE

**Date:** December 2, 2025
**Status:** Fully deployed and verified
**Git Commit:** 5773d7d

---

## Changes Made

### 1. Code Update: lib/maintenance/data-cleanup.ts

**Previous state:** 4-week retention (28 days)
**New state:** 1-year retention (365 days)

**Key changes:**
- Updated retention period: `setDate(-28)` → `setDate(-365)`
- Variable renamed: `fourWeeksAgo` → `oneYearAgo`
- Documentation updated with storage impact (~251 MB/year)
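Concretely, the change is just a wider cutoff window. A minimal before/after sketch of the date arithmetic (illustrative only; the exact variable handling lives in lib/maintenance/data-cleanup.ts):

```typescript
// Illustrative before/after of the retention cutoff (not copied from the actual file)

// Before: 4-week retention
const fourWeeksAgo = new Date()
fourWeeksAgo.setDate(fourWeeksAgo.getDate() - 28)

// After: 1-year retention
const oneYearAgo = new Date()
oneYearAgo.setDate(oneYearAgo.getDate() - 365)

// Rows with createdAt older than oneYearAgo become eligible for deletion.
```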
---

## Storage Analysis

### Row Size Measurement

```sql
SELECT pg_column_size(row(m.*)) as row_size_bytes
FROM "MarketData" m LIMIT 1;
```

**Result:** 152 bytes per record

### Storage Calculations

| Timeframe | Records | Storage |
|-----------|---------|---------|
| 1 hour | 180 | 27.4 KB |
| 1 day | 4,320 | 0.63 MB |
| 1 week | 30,240 | 4.4 MB |
| 28 days | 120,960 | 17.5 MB |
| **365 days** | **1,576,800** | **228.5 MB** |
| **With 10% index overhead** | | **251 MB/year** |

**Data collection rate:** 3 records/minute (1/min × 3 symbols: SOL, ETH, BTC)
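These figures follow directly from the measured 152-byte row size and the 3-records/minute collection rate; a quick arithmetic check (plain TypeScript, no project code assumed):

```typescript
// Back-of-the-envelope storage estimate for 1-minute MarketData rows
const BYTES_PER_ROW = 152       // measured via pg_column_size above
const ROWS_PER_MINUTE = 3       // 1/min × 3 symbols (SOL, ETH, BTC)

const rowsPerDay = ROWS_PER_MINUTE * 60 * 24   // 4,320
const rowsPerYear = rowsPerDay * 365           // 1,576,800

const tableMB = (rowsPerYear * BYTES_PER_ROW) / 1024 / 1024  // ≈ 228.5 MB
const withIndexOverhead = tableMB * 1.10                     // ≈ 251 MB/year

console.log({ rowsPerYear, tableMB: tableMB.toFixed(1), yearlyMB: withIndexOverhead.toFixed(0) })
```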
---

## Deployment Verification

### Container Status

```bash
docker compose up -d --force-recreate trading-bot
# Container trading-bot-v4 Started in 19.0s ✅
```

### Startup Logs Confirmed

```
🎯 Server starting - initializing services...
🧹 Starting data cleanup service...
✅ Data cleanup scheduled for 3 AM (in 15 hours)
✅ Data cleanup complete: Deleted 0 old market data rows (older than 2024-12-02) in 5ms
```

### Cutoff Date Verification

```sql
SELECT NOW() - INTERVAL '365 days' as one_year_cutoff;
```

**Result:** 2024-12-02 (exactly 1 year before the deployment date) ✅

**Previous cutoff (4 weeks):** Nov 4, 2025
**New cutoff (1 year):** Dec 2, 2024

**Impact:** Records from Dec 2, 2024 onward are now retained, versus only records since Nov 4, 2025 under the old policy
## Benefits of 1-Year Retention

### Comparison: 4 Weeks vs 1 Year

| Metric | 4 Weeks | 1 Year | Increase |
|--------|---------|--------|----------|
| **Storage** | 18 MB | 251 MB | 14× |
| **Records** | 120,960 | 1,576,800 | 13× |
| **Blocked signals** | 20-30 | 260-390 | 13× |
| **Analysis value** | Limited | Comprehensive | Massive |

### Key Advantages

1. **13× more historical data** for pattern analysis
2. **Seasonal trend detection** (summer vs winter volatility)
3. **Better statistical significance** for threshold decisions
4. **No risk of losing valuable blocked signal data**
5. **More complete picture** of indicator behavior over time
6. **Negligible storage cost** (0.25 GB vs the terabytes likely available on the host)

### Blocked Signal Analysis Benefits

**With 4-week retention:**
- ~20-30 blocked signals per month
- Limited timeframe for pattern detection
- Risk of losing valuable historical data

**With 1-year retention:**
- ~260-390 blocked signals per year
- Can analyze across different market conditions
- Discover patterns like: "Quality 80 + ADX rising 17→22 = avg 180min to TP1"

---
## Current Data Status

### Database Check (Dec 2, 2025 10:55)

```sql
SELECT symbol, COUNT(*) as rows,
       MIN(TO_CHAR(timestamp, 'MM-DD HH24:MI')) as oldest,
       MAX(TO_CHAR(timestamp, 'MM-DD HH24:MI')) as newest
FROM "MarketData" GROUP BY symbol;
```

**Result:**
```
 symbol   | rows | oldest      | newest
----------+------+-------------+-------------
 SOL-PERP |    1 | 12-02 10:25 | 12-02 10:25
```

**Status:** Test record confirmed, awaiting live TradingView 1-minute alerts

---
## Next Steps

### Immediate (Next 24 hours)
1. ✅ Monitor container stability - No crashes detected
2. ⏳ Watch for live 1-minute data from TradingView alerts
3. ⏳ Verify row growth: Should increase by ~180 rows/hour (3 symbols × 60 min)
4. ⏳ Check at 3 AM: Cleanup should run with 1-year cutoff

### Short Term (Week 1)
5. Monitor database size growth (~4.4 MB expected)
6. Verify no gaps in data collection
7. Confirm all 8 indicator fields populated (not NULL)
8. Validate cleanup runs daily without errors

### Medium Term (Months 1-3)
9. Collect 65-100 blocked signals with 8-hour 1-minute history
10. Monitor database size (18-55 MB)
11. Validate data quality (no gaps, all indicators present)
12. Begin preliminary pattern analysis

### Long Term (Months 4-12)
13. Continue data collection to 260-390 blocked signals
14. Refactor BlockedSignalTracker to query MarketData table (see the sketch after this list)
15. Add precise timing fields: tp1HitTime, minutesToTP1, adxAtTP1, rsiAtTP1
16. Comprehensive pattern analysis with full year of data
17. Make data-driven threshold decisions (lower to 85/80 or keep 90/80)
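As a rough illustration of items 14-15 — correlating a blocked signal with the 1-minute MarketData history that follows it — a hedged sketch (assumes a Prisma client and a `prisma.marketData` model matching the schema in Technical Details below; the `tp1Price` and `direction` fields on the signal are hypothetical placeholders):

```typescript
// Hypothetical sketch for items 14-15: derive minutesToTP1 for a blocked signal
// by scanning the 1-minute MarketData rows recorded after it.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

interface BlockedSignalLike {
  symbol: string              // e.g. "SOL-PERP"
  createdAt: Date             // when the signal was blocked
  tp1Price: number            // hypothetical field: TP1 target at signal time
  direction: 'long' | 'short'
}

async function minutesToTP1(signal: BlockedSignalLike): Promise<number | null> {
  const candles = await prisma.marketData.findMany({
    where: { symbol: signal.symbol, timestamp: { gte: signal.createdAt } },
    orderBy: { timestamp: 'asc' },
    take: 480, // 8 hours of 1-minute data
  })

  const hit = candles.find(c =>
    signal.direction === 'long' ? c.price >= signal.tp1Price : c.price <= signal.tp1Price
  )
  return hit
    ? Math.round((hit.timestamp.getTime() - signal.createdAt.getTime()) / 60_000)
    : null // TP1 not reached within the 8-hour window
}
```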
---

## Monitoring Commands

### Check Data Collection

```bash
# View current row counts
docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c \
  "SELECT symbol, COUNT(*) as rows FROM \"MarketData\" GROUP BY symbol;"

# View recent data
docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c \
  "SELECT symbol, price, adx, atr, TO_CHAR(timestamp, 'MM-DD HH24:MI:SS') \
   FROM \"MarketData\" ORDER BY timestamp DESC LIMIT 10;"
```

### Check Database Size

```bash
docker exec trading-bot-postgres psql -U postgres -d trading_bot_v4 -c \
  "SELECT pg_size_pretty(pg_total_relation_size('\"MarketData\"')) as table_size;"
```

### Check Cleanup Schedule

```bash
docker logs trading-bot-v4 | grep "cleanup"
```

---
## Technical Details

### MarketData Model (8 Fields)

```typescript
interface MarketData {
  id: string             // cuid
  createdAt: Date

  symbol: string         // "SOL-PERP", "ETH-PERP", "BTC-PERP"
  timeframe: string      // "1" for 1-minute
  price: number          // Close price

  // Full indicator suite (ALL CONFIRMED SAVING):
  atr: number            // Volatility %
  adx: number            // Trend strength
  rsi: number            // Momentum
  volumeRatio: number    // Volume vs average
  pricePosition: number  // Position in range (%)
  maGap: number          // MA50-MA200 gap
  volume: number         // Raw volume

  timestamp: Date        // Exact candle close time
}
```

### Cleanup Service Configuration

- **File:** lib/maintenance/data-cleanup.ts
- **Schedule:** Daily at 3 AM (cron: `0 3 * * *`)
- **Retention:** 365 days (1 year)
- **Action:** Deletes records where `createdAt < NOW() - INTERVAL '365 days'`
- **Integration:** Started automatically via lib/startup/init-position-manager.ts
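A minimal sketch of a service with this shape (assuming a Prisma client; the actual lib/maintenance/data-cleanup.ts and its init-position-manager.ts wiring may differ, e.g. it may use a cron library rather than the timer loop shown here):

```typescript
// Illustrative sketch of the cleanup service described above (not the actual file)
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()
const RETENTION_DAYS = 365

async function runCleanup(): Promise<void> {
  // Delete everything older than the retention window
  const cutoff = new Date()
  cutoff.setDate(cutoff.getDate() - RETENTION_DAYS)

  const start = Date.now()
  const { count } = await prisma.marketData.deleteMany({
    where: { createdAt: { lt: cutoff } },
  })
  console.log(
    `✅ Data cleanup complete: Deleted ${count} old market data rows ` +
      `(older than ${cutoff.toISOString().slice(0, 10)}) in ${Date.now() - start}ms`
  )
}

export function startDataCleanupService(): void {
  // Schedule the first run for the next 3 AM, then repeat every 24 hours
  const now = new Date()
  const next3am = new Date(now)
  next3am.setHours(3, 0, 0, 0)
  if (next3am <= now) next3am.setDate(next3am.getDate() + 1)

  setTimeout(function tick() {
    void runCleanup()
    setTimeout(tick, 24 * 60 * 60 * 1000)
  }, next3am.getTime() - now.getTime())
}
```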
### Test Data Validation

```
ID:             cmiofn61g0000t407ilf019cy
Symbol:         SOL-PERP ✅
Timeframe:      1 ✅
Price:          $127.85 ✅
ATR:            2.8 ✅
ADX:            21.5 ✅
RSI:            62.1 ✅
Volume Ratio:   1.5 ✅
Price Position: 55.2% ✅
MA Gap:         0.3 ✅
Volume:         18500 ✅
Timestamp:      Dec 2, 10:25:55 ✅
```

---
## Git History

### Commit: 5773d7d

```
feat: Extend 1-minute data retention from 4 weeks to 1 year

- Updated lib/maintenance/data-cleanup.ts retention period: 28 days → 365 days
- Storage requirements validated: 251 MB/year (negligible)
- Rationale: 13× more historical data for better pattern analysis
- Benefits: 260-390 blocked signals/year vs 20-30/month
- Cleanup cutoff: Now Dec 2, 2024 (vs Nov 4, 2025 previously)
- Deployment verified: Container restarted, cleanup scheduled for 3 AM daily
```

**Files changed:** 11 files, 1191 insertions, 7 deletions
**Branch:** master
**Remote:** Pushed successfully

---

## Success Criteria

| Criterion | Status |
|-----------|--------|
| Code updated with 1-year retention | ✅ COMPLETE |
| Docker image rebuilt | ✅ COMPLETE |
| Container restarted | ✅ COMPLETE |
| Startup logs verified | ✅ COMPLETE |
| Cleanup cutoff date confirmed (Dec 2, 2024) | ✅ COMPLETE |
| Cleanup scheduled for 3 AM daily | ✅ COMPLETE |
| Git commit created | ✅ COMPLETE |
| Changes pushed to remote | ✅ COMPLETE |
| Documentation created | ✅ COMPLETE |
| Test data validated (all 8 fields) | ✅ COMPLETE |
| Storage requirements calculated (251 MB/year) | ✅ COMPLETE |

---
## User's Original Question

**Question:** "please calculate how much mb the one month storage of the 1 minute datapoints will consume. maybe we can extend this to 1 year. i think it will not take much storage."

**Answer:**
- **1 month (28 days):** 17.5 MB (~18 MB)
- **1 year (365 days):** 251 MB

**User's intuition:** "i think it will not take much storage" → **CORRECT!** ✅

**Decision:** Extended retention from 4 weeks to 1 year based on minimal storage requirements and substantial analytical benefits.

---

## Conclusion

✅ **DEPLOYMENT SUCCESSFUL**

The 1-minute data retention period has been successfully extended from 4 weeks to 1 year. Storage requirements are negligible (251 MB/year), while the analytical benefits are substantial (13× more historical data). The system is now configured to collect and retain a full year of continuous 1-minute market data across all indicators, providing comprehensive historical context for future blocked signal analysis and threshold optimization decisions.

**Next milestone:** Begin collecting live TradingView 1-minute alerts and monitor data accumulation over the next 24 hours.