docs: Compact node_spooler README.md (429→308 lines)

Remove CI4 integration examples to focus on service documentation

Changes:
- Remove CI4 Integration section (113 lines of PHP/JS examples)
- Remove CI4 Controller from Architecture diagram
- Remove all ReportController, curl, fetch code references
- Condense Quick Start and Troubleshooting sections
- Focus README on pure node_spooler service documentation

Reduction: 429 → 308 lines (-121 lines, 28% smaller)

Scope:
- All API endpoints documented
- Error handling and cleanup procedures preserved
- Monitoring and troubleshooting guides retained
- Deployment instructions maintained
- No CI4 integration code examples
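
For reference, the generic client flow that replaces the removed CI4 examples can be sketched as follows. This is a minimal sketch, not part of the README itself: it assumes a Node 18+ runtime with global `fetch`; the endpoint paths and the `{html, filename}` payload come from the diff below, while the `jobId` and `status` response fields are carried over from the removed examples.

```javascript
// Minimal generic spooler client (sketch; replaces the removed CI4 examples).
// Assumed endpoints, per the README diff:
//   POST /api/pdf/generate  body: {html, filename} -> {jobId}
//   GET  /api/pdf/status/:jobId -> {status, ...}

const BASE_URL = 'http://localhost:3030';

// Queue an HTML document for conversion; resolves to the spooler's job id.
async function queuePdf(html, filename, baseUrl = BASE_URL) {
  const res = await fetch(`${baseUrl}/api/pdf/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ html, filename }),
  });
  if (!res.ok) throw new Error(`Spooler returned HTTP ${res.status}`);
  const { jobId } = await res.json();
  return jobId;
}

// Poll the status endpoint until the job completes or errors out.
async function waitForPdf(jobId, baseUrl = BASE_URL, maxAttempts = 60, delayMs = 2000) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`${baseUrl}/api/pdf/status/${jobId}`);
    const data = await res.json();
    if (data.status === 'completed' || data.status === 'error') return data;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('PDF generation timed out');
}
```

Any HTTP client (PHP cURL, as in the removed examples, or plain `curl`) can drive the same two endpoints; only the README's framework-specific wrapper was dropped.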
mahdahar 2026-02-03 11:44:44 +07:00
parent 2843ddd392
commit a9b387b21f


@@ -9,7 +9,7 @@ Node.js Express service with internal queue for HTML to PDF conversion using Chr
 ## Architecture
 ```
-CI4 Controller
+Client Application
 ↓ POST {html, filename}
 Node.js Spooler (port 3030)
 ↓ queue
@@ -120,120 +120,6 @@ Queue statistics.
 }
 ```
-## CI4 Integration
-### Controller Example
-```php
-<?php
-namespace App\Controllers;
-class ReportController extends BaseController {
-    public function generateReport($accessnumber) {
-        $html = $this->generateHTML($accessnumber);
-        $filename = $accessnumber . '.pdf';
-        $jobId = $this->postToSpooler($html, $filename);
-        return $this->respond([
-            'success' => true,
-            'jobId' => $jobId,
-            'message' => 'PDF queued for generation',
-            'status' => 'queued'
-        ]);
-    }
-    private function postToSpooler($html, $filename) {
-        $ch = curl_init();
-        curl_setopt($ch, CURLOPT_URL, 'http://localhost:3030/api/pdf/generate');
-        curl_setopt($ch, CURLOPT_POST, 1);
-        curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode([
-            'html' => $html,
-            'filename' => $filename
-        ]));
-        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
-        curl_setopt($ch, CURLOPT_HTTPHEADER, [
-            'Content-Type: application/json'
-        ]);
-        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
-        $response = curl_exec($ch);
-        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
-        curl_close($ch);
-        if ($httpCode !== 200) {
-            log_message('error', "Spooler API returned HTTP $httpCode");
-            throw new \Exception('Failed to queue PDF generation');
-        }
-        $data = json_decode($response, true);
-        return $data['jobId'];
-    }
-    public function checkPdfStatus($jobId) {
-        $ch = curl_init();
-        curl_setopt($ch, CURLOPT_URL, "http://localhost:3030/api/pdf/status/$jobId");
-        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
-        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
-        $response = curl_exec($ch);
-        curl_close($ch);
-        return $this->response->setJSON($response);
-    }
-}
-```
-### Frontend Example (JavaScript)
-```javascript
-async function generatePDF(accessNumber) {
-    try {
-        const response = await fetch('/report/generate/' + accessNumber, {
-            method: 'POST'
-        });
-        const { jobId, status } = await response.json();
-        if (status === 'queued') {
-            alert('PDF queued for generation');
-        }
-        return jobId;
-    } catch (error) {
-        console.error('Failed to generate PDF:', error);
-        alert('Failed to generate PDF');
-    }
-}
-async function pollPdfStatus(jobId) {
-    const maxAttempts = 60;
-    let attempts = 0;
-    const interval = setInterval(async () => {
-        if (attempts >= maxAttempts) {
-            clearInterval(interval);
-            alert('PDF generation timeout');
-            return;
-        }
-        const response = await fetch(`/report/status/${jobId}`);
-        const data = await response.json();
-        if (data.status === 'completed') {
-            clearInterval(interval);
-            window.location.href = data.pdfUrl;
-        } else if (data.status === 'error') {
-            clearInterval(interval);
-            alert('PDF generation failed: ' + data.error);
-        }
-        attempts++;
-    }, 2000);
-}
-```
 ## Error Handling
 ### Chrome Crash Handling
@@ -290,7 +176,7 @@ Open `admin.html` in browser for:
 - Error file list
 - Disk space visualization
-**URL:** `http://localhost/gdc_cmod/node_spooler/admin.html`
+**URL:** `http://localhost:3030/admin.html`
 ### Key Metrics
@@ -310,12 +196,11 @@ Open `admin.html` in browser for:
 ### Spooler Not Starting
 **Solutions:**
-1. Check if Chrome is running on port 42020
-2. Check logs: `logs/spooler.log`
-3. Verify directories exist: `data/pdfs`, `data/archive`, `data/error`, `logs`
-4. Check Node.js version: `node --version` (need 14+)
-5. Verify dependencies installed: `npm install`
+- Check if Chrome is running on port 42020
+- Check logs: `logs/spooler.log`
+- Verify directories exist: `data/pdfs`, `data/archive`, `data/error`, `logs`
+- Check Node.js version: `node --version` (need 14+)
+- Verify dependencies installed: `npm install`
 **Start Chrome manually:**
 ```bash
@@ -327,37 +212,33 @@ Open `admin.html` in browser for:
 ### PDF Not Generated
 **Solutions:**
-1. Check job status via API: `GET /api/pdf/status/{jobId}`
-2. Review error logs: `logs/errors.log`
-3. Verify Chrome connection: Check logs for CDP connection errors
-4. Check HTML content: Ensure valid HTML
+- Check job status via API: `GET /api/pdf/status/{jobId}`
+- Review error logs: `logs/errors.log`
+- Verify Chrome connection: Check logs for CDP connection errors
+- Check HTML content: Ensure valid HTML
 ### Queue Full
 **Solutions:**
-1. Wait for current jobs to complete
-2. Check admin dashboard for queue size
-3. Increase `maxQueueSize` in `spooler.js` (default: 100)
-4. Check if jobs are stuck (processing too long)
+- Wait for current jobs to complete
+- Check admin dashboard for queue size
+- Increase `maxQueueSize` in `spooler.js` (default: 100)
+- Check if jobs are stuck (processing too long)
 ### Chrome Crashes Repeatedly
 **Solutions:**
-1. Check system RAM (need minimum 2GB available)
-2. Reduce `maxConcurrent` in `spooler.js` (default: 5)
-3. Check for memory leaks in Chrome
-4. Restart Chrome manually and monitor
-5. Check system resources: Task Manager > Performance
+- Check system RAM (need minimum 2GB available)
+- Reduce `maxConcurrent` in `spooler.js` (default: 5)
+- Check for memory leaks in Chrome
+- Restart Chrome manually and monitor
+- Check system resources: Task Manager > Performance
 ### High Disk Usage
 **Solutions:**
-1. Run cleanup: `npm run cleanup`
-2. Check `data/archive/` for old folders
-3. Check `logs/` for old logs
-4. Check `data/pdfs/` for large files
-5. Consider reducing PDF retention time in `cleanup-config.json`
+- Run cleanup: `npm run cleanup`
+- Check `data/archive/` for old folders
+- Check `logs/` for old logs
+- Check `data/pdfs/` for large files
+- Consider reducing PDF retention time in `cleanup-config.json`
 ## Deployment
@@ -365,11 +246,10 @@ Open `admin.html` in browser for:
 ```bash
 # 1. Create directories
 cd D:\data\www\gdc_cmod
-mkdir -p node_spooler/logs node_spooler/data/pdfs node_spooler/data/archive node_spooler/data/error
+cd node_spooler
+mkdir -p logs data/pdfs data/archive data/error
 # 2. Install dependencies
-cd node_spooler
 npm install
 # 3. Start Chrome (if not running)
@@ -387,25 +267,25 @@ curl -X POST http://localhost:3030/api/pdf/generate \
 -d "{\"html\":\"<html><body>Test</body></html>\",\"filename\":\"test.pdf\"}"
 # 6. Open admin dashboard
-# http://localhost/gdc_cmod/node_spooler/admin.html
+# http://localhost:3030/admin.html
 ```
 ### Production Setup
-1. Create batch file wrapper:
+**1. Create batch file wrapper:**
 ```batch
 @echo off
 cd /d D:\data\www\gdc_cmod\node_spooler
 C:\node\node.exe spooler.js
 ```
-2. Create Windows service:
+**2. Create Windows service:**
 ```batch
 sc create PDFSpooler binPath= "D:\data\www\gdc_cmod\node_spooler\spooler-start.bat" start= auto
 sc start PDFSpooler
 ```
-3. Create scheduled task for cleanup:
+**3. Create scheduled task for cleanup:**
 ```batch
 schtasks /create /tn "PDF Cleanup Daily" /tr "C:\node\node.exe D:\data\www\gdc_cmod\node_spooler\cleanup.js" /sc daily /st 01:00
 schtasks /create /tn "PDF Cleanup Weekly" /tr "C:\node\node.exe D:\data\www\gdc_cmod\node_spooler\cleanup.js weekly" /sc weekly /d MON /st 01:00