# Workers
CutX uses two Cloudflare Workers alongside the main app to handle background processing and external callbacks.
## Pipeline Worker

**Name:** `cutx-pipeline`
**Trigger:** Cloudflare Queue consumer (`cutx-jobs`)
The pipeline worker processes generation jobs from the queue. It reads messages in batches and routes each job to the appropriate handler.
### Job Routing

```
switch (job.type) {
  case 'copy':      → handleCopyJob()     // Workers AI (sync)
  case 'static_ad': → handleStaticAdJob() // Replicate (async)
  case 'ugc_video': → handleVideoJob()    // Replicate (async)
  case 'tts':       → handleTTSJob()      // Replicate (async)
  case 'lip_sync':  → handleLipSyncJob()  // Replicate (async)
  case 'scrape':    → handleScrapeJob()   // HTTP fetch (sync)
}
```
### Processing Flow

- Dequeue — receive a batch of up to 5 messages
- Transition — mark each job as `processing`
- Execute — run the handler for the job type
- Complete or Fail — update the job status
- Ack — acknowledge the message to remove it from the queue
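The flow above can be sketched as a batch loop. This is a minimal illustration, not CutX's actual code: the `Job` shape, the stubbed handlers, and the reduced `QueueMessage` interface are assumptions made so the control flow stands alone.

```typescript
type JobType = 'copy' | 'static_ad' | 'ugc_video' | 'tts' | 'lip_sync' | 'scrape';

interface Job {
  id: string;
  type: JobType;
  status: 'queued' | 'processing' | 'completed' | 'failed';
}

// Cloudflare Queues message, reduced to what the loop needs.
interface QueueMessage {
  body: Job;
  ack(): void;    // remove from the queue
  retry(): void;  // redeliver (counts toward max_retries)
}

// Stub handlers keyed by job type; the real handlers call Workers AI,
// Replicate, or the scraper.
const handlers: Record<JobType, (job: Job) => Promise<void>> = {
  copy: async () => {},
  static_ad: async () => {},
  ugc_video: async () => {},
  tts: async () => {},
  lip_sync: async () => {},
  scrape: async () => {},
};

export async function processBatch(batch: QueueMessage[]): Promise<void> {
  for (const msg of batch) {
    const job = msg.body;
    job.status = 'processing';       // Transition
    try {
      await handlers[job.type](job); // Execute
      job.status = 'completed';      // Complete (async jobs instead stay 'processing')
      msg.ack();                     // Ack — remove the message from the queue
    } catch {
      job.status = 'failed';         // Fail
      msg.retry();                   // let the queue redeliver, up to max_retries
    }
  }
}
```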
### Sync vs Async Jobs

Synchronous jobs (`copy`, `scrape`) complete within the worker:
- Workers AI generates copy directly
- HTTP scraper fetches product data
- Output is stored and the job is marked `completed` immediately
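A synchronous handler can be sketched like this, with the Workers AI binding injected so the flow is visible outside a Cloudflare environment. The model id, prompt shape, and return shape are illustrative assumptions, not CutX's actual values; `env.AI.run(model, inputs)` is the general Workers AI call pattern.

```typescript
// Minimal shape of the Workers AI binding, as used here (assumption).
interface AiBinding {
  run(model: string, inputs: { prompt: string }): Promise<{ response: string }>;
}

// Hypothetical sync copy handler: generation finishes within the request,
// so the job can be marked completed immediately.
export async function handleCopyJob(
  ai: AiBinding,
  job: { id: string; prompt: string },
): Promise<{ status: 'completed'; output: string }> {
  const result = await ai.run('@cf/meta/llama-3.1-8b-instruct', {
    prompt: job.prompt,
  });
  // Output would be stored and the job row updated here.
  return { status: 'completed', output: result.response };
}
```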
Asynchronous jobs (`static_ad`, `ugc_video`, `tts`, `lip_sync`) submit to Replicate:
- Worker sends prediction request to Replicate API
- Stores the `replicate_prediction_id` on the job row
- Job stays in `processing` until the webhook arrives
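The async submission can be sketched as follows. The endpoint and payload fields follow Replicate's public predictions API; the model version, webhook URL, and the `saveId` callback that persists `replicate_prediction_id` are placeholders for illustration, and `fetch` is injected so the sketch is testable.

```typescript
type FetchFn = typeof fetch;

// Hypothetical submission helper: sends the prediction request, stores the
// returned id on the job row, and leaves the job in 'processing'.
export async function submitToReplicate(
  fetchFn: FetchFn,
  token: string,
  job: { id: string; input: Record<string, unknown> },
  saveId: (jobId: string, predictionId: string) => Promise<void>,
): Promise<string> {
  const res = await fetchFn('https://api.replicate.com/v1/predictions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      version: '<model-version-id>', // placeholder
      input: job.input,
      // Replicate calls this back when the prediction finishes (URL is illustrative).
      webhook: 'https://cutx-webhook.example.workers.dev/replicate',
      webhook_events_filter: ['completed'],
    }),
  });
  const prediction = (await res.json()) as { id: string };
  await saveId(job.id, prediction.id); // store replicate_prediction_id on the job row
  return prediction.id;
}
```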
### Error Handling

- Transient errors (network timeouts, 5xx responses): the message is retried up to 3 times
- Permanent errors (invalid input, auth failure): the job is marked `failed` and credits are refunded
- Dead letter queue: after 3 failed retries, the message moves to `cutx-jobs-dlq`
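The transient-vs-permanent split can be sketched as a small classifier. The rules mirror the list above; the exact heuristics and the error shape are assumptions.

```typescript
type ErrorKind = 'transient' | 'permanent';

export function classifyError(err: { httpStatus?: number; code?: string }): ErrorKind {
  // Network-level failures (timeout, connection reset: no HTTP status) are retried.
  if (err.httpStatus === undefined) return 'transient';
  // 5xx from an upstream service is usually temporary.
  if (err.httpStatus >= 500) return 'transient';
  // 4xx (invalid input, auth failure) will not succeed on retry.
  return 'permanent';
}
```

A `transient` result maps to retrying the message (counting toward `max_retries` and eventually the dead letter queue); a `permanent` one maps to marking the job `failed` and refunding credits.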
### Queue Configuration

```toml
[[queues.consumers]]
queue = "cutx-jobs"
max_batch_size = 5
max_batch_timeout = 30
max_retries = 3
dead_letter_queue = "cutx-jobs-dlq"
```
## Webhook Worker

**Name:** `cutx-webhook`
**Trigger:** HTTP requests
The webhook worker receives callbacks from external services.
### Endpoints

| Route | Source | Purpose |
|---|---|---|
| `POST /replicate` | Replicate API | Prediction completion |
| `POST /stripe` | Stripe | Payment and subscription events |
### Replicate Webhook Flow

When an async job completes on Replicate:

1. Replicate sends `POST /replicate` with the prediction result
2. Worker looks up the job by `replicate_prediction_id`
3. If the prediction succeeded:
   - Download the output file from the Replicate URL
   - Upload it to the R2 bucket (`cutx-media`)
   - Store an asset record with the R2 URL
   - Mark the job as `completed`
4. If the prediction failed:
   - Record the error message
   - Mark the job as `failed`
   - Refund credits
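The two branches can be sketched with the database and R2 steps injected as dependencies, so the flow is visible without Cloudflare bindings. The payload fields (`id`, `status`, `output`) follow Replicate's prediction object; the `Deps` helper signatures are illustrative assumptions.

```typescript
interface PredictionPayload {
  id: string;
  status: 'succeeded' | 'failed';
  output?: string; // URL of the generated file
  error?: string;
}

// Hypothetical injected dependencies standing in for Postgres and R2.
interface Deps {
  findJobByPredictionId(id: string): Promise<{ id: string } | null>;
  storeAsset(jobId: string, url: string): Promise<void>; // download + upload to cutx-media
  markJob(jobId: string, status: 'completed' | 'failed'): Promise<void>;
  refundCredits(jobId: string): Promise<void>;
}

export async function handleReplicateWebhook(
  payload: PredictionPayload,
  deps: Deps,
): Promise<void> {
  const job = await deps.findJobByPredictionId(payload.id);
  if (!job) return; // unknown prediction: ignore
  if (payload.status === 'succeeded' && payload.output) {
    await deps.storeAsset(job.id, payload.output); // fetch the file, upload to R2
    await deps.markJob(job.id, 'completed');
  } else {
    await deps.markJob(job.id, 'failed'); // payload.error would be recorded here
    await deps.refundCredits(job.id);
  }
}
```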
### Stripe Webhook Events

| Event | Action |
|---|---|
| `checkout.session.completed` | Add purchased credits to balance |
| `customer.subscription.created` | Activate subscription tier |
| `customer.subscription.updated` | Update tier and monthly credits |
| `customer.subscription.deleted` | Deactivate subscription |
| `invoice.payment_succeeded` | Add monthly subscription credits |
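The table maps directly onto a dispatch switch. The event type strings are Stripe's; the action names are illustrative labels, not CutX's actual identifiers.

```typescript
type BillingAction =
  | 'add_purchased_credits'
  | 'activate_subscription'
  | 'update_subscription'
  | 'deactivate_subscription'
  | 'add_monthly_credits'
  | 'ignore';

export function routeStripeEvent(eventType: string): BillingAction {
  switch (eventType) {
    case 'checkout.session.completed':    return 'add_purchased_credits';
    case 'customer.subscription.created': return 'activate_subscription';
    case 'customer.subscription.updated': return 'update_subscription';
    case 'customer.subscription.deleted': return 'deactivate_subscription';
    case 'invoice.payment_succeeded':     return 'add_monthly_credits';
    default:                              return 'ignore'; // unhandled event types
  }
}
```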
## Bindings Summary

| Binding | Pipeline | Webhook | Main App |
|---|---|---|---|
| `HYPERDRIVE` (PostgreSQL) | Yes | Yes | Yes |
| `MEDIA` (R2 bucket) | Yes | Yes | Yes |
| `AI` (Workers AI) | Yes | No | Yes |
| `QUEUE` (producer) | No | No | Yes |
| Queue consumer | Yes | No | No |
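For the pipeline worker's column, the table corresponds to a wrangler configuration roughly like the following sketch; the binding names come from the table, while the Hyperdrive id is a placeholder.

```toml
# Pipeline worker wrangler.toml (sketch; ids are placeholders)
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<hyperdrive-config-id>"

[[r2_buckets]]
binding = "MEDIA"
bucket_name = "cutx-media"

[ai]
binding = "AI"

[[queues.consumers]]
queue = "cutx-jobs"
```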
## Deployment

Each worker is deployed independently:

```sh
# Pipeline worker
cd workers/pipeline
npx wrangler deploy
```

```sh
# Webhook worker
cd workers/webhook
npx wrangler deploy
```

```sh
# Main app
npx wrangler pages deploy dist
```

All three services share the same Cloudflare account and Hyperdrive binding, ensuring they connect to the same PostgreSQL database.