Workers

CutX uses two Cloudflare Workers alongside the main app to handle background processing and external callbacks.

Name: cutx-pipeline
Trigger: Cloudflare Queue consumer (cutx-jobs)

The pipeline worker processes generation jobs from the queue. It reads messages in batches and routes each job to the appropriate handler.

switch (job.type) {
  case 'copy':      return handleCopyJob(job);     // Workers AI (sync)
  case 'static_ad': return handleStaticAdJob(job); // Replicate (async)
  case 'ugc_video': return handleVideoJob(job);    // Replicate (async)
  case 'tts':       return handleTTSJob(job);      // Replicate (async)
  case 'lip_sync':  return handleLipSyncJob(job);  // Replicate (async)
  case 'scrape':    return handleScrapeJob(job);   // HTTP fetch (sync)
}
  1. Dequeue — receive a batch of up to 5 messages
  2. Transition — mark each job as processing
  3. Execute — run the handler for the job type
  4. Complete or Fail — update job status
  5. Ack — acknowledge the message to remove from queue
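The five-step lifecycle above can be sketched as a small message-processing function. This is a hedged illustration, not CutX's actual code: the `QueueMessage`, `Job`, and `JobStore` interfaces are stand-ins for Cloudflare's queue message type and the job table, and for simplicity it shows the synchronous path (an async Replicate job would stay in `processing` after its handler returns).

```typescript
// Illustrative types standing in for the queue message and job storage.
interface QueueMessage<T> {
  body: T;
  ack(): void;   // remove the message from the queue
  retry(): void; // leave it for redelivery
}

interface Job { id: string; type: string; }

interface JobStore {
  setStatus(id: string, status: 'processing' | 'completed' | 'failed'): Promise<void>;
}

type Handler = (job: Job) => Promise<void>;

export async function processMessage(
  msg: QueueMessage<Job>,
  store: JobStore,
  handlers: Record<string, Handler>,
): Promise<void> {
  const job = msg.body;                         // 1. Dequeue (message already received)
  await store.setStatus(job.id, 'processing');  // 2. Transition
  try {
    await handlers[job.type](job);              // 3. Execute
    await store.setStatus(job.id, 'completed'); // 4. Complete
    msg.ack();                                  // 5. Ack
  } catch {
    await store.setStatus(job.id, 'failed');    // 4. Fail
    msg.retry();                                // leave on queue for retry
  }
}
```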

Synchronous jobs (copy, scrape) complete within the worker:

  • Workers AI generates copy directly
  • HTTP scraper fetches product data
  • Output is stored and job marked completed immediately

Asynchronous jobs (static_ad, ugc_video, tts, lip_sync) submit to Replicate:

  • Worker sends a prediction request to the Replicate API
  • Stores the replicate_prediction_id on the job row
  • Job stays in processing until the webhook arrives

Error handling:

  • Transient errors (network timeouts, 5xx responses): the message is retried up to 3 times
  • Permanent errors (invalid input, auth failure): the job is marked failed and credits are refunded
  • Dead-letter queue: after 3 failed retries, the message moves to cutx-jobs-dlq
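The transient-vs-permanent distinction above can be sketched as a small decision function. This is an assumption-laden sketch: treating any 5xx or network-level error as transient is the author's stated rule, but the exact status-code logic here is illustrative; the 3-attempt limit mirrors max_retries in the queue configuration.

```typescript
export type FailureAction = 'retry' | 'fail';

export function classifyFailure(
  httpStatus: number | null, // null = network-level error (timeout, DNS, reset)
  retriesSoFar: number,      // how many times this message has already been retried
  maxRetries = 3,            // mirrors max_retries in the queue consumer config
): FailureAction {
  // Network errors and 5xx responses are transient; retry within the limit.
  const transient = httpStatus === null || httpStatus >= 500;
  if (transient && retriesSoFar < maxRetries) return 'retry';
  // Permanent error (e.g. 4xx) or retries exhausted: mark failed, refund credits.
  return 'fail';
}
```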
The consumer settings in the pipeline worker's wrangler configuration:

[[queues.consumers]]
queue = "cutx-jobs"
max_batch_size = 5
max_batch_timeout = 30
max_retries = 3
dead_letter_queue = "cutx-jobs-dlq"

Name: cutx-webhook
Trigger: HTTP requests

The webhook worker receives callbacks from external services.

Route           | Source        | Purpose
POST /replicate | Replicate API | Prediction completion
POST /stripe    | Stripe        | Payment and subscription events
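A minimal sketch of how the webhook worker might dispatch on those two routes, assuming method-plus-path routing. This shows handler selection only; signature verification (e.g. Stripe's webhook signatures) would happen inside each handler.

```typescript
// Map an incoming request to one of the two webhook handlers, or null.
export function routeWebhook(
  method: string,
  pathname: string,
): 'replicate' | 'stripe' | null {
  if (method !== 'POST') return null; // both routes accept POST only
  switch (pathname) {
    case '/replicate': return 'replicate';
    case '/stripe':    return 'stripe';
    default:           return null;   // unknown paths get a 404
  }
}
```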

When an async job completes on Replicate:

1. Replicate sends POST /replicate with prediction result
2. Worker looks up job by replicate_prediction_id
3. If prediction succeeded:
a. Download output file from Replicate URL
b. Upload to R2 bucket (cutx-media)
c. Store asset record with R2 URL
d. Mark job as completed
4. If prediction failed:
a. Record error message
b. Mark job as failed
c. Refund credits
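The succeeded/failed branching in steps 3 and 4 can be sketched as a status-to-action mapping. 'succeeded', 'failed', and 'canceled' are real terminal Replicate prediction statuses; treating 'canceled' the same as a failure (refund) is an assumption, not confirmed by the source.

```typescript
export type WebhookAction = 'complete' | 'fail_and_refund' | 'ignore';

export function webhookOutcome(predictionStatus: string): WebhookAction {
  switch (predictionStatus) {
    case 'succeeded':
      return 'complete';        // download output, upload to R2, mark completed
    case 'failed':
    case 'canceled':
      return 'fail_and_refund'; // record error, mark failed, refund credits
    default:
      return 'ignore';          // intermediate statuses (starting, processing)
  }
}
```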
Stripe events handled by POST /stripe:

Event                         | Action
checkout.session.completed    | Add purchased credits to balance
customer.subscription.created | Activate subscription tier
customer.subscription.updated | Update tier and monthly credits
customer.subscription.deleted | Deactivate subscription
invoice.payment_succeeded     | Add monthly subscription credits
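The Stripe event-to-action mapping above as a lookup table. The action names are illustrative placeholders, not CutX's actual function names; unknown event types would be acknowledged and ignored so Stripe does not retry them.

```typescript
// Hypothetical action names for each handled Stripe event type.
const STRIPE_ACTIONS: Record<string, string> = {
  'checkout.session.completed':    'addPurchasedCredits',
  'customer.subscription.created': 'activateSubscriptionTier',
  'customer.subscription.updated': 'updateTierAndCredits',
  'customer.subscription.deleted': 'deactivateSubscription',
  'invoice.payment_succeeded':     'addMonthlyCredits',
};

export function stripeAction(eventType: string): string | null {
  return STRIPE_ACTIONS[eventType] ?? null; // null: acknowledge and ignore
}
```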
Bindings used by each service:

Binding                 | Pipeline | Webhook | Main App
HYPERDRIVE (PostgreSQL) | Yes      | Yes     | Yes
MEDIA (R2 bucket)       | Yes      | Yes     | Yes
AI (Workers AI)         | Yes      | No      | Yes
QUEUE (producer)        | No       | No      | Yes
Queue consumer          | Yes      | No      | No

Each worker is deployed independently:

# Pipeline worker
cd workers/pipeline
npx wrangler deploy
# Webhook worker
cd workers/webhook
npx wrangler deploy
# Main app
npx wrangler pages deploy dist

All three services share the same Cloudflare account and Hyperdrive binding, ensuring they connect to the same PostgreSQL database.