Flows Engine

Flows are automated pipelines that process your uploaded files. When a user uploads an image, a flow can automatically resize it, generate thumbnails, optimize it for the web, and save all versions to your storage - all without any manual intervention.

Think of flows like a recipe: you define the steps once, and Uploadista executes them every time a file is uploaded.

Trigger Flow → Create Job → Input Nodes (upload files) → Processing Nodes → Output Nodes → Done

A flow is made up of connected nodes:

  • Input nodes - Initiate uploads using the Upload Engine and receive files
  • Processing nodes - Transform the file (resize, compress, convert, etc.)
  • Output nodes - Save results to storage

Nodes are connected with edges that define how data flows between them.
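
Conceptually, a flow definition is just those nodes plus the edges between them. A minimal sketch of that shape (the field names here are illustrative, not the exact Uploadista schema):

// Illustrative flow graph: nodes plus the edges that connect them.
// Field names are hypothetical, not the exact Uploadista schema.
const flow = {
  nodes: [
    { id: "upload", type: "file-input" },                                   // input node
    { id: "resize", type: "resize", options: { width: 800, height: 600 } }, // processing node
    { id: "save", type: "save-to-storage" },                                // output node
  ],
  edges: [
    { from: "upload", to: "resize" }, // uploaded file feeds the resize node
    { from: "resize", to: "save" },   // resized image is saved to storage
  ],
};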

When a flow is triggered:

  1. A job is created to track the execution
  2. Input nodes initiate one or more uploads via the Upload Engine
  3. The flow pauses while waiting for uploads to complete
  4. Once all input uploads finish, the flow resumes processing
  5. Processing and output nodes execute in sequence

Common use cases:

| Use Case | Flow Description |
| --- | --- |
| Profile photos | Resize to standard sizes, crop to square, save as WebP |
| Product images | Generate thumbnail + full-size, optimize for web |
| Document upload | Validate file type, scan for viruses, store securely |
| Video thumbnails | Extract frame at 5 seconds, resize, save as JPEG |
| Batch processing | Apply same transformation to multiple files |

The easiest way to create flows is with the visual builder in the Uploadista dashboard:

  1. Go to Flows in your dashboard
  2. Click Create Flow
  3. Drag nodes from the sidebar onto the canvas
  4. Connect nodes by dragging from output to input handles
  5. Configure each node by clicking on it
  6. Save and activate your flow

Here’s a typical image processing flow:

┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Upload    │─────►│   Resize    │─────►│  Save to    │
│   (Input)   │      │  (800x600)  │      │     S3      │
└─────────────┘      └─────────────┘      └─────────────┘
       │
       ▼
┌─────────────┐      ┌─────────────┐
│  Thumbnail  │─────►│  Save to    │
│  (200x150)  │      │     S3      │
└─────────────┘      └─────────────┘

This flow:

  1. Receives an uploaded image
  2. Resizes it to 800x600 for the main image
  3. Creates a 200x150 thumbnail
  4. Saves both versions to S3

Input nodes:

| Node | Description |
| --- | --- |
| File Input | Receives uploaded files |
| URL Input | Fetches files from a URL |

Image processing nodes:

| Node | Description |
| --- | --- |
| Resize | Change image dimensions |
| Crop | Crop to specific area or aspect ratio |
| Rotate | Rotate by degrees |
| Format Convert | Convert between JPEG, PNG, WebP, AVIF |
| Optimize | Compress while maintaining quality |
| Watermark | Add text or image watermark |
| Blur | Apply blur effect |
| Grayscale | Convert to black and white |

AI nodes:

| Node | Description |
| --- | --- |
| Background Remove | Remove image background |
| Upscale | AI-powered image upscaling |
| Face Restore | Enhance and restore faces |

Document nodes:

| Node | Description |
| --- | --- |
| PDF to Image | Convert PDF pages to images |
| Extract Text | OCR text extraction |

Utility nodes:

| Node | Description |
| --- | --- |
| Conditional | Route files based on conditions |
| Merge | Combine multiple files |
| ZIP | Create archive from files |

Output nodes:

| Node | Description |
| --- | --- |
| Save to Storage | Save to your configured data store |
| Webhook | Send result to external URL |

Flows are triggered explicitly - the user or application decides when to start a flow:

Via API:

POST /api/flows/image-processing/execute
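
For example, from server-side code (the base URL and auth header are placeholders for your deployment, not documented values):

// Trigger the flow over HTTP. Base URL and auth header are placeholders.
const res = await fetch(
  "https://your-server.example.com/api/flows/image-processing/execute",
  { method: "POST", headers: { Authorization: "Bearer <your-api-key>" } },
);
const job = await res.json(); // contains the job id used to track execution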

Via Client SDK:

// The client triggers the flow, then uploads file(s)
const job = await uploadista.executeFlow("image-processing");
// The flow's input nodes handle the upload process
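
A fuller round trip might look like the sketch below; job.inputs and job.done are hypothetical method names, so check the SDK reference for the exact surface:

// Hypothetical continuation of the client example above.
const job = await uploadista.executeFlow("image-processing");
await job.inputs[0].upload(file); // hypothetical: input node uploads the file via the Upload Engine
const result = await job.done();  // hypothetical: resolves once processing and output nodes finish
console.log(result.outputs);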

From the Dashboard:

  1. Go to Flows in your dashboard
  2. Select a flow
  3. Click Execute to start a new job

When a flow is triggered, it creates a job and the input nodes initiate uploads. The flow automatically pauses until all uploads complete, then resumes processing.

Each flow has configuration options:

{
  name: "image-processing",
  description: "Process uploaded images",
  // Input validation (applied to input nodes)
  input: {
    allowedMimeTypes: ["image/jpeg", "image/png", "image/webp"],
    maxFileSize: 10 * 1024 * 1024, // 10MB
  },
  // Processing timeout
  timeout: 60000, // 1 minute
  // Retry on failure
  retries: 3,
}

When a flow completes, you receive the outputs:

{
  "flowId": "image-processing",
  "jobId": "job-abc123",
  "status": "completed",
  "outputs": {
    "main": {
      "url": "https://cdn.example.com/uploads/abc123/main.webp",
      "size": 245760,
      "mimeType": "image/webp"
    },
    "thumbnail": {
      "url": "https://cdn.example.com/uploads/abc123/thumb.webp",
      "size": 12288,
      "mimeType": "image/webp"
    }
  },
  "duration": 2340
}
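
If you consume these results in TypeScript, a type that mirrors the payload above keeps handlers honest (fields taken directly from the example; duration is assumed to be milliseconds):

// Types mirroring the flow result payload shown above.
interface FlowOutput {
  url: string;      // public URL of the stored file
  size: number;     // size in bytes
  mimeType: string; // e.g. "image/webp"
}

interface FlowResult {
  flowId: string;
  jobId: string;
  status: "completed" | "failed"; // "failed" per the error-handling section below
  outputs: Record<string, FlowOutput>; // keyed by output name, e.g. "main", "thumbnail"
  duration: number; // assumed milliseconds
}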

Via Webhook:

// Configure webhook output node to send results
// Configure the webhook output node to send results to your endpoint
app.post("/webhooks/flow-complete", async (req, res) => {
  const { flowId, outputs } = req.body;
  // Store the output URLs in your database
  await db.update("uploads", {
    mainUrl: outputs.main.url,
    thumbnailUrl: outputs.thumbnail.url,
  });
  res.sendStatus(200); // acknowledge receipt
});

Via WebSocket events:

ws.onmessage = (event) => {
  const message = JSON.parse(event.data);
  if (message.type === "flow_complete") {
    console.log("Flow outputs:", message.outputs);
  }
};

Via API:

GET /api/flows/jobs/job-abc123
# Response
{
  "id": "job-abc123",
  "status": "completed",
  "outputs": { ... }
}
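
A small polling helper built on this endpoint (base URL and auth are placeholders; prefer the WebSocket events above when available):

// Poll a flow job until it finishes. Uses the documented GET endpoint;
// the base URL and auth header are placeholders for your deployment.
async function waitForJob(jobId: string): Promise<any> {
  while (true) {
    const res = await fetch(`https://your-server.example.com/api/flows/jobs/${jobId}`, {
      headers: { Authorization: "Bearer <your-api-key>" },
    });
    const job = await res.json();
    if (job.status === "completed" || job.status === "failed") return job;
    await new Promise((resolve) => setTimeout(resolve, 1000)); // wait 1s between polls
  }
}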

Flows support several common patterns.

Create multiple image sizes from one upload:

Upload ──┬─► Resize 1920px ──► Save (fullsize)
         ├─► Resize 800px ──► Save (medium)
         └─► Resize 200px ──► Save (thumbnail)

Route files based on their properties:

Upload ──► Check Size ──┬─► Large (>5MB) ──► Compress ──► Save
                        └─► Small (<5MB) ──► Save directly
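
If you define flows in code rather than in the visual builder, a conditional node for this routing might look roughly like the sketch below; the condition and branches fields are hypothetical, not a documented schema:

// Hypothetical conditional-node configuration for the size-based routing above.
const checkSize = {
  id: "check-size",
  type: "conditional",
  condition: { field: "size", operator: ">", value: 5 * 1024 * 1024 }, // 5MB threshold
  branches: {
    true: "compress",     // large files are compressed before saving
    false: "save-direct", // small files are saved directly
  },
};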

Keep the original file while also creating processed versions:

Upload ──┬─► Save (original)
         └─► Process ──► Save (processed)

By default, only the final outputs (nodes with no outgoing connections) are saved. Intermediate processing results are automatically cleaned up to save storage.

Sometimes you need to keep intermediate results:

  • Invoice processing: Keep both the original PDF and the extracted text
  • Multi-format images: Keep the resized version before converting to WebP and JPEG
  • Audit trails: Preserve each processing step

In the flow builder, select any node and enable “Keep output as result”. This preserves that node’s output even if it has outgoing connections.

Upload (keep) ──► OCR ──► Analysis
      │
      └─► Original PDF is preserved alongside the OCR text and analysis

Nodes with “Keep output” enabled show an amber badge in the flow builder.
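
If you manage flow definitions in code, the same setting could map to a per-node flag; the keepOutput name below is hypothetical:

// Hypothetical code equivalent of enabling "Keep output as result" in the builder.
const uploadNode = {
  id: "upload",
  type: "file-input",
  keepOutput: true, // preserve this node's output even though it feeds the OCR node
};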

When a node fails during execution:

  1. Automatic retry - Uploadista retries failed nodes (configurable)
  2. Error event - A flow_error event is sent via WebSocket
  3. Job status - The job is marked as failed with error details
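
Handling the flow_error event mirrors the flow_complete listener shown earlier; the error payload is assumed to match the failed-job response below:

ws.onmessage = (event) => {
  const message = JSON.parse(event.data);
  if (message.type === "flow_error") {
    // Assumed to carry the same error object as the failed-job response below
    console.error("Flow failed:", message.error);
  }
};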
You can also inspect the failure via the job API:
GET /api/flows/jobs/job-abc123
# Response for failed flow
{
  "id": "job-abc123",
  "status": "failed",
  "error": {
    "nodeId": "resize-node",
    "message": "Image format not supported",
    "code": "UNSUPPORTED_FORMAT"
  }
}
Common errors:

| Error | Cause | Solution |
| --- | --- | --- |
| UNSUPPORTED_FORMAT | File type not supported by node | Check file mime type before flow |
| FILE_TOO_LARGE | File exceeds node limits | Increase limits or compress first |
| TIMEOUT | Processing took too long | Increase timeout or simplify flow |
| STORAGE_ERROR | Failed to save output | Check storage configuration |
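
Per the first two rows, the cheapest fix is validating files client-side before triggering the flow; the limits below mirror the configuration example shown earlier:

// Pre-validate files to avoid UNSUPPORTED_FORMAT and FILE_TOO_LARGE failures.
const allowedMimeTypes = ["image/jpeg", "image/png", "image/webp"];
const maxFileSize = 10 * 1024 * 1024; // 10MB, matching the flow's input config

function canProcess(file: File): boolean {
  return allowedMimeTypes.includes(file.type) && file.size <= maxFileSize;
}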
For large files:

  • Use larger chunk sizes in your data store configuration
  • Increase flow timeout for processing-heavy operations
  • Consider streaming nodes that process files in chunks

For high-volume workloads:

  • Use Redis-backed KV and event stores (not Memory)
  • Deploy multiple server instances
  • Consider dedicated worker processes for flow execution

General best practices:

  • Keep flows focused - one purpose per flow
  • Use conditional nodes to skip unnecessary processing
  • Set appropriate timeouts per node