Flint Workflow Engine

How Workflows Run

Learn what happens when your workflows execute and how to monitor their performance.

Workflow Execution Basics

What Happens When a Workflow Starts

  1. Trigger Fires - Something starts your workflow

    • Webhook receives data
    • Schedule time arrives
    • File gets uploaded
    • Button gets clicked
  2. Data Preparation - System gets ready

    • Loads workflow definition
    • Prepares variables and context
    • Validates permissions
    • Allocates resources
  3. Step Execution - Actions happen in order

    • Each step receives input data
    • Performs its specific action
    • Produces output for next step
    • Reports success or failure
  4. Completion - Workflow finishes

    • Final results recorded
    • Resources cleaned up
    • Notifications sent
    • Status updated
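
As a rough illustration of these four phases, the sketch below models a run in Python. All names here (WorkflowRun, the step definitions, the trigger payload) are hypothetical and are not part of Flint's API; they only show the order in which the phases happen.

# Hypothetical sketch of the four phases; not Flint's actual API.
from dataclasses import dataclass, field

@dataclass
class WorkflowRun:
    definition: dict                     # loaded workflow definition
    context: dict = field(default_factory=dict)
    status: str = "Queued"

    def start(self, trigger_data: dict) -> None:
        # 1. Trigger fires: capture whatever started the run
        self.context["trigger"] = trigger_data
        # 2. Data preparation: load globals, variables, permissions
        self.context.update(self.definition.get("globals", {}))
        self.status = "Running"
        # 3. Step execution: each step reads the context and adds its output
        for step in self.definition["steps"]:
            self.context[step["name"]] = step["action"](self.context)
        # 4. Completion: record results and the final status
        self.status = "Completed"

# Usage: a two-step workflow started by a webhook payload
run = WorkflowRun(definition={
    "globals": {"region": "eu-west-1"},
    "steps": [
        {"name": "parse", "action": lambda ctx: ctx["trigger"]["order_id"]},
        {"name": "notify", "action": lambda ctx: f"Shipped order {ctx['parse']}"},
    ],
})
run.start({"order_id": 1042})
print(run.status, run.context["notify"])   # Completed Shipped order 1042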

Execution Modes

Sequential Execution - Steps run one after another:

  • Step 1 completes → Step 2 starts
  • Predictable order
  • Easy to follow and debug
  • Best for dependent operations

Parallel Execution - Multiple steps run simultaneously:

  • Faster overall completion
  • Better resource utilization
  • Handles independent operations
  • Automatic coordination and joining

Conditional Execution - Different paths based on decisions:

  • Dynamic routing
  • Business rule enforcement
  • Exception handling
  • Flexible process flow
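
To make the difference between sequential and parallel runs concrete, here is a small, self-contained Python sketch. The step functions and timings are invented for illustration; the point is only that independent steps can be submitted at the same time and joined before the next dependent step.

# Illustrative comparison only; fetch_customer and fetch_inventory are made up.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_customer(order_id):        # independent of fetch_inventory
    time.sleep(0.2)
    return {"customer": f"cust-{order_id}"}

def fetch_inventory(order_id):       # independent of fetch_customer
    time.sleep(0.2)
    return {"stock": 5}

order_id = 1042

# Sequential: roughly 0.4s, one step after another
start = time.perf_counter()
result = {**fetch_customer(order_id), **fetch_inventory(order_id)}
print("sequential", round(time.perf_counter() - start, 2), result)

# Parallel: roughly 0.2s, both branches run at once and are joined
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fetch_customer, order_id),
               pool.submit(fetch_inventory, order_id)]
    merged = {}
    for future in futures:
        merged.update(future.result())   # join point: wait for both branches
print("parallel", round(time.perf_counter() - start, 2), merged)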

Variable Processing

How Variables Work

Variables connect your workflow steps by passing data between them.

Variable Resolution - When you use {{variable.name}}, the system:

  1. Looks for the variable in step outputs
  2. Checks global variables
  3. Searches trigger data
  4. Uses context information
  5. Returns the value or reports an error
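
This lookup order can be pictured as a small resolver that walks a list of scopes in sequence. The Python sketch below uses invented scope names and data; it is not Flint's resolver, just an illustration of the search order.

# Sketch of the {{variable.name}} lookup order; scopes and data are invented.
import re

def resolve(template: str, step_outputs: dict, globals_: dict,
            trigger: dict, context: dict) -> str:
    scopes = (step_outputs, globals_, trigger, context)   # search order

    def lookup(match: re.Match) -> str:
        path = match.group(1).split(".")
        for scope in scopes:
            value = scope
            for key in path:
                if isinstance(value, dict) and key in value:
                    value = value[key]
                else:
                    break               # not in this scope, try the next one
            else:
                return str(value)       # every key resolved in this scope
        raise KeyError(f"Unresolved variable: {match.group(1)}")

    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

subject = resolve(
    "Order {{order.id}} shipped to {{customer.name}}",
    step_outputs={"order": {"id": 1042}},
    globals_={},
    trigger={"customer": {"name": "Ada"}},
    context={},
)
print(subject)   # Order 1042 shipped to Ada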

Data Types - Variables can hold different types of data:

  • Text - Names, descriptions, messages
  • Numbers - Amounts, quantities, calculations
  • Dates - Timestamps, schedules, deadlines
  • Lists - Collections of items to process
  • Objects - Complex data structures

Example Variable Usage

Email Subject: "Order {{order.id}} shipped to {{customer.name}}"
API URL: "https://api.example.com/customers/{{customer.id}}"
Condition: "{{invoice.amount}} > {{limits.approval_threshold}}"

Dynamic Data Flow

Step-to-Step Data Passing

  • Output from Step 1 becomes input for Step 2
  • Calculations and transformations happen automatically
  • Complex data structures maintained
  • Error handling preserves data integrity

Global Variable Access

  • Organization settings available everywhere
  • API keys and configuration values
  • Business rules and thresholds
  • Shared resources and connections

Error Handling & Recovery

Automatic Error Handling

Retry Logic - When steps fail, the system automatically:

  1. Waits a short time (1 second)
  2. Retries the step
  3. If still failing, waits longer (4 seconds)
  4. Retries again with exponential backoff
  5. After maximum attempts, escalates to manual review

Smart Retry Decisions - The system knows which errors to retry:

  • Retry: Network timeouts, server busy, rate limits
  • Don't Retry: Invalid data, authentication errors, business rule violations
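
Put together, the backoff schedule and the retry/don't-retry split might look like the sketch below. The exception classes, delays, and attempt limit are assumptions for illustration, not Flint's actual configuration.

# Hedged sketch of retry-with-backoff; delays and error types are illustrative.
import time

class TransientError(Exception):      # e.g. network timeout, server busy, rate limit
    pass

class PermanentError(Exception):      # e.g. invalid data, authentication failure
    pass

def run_with_retries(step, max_attempts=4, base_delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except PermanentError:
            raise                                     # never retried
        except TransientError:
            if attempt == max_attempts:
                raise                                 # escalate to manual review
            time.sleep(base_delay * (4 ** (attempt - 1)))   # 1s, 4s, 16s, ...

# Usage: a step that fails twice with a transient error, then succeeds
attempts = {"count": 0}

def flaky_step():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise TransientError("server busy")
    return "ok"

print(run_with_retries(flaky_step, base_delay=0.1))   # ok (after two retries)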

Manual Interventions

When Human Review is Needed

  • Data validation failures
  • External system unavailable
  • Business rule exceptions
  • Approval requirements

Intervention Process

  1. Workflow pauses automatically
  2. Notification sent to designated reviewers
  3. Issue appears in dashboard
  4. Reviewer can:
    • Fix data and continue
    • Retry the failed step
    • Skip the problematic step
    • Abort the workflow

Fallback Strategies

Alternative Actions

  • Primary email service down → Use backup service
  • Main payment processor unavailable → Try secondary processor
  • API rate limit reached → Queue for later processing
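
A fallback chain like the first item above can be as simple as trying connectors in priority order. The sender functions below are placeholders, not real Flint connectors.

# Placeholder fallback chain; send_via_primary and send_via_backup are made up.
def send_via_primary(message: str) -> str:
    raise ConnectionError("primary email service unavailable")

def send_via_backup(message: str) -> str:
    return f"sent via backup: {message}"

def send_email(message: str) -> str:
    for sender in (send_via_primary, send_via_backup):   # try in priority order
        try:
            return sender(message)
        except ConnectionError:
            continue                                      # fall through to the next service
    raise RuntimeError("all email services failed")       # queue or escalate instead

print(send_email("Order 1042 shipped"))   # sent via backup: Order 1042 shipped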

Graceful Degradation

  • Continue with default values
  • Log issues for later review
  • Reduce functionality but maintain operation
  • Notify stakeholders of limitations

Performance & Optimization

Execution Speed

Factors Affecting Speed

  • Network latency to external systems
  • Database query complexity
  • File processing size
  • Number of parallel operations
  • System load and resources

Optimization Techniques

  • Run independent steps in parallel
  • Cache frequently accessed data
  • Batch operations when possible
  • Use efficient data formats
  • Set appropriate timeouts
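
Two of these techniques, caching frequently accessed data and batching small operations, are sketched below with invented function names and data.

# Illustrative caching and batching helpers; names and data are assumptions.
from functools import lru_cache

@lru_cache(maxsize=256)
def get_exchange_rate(currency: str) -> float:
    # Imagine a slow external API call here; repeated lookups hit the cache.
    return {"EUR": 1.0, "USD": 1.08}[currency]

def batched(items, size):
    # Group items so a downstream call processes many records per request.
    for i in range(0, len(items), size):
        yield items[i:i + size]

invoices = [{"id": n, "currency": "USD"} for n in range(10)]
rate = get_exchange_rate("USD")            # cached after the first call
for batch in batched(invoices, size=4):
    print(f"processing {len(batch)} invoices at rate {rate}")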

Resource Management

Compute Resources

  • Each workflow gets allocated processing power
  • Complex operations may need more resources
  • System automatically scales based on demand
  • Priority workflows get precedence

Memory Usage

  • Large data sets require more memory
  • System monitors and manages usage
  • Automatic cleanup of completed workflows
  • Prevention of memory leaks

Storage

  • Workflow data stored securely
  • Audit trails maintained
  • Large files handled efficiently
  • Automatic archiving of old data

Monitoring & Observability

Real-Time Monitoring

Execution Dashboard

  • See currently running workflows
  • Monitor step-by-step progress
  • View queued workflows waiting to run
  • Check system health and performance

Live Execution View

  • Watch workflows execute in real-time
  • See data flowing between steps
  • Identify bottlenecks immediately
  • Monitor resource usage

Audit Trail

Complete Execution History - Every workflow run records:

  • Start and end times for each step
  • Input and output data
  • Any errors or warnings
  • Who triggered the workflow
  • System resource usage
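
As a rough picture of what one run's audit record could contain, here is an illustrative structure; the field names are assumptions, not Flint's actual schema.

# Hypothetical audit record for a single run; field names are illustrative.
audit_record = {
    "workflow": "order-shipping",
    "triggered_by": "webhook:storefront",
    "steps": [
        {
            "name": "parse-order",
            "started_at": "2024-05-01T09:14:02Z",
            "ended_at": "2024-05-01T09:14:03Z",
            "input": {"order_id": 1042},
            "output": {"customer": "Ada"},
            "warnings": [],
        },
    ],
    "resource_usage": {"cpu_seconds": 0.8, "memory_mb": 64},
    "status": "Completed",
}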

Compliance & Debugging

  • Prove that processes were followed correctly
  • Debug failed workflows quickly
  • Track data lineage
  • Generate compliance reports

Performance Metrics

Key Metrics Tracked

  • Average execution time
  • Success/failure rates
  • Resource utilization
  • Queue depth and wait times
  • Error patterns and trends

Alerting System

  • Automatic notifications for failures
  • Performance degradation alerts
  • Resource limit warnings
  • SLA breach notifications

Workflow States

Execution States

Queued - Waiting to start

  • Workflow triggered but not yet running
  • Waiting for available resources
  • Position in queue visible
  • Estimated start time provided

Running - Currently executing

  • Steps executing in order
  • Progress visible in real-time
  • Can be monitored or paused
  • Resource usage being tracked

Paused - Waiting for intervention

  • Manual review required
  • External system unavailable
  • Approval pending
  • Can be resumed when ready

Completed - Successfully finished

  • All steps executed successfully
  • Results available for review
  • Audit trail complete
  • Resources cleaned up

Failed - Stopped due to error

  • Error details available
  • Can often be retried
  • Data preserved for analysis
  • Notifications sent to stakeholders

State Transitions

Workflows move between states based on:

  • Execution progress
  • Error conditions
  • User actions
  • System events
  • External factors
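
One way to picture the states and the legal moves between them is a small state machine. The transition rules below are assumptions chosen to match the descriptions above, not a definitive specification.

# Assumed state machine for illustration; not Flint's actual transition rules.
from enum import Enum

class State(Enum):
    QUEUED = "Queued"
    RUNNING = "Running"
    PAUSED = "Paused"
    COMPLETED = "Completed"
    FAILED = "Failed"

ALLOWED = {
    State.QUEUED:    {State.RUNNING},
    State.RUNNING:   {State.PAUSED, State.COMPLETED, State.FAILED},
    State.PAUSED:    {State.RUNNING, State.FAILED},
    State.COMPLETED: set(),            # terminal
    State.FAILED:    {State.QUEUED},   # a retry re-queues the run
}

def transition(current: State, target: State) -> State:
    if target not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target

state = State.QUEUED
state = transition(state, State.RUNNING)
state = transition(state, State.COMPLETED)
print(state.value)   # Completed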

Understanding these states helps you:

  • Monitor workflow health
  • Troubleshoot issues
  • Optimize performance
  • Plan capacity

Best Practices

Designing for Reliability

Error Handling

  • Plan for external system failures
  • Validate data at each step
  • Provide meaningful error messages
  • Implement appropriate retry logic

Performance

  • Use parallel execution where possible
  • Set realistic timeouts
  • Monitor resource usage
  • Optimize data flow

Monitoring

  • Set up appropriate alerts
  • Review execution patterns regularly
  • Monitor error trends
  • Track performance metrics

Troubleshooting Common Issues

Slow Execution

  • Check for network bottlenecks
  • Optimize database queries
  • Consider parallel processing
  • Review timeout settings

Frequent Failures

  • Examine error patterns
  • Check external system health
  • Validate input data quality
  • Review retry logic

Resource Issues

  • Monitor memory usage
  • Check for memory leaks
  • Optimize file processing
  • Review concurrent execution limits