CLI Pro
Use CLI Pro to enable advanced features
CLI Pro Features
Sling CLI Pro extends the core functionality with advanced features designed for production environments and complex data operations.
✅ API Sources (extract data from any REST API by using Specs)
✅ Parallel Stream Processing (run streams in parallel)
✅ Stream Chunking (split large streams into smaller ones)
✅ Pipelines & Hooks (such as http, query, check and more)
✅ MCP Server (enable AI assistants to interact with your data infrastructure)
✅ Capture Deletes (similar to CDC)
✅ Staged Transforms (advanced multi-stage transformations with expressions and functions)
✅ Support Sling and its continuous development
You can obtain a token for free at https://dash.slingdata.io. There is a 7-day trial (no credit card needed).
Once you have a token, set it as the SLING_CLI_TOKEN environment variable before running sling (requires version 1.4 or later).
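For example, in a Unix-like shell (the token value below is a placeholder):

```shell
# Make the CLI Pro token available to sling for this session
export SLING_CLI_TOKEN="your-token-here"

# Run sling as usual; the -d (debug) flag surfaces the token validation log message
sling run -d -r replication.yaml
```

In CI or containerized environments, the same variable can be injected through your platform's secret management instead of a shell export.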
For pricing details, see here.
API Sources
Extract data from any REST API with powerful YAML-based specifications called API Specs:
Define authentication methods (Bearer, Basic, OAuth2)
Configure endpoints with pagination strategies
Process responses with JMESPath extraction
Manage state for incremental synchronization
Support for queues and dependent requests
Built-in retry and error handling
In your env.yaml:

```yaml
connections:
  stripe_api:
    type: api
    spec: stripe # Use official spec or custom YAML (e.g. file://path/to/stripe.spec.yaml)
    secrets:
      api_key: sk_live_xxxxxx
```

In your replication:

```yaml
source: stripe_api
target: ducklake

defaults:
  object: stripe.{stream_name}

streams:
  customers:
    mode: incremental
```

See API Specs for complete documentation and examples.
Stream Chunking & Parallel Processing
Process large datasets efficiently with automatic chunking and parallel execution:
Break down data into manageable chunks for various modes (full-refresh, truncate, incremental, backfill)
Support for time-based (hours, days, months), numeric, count-based, and expression-based chunks
Run multiple streams concurrently with automatic retry mechanisms
Configurable concurrency and retry settings
```yaml
streams:
  my_schema.events:
    mode: full-refresh # works with various modes
    primary_key: [id]
    update_key: event_date
    source_options:
      chunk_count: 8 # Process in 8 equal-sized chunks

  my_schema.orders:
    mode: incremental # works with various modes
    update_key: order_date
    source_options:
      chunk_size: 7d # Process in 7-day chunks

env:
  SLING_THREADS: 3 # maximum of 3 streams concurrently
  SLING_RETRIES: 1 # maximum of 1 retry per failed stream
```

Environment variables:

SLING_THREADS sets the maximum number of concurrent stream runs. Accepts an integer value; default is 1.
SLING_RETRIES sets the maximum number of retries for a failed stream run. Accepts an integer value; default is 0.
See Chunking for detailed examples.
Pipelines & Hooks
Extend functionality with hooks and pipelines to create complex workflows. Hooks are used within replications to execute custom logic before/after operations, while Pipelines are standalone workflows that execute multiple steps in sequence.
Available action types include http, query, check, and more.
See Hooks and Pipelines for usage examples and patterns.
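As an illustration, a minimal standalone pipeline might look like the sketch below. The step fields, connection name, and URL are assumptions for illustration; consult the Pipelines documentation for the exact schema.

```yaml
# pipeline.yaml — a hypothetical sketch; exact field names may differ from the actual schema
steps:
  # Step 1: run a replication file
  - type: replication
    path: replication.yaml

  # Step 2: run a maintenance query against the target (connection name is a placeholder)
  - type: query
    connection: my_postgres
    query: "VACUUM ANALYZE public.events"

  # Step 3: notify a webhook when done (URL is a placeholder)
  - type: http
    url: https://example.com/hooks/pipeline-done
```

A pipeline like this would typically be run with something like `sling run -p pipeline.yaml` (flag assumed; check `sling run --help`).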
Staged Transforms
Transform data with advanced multi-stage processing using expressions and functions:
Apply transformations in sequential stages with cross-column references
Create new columns dynamically without modifying source schemas
Use 50+ built-in functions for string, numeric, date, and conditional operations
Build complex logic with if/then/else conditions and record references
```yaml
streams:
  customers:
    transforms:
      # Stage 1: Clean and normalize data
      - first_name: "trim_space(value)"
        last_name: "trim_space(value)"
        email: "lower(value)"

      # Stage 2: Create computed columns
      - full_name: 'record.first_name + " " + record.last_name'
        email_hash: 'hash(record.email, "md5")'

      # Stage 3: Add business logic
      - customer_type: 'record.total_orders >= 50 ? "vip" : "regular"'
        discount_rate: 'record.customer_type == "vip" ? 0.15 : 0.05'
```

See Transforms for detailed examples and Available Functions for the full list of functions.
State-Based Incremental Loading
Maintain state across file & database loads with intelligent incremental processing:
Track and resume file processing from last successful position
Support for incremental writes to databases and files
Automatic file partitioning and truncation management
See Database to Database Incremental Loading, Database to File Incremental Loading and File to Database Incremental Loading for detailed examples.
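To make this concrete, here is a hedged sketch of a file-to-database incremental replication. The connection names, file path, and column names are placeholders, not taken from the official examples.

```yaml
source: aws_s3        # placeholder file-system connection
target: my_postgres   # placeholder database connection

streams:
  "exports/events/*.parquet":
    object: public.events
    mode: incremental      # state is tracked so already-processed data is skipped on the next run
    primary_key: [id]
    update_key: event_time # placeholder incremental column
```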
Capture Deletes (CDC)
Track deleted records using a deleted_at column:
Automatically detect and mark deleted records
Maintain historical record states
Support for soft deletes in target systems
See Delete Missing Records for implementation details.
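A minimal sketch, assuming the `delete_missing` target option described in the Delete Missing Records documentation (table and column names are placeholders):

```yaml
streams:
  public.customers:
    mode: incremental
    primary_key: [id]
    update_key: updated_at
    target_options:
      delete_missing: soft # soft delete: rows missing from the source are marked via a deleted_at timestamp
```

A hard-delete variant would physically remove the missing rows from the target instead of marking them.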
MCP Server
Enable AI assistants like Claude and GitHub Copilot to interact with your data infrastructure through the Model Context Protocol (MCP):
Query Databases: Run SQL queries against 30+ database systems
Explore File Systems: List, copy, and inspect files across cloud storage and local systems
Manage Connections: Discover schemas, tables, columns, and endpoints
Execute Workflows: Run data replications and pipelines programmatically
API Integration: Create, test, and debug API specifications
AI-Powered Analysis: Leverage AI assistants for data exploration and pipeline creation
See Sling MCP Server for complete documentation, usage examples, and configuration for all AI assistants.
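As a sketch, an MCP-capable assistant is typically pointed at a command that starts the server over stdio. The subcommand name below is an assumption; see the Sling MCP Server documentation for the exact invocation and per-assistant configuration.

```shell
# Start the Sling MCP server (subcommand assumed; verify with `sling --help`)
sling mcp
```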
Frequently Asked Questions
How are tokens validated?
Tokens are validated through Cloudflare's global network, ensuring high reliability and fast response times worldwide. This validation occurs when the Sling CLI process initializes. To confirm validation, run sling in debug mode (with flag -d), and you should see the log message: CLI Pro token validated.
Can I get an offline/air-gapped token?
For air-gapped or high-security environments, we offer offline license tokens; please contact [email protected] to request one.
How many subscriptions do I need?
Each CLI Pro subscription includes 2 tokens:
1 Production token: For use in production environments
1 Development token: For development and testing
We recommend purchasing one subscription per team or project, regardless of your deployment method. This allows you to:
Use the production token across your production environments (whether permanent servers or ephemeral containers)
Share the development token among team members for testing and development
For example:
A data engineering team handling customer data → 1 subscription
A separate analytics team handling reporting → 1 subscription
Multiple teams in an organization → 1 subscription per team
Consultancy/Freelancer with multiple customers → 1 subscription per customer
Important Licensing Restrictions:
Reselling and Commercial Redistribution Prohibited
CLI Pro subscriptions are licensed for use by the subscribing organization only. You are prohibited from:
Reselling or redistributing access to CLI Pro features
Acting as a service provider offering CLI Pro to third parties
White-labeling or rebranding CLI Pro as your own service
Providing commercial access to CLI Pro without proper licensing
For Consultants and Service Providers: If you wish to use CLI Pro in a consulting capacity or provide it to your clients, each client organization should have their own CLI Pro subscription. Feel free to contact us at [[email protected]] for further licensing arrangements.
For System Integrators: We offer specific partner licensing programs for system integrators and technology partners. Contact us to discuss appropriate licensing for your use case.
Unauthorized reselling or redistribution will result in immediate termination of your subscription and may subject you to legal action.
Please use tokens responsibly and in accordance with our Terms of Service. Each subscription is intended for use within a single organization or team, not for redistribution to external parties.