Target Options
Specification
The following keys are accepted for target options:
add_new_columns
(Only for database target)
Whether to add new columns from the stream that are not found in the target table (when mode is not full-refresh). Default is true.
adjust_column_type
(Only for database target)
Whether to adjust the column type when needed. Default is false.
batch_limit
(Only for database target — since v1.2.11)
The maximum number of records per transaction batch.
column_casing
column_typing
compression
(Only for file target)
The type of compression to use when writing files. Valid inputs are none, auto, gzip, zstd and snappy. Default is auto.
datetime_format
(Only for file target)
delete_missing
(Only for database target)
delimiter
(Only for file target)
The delimiter to use when writing tabular files. Default is , (comma).
file_max_bytes
(For file targets, or temporary files for database targets)
The maximum number of bytes to write to a file. 0 means no limit. When a value greater than 0 is specified, the output location will be a folder containing many file parts. Default is 50000000. Does not work with the parquet file format (use file_max_rows instead).
file_max_rows
(For file targets, or temporary files for database targets)
The maximum number of rows (usually lines) to write to a file. 0 means no limit. When a value greater than 0 is specified, the output location will be a folder containing many file parts. Default is 500000.
format
(Only for file target)
The format of the file(s). Options are: csv, parquet, xlsx, json and jsonlines.
ignore_existing
Ignore the existing target file/table if it exists (do not overwrite or modify it). Default is false.
header
(Only for file target)
Whether to write the first line as a header. Default is true.
post_sql
(Only for database target)
The SQL query to run after loading.
pre_sql
(Only for database target)
The SQL query to run before loading.
table_ddl
(Only for database target)
The DDL to use when creating the target table. See the Table DDL section below for an example.
table_keys
(Only for database target)
The keys to define on the target table (indexes, primary keys, etc.). See the Table Keys section below for details and an example.
table_tmp
(Only for database target)
The temporary table name that should be used when loading into a database. Default is auto-generated by Sling.
use_bulk
(Only for database target)
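For orientation, here is a minimal sketch of how several of these keys can be combined under target_options in a replication configuration. The connection names, stream names and values below are illustrative assumptions, not defaults.

```yaml
source: MY_SOURCE_DB        # assumed source connection name
target: MY_TARGET_DB        # assumed target database connection name

defaults:
  mode: full-refresh

streams:
  public.orders:
    object: analytics.orders
    target_options:
      add_new_columns: true       # add columns present in the stream but missing from the table
      adjust_column_type: false   # leave existing column types unchanged
      batch_limit: 50000          # records per transaction batch (v1.2.11+)
      pre_sql: "truncate table analytics.orders_staging"          # illustrative SQL
      post_sql: "grant select on analytics.orders to reporting"   # illustrative SQL
```

For a file target, keys such as format, compression, delimiter, header, file_max_rows and file_max_bytes go in the same target_options block.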
Table DDL
Here is an example of using the table_ddl option:
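Below is a sketch of what that could look like for a single stream; the stream name, object name and column definitions are illustrative assumptions.

```yaml
streams:
  public.orders:
    object: analytics.orders
    target_options:
      # custom DDL used when Sling creates the target table (columns here are made up)
      table_ddl: |
        create table analytics.orders (
          id bigint,
          customer_id bigint,
          amount numeric(12,2),
          created_at timestamp
        )
```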
Table Keys
Table keys are used to define the keys of the target table. They are useful to preset keys such as indexes, primary keys, etc.
Here are the accepted keys:
cluster (used in BigQuery and Snowflake)
index (used in PostgreSQL, MySQL, Oracle, SQL Server, SQLite and DuckDB)
partition (used in PostgreSQL, BigQuery, ClickHouse)
primary (used in PostgreSQL, MySQL, Oracle, SQL Server, SQLite)
sort (used in Redshift)
unique (used in PostgreSQL, MySQL, Oracle, SQL Server, SQLite and DuckDB)
aggregate, duplicate, distribution and hash (used in StarRocks)
Here is an example of using the table_keys option:
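A sketch under the same illustrative assumptions as above (stream, object and column names are made up), assuming each key type maps to a list of column names.

```yaml
streams:
  public.orders:
    object: analytics.orders
    target_options:
      table_keys:
        primary: [id]             # primary key column
        index: [customer_id]      # regular index
        unique: [order_number]    # unique constraint
```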