Structure
Below is the structure of the replication configuration file.
Root Level
At the root level, we have the following keys:
# 'source', 'target' and 'streams' keys are required
source: <connection name>
target: <connection name>
defaults: <replication stream map>
hooks: <replication level hooks map>
streams:
<stream name>: <replication stream map>
env:
<variable name>: <variable value>
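For illustration, a minimal replication file using these root-level keys could look like the following; the connection, table, and variable names are placeholders:
source: MY_POSTGRES
target: MY_SNOWFLAKE
defaults:
  mode: full-refresh
streams:
  my_schema.my_table:
    object: new_schema.my_table
env:
  MY_VAR: my_value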
Stream Level
The <stream name> identifies the stream to replicate. It can be a source table name, a file path, or a wildcard pattern using *. Wildcards match multiple tables within a schema or multiple files within a directory. For example, my_schema.* matches all tables in my_schema, while data/*.csv matches all CSV files in the data directory. See Tags & Wildcards for more details.
The <replication stream map> is a map object which accepts the following keys:
object: <target table or file name>
mode: full-refresh | incremental | truncate | snapshot | backfill
description: <stream description>
disabled: true | false
primary_key: [<array of column names to use as primary key>]
update_key: <column name to use as incremental key>
columns: {<map of column name to data type>}
select: [<array of column names to include or exclude>]
files: [<array of file paths to include or exclude>]
where: <SQL where clause. Also accepts placeholders update_key, incremental_value, and incremental_where_cond>
single: true | false
sql: <source custom SQL query>
transforms: [<array of transforms or map of column name to array of transforms>]
hooks: <stream level hooks map>
source_options: <source options map>
target_options: <target options map>
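As a sketch, a stream entry combining several of these keys could look like the following; the table and column names are hypothetical:
streams:
  my_schema.orders:
    object: analytics.orders
    mode: incremental
    primary_key: [order_id]
    update_key: updated_at
    select: [order_id, customer_id, amount, updated_at]
    where: amount > 0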
Hooks
The <replication level hooks map> and <stream level hooks map> accept the keys below. See Hooks for more details.
# replication level, at start and end of replication
start: [<array of hooks>]
end: [<array of hooks>]
# stream level, before and after a stream run
pre: [<array of hooks>]
post: [<array of hooks>]
pre_merge: [<array of hooks>] # since v1.4.24
post_merge: [<array of hooks>] # since v1.4.24
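As a rough sketch, using a log-type hook (one of the hook types described in Hooks) at both levels could look like this; the messages and stream name are illustrative:
hooks:
  start:
    - type: log
      message: replication starting
  end:
    - type: log
      message: replication finished
streams:
  my_schema.orders:
    hooks:
      pre:
        - type: log
          message: loading orders
      post:
        - type: log
          message: finished loading orders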
Source Options
The <source options map> accepts the keys below. See Source Options for more details.
compression: auto | none | zip | gzip | snappy | zstd
chunk_size: <backfill chunk size>
datetime_format: auto | <ISO 8601 date format>
delimiter: <character to use as flat file delimiter>
encoding: latin1 | latin5 | latin9 | utf8 | utf8_bom | utf16 | windows1250 | windows1252
empty_as_null: true | false
escape: <character to use as flat file quote escape>
flatten: true | false
format: csv | xml | xlsx | json | parquet | avro | sas7bdat | jsonlines | arrow | delta | raw
header: true | false
jmespath: <JMESPath expression>
limit: <integer>
null_if: <null_if expression>
range: <backfill range expression>
sheet: <excel sheet/range expression>
skip_blank_lines: true | false
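For instance, reading headerless, pipe-delimited CSV files could be configured along these lines; the file path and target object are hypothetical:
streams:
  "file:///data/exports/*.csv":
    object: target_schema.exports
    source_options:
      format: csv
      delimiter: "|"
      header: false
      empty_as_null: true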
Target Options
The <target options map> accepts the keys below. See Target Options for more details.
add_new_columns: true | false
adjust_column_type: true | false
batch_limit: <integer>
column_casing: source | target | snake | upper | lower
column_typing: {map of column type generation configuration}
compression: auto | none | gzip | snappy | zstd
datetime_format: auto | <ISO 8601 date format>
delimiter: <character to use as flat file delimiter>
delete_missing: hard | soft
direct_insert: true | false
encoding: latin1 | latin5 | latin9 | utf8 | utf8_bom | utf16 | windows1250 | windows1252
isolation_level: default | read_uncommitted | read_committed | write_committed | repeatable_read | snapshot | serializable | linearizable
file_max_bytes: <integer>
file_max_rows: <integer>
format: csv | xlsx | json | parquet | raw
header: true | false
ignore_existing: true | false
table_ddl: <ddl sql query>
table_keys: {map of table key type to array of column names}
table_tmp: <name of table>
use_bulk: true | false
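As an illustrative sketch, a stream writing to a database table might set target options such as the following; the table names are hypothetical, and the index key type under table_keys is an assumption (see Target Options for the supported key types):
streams:
  my_schema.orders:
    object: analytics.orders
    target_options:
      column_casing: snake
      add_new_columns: true
      table_keys:
        index: [order_id]   # assumed key type, for illustration only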
Replication Specification
Below are the definitions of the accepted keys.
source
The source database connection (name, conn string or URL).
target
The target database connection (name, conn string or URL).
hooks
The replication level hooks to apply (at start & end of replication). See here for details.
streams.<key>
The source table (schema.table) or local / cloud file path. Use file:// for local paths.
streams.<key>.object
or defaults.object
The target table (schema.table) or local / cloud file path. Use file:// for local paths.
streams.<key>.columns
or defaults.columns
The column types map. See here for details.
streams.<key>.transforms
or defaults.transforms
The transforms to apply. See here for details.
streams.<key>.hooks
or defaults.hooks
The stream level hooks to apply (pre- & post-stream run). See here for details.
streams.<key>.mode
or defaults.mode
The target load mode to use: incremental, truncate, full-refresh, backfill or snapshot. Default is full-refresh.
streams.<key>.select or defaults.select
Select or exclude specific columns from the source stream. Use a - prefix to exclude.
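For example, inside a stream (or defaults) entry, either form could be used; the column names are hypothetical:
# include only specific columns
select: [id, name, email]
# or exclude specific columns with a - prefix
select: ["-password", "-ssn"]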
streams.<key>.single or defaults.single
When using a wildcard (*) in the stream name, treat the match as a single stream instead of expanding it into many streams.
streams.<key>.sql or defaults.sql
The custom SQL query to use. Accepts file://path/to/query.sql as well.
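For example (the query and file path are hypothetical):
sql: select id, name, updated_at from my_schema.orders where deleted_at is null
# or reference a query stored in a file
sql: file://sql/orders.sql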
streams.<key>.primary_key
or defaults.primary_key
The column(s) to use as primary key. If composite key, use array.
streams.<key>.update_key
or defaults.update_key
The column to use as update key (for incremental mode).
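Putting the two together, an incremental stream with a composite primary key could be sketched as follows; the table and column names are hypothetical:
streams:
  my_schema.order_items:
    mode: incremental
    primary_key: [order_id, line_number]
    update_key: updated_at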
streams.<key>.source_options
or defaults.source_options
Options to further configure source. See here for details.
streams.<key>.target_options
or defaults.target_options
Options to further configure target. See here for details.
env
Environment variables to use for replication. See here for details.
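For example, to declare a couple of variables for the run (names and values are placeholders):
env:
  START_DATE: '2024-01-01'
  TARGET_SCHEMA: analytics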