BigQuery
Connect & Ingest data from / to a BigQuery database
Setup
The following credential keys are accepted:
- `project` (required) -> The GCP project ID for the project
- `dataset` (required) -> The default dataset (like a schema)
- `gc_bucket` (optional) -> The Google Cloud Storage Bucket to use for loading (Recommended)
- `key_file` (optional) -> The path of the Service Account JSON. If not provided, the Google Application Default Credentials will be used. You can also provide the JSON content in the env var `GC_KEY_BODY`.
- `location` (optional) -> The location of the account, such as `US` or `EU`. Default is `US`.
- `extra_scopes` (optional) -> An array of strings representing scopes to use in addition to `https://www.googleapis.com/auth/bigquery`, e.g. `["https://www.googleapis.com/auth/drive", "https://www.googleapis.com/auth/spreadsheets"]`
Using sling conns
Here are examples of setting a connection named `BIGQUERY`. We must provide the `type=bigquery` property:
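A minimal sketch using the `sling conns set` command (the project, dataset, bucket, and key file values are placeholders):

```bash
# Set a BigQuery connection named BIGQUERY (placeholder values)
sling conns set BIGQUERY type=bigquery \
  project=my-gcp-project \
  dataset=my_dataset \
  gc_bucket=my-staging-bucket \
  key_file=/path/to/service-account.json

# Verify the connection
sling conns test BIGQUERY
```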
Environment Variable
You can also provide Sling the Service Account JSON via the environment variable `GC_KEY_BODY`, instead of a `key_file`.
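As a sketch, the JSON content could be exported before running Sling (the file path below is a placeholder assumption):

```bash
# Provide the Service Account JSON content directly via GC_KEY_BODY,
# instead of pointing key_file at a path on disk (placeholder path)
export GC_KEY_BODY="$(cat /path/to/service-account.json)"
```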
Sling Env File YAML
See here to learn more about the sling `env.yaml` file.
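A minimal `env.yaml` sketch, assuming the standard `connections` map layout (all values are placeholders):

```yaml
# Sling env file (typically ~/.sling/env.yaml), placeholder values
connections:
  BIGQUERY:
    type: bigquery
    project: my-gcp-project
    dataset: my_dataset
    gc_bucket: my-staging-bucket
    key_file: /path/to/service-account.json
```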
If you are facing issues connecting, please reach out to us at support@slingdata.io, on Discord, or open a GitHub issue here.