# Environment

Sling looks for connection credentials in several places:

* [Sling Env File](#sling-env-file-env.yaml) (located at `~/.sling/env.yaml`)
* [Project Env File](#project-env-file-.env.sling) (`.env.sling` in the current working directory)
* [DBT Profiles File](#dbt-profiles-dbt-profiles.yml) (located at `~/.dbt/profiles.yml`)
* [Environment Variables](#environment-variables)

One of the easiest ways to manage your connections is to use the `sling conns` sub-command, covered in the next section.

## Managing Connections

Sling makes it easy to **set**, **list** and **test** connections. You can even see the available streams in a connection by using the **discover** sub-command.

```bash
$ sling conns -h
conns - Manage and interact with local connections

See more details at https://docs.slingdata.io/sling-cli/

  Usage:
    conns [discover|list|set|test]

  Subcommands:
    discover   list available streams in connection
    list       list local connections detected
    test       test a local connection
    unset      remove a connection from the sling env file
    set        set a connection in the sling env file
    exec       execute a SQL query on a Database connection

  Flags:
       --version   Displays the program version string.
    -h --help      Displays help with available flag, subcommand, and positional value parameters.
```

### Set Connections

Here we can easily set a connection with the `sling conns set` command and later refer to it by name. This ensures credentials are not visible to other users, for example via process monitors.

{% code overflow="wrap" %}

```bash
# set a connection by providing the key=value pairs
$ sling conns set AWS_S3 type=s3 bucket=sling-bucket access_key_id=ACCESS_KEY_ID secret_access_key="SECRET_ACCESS_KEY"

# we set a database connection with just the url
$ sling conns set MY_PG url='postgresql://postgres:myPassword@pghost:5432/postgres'
```

{% endcode %}

To see which credential keys are required or accepted for each type of connector, follow the links below:

* File/Storage Connections (see [here](https://docs.slingdata.io/connections/file-connections))
* Database Connections (see [here](https://docs.slingdata.io/connections/database-connections))

### List Connections

Once connections are set, we can run the `sling conns list` command to list our detected connections:

```bash
$ sling conns list
+--------------------------+-----------------+-------------------+
| CONN NAME                | CONN TYPE       | SOURCE            |
+--------------------------+-----------------+-------------------+
| AWS_S3                   | FileSys - S3    | sling env yaml    |
| FINANCE_BQ               | DB - BigQuery   | sling env yaml    |
| DO_SPACES                | FileSys - S3    | sling env yaml    |
| LOCALHOST_DEV            | DB - PostgreSQL | dbt profiles yaml |
| MSSQL                    | DB - SQLServer  | sling env yaml    |
| MYSQL                    | DB - MySQL      | sling env yaml    |
| ORACLE_DB                | DB - Oracle     | env variable      |
| MY_PG                    | DB - PostgreSQL | sling env yaml    |
+--------------------------+-----------------+-------------------+
```

### Test Connections

We can also test a connection by running the `sling conns test` command:

```bash
$ sling conns test LOCALHOST_DEV
9:04AM INF success!
```

### Discover Connections

We can easily discover streams available in a connection with the `sling conns discover` command:

```bash
$ sling conns discover postgres --pattern public.work*
+---+--------+-------------------+-------+---------+
| # | SCHEMA | NAME              | TYPE  | COLUMNS |
+---+--------+-------------------+-------+---------+
| 1 | public | worker_heartbeats | table |      14 |
| 2 | public | workers           | table |      20 |
| 3 | public | workspaces        | table |       9 |
+---+--------+-------------------+-------+---------+

$ sling conns discover aws_s3
+---+------------------+-----------+---------+-------------------------------+
| # | NAME             | TYPE      | SIZE    | LAST UPDATED (UTC)            |
+---+------------------+-----------+---------+-------------------------------+
| 1 | logging/         | directory | -       | -                             |
| 2 | sling_test/      | directory | -       | -                             |
| 3 | work/            | directory | -       | -                             |
| 4 | temp/            | directory | -       | -                             |
| 5 | records.json     | file      | 442 KiB | 2022-12-07 11:05:01 (1y ago)  |
| 6 | test.sqlite.db   | file      | 4.8 MiB | 2022-12-14 21:00:48 (1y ago)  |
| 7 | test1.parquet    | file      | 48 KiB  | 2024-03-31 22:54:52 (29d ago) |
| 8 | test_1000.csv    | file      | 99 KiB  | 2024-02-23 09:53:13 (67d ago) |
+---+------------------+-----------+---------+-------------------------------+
```

Show column level information:

```bash
$ sling conns discover postgres -p public.workspaces --columns
+----------+--------+------------+----+--------------+--------------------------+--------------+
| DATABASE | SCHEMA | TABLE      | ID | COLUMN       | NATIVE TYPE              | GENERAL TYPE |
+----------+--------+------------+----+--------------+--------------------------+--------------+
| postgres | public | workspaces |  1 | id           | bigint                   | bigint       |
| postgres | public | workspaces |  2 | account_id   | bigint                   | bigint       |
| postgres | public | workspaces |  3 | name         | text                     | text         |
| postgres | public | workspaces |  4 | short_name   | varchar                  | string       |
| postgres | public | workspaces |  5 | token        | text                     | text         |
| postgres | public | workspaces |  6 | settings     | jsonb                    | json         |
| postgres | public | workspaces |  7 | created_dt   | timestamp with time zone | timestampz   |
| postgres | public | workspaces |  8 | updated_dt   | timestamp with time zone | timestampz   |
| postgres | public | workspaces |  9 | deleted_dt   | timestamp with time zone | timestampz   |
+----------+--------+------------+----+--------------+--------------------------+--------------+

$ sling conns discover aws_s3 -p test1.parquet --columns
+---------------------------------+----+------------------+----------------+--------------+
| FILE                            | ID | COLUMN           | NATIVE TYPE    | GENERAL TYPE |
+---------------------------------+----+------------------+----------------+--------------+
| s3://my-bucket/test1.parquet    |  1 | id               | INT_64         | bigint       |
| s3://my-bucket/test1.parquet    |  2 | first_name       | UTF8           | string       |
| s3://my-bucket/test1.parquet    |  3 | last_name        | UTF8           | string       |
| s3://my-bucket/test1.parquet    |  4 | email            | UTF8           | string       |
| s3://my-bucket/test1.parquet    |  5 | target           | BOOLEAN        | bool         |
| s3://my-bucket/test1.parquet    |  6 | create_dt        | Timestamp      | datetime     |
| s3://my-bucket/test1.parquet    |  7 | date             | Timestamp      | datetime     |
| s3://my-bucket/test1.parquet    |  8 | rating           | DECIMAL        | decimal      |
| s3://my-bucket/test1.parquet    |  9 | code             | DECIMAL        | decimal      |
| s3://my-bucket/test1.parquet    | 10 | json_data        | UTF8           | string       |
| s3://my-bucket/test1.parquet    | 11 | _sling_loaded_at | INT_64         | bigint       |
+---------------------------------+----+------------------+----------------+--------------+
```

## Credentials Location

### Sling Env File (`env.yaml`)

The Sling Env file is the primary way sling reads connections globally. It needs to be saved in the path `~/.sling/env.yaml` where the `~` denotes the path of the user Home folder, which can have different locations depending on the operating system (see [here for Windows](https://stackoverflow.com/a/42966089/2295355), [here for Mac](https://apple.stackexchange.com/a/51282) and [here for Linux](https://www.linuxshelltips.com/find-user-home-directory-linux/)). Sling automatically creates the `.sling` folder in the user home directory, which is typically as shown below:

* Linux: `/home/<username>/.sling`, or `/root/.sling` if user is `root`
* Mac: `/Users/<username>/.sling`
* Windows: `C:\Users\<username>\.sling`

The Sling Env File (named `env.yaml`) lives in that directory and adheres to the structure below; running `sling` for the first time will auto-create it. You can alternatively set the `SLING_HOME_DIR` environment variable to use a custom folder.
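For example, pointing Sling at a custom folder might look like this (the folder path here is only an illustration):

```shell
# use a custom folder instead of ~/.sling; sling will look for env.yaml there
export SLING_HOME_DIR="$HOME/sling-config"
mkdir -p "$SLING_HOME_DIR"
# your connections file then lives at $SLING_HOME_DIR/env.yaml
```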

To see which credential keys are required or accepted for each type of connector, follow the links below:

* File/Storage Connections (see [here](https://docs.slingdata.io/connections/file-connections))
* Database Connections (see [here](https://docs.slingdata.io/connections/database-connections))

```yaml
# Holds all connection credentials for Extraction and Loading
connections:
  marketing_pg:
    url: 'postgres://...' 
    ssh_tunnel: 'ssh://...' # optional
  
  # or dbt profile styled (given a distinct name here, since duplicate YAML keys are invalid)
  marketing_pg_profile:
    type: postgres        
    host: [hostname]      
    user: [username]      
    password: ${PASSWORD} # you can pass in environment variables as well
    port: [port]          
    dbname: [database name]
    schema: [dbt schema]  
    ssh_tunnel: 'ssh://...' 
  
  finance_bq:
    type: bigquery
    method: service-account
    project: [GCP project id]
    dataset: [the name of your dbt dataset]
    keyfile: [/path/to/bigquery/keyfile.json]

# Global variables for specific settings, available to all connections at runtime (Optional)
variables:
  SLING_CLI_TOKEN: xxxxxxxxxxxxxxxx  # picked up machine wide
  SLING_LOG_DIR: ~/.sling/logs       # write debug logs here
  aws_access_key: '...'
  aws_secret_key: '...'
```

### Dot-Env File (`.env.sling`)

Sling automatically loads a `.env.sling` file from the **current working directory** when it starts (version *1.5.10* and later). This lets you define per-project connections and variables without modifying the global `env.yaml`.

This is useful when you have multiple projects, each with its own connections or credentials. Simply place a `.env.sling` file in the project directory, and Sling will pick it up automatically when run from there.

The file uses a simple `KEY=VALUE` format (one per line). Comments and blank lines are supported. Values can optionally be wrapped in single or double quotes.

```bash
# .env.sling - project-specific connections

# Database connections
MY_PG='postgresql://user:password@localhost:5432/mydb'
STAGING_PG='postgresql://user:password@staging-host:5432/mydb'

# File storage
MY_S3='{type: s3, bucket: my-project-bucket, access_key_id: AKID, secret_access_key: SECRET}'

# API Connection
SALESFORCE='{ type: api, spec: salesforce, secrets: { client_id: "xxxxxxxx", client_secret: "xxxxxxxx", instance: "mycompany.my.salesforce.com" } }'

# Other variables
SLING_LOG_DIR=/tmp/logs
SLING_LOADED_AT_COLUMN=timestamp
SLING_CLI_TOKEN=xxxxxxxxxxxxxxxx
```

{% hint style="info" %}
**Existing environment variables are not overwritten.** If a variable is already set in the shell environment, the value from `.env.sling` will be ignored. This means shell-level overrides always take precedence.
{% endhint %}
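The precedence rule can be illustrated with a plain-shell sketch. This is not Sling's actual loader, just a model of the behavior: keys already present in the environment are kept, and only missing keys are adopted from the file.

```shell
# simulate a project .env.sling file
cat > .env.sling <<'EOF'
# project credentials
MY_PG='postgresql://user:filepass@localhost:5432/db1'
SLING_LOG_DIR=/tmp/logs
EOF

# MY_PG is already set at the shell level
export MY_PG='postgresql://user:shellpass@localhost:5432/db1'

# model of the rule: adopt a key from the file only if it is not already set
while IFS='=' read -r key value; do
  case "$key" in ''|\#*) continue ;; esac   # skip blanks and comments
  value="${value%\'}"; value="${value#\'}"  # strip optional single quotes
  if [ -z "$(printenv "$key")" ]; then
    export "$key=$value"
  fi
done < .env.sling

echo "$MY_PG"          # the shell value survives
echo "$SLING_LOG_DIR"  # the file value is adopted
```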

{% hint style="warning" %}
Since `.env.sling` may contain sensitive credentials, make sure to add it to your `.gitignore` to avoid accidentally committing secrets to version control.
{% endhint %}
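A quick, idempotent way to do that from the project directory:

```shell
# add .env.sling to .gitignore if it is not already listed
grep -qxF '.env.sling' .gitignore 2>/dev/null || echo '.env.sling' >> .gitignore
```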

### Environment Variables

Sling also reads environment variables. Simply export a connection URL (or YAML payload) in your current shell environment to use it.

To see examples of setting environment variables for each type of connector, follow the links below:

* File/Storage Connections (see [here](https://docs.slingdata.io/connections/file-connections))
* Database Connections (see [here](https://docs.slingdata.io/connections/database-connections))

{% tabs %}
{% tab title="Mac / Linux" %}
{% code overflow="wrap" %}

```bash
$ export MY_PG='postgresql://user:mypassw@pg.host:5432/db1'
$ export MY_PG='{type: postgres, host: "pg.host", user: user, database: "db1", password: "mypassw", port: 5432}'

$ export MY_SNOWFLAKE='snowflake://user:mypassw@sf.host/db1'
$ export MY_SNOWFLAKE='{type: snowflake, host: "<host>", user: "<user>", database: "<database>", password: "<password>", role: "<role>"}'

$ export ORACLE_DB='oracle://user:mypassw@orcl.host:1521/db1'

$ export BIGQUERY_DB='{type: bigquery, dataset: public, key_file: /path/to/service.json, project: my-google-project}' # yaml or json form is also accepted

$ sling conns list
+---------------+------------------+-----------------+
| CONN NAME     | CONN TYPE        | SOURCE          |
+---------------+------------------+-----------------+
| MY_PG         | DB - PostgreSQL  | env variable    |
| MY_SNOWFLAKE  | DB - Snowflake   | env variable    |
| ORACLE_DB     | DB - Oracle      | env variable    |
| BIGQUERY_DB   | DB - Big Query   | env variable    |
+---------------+------------------+-----------------+
```

{% endcode %}
{% endtab %}

{% tab title="Windows" %}
{% code overflow="wrap" %}

```powershell
$ $env:MY_PG='postgresql://user:mypassw@pg.host:5432/db1'
$ $env:MY_PG='{type: postgres, host: "pg.host", user: user, database: "db1", password: "mypassw", port: 5432}'

$ $env:MY_SNOWFLAKE='snowflake://user:mypassw@sf.host/db1'
$ $env:MY_SNOWFLAKE='{type: snowflake, host: "sf.host", user: user, database: db1, password: "mypassw", role: "<role>"}'

$ $env:ORACLE_DB='oracle://user:mypassw@orcl.host:1521/db1'

$ $env:BIGQUERY_DB='{type: bigquery, dataset: public, key_file: /path/to/service.json, project: my-google-project}' # yaml or json form is also accepted

$ sling conns list
+---------------+------------------+-----------------+
| CONN NAME     | CONN TYPE        | SOURCE          |
+---------------+------------------+-----------------+
| MY_PG         | DB - PostgreSQL  | env variable    |
| MY_SNOWFLAKE  | DB - Snowflake   | env variable    |
| ORACLE_DB     | DB - Oracle      | env variable    |
+---------------+------------------+-----------------+
```

{% endcode %}
{% endtab %}
{% endtabs %}

### DBT Profiles (`~/.dbt/profiles.yml`)

Sling also reads connections from dbt profiles! If you're already set up with the dbt CLI locally, you don't need to create duplicate connections.

See [here](https://docs.getdbt.com/dbt-cli/configure-your-profile) for more details.

```bash
$ sling conns list
+------------------+------------------+-------------------+
| CONN NAME        | CONN TYPE        | SOURCE            |
+------------------+------------------+-------------------+
| SNOWCASTLE_DEV   | DB - Snowflake   | dbt profiles yaml |
| SNOWCASTLE_PROD  | DB - Snowflake   | dbt profiles yaml |
+------------------+------------------+-------------------+
```

## Location String

The location string is a way to describe where sling should look for a file object or database object. It is used in a few places, such as the [`SLING_STATE`](https://docs.slingdata.io/cli-pro#file--state-based-incremental-loading) env var, as well as [Hooks](https://docs.slingdata.io/concepts/hooks) such as [`delete`](https://docs.slingdata.io/concepts/hooks/delete), [`copy`](https://docs.slingdata.io/concepts/hooks/copy) and [`inspect`](https://docs.slingdata.io/concepts/hooks/inspect).

The proper input format is `CONN_NAME/path/to/key` for storage connections, or `CONN_NAME/[database.]schema.table` for database objects.

Local location examples:

* `local/relative/path`
* `local/../parent/relative/path`
* `local//absolute/linux/path`
* `local/C:/absolute/windows/path`

For cloud or remote storage connections (with defined `AWS_S3`, `GCP`, `AZURE`, `SFTP` connections):

* `aws_s3/path/to/folder`
* `gcp/path/to/folder/file.parquet`
* `azure/path/to/folder/file.log`
* `sftp/relative/path/to/folder/file.log`
* `sftp//absolute/path/to/folder/file.log`

Database location examples (for hooks like `inspect`):

* `postgres/public.users` - PostgreSQL table in public schema
* `mysql_db/analytics.events` - MySQL table in analytics schema
* `snowflake/DATABASE.SCHEMA.TABLE` - Snowflake table with explicit database
* `bigquery/project.dataset.table_name` - BigQuery table
* `oracle_db/HR.EMPLOYEES` - Oracle table in HR schema
* `mssql/dbo.customers` - SQL Server table in dbo schema
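As a hypothetical illustration, a storage location string can feed the `SLING_STATE` variable linked above (the state path and replication file name are made up):

```shell
# keep file-based incremental state under a folder of the AWS_S3 connection
export SLING_STATE='aws_s3/sling/state/my_pipeline'
# then run your replication as usual, e.g.:
#   sling run -r replication.yaml
```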

{% hint style="warning" %}
For **file storage connections** (`local`, `ftp` and `sftp`), you can specify a relative or absolute path. For FTP connections, it will be relative to the default folder of the username connecting.

**Relative Path**: You use the typical single slash (`/`) after the connection name:

* `local/relative/path`
* `sftp/relative/path`
* `ftp/relative/path`

**Absolute Path**: You need to add 2 slashes (`//`) after the connection name:

* `local//absolute/path`
* `local/C:/absolute/path`
* `sftp//absolute/path`
* `ftp//absolute/path`

For **database connections**, use the standard database object naming convention: `connection_name/[database.]schema.table_name`
{% endhint %}
