Fabric
Connect & Ingest data from / to a Microsoft Fabric Warehouse
Microsoft Fabric is a modern data warehouse solution from Microsoft that uses OneLake storage for data persistence. Sling provides optimized bulk loading for Fabric using OneLake staging.
Setup
The following credentials keys are accepted:
- `host` (required) -> The hostname of the Fabric warehouse (e.g., `xxx.datawarehouse.fabric.microsoft.com`)
- `database` (required) -> The warehouse or database name
- `user` (optional) -> The username to access the warehouse (not required when using Azure AD auth)
- `password` (optional) -> The password to access the warehouse (not required when using Azure AD auth)
- `port` (optional) -> The port of the instance. Default is `1433`.
- `schema` (optional) -> The default schema to use
- `fedauth` (optional) -> The Azure Active Directory authentication string. See here for more details. Accepted values: `ActiveDirectoryDefault`, `ActiveDirectoryIntegrated`, `ActiveDirectoryPassword`, `ActiveDirectoryInteractive`, `ActiveDirectoryMSI`, `ActiveDirectoryManagedIdentity`, `ActiveDirectoryApplication`, `ActiveDirectoryServicePrincipal`, `ActiveDirectoryServicePrincipalAccessToken`, `ActiveDirectoryDeviceCode`, `ActiveDirectoryAzCli`
- `ssh_tunnel` (optional) -> The URL of the SSH server you would like to use as a tunnel (example: `ssh://user:[email protected]:22`)
- `ssh_private_key` (optional) -> The private key to use to access an SSH server (raw string or path to file)
- `ssh_passphrase` (optional) -> The passphrase to use to access an SSH server
ABFS / OneLake Configuration
For bulk import operations, Fabric uses OneLake staging via Azure Blob File System (ABFS). The following properties are required for bulk operations:
- `abfs_endpoint` (required for bulk) -> The OneLake endpoint (e.g., `onelake.dfs.fabric.microsoft.com`) or Azure Data Lake Storage Gen2 endpoint (e.g., `myaccount.dfs.core.windows.net`)
- `abfs_filesystem` (required for bulk) -> The workspace UUID identifier for staging files (OneLake) or the container name (ADLS Gen2)
- `abfs_parent` (required for bulk) -> The lakehouse UUID with the folder path for staging files. This is typically a UUID followed by the `/Files` path, e.g. `a6682eca-b677-40d0-85a0-71665example/Files`
- `format` (optional) -> File format for staging. Accepts `parquet` (default) or `csv`
- `copy_into_endpoint` (optional) -> Override the endpoint used in the COPY INTO SQL command. Useful when your staging endpoint differs from what Fabric Warehouse expects for reading. See Troubleshooting below for details.
How to obtain ABFS values
Create a Lakehouse, and then get the URL for the Files folder.


You will get a URL such as: `https://onelake.dfs.fabric.microsoft.com/8e5f41f1-2677-4e95-78d8-7cd1eexample/a6682eca-b677-40d0-85a0-71665example/Files`
The `abfs_filesystem` would be `8e5f41f1-2677-4e95-78d8-7cd1eexample`. The `abfs_parent` would be `a6682eca-b677-40d0-85a0-71665example/Files`.
Authentication for ABFS (choose one):
- `account_key` -> Storage account key for OneLake access
- `sas_svc_url` -> Shared Access Signature URL for OneLake
- `client_id`, `tenant_id`, `client_secret` -> Azure service principal credentials
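Putting these together, a minimal env.yaml sketch of the ABFS-related keys (the UUIDs are the illustrative values from above; the account key is a placeholder):

```yaml
connections:
  FABRIC:
    type: fabric
    host: xxx.datawarehouse.fabric.microsoft.com
    database: my_warehouse

    # OneLake staging for bulk loads
    abfs_endpoint: onelake.dfs.fabric.microsoft.com
    abfs_filesystem: 8e5f41f1-2677-4e95-78d8-7cd1eexample   # workspace UUID
    abfs_parent: a6682eca-b677-40d0-85a0-71665example/Files # lakehouse UUID + /Files

    # one of the ABFS auth methods
    account_key: <storage-account-key>
```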
If ABFS properties are not provided, Sling will fall back to standard INSERT statements, which are much slower for large datasets. For optimal performance with bulk data loading, configure OneLake staging.
Additional Parameters
Sling uses the go-mssqldb library and thus will accept any parameters listed here. Some parameters that may be of interest:
- `encrypt` -> `strict`, `disable`, `false` or `true` - whether data between client and server is encrypted. For Fabric, typically set to `true`.
- `log` -> logging level (accepts `1`, `2`, `4`, `8`, `16`, `32`)
- `trusted_connection` -> `true` or `false` - whether to connect with a trusted connection using integrated security
- `trust_server_certificate` -> `true` or `false` - whether the server certificate is checked. For Fabric, typically set to `false`.
- `certificate` -> The file that contains the public key certificate of the CA that signed the server certificate
- `hostname_in_certificate` -> Specifies the Common Name (CN) in the server certificate. Default value is the server host.
- `server_spn` -> The Kerberos SPN (Service Principal Name) for the server. Default is `MSSQLSvc/host:port`.
- `driver` -> A way to override the SQL driver to connect with. Default is `sqlserver`; the other option is `azuresql`.
Kerberos Parameters
- `authenticator` - Set this to `krb5` to enable Kerberos authentication. If this is not present, the default provider is `ntlm` for Unix and `winsspi` for Windows.
- `krb5_config_file` (optional) - Path to the Kerberos configuration file. Defaults to `/etc/krb5.conf`. Can also be set using the `KRB5_CONFIG` environment variable.
- `krb5_realm` (required with keytab and raw credentials) - Domain name for Kerberos authentication. Omit this parameter if the realm is part of the user name, like `username@REALM`.
- `krb5_keytab_file` - Path to the keytab file. Can also be set using the `KRB5_KTNAME` environment variable. If no parameter or environment variable is set, the `DefaultClientKeytabName` value from the krb5 config file is used.
- `krb5_cred_cache_file` - Path to the credential cache. Can also be set using the `KRB5CCNAME` environment variable.
- `krb5_dns_lookup_kdc` - Optional in all contexts. Set to look up KDCs in DNS. Boolean. Default is `true`.
- `krb5_udp_preference_limit` - Optional in all contexts. `1` means to always use TCP. MIT krb5 has a default value of 1465, and it prevents setting values above 32700. Integer. Default is `1`.
Sling supports authentication via 3 methods. See here for more details.
- Keytabs - Specify the username, keytab file, the krb5.conf file, and realm.
- Credential cache - Specify the krb5.conf file path and credential cache file path.
- Raw credentials - Specify the krb5.conf file, username, password, and realm.
Connection Examples
Using sling conns
Here are examples of setting a connection named `FABRIC`. We must provide the `type=fabric` property:
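A minimal sketch (host, database, and credentials are placeholders):

```bash
sling conns set FABRIC type=fabric host=xxx.datawarehouse.fabric.microsoft.com database=my_warehouse user=myuser password=mypassword encrypt=true
```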
Environment Variable
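A sketch assuming Sling's URL form for connection environment variables (the scheme, credentials, and host here are placeholders):

```bash
export FABRIC='fabric://myuser:mypassword@xxx.datawarehouse.fabric.microsoft.com?database=my_warehouse'
```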
Sling Env File YAML
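A sketch of an env.yaml entry (values are placeholders; add the ABFS keys shown earlier to enable bulk loading):

```yaml
connections:
  FABRIC:
    type: fabric
    host: xxx.datawarehouse.fabric.microsoft.com
    database: my_warehouse
    user: myuser
    password: mypassword
    encrypt: true
```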
See here to learn more about the sling env.yaml file.
Bulk Import Operations
Fabric supports high-performance bulk loading using OneLake staging. When properly configured with ABFS properties, Sling will:
1. Stage data to OneLake - Write data files (Parquet or CSV) to your specified OneLake location
2. Execute COPY INTO - Load data from OneLake into the Fabric warehouse using the optimized COPY INTO command
3. Clean up - Automatically remove staging files after a successful load
Performance Tips
- Use Parquet format - Parquet provides better compression and faster loads (default)
- Configure OneLake staging - Essential for large datasets (100K+ rows)
- Use Azure AD authentication - Recommended for production environments
- Set appropriate file chunk sizes - Use the `file_max_rows` property to control staging file size (default: 500,000 rows)
Example Replication with Bulk Loading
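A sketch of a replication config (`MY_POSTGRES`, the stream, and the object names are placeholders; `file_max_rows` here assumes the target-options placement):

```yaml
source: MY_POSTGRES
target: FABRIC

defaults:
  mode: full-refresh
  target_options:
    file_max_rows: 500000  # rows per staging file (default)

streams:
  public.my_table:
    object: dbo.my_table
```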
Important Notes
- Connection Type: Use `type: fabric` (not `sqlserver`) for Fabric-specific optimizations
- BCP Not Supported: Unlike SQL Server, Fabric does not use the BCP utility for bulk loading
- OneLake Staging: Required for optimal bulk load performance
- Azure AD Recommended: Most Fabric deployments use Azure AD authentication
- Endpoint Pattern: Fabric warehouses use `*.datawarehouse.fabric.microsoft.com` endpoints
Troubleshooting
Slow Bulk Loads
If bulk loads are slow, ensure ABFS properties are configured:
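For example, in the connection's env.yaml entry (placeholders shown; see the ABFS section above for how to obtain the values):

```yaml
abfs_endpoint: onelake.dfs.fabric.microsoft.com
abfs_filesystem: <workspace-uuid>
abfs_parent: <lakehouse-uuid>/Files
```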
Without these, Sling falls back to row-by-row inserts.
Authentication Issues
For Azure AD authentication, ensure you've run:
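```bash
# Azure CLI login (used by modes such as ActiveDirectoryDefault / ActiveDirectoryAzCli)
az login
```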
Or provide service principal credentials if using ActiveDirectoryServicePrincipal.
Connection Errors
Verify your connection with:
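```bash
sling conns test FABRIC
```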
COPY INTO Endpoint Issues
When using Azure Data Lake Storage Gen2 (ADLS Gen2) as staging instead of OneLake, you may encounter errors when the warehouse executes the COPY INTO command.
This occurs because:

- File uploads via ABFS require the `.dfs.core.windows.net` endpoint
- The COPY INTO command often works better with the `.blob.core.windows.net` endpoint (especially with Service Principal authentication)
Manual Override: If you need a different endpoint for COPY INTO, use the `copy_into_endpoint` property:
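A sketch (the storage account name is a placeholder):

```yaml
connections:
  FABRIC:
    type: fabric
    # ... other connection properties ...
    abfs_endpoint: myaccount.dfs.core.windows.net        # used for staging uploads
    copy_into_endpoint: myaccount.blob.core.windows.net  # used in the COPY INTO command
```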
If you are facing issues connecting, please reach out to us at [email protected], on Discord, or open a GitHub issue here.