# Datadog

Datadog is a monitoring and analytics platform for cloud-scale infrastructure, applications, and logs. The Sling Datadog connector extracts data from the Datadog REST API, supporting monitors, dashboards, events, users, hosts, Synthetics tests, SLOs, downtimes, incidents, audit logs, and metrics.

{% hint style="success" %}
**CLI Pro Required**: APIs require a [CLI Pro token](https://docs.slingdata.io/sling-cli/cli-pro) or [Platform Plan](https://docs.slingdata.io/sling-platform/platform).
{% endhint %}

## Setup

The following credentials and inputs are accepted:

**Secrets:**

* `api_key` **(required)** -> Your Datadog API key
* `app_key` **(required)** -> Your Datadog Application key
* `site` (optional) -> Your Datadog site domain (default: `datadoghq.com`). Use `datadoghq.eu` for the EU region, or `us3.datadoghq.com`, `us5.datadoghq.com`, `ap1.datadoghq.com`, or `ddog-gov.com` for other regions (see [Regional Sites](#regional-sites) below).

**Inputs:**

* `anchor_date` (optional) -> The starting date for historical data extraction (default: 1 year ago). Format: `YYYY-MM-DD`
* `events_filter_query` (optional) -> Filter query for the events endpoint using [Datadog search syntax](https://docs.datadoghq.com/logs/explorer/search_syntax/) (e.g., `source:kubernetes`, `status:error`)
* `audit_filter_query` (optional) -> Filter query for the audit logs endpoint (e.g., `@action:login`, `@user.email:admin@company.com`)
* `monitor_tags` (optional) -> Comma-separated tags to filter monitors (e.g., `env:prod,team:backend`)
* `monitor_name` (optional) -> Filter monitors by name substring (e.g., `cpu`)
* `host_filter` (optional) -> Filter hosts by name, alias, or tag (e.g., `env:prod`)
* `include_hosts_metadata` (optional) -> Include additional host metadata like agent version and platform (`true`/`false`)
* `metrics_host` (optional) -> Filter active metrics to a specific hostname
* `metrics_tag_filter` (optional) -> Filter metrics by tags with boolean/wildcard support (e.g., `env:production AND service:api`)

### Getting Your API Key

1. Log in to your [Datadog Dashboard](https://app.datadoghq.com/)
2. Go to **Organization Settings** > **API Keys**
3. Click **+ New Key**, give it a name, and copy the key

### Getting Your Application Key

1. In the Datadog Dashboard, go to **Organization Settings** > **Application Keys**
2. Click **+ New Key**, give it a name, and copy the key

{% hint style="warning" %}
**Important:** Both an API key and an Application key are required. The API key alone is not sufficient for most read endpoints.
{% endhint %}

### Using `sling conns`

{% code overflow="wrap" %}

```bash
sling conns set DATADOG type=api spec=datadog secrets='{ api_key: your_api_key, app_key: your_app_key }'
```

{% endcode %}
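Once the connection is set, you can verify that the credentials authenticate correctly:

```shell
# Test connectivity and authentication for the DATADOG connection
sling conns test DATADOG
```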

### Environment Variable

See [here](https://docs.slingdata.io/sling-cli/environment#dot-env-file-.env.sling) to learn more about the `.env.sling` file.

{% code overflow="wrap" %}

```bash
export DATADOG='{ type: api, spec: datadog, secrets: { api_key: "your_api_key", app_key: "your_app_key" } }'
```

{% endcode %}

### Sling Env File YAML

See [here](https://docs.slingdata.io/sling-cli/environment#sling-env-file-env.yaml) to learn more about the sling `env.yaml` file.

```yaml
connections:
  DATADOG:
    type: api
    spec: datadog
    secrets:
      api_key: "your_api_key"
      app_key: "your_app_key"
```

**For EU or other regional sites:**

```yaml
connections:
  DATADOG:
    type: api
    spec: datadog
    secrets:
      api_key: "your_api_key"
      app_key: "your_app_key"
      site: "datadoghq.eu"
```

## Replication

Here's an example replication configuration to sync Datadog data to a PostgreSQL database:

```yaml
source: DATADOG
target: MY_POSTGRES

defaults:
  mode: full-refresh
  object: datadog.{stream_name}

streams:
  # sync all endpoints
  '*':

  # incremental sync for event-based endpoints
  events:
    mode: incremental
  audit_logs:
    mode: incremental
```
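Assuming the configuration above is saved as `datadog.yaml` (an illustrative filename), the replication can be executed with:

```shell
# Run all streams defined in the replication config
sling run -r datadog.yaml

# Or run only a subset of streams
sling run -r datadog.yaml --streams events,audit_logs
```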

## Endpoints

### Core Monitoring

| Endpoint     | Description                                            | Incremental |
| ------------ | ------------------------------------------------------ | ----------- |
| `monitors`   | Alert monitors and their configurations                | No          |
| `dashboards` | Dashboard summaries                                    | No          |
| `events`     | Events with timestamp-based incremental sync           | Yes         |
| `audit_logs` | Audit log events with timestamp-based incremental sync | Yes         |

### Infrastructure & Testing

| Endpoint           | Description                | Incremental |
| ------------------ | -------------------------- | ----------- |
| `hosts`            | Hosts in the organization  | No          |
| `synthetics_tests` | Synthetic monitoring tests | No          |
| `metrics`          | Active metric names        | No          |

### Operational

| Endpoint    | Description              | Incremental |
| ----------- | ------------------------ | ----------- |
| `slos`      | Service Level Objectives | No          |
| `downtimes` | Scheduled downtimes      | No          |
| `users`     | Organization users       | No          |
| `incidents` | Incidents                | No          |

To discover available endpoints:

```bash
sling conns discover DATADOG
```
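For a quick look at a single endpoint's output without writing a replication config, an ad-hoc run to stdout can help (the stream name and limit below are illustrative):

```shell
# Preview the first few monitor records on stdout
sling run --src-conn DATADOG --src-stream monitors --stdout --limit 5
```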

## Filtering

Several endpoints support optional filter parameters to narrow down results. Filters can be set in two ways:

1. **Connection inputs** — set globally on the connection (applies to all replications)
2. **Replication store hooks** — set per-replication (overrides connection inputs)

### Via Connection Inputs

```yaml
connections:
  DATADOG:
    type: api
    spec: datadog
    secrets:
      api_key: "your_api_key"
      app_key: "your_app_key"
    inputs:
      events_filter_query: "source:datadog"
      monitor_tags: "env:prod,team:backend"
      metrics_tag_filter: "env:production"
```

### Via Replication Store Hooks

Use `store` hooks to set filters per-replication without modifying the connection:

```yaml
source: DATADOG
target: MY_POSTGRES

defaults:
  mode: full-refresh
  object: datadog.{stream_name}

hooks:
  start:
    - type: store
      map:
        events_filter_query: "source:kubernetes"
        monitor_name: "CPU"

streams:
  events:
  monitors:
```

Store values take priority over connection inputs, so you can set defaults in the connection and override per-replication.

### Available Filters

| Filter                   | Endpoint     | Description                                                                                                                                         |
| ------------------------ | ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `events_filter_query`    | `events`     | [Datadog search syntax](https://docs.datadoghq.com/logs/explorer/search_syntax/) (e.g., `source:kubernetes`, `status:error`, `tags:env:production`) |
| `audit_filter_query`     | `audit_logs` | Search audit events (e.g., `@action:login`, `@user.email:admin@company.com`)                                                                        |
| `monitor_tags`           | `monitors`   | Comma-separated scope tags (e.g., `env:prod,role:web`)                                                                                              |
| `monitor_name`           | `monitors`   | Substring match on monitor name (e.g., `cpu`)                                                                                                       |
| `host_filter`            | `hosts`      | Filter by name, alias, or tag (e.g., `env:prod`)                                                                                                    |
| `include_hosts_metadata` | `hosts`      | Include additional host metadata like agent version and platform (`true`/`false`)                                                                   |
| `metrics_host`           | `metrics`    | Filter to metrics from a specific host                                                                                                              |
| `metrics_tag_filter`     | `metrics`    | Tag filter with boolean/wildcard support (e.g., `env:production AND service:api`)                                                                   |

## Incremental Sync

The `events` and `audit_logs` endpoints support incremental sync via timestamp:

* **First run:** Fetches all records from `anchor_date` (default: 1 year ago) to present
* **Subsequent runs:** Only fetches records created after the last sync timestamp
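To bound the initial backfill, set the `anchor_date` input on the connection, for example in `env.yaml` (the date below is illustrative):

```yaml
connections:
  DATADOG:
    type: api
    spec: datadog
    secrets:
      api_key: "your_api_key"
      app_key: "your_app_key"
    inputs:
      anchor_date: "2024-06-01"  # first run backfills from this date instead of 1 year ago
```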

## Regional Sites

Datadog operates in multiple regions. Set the `site` secret to use a different region:

| Region        | Site Value          |
| ------------- | ------------------- |
| US1 (default) | `datadoghq.com`     |
| US3           | `us3.datadoghq.com` |
| US5           | `us5.datadoghq.com` |
| EU            | `datadoghq.eu`      |
| AP1           | `ap1.datadoghq.com` |
| US1-FED       | `ddog-gov.com`      |

## Rate Limiting

The Datadog API has rate limits that vary by endpoint. The connector automatically:

* Uses conservative rate limiting (5 requests/second)
* Retries with exponential backoff on 429 (rate limit) responses
* Retries on 5xx server errors with linear backoff

If you are facing issues connecting, please reach out to us at <support@slingdata.io>, on [Discord](https://discord.gg/q5xtaSNDvp), or open a GitHub issue [here](https://github.com/slingdata-io/sling-cli/issues).
