How it works
1. Send a POST - POST JSON or NDJSON data to the Stream API endpoint with your target table.
2. Authenticate - Include your API key in the Authorization header.
3. Data lands in DuckLake - Definite writes your data to an Iceberg table with automatic schema handling.
4. Query immediately - Your data is available for querying right away.
Endpoint
POST https://api.definite.app/v2/stream
Authentication
Include your API key in the Authorization header:
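A minimal sketch, assuming the key is sent as a Bearer token (adjust if your key uses a different scheme):

```bash
# DEFINITE_API_KEY holds your API key; the Bearer scheme is an assumption.
curl https://api.definite.app/v2/stream \
  -H "Authorization: Bearer $DEFINITE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"event": "ping"}, "config": {"table": "bronze.events"}}'
```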
Request Body
Fields
| Field | Type | Required | Description |
|---|---|---|---|
| data | object or array | Yes | Single record or array of records to ingest |
| config.table | string | Yes | Target table in schema.table format (e.g., bronze.events) |
| config.mode | string | No | Ingestion mode. Only append is supported. Default: append |
| config.wait | boolean | No | Wait for commit and return snapshot ID. Default: false |
| config.tags | object | No | Optional metadata tags for tracing |
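For illustration, a batched request body might look like the following (the record fields and tag values are arbitrary examples):

```json
{
  "data": [
    {"event": "signup", "user_id": 1, "ts": "2024-01-01T00:00:00Z"},
    {"event": "login", "user_id": 2, "ts": "2024-01-01T00:05:00Z"}
  ],
  "config": {
    "table": "bronze.events",
    "mode": "append",
    "wait": true,
    "tags": {"source": "web"}
  }
}
```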
Response
Response Fields
| Field | Description |
|---|---|
| success | Whether the ingestion was successful |
| request_id | Unique identifier for this request |
| stream_id | Unique identifier for this stream |
| table | Fully qualified table name |
| accepted | Number of rows parsed and accepted |
| successful_rows | Number of rows successfully written |
| rejected_rows | Number of rows rejected due to validation |
| partitions | Human-friendly partition summary |
| snapshot_id | Iceberg snapshot ID (present when wait=true) |
| warnings | Warning messages |
| errors | Error messages |
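A hypothetical successful response with wait=true might look like this (every value is illustrative, including the shape of partitions and snapshot_id):

```json
{
  "success": true,
  "request_id": "req_abc123",
  "stream_id": "stream_def456",
  "table": "bronze.events",
  "accepted": 2,
  "successful_rows": 2,
  "rejected_rows": 0,
  "partitions": "1 partition (2024-01-01)",
  "snapshot_id": 8542031261117126014,
  "warnings": [],
  "errors": []
}
```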
Limits
| Parameter | Limit | Description |
|---|---|---|
| Max payload size | 10 MB | Maximum request body size |
| Max rows per request | 50,000 | Maximum number of records per request |
| Max field size | 1 MB | Maximum size of any individual field |
| Max nested depth | 10 | Maximum JSON nesting depth |
Content Types
The Stream API accepts:
- JSON (application/json) - Single object or array of objects
- NDJSON (application/x-ndjson) - Newline-delimited JSON
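An NDJSON body is simply one JSON object per line, for example:

```
{"event": "signup", "user_id": 1}
{"event": "login", "user_id": 2}
```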
Compression
You can gzip-compress your payload to reduce transfer time; see the Python with compression example below.
Examples
Python
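A minimal sketch using the requests library; the Bearer auth scheme and the record fields are assumptions:

```python
import requests

API_KEY = "YOUR_API_KEY"  # illustrative; use your real Definite API key

response = requests.post(
    "https://api.definite.app/v2/stream",
    headers={
        "Authorization": f"Bearer {API_KEY}",  # Bearer scheme is an assumption
        "Content-Type": "application/json",
    },
    json={
        "data": [
            {"event": "signup", "user_id": 1},
            {"event": "login", "user_id": 2},
        ],
        "config": {"table": "bronze.events", "wait": True},
    },
)
response.raise_for_status()
print(response.json())
```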
Python with compression
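A sketch of the same request with a gzip-compressed body; the Content-Encoding: gzip header is an assumption about how the endpoint detects compression:

```python
import gzip
import json

import requests

API_KEY = "YOUR_API_KEY"  # illustrative

payload = {
    "data": [{"event": "signup", "user_id": i} for i in range(10_000)],
    "config": {"table": "bronze.events"},
}
body = gzip.compress(json.dumps(payload).encode("utf-8"))

response = requests.post(
    "https://api.definite.app/v2/stream",
    headers={
        "Authorization": f"Bearer {API_KEY}",  # Bearer scheme is an assumption
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",            # assumed compression signal
    },
    data=body,
)
response.raise_for_status()
print(response.json())
```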
cURL
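The equivalent request in cURL, again assuming a Bearer token header:

```bash
# Set DEFINITE_API_KEY in your environment; the Bearer scheme is an assumption.
curl -X POST https://api.definite.app/v2/stream \
  -H "Authorization: Bearer $DEFINITE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": [{"event": "signup", "user_id": 1}],
    "config": {"table": "bronze.events", "wait": true}
  }'
```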
cURL with NDJSON
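A sketch posting newline-delimited records from a file. How the target table is specified for an NDJSON body (since the JSON config wrapper does not apply) is not covered on this page, so the table query parameter below is purely hypothetical:

```bash
# events.ndjson contains one JSON object per line
cat > events.ndjson <<'EOF'
{"event": "signup", "user_id": 1}
{"event": "login", "user_id": 2}
EOF

# The ?table=... query parameter is hypothetical; consult the API reference
# for how to target a table when sending NDJSON.
curl -X POST "https://api.definite.app/v2/stream?table=bronze.events" \
  -H "Authorization: Bearer $DEFINITE_API_KEY" \
  -H "Content-Type: application/x-ndjson" \
  --data-binary @events.ndjson
```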
Error Handling
HTTP Status Codes
| Status | Meaning |
|---|---|
| 200 | Success - data ingested |
| 400 | Bad request - invalid JSON or schema |
| 401 | Unauthorized - invalid or missing API key |
| 413 | Payload too large - exceeds 10MB limit |
| 429 | Rate limited - too many requests |
| 500 | Server error - retry with backoff |
Retry Strategy
For transient errors (429, 5xx), implement exponential backoff:
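A minimal retry sketch in Python; the backoff parameters are illustrative, not prescribed by the API:

```python
import time

import requests

MAX_RETRIES = 5

def stream_with_retry(payload, api_key):
    """POST to the Stream API, retrying 429/5xx responses with exponential backoff."""
    response = None
    for attempt in range(MAX_RETRIES):
        response = requests.post(
            "https://api.definite.app/v2/stream",
            headers={"Authorization": f"Bearer {api_key}"},  # Bearer scheme assumed
            json=payload,
        )
        if response.status_code not in (429, 500, 502, 503, 504):
            return response
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, 8s, 16s
    return response
```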
Best Practices
- Batch your data - Send multiple records per request (up to 50,000) rather than one at a time
- Use compression - For large payloads, enable gzip compression to reduce transfer time
- Handle partial failures - Check rejected_rows in the response; some rows may fail validation
- Include idempotency keys - Add a unique ID field to your records for deduplication (see the sketch below)
- Use appropriate tables - Organize data into logical tables (e.g., bronze.events, bronze.users)
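For the idempotency-key point, a sketch of tagging each record with a unique ID before sending; the event_id field name is arbitrary:

```python
import uuid

records = [{"event": "signup"}, {"event": "login"}]

# Attach a stable unique ID to each record before sending, so downstream
# deduplication can drop repeats if the same batch is retried.
for record in records:
    record["event_id"] = str(uuid.uuid4())

payload = {"data": records, "config": {"table": "bronze.events"}}
```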
Related
- Push-Based Data Ingestion - Run your own extractor for sensitive deployments
- Webhooks - Trigger Definite blocks from external events

