The File Upload API lets you upload files directly to Definite Drive, a shared storage space accessible from Fi (Definite’s AI agent). Upload CSVs, JSON files, or any data you want Fi to analyze.

How it works

  1. Request an upload URL - POST to the File Upload API with your desired file path.
  2. Receive a signed URL - Get back a pre-signed GCS URL valid for 1 hour.
  3. Upload your file - PUT your file directly to Google Cloud Storage using the signed URL.
  4. Access from Fi - Your file is available at /home/user/drive/{path} in Fi sessions.

Endpoint

POST https://api.definite.app/v3/drive/upload-url

Authentication

Include your API key in the Authorization header:
Authorization: Bearer YOUR_API_KEY
Your API key can be found in the bottom left user menu of the Definite app.

Request Body

{
  "path": "data/reports/q4-2024.csv"
}

Fields

Field   Type     Required   Description
path    string   Yes        Path for the file within your drive (e.g., data/reports/q4.csv)

Path Guidelines

  • Use forward slashes for nested directories (e.g., data/reports/file.csv)
  • Paths are relative to your team’s drive root
  • Directory traversal (..) and absolute paths (/) are not allowed
  • Backslashes are not allowed
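
If you want to catch invalid paths before calling the API, a minimal client-side check might look like the sketch below (illustrative only; validate_drive_path is not part of any Definite SDK, and the server performs its own validation):

def validate_drive_path(path: str) -> None:
    """Reject paths that the upload-url endpoint would refuse (illustrative check)."""
    if not path:
        raise ValueError("Path cannot be empty")
    if "\\" in path:
        raise ValueError("Backslashes are not allowed; use forward slashes")
    if path.startswith("/") or ".." in path.split("/"):
        raise ValueError("Absolute paths and path traversal are not allowed")


validate_drive_path("data/reports/q4-2024.csv")   # OK
# validate_drive_path("../etc/passwd")            # raises ValueError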

Response

{
  "upload_url": "https://storage.googleapis.com/...",
  "gcs_path": "gs://bucket/team-id/drive/data/reports/q4-2024.csv",
  "drive_path": "/home/user/drive/data/reports/q4-2024.csv"
}

Response Fields

Field        Description
upload_url   Pre-signed PUT URL for uploading directly to GCS (valid for 1 hour)
gcs_path     Full Google Cloud Storage path where the file will be stored
drive_path   Path where the file will be accessible in Fi sessions

Uploading the File

After receiving the signed URL, upload your file using an HTTP PUT request:
curl -X PUT -T /path/to/your/file.csv "UPLOAD_URL_FROM_RESPONSE"
The signed URL expires after 1 hour. Request a new URL if your upload takes longer.

Limits

Parameter        Limit        Description
URL expiration   1 hour       Signed URLs are valid for 60 minutes
Max file size    5 TB         Google Cloud Storage limit per object
Path length      1024 chars   Maximum file path length
For files larger than 100 MB, consider using the -T flag with curl for streaming uploads, which avoids loading the entire file into memory.

Examples

Bash / cURL

Get upload URL and upload a file

# Step 1: Get the signed upload URL
RESPONSE=$(curl -s -X POST "https://api.definite.app/v3/drive/upload-url" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"path": "data/sales-data.csv"}')

# Extract the upload URL (requires jq)
UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.upload_url')

# Step 2: Upload your file
curl -X PUT -T /path/to/sales-data.csv "$UPLOAD_URL"
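
To verify the upload (see Best Practices below), the PUT in step 2 can instead capture the HTTP status code; this is an optional variant, not a separate step:

# Step 2 (variant): upload and capture the HTTP status code
STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X PUT -T /path/to/sales-data.csv "$UPLOAD_URL")
if [ "$STATUS" != "200" ]; then
  echo "Upload failed with HTTP $STATUS" >&2
fi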

One-liner for quick uploads

curl -X PUT -T myfile.csv "$(curl -s -X POST 'https://api.definite.app/v3/drive/upload-url' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"path": "myfile.csv"}' | jq -r '.upload_url')"

Upload to nested directory

curl -s -X POST "https://api.definite.app/v3/drive/upload-url" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"path": "reports/2024/q4/revenue.csv"}'

Python

Basic upload

import httpx

API_KEY = "YOUR_API_KEY"
API_URL = "https://api.definite.app/v3/drive/upload-url"


def upload_to_drive(file_path: str, drive_path: str) -> dict:
    """Upload a file to Definite Drive."""

    # Step 1: Get signed upload URL
    response = httpx.post(
        API_URL,
        json={"path": drive_path},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30.0,
    )
    response.raise_for_status()
    result = response.json()

    # Step 2: Upload file to GCS
    with open(file_path, "rb") as f:
        upload_response = httpx.put(
            result["upload_url"],
            content=f,
            timeout=300.0,  # 5 min timeout for large files
        )
        upload_response.raise_for_status()

    return result


# Example usage
result = upload_to_drive(
    file_path="/path/to/local/data.csv",
    drive_path="data/uploads/data.csv",
)

print(f"File uploaded to: {result['drive_path']}")

Upload with progress tracking

import httpx
from pathlib import Path

API_KEY = "YOUR_API_KEY"


def upload_with_progress(file_path: str, drive_path: str) -> dict:
    """Upload a file with progress tracking."""

    file_size = Path(file_path).stat().st_size
    chunk_size = 1024 * 1024  # 1 MB chunks

    # Get signed URL
    response = httpx.post(
        "https://api.definite.app/v3/drive/upload-url",
        json={"path": drive_path},
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    response.raise_for_status()
    result = response.json()

    # Stream the file to the signed URL in chunks, printing progress as each chunk is sent
    def read_chunks():
        uploaded = 0
        with open(file_path, "rb") as f:
            while chunk := f.read(chunk_size):
                uploaded += len(chunk)
                print(f"Progress: {uploaded / file_size * 100:.1f}%", end="\r")
                yield chunk

    upload_response = httpx.put(
        result["upload_url"],
        content=read_chunks(),
        timeout=300.0,  # generous timeout for large files
    )
    upload_response.raise_for_status()

    print(f"\nUpload complete: {result['drive_path']}")
    return result

Batch upload multiple files

import httpx
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor

API_KEY = "YOUR_API_KEY"


def batch_upload(files: list[tuple[str, str]]) -> list[dict]:
    """
    Upload multiple files in parallel.

    Args:
        files: List of (local_path, drive_path) tuples
    """

    def upload_one(local_path: str, drive_path: str) -> dict:
        response = httpx.post(
            "https://api.definite.app/v3/drive/upload-url",
            json={"path": drive_path},
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        response.raise_for_status()
        result = response.json()

        with open(local_path, "rb") as f:
            upload = httpx.put(result["upload_url"], content=f, timeout=300.0)
            upload.raise_for_status()

        return result

    with ThreadPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(lambda x: upload_one(*x), files))

    return results


# Example: Upload all CSVs from a directory
files_to_upload = [
    (str(f), f"data/{f.name}")
    for f in Path("./reports").glob("*.csv")
]

results = batch_upload(files_to_upload)
print(f"Uploaded {len(results)} files")

Error Handling

HTTP Status Codes

Status   Meaning
200      Success - signed URL generated
400      Bad request - invalid path (empty, traversal attempt, etc.)
403      Forbidden - invalid or missing API key
500      Server error - retry with backoff
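
For transient 5xx responses, retrying the upload-url request with exponential backoff is usually enough. The sketch below is illustrative; the retry count and delays are arbitrary choices rather than API requirements:

import time
import httpx

API_KEY = "YOUR_API_KEY"


def get_upload_url_with_retry(drive_path: str, max_attempts: int = 4) -> dict:
    """Request a signed upload URL, retrying 5xx responses with exponential backoff."""
    for attempt in range(max_attempts):
        response = httpx.post(
            "https://api.definite.app/v3/drive/upload-url",
            json={"path": drive_path},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30.0,
        )
        if response.status_code < 500:
            response.raise_for_status()  # surface 4xx errors immediately
            return response.json()
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    # all attempts returned 5xx; raise the last error
    response.raise_for_status()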

Common Errors

Error Message                                 Cause                               Solution
Path cannot be empty                          Empty path provided                 Provide a valid file path
Invalid path: path traversal is not allowed   Path contains .. or starts with /   Use relative paths only
Invalid path: backslashes are not allowed     Path contains \                     Use forward slashes /

Upload Errors

When uploading to the signed URL:
Status   Meaning
200      Success - file uploaded
403      URL expired or invalid - request a new URL
413      File too large
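
A 403 from the signed URL is recoverable: request a fresh URL and retry the PUT. A minimal sketch of that pattern (assuming the same API key and httpx usage as the Python examples above):

import httpx

API_KEY = "YOUR_API_KEY"
API_URL = "https://api.definite.app/v3/drive/upload-url"


def upload_with_refresh(file_path: str, drive_path: str) -> dict:
    """Upload a file, requesting one fresh signed URL if the first PUT returns 403."""
    for _ in range(2):  # at most one refresh
        response = httpx.post(
            API_URL,
            json={"path": drive_path},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30.0,
        )
        response.raise_for_status()
        result = response.json()

        with open(file_path, "rb") as f:
            upload = httpx.put(result["upload_url"], content=f, timeout=300.0)
        if upload.status_code != 403:
            upload.raise_for_status()
            return result
        # 403: the URL expired or is invalid; loop once to request a new one
    upload.raise_for_status()  # still 403 after a refresh; give up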

Accessing Files in Fi

Once uploaded, your files are available in Fi sessions at /home/user/drive/. You can ask Fi to:
  • Read and analyze CSV files
  • Process JSON data
  • Work with any uploaded content
Example prompt to Fi:
“Analyze the sales data I uploaded to /home/user/drive/data/sales-data.csv”

Best Practices

  1. Organize with directories - Use meaningful paths like data/reports/2024/q4.csv for easy navigation
  2. Use streaming for large files - Use -T with curl or stream uploads in Python to avoid memory issues
  3. Handle URL expiration - Request a new URL if your upload will take more than an hour
  4. Verify uploads - Check for successful HTTP 200 response after uploading

Related

  • Stream API - Push JSON data directly into DuckLake tables
  • Webhooks - Trigger Definite blocks from external events