The File Upload API lets you upload files directly to Definite Drive, a shared storage space accessible from Fi (Definite’s AI agent). Upload CSVs, JSON files, or any data you want Fi to analyze.
How it works
1. Request an upload URL - POST to the File Upload API with your desired file path.
2. Receive a signed URL - Get back a pre-signed GCS URL valid for 1 hour.
3. Upload your file - PUT your file directly to Google Cloud Storage using the signed URL.
4. Access from Fi - Your file is available at `/home/user/drive/{path}` in Fi sessions.
Endpoint
```
POST https://api.definite.app/v3/drive/upload-url
```
Authentication
Include your API key in the Authorization header:
```
Authorization: Bearer YOUR_API_KEY
```
Your API key can be found in the bottom left user menu of the Definite app.
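The Python examples below assume an API_KEY variable. One way to supply it is via an environment variable; the DEFINITE_API_KEY name here is illustrative, not prescribed by the API:

```python
import os

# Illustrative variable name; keep the key out of source control
API_KEY = os.environ["DEFINITE_API_KEY"]

headers = {"Authorization": f"Bearer {API_KEY}"}
```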
Request Body
```json
{
  "path": "data/reports/q4-2024.csv"
}
```
Fields
| Field | Type | Required | Description |
|---|---|---|---|
| path | string | Yes | Path for the file within your drive (e.g., `data/reports/q4.csv`) |
Path Guidelines
- Use forward slashes for nested directories (e.g., `data/reports/file.csv`)
- Paths are relative to your team's drive root
- Directory traversal (`..`) and absolute paths (leading `/`) are not allowed
- Backslashes are not allowed
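To catch bad paths before calling the API, a minimal client-side check mirroring these rules (plus the 1024-character limit from the Limits table below) might look like this; validate_drive_path is a hypothetical helper, not part of the API:

```python
def validate_drive_path(path: str) -> str:
    """Reject paths the API documents as invalid (client-side sketch)."""
    if not path:
        raise ValueError("Path cannot be empty")
    if len(path) > 1024:
        raise ValueError("Path exceeds the 1024-character limit")
    if "\\" in path:
        raise ValueError("Backslashes are not allowed; use forward slashes")
    if path.startswith("/") or ".." in path.split("/"):
        raise ValueError("Absolute paths and directory traversal are not allowed")
    return path
```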
Response
```json
{
  "upload_url": "https://storage.googleapis.com/...",
  "gcs_path": "gs://bucket/team-id/drive/data/reports/q4-2024.csv",
  "drive_path": "/home/user/drive/data/reports/q4-2024.csv"
}
```
Response Fields
| Field | Description |
|---|---|
| upload_url | Pre-signed PUT URL for uploading directly to GCS (valid for 1 hour) |
| gcs_path | Full Google Cloud Storage path where the file will be stored |
| drive_path | Path where the file will be accessible in Fi sessions |
Uploading the File
After receiving the signed URL, upload your file using an HTTP PUT request:
```bash
curl -X PUT -T /path/to/your/file.csv "UPLOAD_URL_FROM_RESPONSE"
```
The signed URL expires after 1 hour. Request a new URL if your upload takes longer.
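If an upload might outlive the URL, one option is to re-request a URL and retry when the PUT returns 403 (the status that Upload Errors below associates with an expired URL). A sketch, assuming API_KEY is set as shown under Authentication:

```python
import httpx

def upload_with_refresh(local_path: str, drive_path: str, max_attempts: int = 2) -> dict:
    """Retry an upload with a fresh signed URL if the PUT returns 403."""
    for attempt in range(max_attempts):
        response = httpx.post(
            "https://api.definite.app/v3/drive/upload-url",
            json={"path": drive_path},
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        response.raise_for_status()
        result = response.json()

        with open(local_path, "rb") as f:
            put = httpx.put(result["upload_url"], content=f, timeout=3600.0)
        if put.status_code == 403 and attempt < max_attempts - 1:
            continue  # URL likely expired mid-upload; get a fresh one and retry
        put.raise_for_status()
        return result
```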
Limits
| Parameter | Limit | Description |
|---|---|---|
| URL expiration | 1 hour | Signed URLs are valid for 60 minutes |
| Max file size | 5 TB | Google Cloud Storage limit per object |
| Path length | 1024 chars | Maximum file path length |
For files larger than 100 MB, consider using the `-T` flag with curl for streaming uploads, which avoids loading the entire file into memory.
Examples
Bash / cURL
Get upload URL and upload a file
```bash
# Step 1: Get the signed upload URL
RESPONSE=$(curl -s -X POST "https://api.definite.app/v3/drive/upload-url" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"path": "data/sales-data.csv"}')

# Extract the upload URL (requires jq)
UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.upload_url')

# Step 2: Upload your file
curl -X PUT -T /path/to/sales-data.csv "$UPLOAD_URL"
```
One-liner for quick uploads
```bash
curl -X PUT -T myfile.csv "$(curl -s -X POST 'https://api.definite.app/v3/drive/upload-url' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"path": "myfile.csv"}' | jq -r '.upload_url')"
```
Upload to nested directory
```bash
curl -s -X POST "https://api.definite.app/v3/drive/upload-url" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"path": "reports/2024/q4/revenue.csv"}'
```
Python
Basic upload
```python
import httpx

API_KEY = "YOUR_API_KEY"
API_URL = "https://api.definite.app/v3/drive/upload-url"

def upload_to_drive(file_path: str, drive_path: str) -> dict:
    """Upload a file to Definite Drive."""
    # Step 1: Get signed upload URL
    response = httpx.post(
        API_URL,
        json={"path": drive_path},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30.0,
    )
    response.raise_for_status()
    result = response.json()

    # Step 2: Upload file to GCS
    with open(file_path, "rb") as f:
        upload_response = httpx.put(
            result["upload_url"],
            content=f,
            timeout=300.0,  # 5 min timeout for large files
        )
    upload_response.raise_for_status()

    return result

# Example usage
result = upload_to_drive(
    file_path="/path/to/local/data.csv",
    drive_path="data/uploads/data.csv",
)
print(f"File uploaded to: {result['drive_path']}")
```
Upload with progress tracking
```python
import httpx
from pathlib import Path

def upload_with_progress(file_path: str, drive_path: str) -> dict:
    """Upload a file with progress tracking."""
    file_size = Path(file_path).stat().st_size

    # Get signed URL
    response = httpx.post(
        "https://api.definite.app/v3/drive/upload-url",
        json={"path": drive_path},
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    response.raise_for_status()
    result = response.json()

    # Upload with progress: feed the request body through a generator
    # so we can count bytes as httpx streams them to GCS
    chunk_size = 1024 * 1024  # 1 MB chunks

    def read_chunks():
        uploaded = 0
        with open(file_path, "rb") as f:
            while chunk := f.read(chunk_size):
                uploaded += len(chunk)
                print(f"Progress: {uploaded / file_size * 100:.1f}%", end="\r")
                yield chunk

    upload_response = httpx.put(
        result["upload_url"], content=read_chunks(), timeout=300.0
    )
    upload_response.raise_for_status()

    print(f"\nUpload complete: {result['drive_path']}")
    return result
```
Batch upload multiple files
```python
import httpx
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor

def batch_upload(files: list[tuple[str, str]]) -> list[dict]:
    """
    Upload multiple files in parallel.

    Args:
        files: List of (local_path, drive_path) tuples
    """
    def upload_one(local_path: str, drive_path: str) -> dict:
        response = httpx.post(
            "https://api.definite.app/v3/drive/upload-url",
            json={"path": drive_path},
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        response.raise_for_status()
        result = response.json()
        with open(local_path, "rb") as f:
            upload = httpx.put(result["upload_url"], content=f, timeout=300.0)
        upload.raise_for_status()  # surface failed uploads instead of ignoring them
        return result

    with ThreadPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(lambda x: upload_one(*x), files))
    return results

# Example: Upload all CSVs from a directory
files_to_upload = [
    (str(f), f"data/{f.name}")
    for f in Path("./reports").glob("*.csv")
]
results = batch_upload(files_to_upload)
print(f"Uploaded {len(results)} files")
```
Error Handling
HTTP Status Codes
| Status | Meaning |
|---|---|
| 200 | Success - signed URL generated |
| 400 | Bad request - invalid path (empty, traversal attempt, etc.) |
| 403 | Forbidden - invalid or missing API key |
| 500 | Server error - retry with backoff |
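For 500s, a simple exponential backoff around the URL request is enough; the retry count and delays below are arbitrary choices, not API requirements:

```python
import time
import httpx

def request_upload_url(drive_path: str, retries: int = 3) -> dict:
    """Request a signed URL, retrying 5xx responses with exponential backoff."""
    for attempt in range(retries):
        response = httpx.post(
            "https://api.definite.app/v3/drive/upload-url",
            json={"path": drive_path},
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        if response.status_code < 500:
            response.raise_for_status()  # 4xx errors will not succeed on retry
            return response.json()
        time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    response.raise_for_status()  # out of retries; raise the last 5xx error
```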
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| Path cannot be empty | Empty path provided | Provide a valid file path |
| Invalid path: path traversal is not allowed | Path contains `..` or starts with `/` | Use relative paths only |
| Invalid path: backslashes are not allowed | Path contains `\` | Use forward slashes (`/`) |
Upload Errors
When uploading to the signed URL:
| Status | Meaning |
|---|---|
| 200 | Success - file uploaded |
| 403 | URL expired or invalid - request a new URL |
| 413 | File too large |
Accessing Files in Fi
Once uploaded, your files are available in Fi sessions at /home/user/drive/. You can ask Fi to:
- Read and analyze CSV files
- Process JSON data
- Work with any uploaded content
Example prompt to Fi:
“Analyze the sales data I uploaded to /home/user/drive/data/sales-data.csv”
Best Practices
- Organize with directories - Use meaningful paths like `data/reports/2024/q4.csv` for easy navigation
- Use streaming for large files - Use `-T` with curl or stream uploads in Python to avoid memory issues
- Handle URL expiration - Request a new URL if your upload will take more than an hour
- Verify uploads - Check for successful HTTP 200 response after uploading
Related
- Stream API - Push JSON data directly into DuckLake tables
- Webhooks - Trigger Definite blocks from external events