Frame.io Python SDK — Upload Guide

This guide explains how to upload files to Frame.io using the Frame.io Python SDK (frameio). The SDK handles chunked multi-part uploads to S3 via pre-signed URLs, with parallel workers, automatic retries, and optional progress tracking.

For the general upload API concepts (upload URLs, headers, chunking), see How Local & Remote Uploads Work.


Prerequisites

1. Authentication

You have a working Frameio client. See the Authentication Guide for setup.

2. Install the SDK

```shell
$ pip install frameio
```

3. Target folder

You need the account_id and folder_id where the file should be uploaded. Use the SDK to find them:

```python
# List your accounts
accounts = client.accounts.index()
account_id = accounts.data[0].id

# List workspaces in the account
workspaces = client.workspaces.index(account_id=account_id)
workspace_id = workspaces.data[0].id

# List projects in the workspace
projects = client.projects.index(account_id=account_id, workspace_id=workspace_id)
project = projects.data[0]

# The project's root folder is the top-level upload target
folder_id = project.root_folder_id

# Or list subfolders to upload into a specific one
folders = client.folders.list(account_id=account_id, folder_id=folder_id)
```

Quick Start

```python
import os
from frameio import Frameio
from frameio.files import FileCreateLocalUploadParamsData
from frameio.upload import FrameioUploader

client = Frameio(token="YOUR_TOKEN")

file_path = "/path/to/video.mp4"
file_size = os.path.getsize(file_path)

# 1. Create the file resource and get pre-signed upload URLs
response = client.files.create_local_upload(
    account_id="YOUR_ACCOUNT_ID",
    folder_id="YOUR_FOLDER_ID",
    data=FileCreateLocalUploadParamsData(
        name="video.mp4",
        file_size=file_size,
    ),
)

# 2. Upload the file to S3
with open(file_path, "rb") as f:
    FrameioUploader(response.data, f).upload()
```

That’s it. The SDK splits the file into chunks based on the upload URLs returned by the API, uploads them in parallel, and handles retries automatically.


How It Works

Local upload is a two-step process:

1. Create file resource

Call client.files.create_local_upload() with the file name and size. The API creates a placeholder file and returns pre-signed S3 PUT URLs — one per chunk. The number of URLs (and therefore chunks) depends on the file size.
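The SDK handles chunk sizing for you, but it helps to see the shape of the data: if the API returns N upload URLs, the uploader sends N chunks whose sizes sum to the file size. The sketch below shows one plausible even-split policy; the real policy is decided server-side, and `chunk_sizes` is a hypothetical helper, not part of the SDK:

```python
def chunk_sizes(file_size: int, num_urls: int) -> list[int]:
    """Split file_size into num_urls chunks: equal-sized chunks,
    with any remainder folded into the last one. Illustrative only —
    the actual chunk boundaries come from the API's upload URLs."""
    base = file_size // num_urls
    sizes = [base] * num_urls
    sizes[-1] += file_size % num_urls
    return sizes
```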

2. Upload to S3

FrameioUploader reads the upload URLs from the response, splits your file into matching chunks, and PUTs each chunk to its URL in parallel using a thread pool. Each request includes the required x-amz-acl: private and Content-Type headers.

The upload goes directly from your application to S3 — it does not pass through the Frame.io API servers. This is the same pattern used by services like YouTube, Vimeo, and Dropbox for large file uploads.
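The parallel fan-out described above can be sketched as follows. `upload_chunks` and its injected `put` transport are illustrative stand-ins, not SDK APIs; the transport is passed in so the concurrency logic stands on its own:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunks(urls, chunks, put, max_workers=5):
    """PUT each chunk to its matching pre-signed URL in parallel.
    `put` would wrap an HTTP call such as requests.put(url, data=chunk,
    headers=headers); a real transport would set the file's actual
    Content-Type rather than this placeholder."""
    headers = {"x-amz-acl": "private", "Content-Type": "application/octet-stream"}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(put, url, chunk, headers)
                   for url, chunk in zip(urls, chunks)]
        for future in futures:
            future.result()  # re-raise any per-chunk failure
```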


Using FrameioUploader

FrameioUploader is the recommended way to upload files. It wraps the lower-level chunked uploader and handles all the details — extracting upload URLs from the API response, setting required headers, chunking the file, and uploading in parallel.

Progress Tracking

Use the on_progress callback to track upload progress:

```python
def on_progress(bytes_uploaded: int, total_bytes: int) -> None:
    pct = bytes_uploaded / total_bytes * 100
    print(f"\r{pct:.1f}% ({bytes_uploaded:,} / {total_bytes:,} bytes)", end="", flush=True)

with open(file_path, "rb") as f:
    FrameioUploader(response.data, f, on_progress=on_progress).upload()

print("\nUpload complete!")
```

The callback is invoked once after each chunk completes, with the cumulative bytes uploaded so far and the total file size.
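Because each invocation carries the cumulative total, a small stateful wrapper can derive throughput and an ETA. `UploadMeter` is an illustrative helper, not part of the SDK:

```python
import time

class UploadMeter:
    """Tracks cumulative progress reported via on_progress and derives an ETA."""

    def __init__(self):
        self.start = time.monotonic()
        self.bytes_uploaded = 0

    def on_progress(self, bytes_uploaded: int, total_bytes: int) -> None:
        self.bytes_uploaded = bytes_uploaded
        elapsed = time.monotonic() - self.start
        rate = bytes_uploaded / elapsed if elapsed else 0.0
        eta = (total_bytes - bytes_uploaded) / rate if rate else float("inf")
        print(f"\r{bytes_uploaded / total_bytes:.1%} (~{eta:.0f}s remaining)", end="", flush=True)
```

Pass meter.on_progress as the on_progress argument when constructing FrameioUploader.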

Rich progress bar

For a polished terminal experience, use Rich:

```python
from rich.progress import Progress, BarColumn, DownloadColumn, TransferSpeedColumn, TimeRemainingColumn

with Progress(
    "[progress.description]{task.description}",
    BarColumn(),
    DownloadColumn(),
    TransferSpeedColumn(),
    TimeRemainingColumn(),
) as progress:
    task = progress.add_task("Uploading...", total=file_size)

    with open(file_path, "rb") as f:
        FrameioUploader(
            response.data, f,
            on_progress=lambda done, total: progress.update(task, completed=done),
        ).upload()
```

Configuration

FrameioUploader accepts several optional parameters:

| Parameter | Default | Description |
| --- | --- | --- |
| max_workers | 5 | Number of concurrent upload threads |
| headers | {"x-amz-acl": "private"} | Headers sent with every S3 PUT request; custom headers are merged with the defaults |
| max_retries | 3 | Retry attempts per chunk (exponential backoff: 1s, 2s, 4s, …) |
| on_progress | None | Callback (bytes_uploaded, total_bytes) fired after each chunk |

```python
with open(file_path, "rb") as f:
    FrameioUploader(
        response.data,
        f,
        max_workers=10,   # more parallelism for high-bandwidth connections
        max_retries=5,    # more resilient on flaky networks
        on_progress=on_progress,
    ).upload()
```

Full Example

A complete example with authentication, upload, and progress tracking:

```python
import os
from frameio import Frameio
from frameio.auth import ServerToServerAuth
from frameio.files import FileCreateLocalUploadParamsData
from frameio.upload import FrameioUploader

# Authenticate
auth = ServerToServerAuth(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
)
client = Frameio(token=auth.get_token)

# Prepare the file
file_path = "/path/to/video.mp4"
file_name = os.path.basename(file_path)
file_size = os.path.getsize(file_path)

# Create the file resource
response = client.files.create_local_upload(
    account_id="YOUR_ACCOUNT_ID",
    folder_id="YOUR_FOLDER_ID",
    data=FileCreateLocalUploadParamsData(
        name=file_name,
        file_size=file_size,
    ),
)

print(f"Uploading {file_name} ({file_size:,} bytes) in {len(response.data.upload_urls)} chunks...")

# Upload with progress
def on_progress(uploaded: int, total: int) -> None:
    print(f"\r{uploaded / total:.0%}", end="", flush=True)

with open(file_path, "rb") as f:
    FrameioUploader(response.data, f, on_progress=on_progress).upload()

print(f"\nDone! View at: {response.data.view_url}")
```

If you need full control over the upload process — for example, to handle chunking manually, integrate with an async pipeline, or customize retry logic — see How Local & Remote Uploads Work for the raw API flow and a standalone Python script example.
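As one example of custom retry logic, the backoff schedule described earlier (1s, 2s, 4s, …) can be reproduced with a small wrapper. `put_with_retries` and its injected `put` transport are hypothetical names for illustration, not SDK APIs:

```python
import time

def put_with_retries(put, url, chunk, max_retries=3):
    """Attempt a chunk PUT, retrying up to max_retries times with
    exponential backoff (1s, 2s, 4s, ...). Illustrative sketch only."""
    for attempt in range(max_retries + 1):
        try:
            return put(url, chunk)
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(2 ** attempt)
```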


Remote Upload

If your file is already accessible via a public URL, use remote upload instead. No chunking is needed — Frame.io fetches the file directly:

```python
from frameio.files import FileCreateRemoteUploadParamsData

response = client.files.create_remote_upload(
    account_id="YOUR_ACCOUNT_ID",
    folder_id="YOUR_FOLDER_ID",
    data=FileCreateRemoteUploadParamsData(
        name="video.mp4",
        source_url="https://example.com/video.mp4",
    ),
)
print(f"File created: {response.data.id}")
```

Remote upload currently has a 50 GB file size limit. For files larger than 50 GB, use local upload instead.
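If your pipeline handles both cases, a small dispatcher can pick the path. This sketch assumes the 50 GB limit is measured in binary gigabytes (verify against the API docs), and `choose_upload_method` is a hypothetical helper:

```python
FIFTY_GB = 50 * 1024**3  # assumption: limit counted in binary gigabytes

def choose_upload_method(file_size: int, has_public_url: bool) -> str:
    """Prefer remote upload when a public URL exists and the file
    fits under the 50 GB remote-upload limit; otherwise upload locally."""
    if has_public_url and file_size <= FIFTY_GB:
        return "remote"
    return "local"
```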


Checking Upload Status

After uploading, you can verify the file was received:

```python
status = client.files.show_file_upload_status(
    account_id="YOUR_ACCOUNT_ID",
    file_id=response.data.id,
)
print(f"Upload complete: {status.data.upload_complete}")
```
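If the flag is not yet True immediately after uploading, you can poll until it flips. This generic polling sketch takes the status fetch as an injected callable (illustrative, not an SDK API):

```python
import time

def wait_for_upload(fetch_status, timeout=300, interval=5):
    """Poll fetch_status() until it returns True or the timeout expires.
    fetch_status should return the upload_complete boolean."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_status():
            return True
        time.sleep(interval)
    return False
```

For example: wait_for_upload(lambda: client.files.show_file_upload_status(account_id="YOUR_ACCOUNT_ID", file_id=response.data.id).data.upload_complete)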