Using S3-Compatible Object Storage (C3S) with Ceph RADOS Gateway

C3S Storage is CanHost’s S3-compatible object storage, hosted in Canada and backed by Ceph. It works with the same tools and SDKs you’d use for Amazon S3, because it speaks the S3 API via Ceph’s RADOS Gateway (RGW).


What is “S3 Object Storage”?

Object storage stores data as objects inside buckets:

  • Bucket = a top-level container (like a project or app namespace)
  • Object = the file/data you store (backups, images, logs, archives, etc.)
  • Object key = the object’s “name” inside a bucket (often looks like a path: backups/2026-02-04/db.sql.gz)

Objects are accessed over HTTPS using standard S3 REST API requests. Ceph RGW provides that S3-compatible endpoint.
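For illustration, an object with key backups/2026-02-04/db.sql.gz in a bucket named my-example-bucket is typically addressable in one of two URL styles (the hostname below is the example C3S endpoint used throughout this article):

https://c3s-ca-west1.canhost.ca/my-example-bucket/backups/2026-02-04/db.sql.gz   (path-style)
https://my-example-bucket.c3s-ca-west1.canhost.ca/backups/2026-02-04/db.sql.gz   (virtual-hosted style)

Path-style addressing is the safer default with many S3-compatible endpoints (see the forcePathStyle note in the Node.js example below).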


How authentication works (Access Key + Secret Key)

To access C3S you’ll use:

  • Access Key
  • Secret Key

S3 requests are signed using AWS-style signatures derived from those credentials. Most tools (AWS CLI, SDKs, rclone, etc.) generate the signatures for you automatically once configured. 
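For example, the AWS CLI and SDKs will also read credentials from the standard environment variables, which can be handy for quick tests or CI jobs (the values here are placeholders):

export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"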


Common use cases for C3S (S3)

  • Offsite backups (servers, NAS, applications, VM backups that support S3 targets)
  • Application file storage (user uploads, media, documents)
  • Static assets (downloads, images, website assets; often paired with a CDN)
  • Log & artifact storage (CI/CD build outputs, archives, exports)
  • Large file uploads using multipart upload (recommended for big objects)

Note: Many S3 tools cannot query “free space”/capacity via the S3 API because the API doesn’t expose that as a standard feature.
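As a workaround, you can total the objects yourself. For example, the AWS CLI can summarize a bucket’s object count and total size, using the profile and endpoint configured in the Quick Start below (this walks every object, so it can be slow on very large buckets):

aws s3 ls s3://my-example-bucket --recursive --summarize \
  --profile c3s \
  --endpoint-url https://c3s-ca-west1.canhost.ca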


Quick Start: AWS CLI (recommended)

The AWS CLI works great with S3-compatible storage. You’ll just set the endpoint to C3S.

1) Install AWS CLI

Install for your OS from the official AWS CLI documentation or package manager.

2) Configure a profile for C3S

Run:

aws configure --profile c3s

Enter your:

  • AWS Access Key ID = (your C3S access key)
  • AWS Secret Access Key = (your C3S secret key)
  • Default region name = (can be anything; many people use us-east-1 for compatibility)
  • Default output format = (optional, e.g. json)

Important: You will also need your C3S S3 endpoint URL from CanHost (used in commands below).
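Tip: newer versions of the AWS CLI (v2.13+) can also store the endpoint in the profile itself, so you don’t have to repeat --endpoint-url on every command; a sketch:

aws configure set endpoint_url https://c3s-ca-west1.canhost.ca --profile c3s

The commands below include --endpoint-url explicitly so they work on any CLI version.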

3) List buckets

aws s3 ls --profile c3s --endpoint-url https://c3s-ca-west1.canhost.ca

4) Create a bucket

aws s3api create-bucket \
  --bucket my-example-bucket \
  --profile c3s \
  --endpoint-url https://c3s-ca-west1.canhost.ca

5) Upload a file

aws s3 cp ./backup.tar.gz s3://my-example-bucket/backups/backup.tar.gz \
  --profile c3s \
  --endpoint-url https://c3s-ca-west1.canhost.ca

6) Download a file

aws s3 cp s3://my-example-bucket/backups/backup.tar.gz ./backup.tar.gz \
  --profile c3s \
  --endpoint-url https://c3s-ca-west1.canhost.ca

7) List objects in a bucket

aws s3 ls s3://my-example-bucket/backups/ \
  --profile c3s \
  --endpoint-url https://c3s-ca-west1.canhost.ca
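Deleting an object works the same way:

aws s3 rm s3://my-example-bucket/backups/backup.tar.gz \
  --profile c3s \
  --endpoint-url https://c3s-ca-west1.canhost.ca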

Example: Python (boto3)

If your application already uses AWS S3 via boto3, you can usually point it at C3S by changing the endpoint URL and credentials.

Install boto3

pip install boto3

Upload + list example

import boto3
from botocore.client import Config

ENDPOINT = "https://c3s-ca-west1.canhost.ca"
ACCESS_KEY = "YOUR_ACCESS_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"
BUCKET = "my-example-bucket"

s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    config=Config(signature_version="s3v4"),
)

# Create bucket (ignore if it already exists)
try:
    s3.create_bucket(Bucket=BUCKET)
except Exception as e:
    # Many apps choose to ignore "already exists" errors here
    print("create_bucket:", e)

# Upload a file
s3.upload_file("backup.tar.gz", BUCKET, "backups/backup.tar.gz")

# List objects
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="backups/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

Because Ceph RGW speaks the S3 API and accepts the same AWS-style signatures, boto3 needs nothing beyond the endpoint URL and your C3S credentials.


Example: Node.js (AWS SDK v3)

This is a common pattern for S3-compatible providers: specify endpoint, credentials, and force path-style addressing if needed.

// npm install @aws-sdk/client-s3
import { S3Client, CreateBucketCommand, PutObjectCommand, ListObjectsV2Command } from "@aws-sdk/client-s3";

const ENDPOINT = "https://c3s-ca-west1.canhost.ca";
const REGION = "us-east-1"; // often used for compatibility
const BUCKET = "my-example-bucket";

const s3 = new S3Client({
  region: REGION,
  endpoint: ENDPOINT,
  credentials: {
    accessKeyId: "YOUR_ACCESS_KEY",
    secretAccessKey: "YOUR_SECRET_KEY",
  },
  forcePathStyle: true, // helpful for many S3-compatible endpoints
});

async function run() {
  try {
    await s3.send(new CreateBucketCommand({ Bucket: BUCKET }));
  } catch (e) {
    console.log("createBucket:", e?.name || e);
  }

  await s3.send(new PutObjectCommand({
    Bucket: BUCKET,
    Key: "hello/hello.txt",
    Body: "Hello from C3S!",
    ContentType: "text/plain",
  }));

  const listed = await s3.send(new ListObjectsV2Command({ Bucket: BUCKET, Prefix: "hello/" }));
  console.log(listed.Contents?.map(o => ({ Key: o.Key, Size: o.Size })) || []);
}

run().catch(console.error);
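Note: this example uses ES module import syntax, so save it with a .mjs extension or set "type": "module" in your package.json before running it with node.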

Example: rclone (easy backups + sync)

rclone is great for workstation/server backups, mirroring folders, and simple automation.

1) Install rclone

Install from your OS package manager or rclone’s official instructions.

2) Create an S3 remote

rclone config

Choose:

  • n (new remote)
  • Name: c3s
  • Storage: s3
  • Provider: choose something like Other (or “Ceph” if offered)
  • Enter your access key + secret key
  • Set the endpoint to your C3S URL (https://c3s-ca-west1.canhost.ca)
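If you’d rather skip the interactive wizard, recent rclone versions can create the remote in one command (a sketch; the exact key=value syntax varies slightly between rclone versions, and the credential values are placeholders):

rclone config create c3s s3 \
  provider=Other \
  access_key_id=YOUR_ACCESS_KEY \
  secret_access_key=YOUR_SECRET_KEY \
  endpoint=https://c3s-ca-west1.canhost.ca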

3) Copy a folder to a bucket

rclone copy /data/backups c3s:my-example-bucket/backups -P

4) Sync (make destination match source)

rclone sync /data/backups c3s:my-example-bucket/backups -P

Tip: Use rclone crypt if you want client-side encryption before data leaves your system.
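A minimal sketch of that setup, assuming the c3s remote from step 2 already exists (recent rclone versions; --obscure tells rclone to obscure the passphrase before writing it to the config file):

rclone config create c3s-crypt crypt \
  remote=c3s:my-example-bucket/encrypted \
  password=YOUR_PASSPHRASE \
  --obscure

rclone copy /data/secret c3s-crypt: -P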


Multipart uploads (large files)

For large objects (multi-GB backups, VM images, database dumps), use tools that support multipart upload. Multipart splits the upload into parts and commits them at the end, which is more reliable and often faster for large data.
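For example, the AWS CLI switches to multipart automatically once a file crosses a size threshold, and both the threshold and the part size can be tuned per profile (the 64MB values below are illustrative, not a recommendation):

aws configure set s3.multipart_threshold 64MB --profile c3s
aws configure set s3.multipart_chunksize 64MB --profile c3s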


Troubleshooting tips

  • 403 AccessDenied / SignatureDoesNotMatch: double-check your access key/secret key, your system clock, and that your tool is using the correct endpoint. (S3 signing is time-sensitive; see the clock check below.)
  • Bucket name issues: keep bucket names simple (lowercase, numbers, hyphens) for best compatibility.
  • Can’t see “free space”: many S3 clients can’t show capacity/free space because S3 doesn’t provide a standard API for that. 
  • Large uploads failing: switch to multipart uploads (AWS CLI and SDKs do this automatically in many cases; rclone also handles chunking well).
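For the clock issue in particular, a quick sanity check is to compare your system’s UTC time with the Date header returned by the endpoint; AWS-style signatures typically tolerate only about 15 minutes of skew:

date -u
curl -sI https://c3s-ca-west1.canhost.ca | grep -i '^date:'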

Need help?

If you’d like, CanHost support can help you validate your configuration (endpoint, credentials, bucket naming) and confirm best practices for your specific use case (backups vs application storage). C3S is designed as a reliable, scalable S3-compatible storage option hosted in Canada.
