- Applies to: all Sail Research Co. customers using Sail’s inference services
- Contact: neil@sailresearch.com
- Effective date: 2026-03-01
1. Roles
- Customer controls the data sent to Sail and decides what inference jobs to run.
- Sail processes the data only to run the requested inference job and return results to the Customer.
2. What data Sail processes
Sail may process:
- Customer Content: prompts, inputs, files, and related data submitted to run an inference job, plus the outputs returned by the service.
- Job metadata: operational metadata required to run and track jobs (for example, job identifiers, timestamps, status, and routing/state information).
3. How Sail uses Customer data
Sail uses Customer data only to:
- run the inference job requested by Customer, and
- operate the service features needed to deliver that job (routing, scheduling, job state tracking).

Sail will not:
- use Customer Content to train, fine-tune, or improve any machine learning model (Sail’s or any third party’s),
- use Customer Content for marketing or advertising, or
- collect telemetry from GPU machines that captures Customer Content.
4. Storage and retention
- Persistent storage: Customer Content is stored persistently only in Amazon S3 buckets (unless Customer chooses the customer-owned bucket option below).
- Transient processing: all other processing is transient and in-memory only, lasting no longer than the duration of the job.
- Automatic deletion: Sail’s production S3 buckets are configured with an automated deletion rule (implemented via S3 bucket lifecycle policy) that deletes Customer Content shortly after processing. Deletion timing can vary in practice due to job retries, failures, or other operational conditions. Under no circumstances will Sail retain Customer Content for longer than 48 hours.
- Customer-owned buckets (optional): Customer may choose to use Customer-owned S3 buckets for additional control over storage configuration and retention.
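As an illustration only, an automated deletion rule of the kind described above can be expressed as an S3 lifecycle configuration. The bucket prefix, rule ID, and one-day expiration window below are hypothetical assumptions, not Sail's actual settings; S3 evaluates lifecycle rules asynchronously, which is why retention is bounded (48 hours) rather than exact:

```python
# Sketch of an S3 lifecycle rule that expires (deletes) objects shortly
# after creation. The prefix, rule ID, and one-day window are hypothetical.
# S3 applies lifecycle rules on its own schedule, so actual deletion can
# lag the nominal expiration; hence the document's hard 48-hour bound.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-customer-content",
            "Filter": {"Prefix": "jobs/"},  # only job artifacts (hypothetical prefix)
            "Status": "Enabled",
            "Expiration": {"Days": 1},  # lifecycle granularity is whole days
            # Also abort failed multipart uploads so partial content
            # does not linger past the retention window.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
        }
    ]
}

# Applying it would look like the following (requires boto3 and AWS
# credentials; shown as a comment to keep this sketch self-contained):
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-inference-bucket",
#       LifecycleConfiguration=lifecycle_configuration,
#   )
```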
5. Security and access controls
Sail uses security controls designed to protect Customer data, including:
- Authenticated GPU machines: inference GPU machines communicate with Sail infrastructure using TLS (with certificate transparency required) and authenticate using a high-entropy bearer token. GPU machines are not publicly accessible.
- Restricted connectivity: inference machines connect only to the following necessary services:
- container registry
- routing service
- object storage (e.g., Amazon S3)
- Data center assurance: Sail uses cloud/data center providers with SOC 2 and ISO 27001 assurance programs.
- Compute provider restrictions: Sail may use multiple third-party compute providers (“compute subprocessors”), but:
- compute subprocessors are not permitted to access Customer Content, and
- compute subprocessors must never store Customer Content in unencrypted form.
- Sail personnel access: Sail staff are never allowed to access Customer Content without Customer permission.
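
A minimal sketch of the client side of the authentication pattern in §5, assuming Python's standard library: a GPU machine holds a high-entropy bearer token and presents it over a TLS connection that requires certificate and hostname verification. The token size and header shape are illustrative assumptions, and certificate transparency enforcement (which §5 requires) is not shown here:

```python
import secrets
import ssl

def new_bearer_token() -> str:
    # High-entropy bearer token: 32 random bytes (256 bits), URL-safe
    # encoded. The exact token size Sail uses is not specified here.
    return secrets.token_urlsafe(32)

def strict_tls_context() -> ssl.SSLContext:
    # TLS context that refuses unverified connections: certificate
    # checking and hostname checking are both mandatory.
    ctx = ssl.create_default_context()
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

token = new_bearer_token()
headers = {"Authorization": f"Bearer {token}"}

# A request to the (hypothetical) routing service would then be made
# with this context, e.g. via
#   http.client.HTTPSConnection("routing.example.internal",
#                               context=strict_tls_context())
# sending `headers` with each request.
```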