Ship anything. Anywhere. At scale.
TurboCI is a cloud VPS orchestrator that runs any workload (containers, Docker Compose, or plain apps) across multiple clouds, with disaster recovery built in.
```yaml
# turboci.yaml - Deploy to multiple clouds with one config
- deployment_name: my_app_hetzner
  provider: hetzner
  location: hil
  services:
    web:
      instances: 3
      clusters: 2
      healthcheck:
        cmd: curl http://localhost/health
        test: OK

- deployment_name: my_app_aws
  provider: aws
  location: us-east-1
  duplicate_deployment_name: my_app_hetzner # Inherit all configs
```
Everything you need, nothing you don't
Terraform is overkill. Kubernetes is overhead. TurboCI is enough.
Run Any Workload
If it runs on Linux, it runs on TurboCI. Docker, Docker Compose, or plain apps — no container lock-in required.
Multi-Cloud by Default
Deploy to Hetzner, AWS, GCP, Azure, or mix them all. Same config works everywhere. Disaster recovery is just another deployment.
Fail-Safe Updates
Sequential health-checked rollouts. The first server must pass its health check before any other server updates. Maximum blast radius: one server.
Networking Done For You
Every deployment gets a private network automatically. NAT configured for private servers. Cross-cloud networking just works.
One YAML, Everything
No Terraform modules, no Kubernetes manifests, no Helm charts. Just one simple YAML that defines your entire infrastructure.
VPS Orchestration
Provision servers, install dependencies, configure services, manage load balancers — all from one orchestrator.
One config file. All your clouds.
While Kubernetes requires different manifests per cloud and Terraform needs separate modules for each provider, TurboCI uses one YAML file for everything.
Single YAML
Define all your deployments, services, and instances in one file. No separate configs per cloud provider.
Simple Setup
Just add your cloud provider API keys as environment variables. That's it.
```shell
# Add to your environment or .env file
TURBOCI_HETZNER_API_KEY=your_key
TURBOCI_AWS_API_KEY=your_key
TURBOCI_GCP_API_KEY=your_key
TURBOCI_AZURE_API_KEY=your_key
```
Only add keys for the clouds you use. TurboCI automatically detects available providers.
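Detection like this can be pictured as a scan of the environment for matching key names. Here is a minimal sketch in Python, assuming only the `TURBOCI_<PROVIDER>_API_KEY` naming convention shown above; the `detect_providers` helper is illustrative, not part of TurboCI:

```python
import os
import re

# Keys look like TURBOCI_HETZNER_API_KEY, TURBOCI_AWS_API_KEY, ...
KEY_PATTERN = re.compile(r"^TURBOCI_([A-Z]+)_API_KEY$")

def detect_providers(env=None):
    """Return sorted lowercase provider names that have a non-empty API key."""
    if env is None:
        env = os.environ  # default to the real environment
    providers = []
    for name, value in env.items():
        match = KEY_PATTERN.match(name)
        if match and value:  # ignore empty keys
            providers.append(match.group(1).lower())
    return sorted(providers)

env = {
    "TURBOCI_HETZNER_API_KEY": "abc123",
    "TURBOCI_AWS_API_KEY": "def456",
    "PATH": "/usr/bin",  # unrelated variables are ignored
}
print(detect_providers(env))  # ['aws', 'hetzner']
```

Unset or empty keys are simply skipped, which matches the "only add keys for the clouds you use" behavior described above.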
Compare the complexity
Updates that protect production
TurboCI's sequential health-checked rollout strategy ensures maximum safety with minimal blast radius.
Strict Canary First
The first server gets the update and must pass a strict health check (command + expected output).
If it fails, rollout stops immediately. Only that one server goes down. Your production stays safe.
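The "command + expected output" check can be sketched as running the configured command and comparing its output against the expected string. A minimal illustration in Python, assuming a healthcheck dict shaped like the `cmd`/`test` fields in turboci.yaml; `passes_healthcheck` is a hypothetical helper, not TurboCI's actual implementation:

```python
import subprocess

def passes_healthcheck(check, timeout=10.0):
    """Run check["cmd"] in a shell; require exit 0 and check["test"] in stdout."""
    try:
        result = subprocess.run(
            check["cmd"], shell=True, capture_output=True,
            text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # a hung check counts as a failure
    # Strict: the command must succeed AND print the expected marker.
    return result.returncode == 0 and check["test"] in result.stdout

# 'echo OK' stands in for 'curl http://localhost/health' from the example.
print(passes_healthcheck({"cmd": "echo OK", "test": "OK"}))  # True
```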
Batch Rollout on Success
Once the canary succeeds, TurboCI rolls the update out in batches (default: 50 servers at a time).
Fast updates after validation. Configurable batch size for your needs.
Maximum Blast Radius: One Server
In a cluster of 100 servers, a failed update only ever takes down one server. The other 99 keep running until the issue is fixed.
Compare this to Kubernetes, where misconfigured probes can knock out multiple pods before the rollout halts. TurboCI's fail-fast semantics protect your uptime by default.
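The canary-then-batches flow above can be sketched as a simple loop: update one server, health-check it, and only then proceed batch by batch, stopping at the first failure. A hedged Python sketch, where the `rollout` function, its callbacks, and the server names are illustrative rather than TurboCI's code:

```python
def rollout(servers, update, healthy, batch_size=50):
    """Sequential health-checked rollout: strict canary first, then batches.

    Returns the servers that received the update. Stops at the first
    failure, so a bad canary never spreads past one server.
    """
    if not servers:
        return []
    canary, rest = servers[0], servers[1:]
    update(canary)
    if not healthy(canary):
        return [canary]  # rollout halts; blast radius: one server
    updated = [canary]
    for i in range(0, len(rest), batch_size):
        batch = rest[i:i + batch_size]
        for server in batch:
            update(server)
        if not all(healthy(s) for s in batch):
            break  # stop before touching the next batch
        updated.extend(batch)
    return updated

# 100 servers, the canary fails its check: the other 99 are never touched.
touched = []
rollout([f"srv{i}" for i in range(100)],
        update=touched.append,
        healthy=lambda server: False)
print(len(touched))  # 1
```

The strict canary gate is what bounds the blast radius: the batched phase only runs after a real server has already validated the update.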
Deploy in three simple steps
From zero to production in minutes, not days.
Define Your Infrastructure
Create a simple turboci.yaml file describing your deployment needs.
```yaml
deployment:
  name: my-app
  providers:
    - aws: us-east-1
  instances: 3
```
Set Your API Keys
Add cloud provider API keys to your environment or .env file. One-time setup.
```shell
# .env file
TURBOCI_AWS_API_KEY=your_key
TURBOCI_HETZNER_API_KEY=your_key
```
Deploy & Scale
Run one command and watch TurboCI orchestrate everything automatically.
```shell
$ turboci up
✓ Provisioning servers
✓ Configuring network
✓ Deployed successfully
```
Advanced: Target or Skip Services
Deploy only specific services:

```shell
$ turboci up -t web_hetzner.web
$ turboci up --target web_hetzner.web web_aws.api
```

Skip specific services:

```shell
$ turboci up -s web_hetzner.web
$ turboci up --skip web_hetzner.web web_aws.api
```

Tear down your infrastructure:

```shell
$ turboci down
```
Why teams choose TurboCI
One orchestrator replaces Kubernetes + Terraform. No vendor lock-in, no complexity.
| Feature | TurboCI | Kubernetes | Terraform |
|---|---|---|---|
| Infrastructure Model | One concept: deployment → VPS + private network | Pods, Services, Ingress, ConfigMaps, Secrets... | Provider-specific resources (EC2, VPC, ALB, RDS...) |
| Configuration | One YAML file | Multiple manifests + Helm charts | Dozens of .tf files + modules per provider |
| Networking | Automatic per deployment (NAT + routing built-in) | Manual CNI, Services, Ingress setup | Define VPCs, subnets, NAT gateways, security groups |
| Multi-Cloud | Native. Same config, any provider. | Complex federation, separate clusters | Rewrite modules per provider |
| Workload Flexibility | Containers, Compose, or plain apps | Containers only | N/A (infrastructure only) |
| Load Balancing | Built-in service type | Ingress controllers + cloud LBs | AWS ALB, GCP LB, etc. (vendor lock-in) |
| Databases | Deploy on VPS, full control | StatefulSets or external services | RDS, CloudSQL, etc. (vendor lock-in) |
| Learning Curve | Hours | Weeks to months | Days to weeks |