One easy way to configure all your workloads. Everywhere.

Score is a developer-centric and platform-agnostic workload specification. It ensures consistent configuration between local and remote environments. And it's open source.

Why use Score?

Score is loved by developers because they can run the same workload on completely different technology stacks, without needing to be an expert in any one of them.

Cognitive load

Developers are forced to become experts in a variety of tech and tools, just to deploy a simple change to their apps.

Features over Ops

Score takes care of configs for developers so they can focus on shipping features instead of fighting with infrastructure.

Config drift

Multiple config rules, constructs and values across local and remote environments increase the risk of misconfiguration.

Local to prod

With Score you can easily transition from local to remote environments. Configs stay consistent, everywhere you deploy.

YAML bloat

Trying to keep many environment-specific config files in sync leads to repetitive configuration work and YAML bloat.

One file

Score lets you use one specification file as the single source of truth, easily translatable across your delivery setup.

A single spec to rule them all

Easily integrates into your existing workflows

Extendable and customizable

The score.yaml file can be extended and customized according to your needs. The Score Specification leaves room for environment-specific overrides as well as platform-specific extensions that allow you to declare additional properties or requirements.
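As an illustrative sketch, an environment-specific override could look like the snippet below. The file name and the way overrides are supplied to the tooling are assumptions here; both depend on the Score implementation CLI you use.

# overrides.score.yaml – illustrative only; how this file is passed to the
# tooling depends on the Score implementation CLI you use
containers:
  container-id:
    variables:
      LOG_LEVEL: debug   # set a different log level in this environment only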

Declarative by nature

Score lets developers define the resources required by their workloads in a declarative way. You declare once that your workload needs to listen on a port to receive requests, and you don't need to worry about where and how that exact port is defined in, for example, a remote Kubernetes environment. By declaring what the workload needs to run, the "how" becomes an environment-specific implementation detail that is taken care of by Score.
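For example, a workload can simply declare that it needs a PostgreSQL database; whether that resource is provided as a throwaway local container or a managed cloud instance is decided per environment. A minimal sketch, reusing the resource type from the example further down:

resources:
  db:
    type: postgres   # locally this could be a container, in production a managed database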

Seamless tech stack and workflow integration

Score introduces a single change to your setup by adding a score.yaml file to your workloads’ repo. Everything else stays as is. Once Score is set up, you can continue using it even if the underlying tech stack changes.

How Score works

01

Create a score.yaml file for the application

apiVersion: score.dev/v1b1

metadata:
  name: service-a

service:
  ports:
    www:
      port: 8000
      targetPort: 80

containers:
  container-id:
    image: busybox
    variables:
      CONNECTION_STRING: postgresql://${resources.db.user}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}

resources:
  db:
    type: postgres
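The ${resources.db.*} placeholders in CONNECTION_STRING refer to properties of the db resource declared at the bottom of the file; the Score implementation CLI you run in the next steps resolves them when it generates the target configuration.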
02

Install a Score Implementation CLI

brew install score-spec/tap/score-compose
brew install score-spec/tap/score-helm
The Score Specification has the potential to integrate with many container orchestration platforms and tools, such as Kustomize, Amazon ECS, Google Cloud Run, or Nomad. Help us shape the next generation of Score implementation CLIs and start contributing here.
03

Run your first transform

score-compose run -f ./score.yaml -o ./compose.yaml
score-helm run -f ./score.yaml -o ./values.yaml
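For the score.yaml above, score-compose writes a Docker Compose file. The sketch below is illustrative only: the exact structure, the service naming, and how the db placeholders are filled in depend on the score-compose version and on how the postgres resource is provisioned.

# compose.yaml – illustrative sketch, not literal score-compose output
services:
  service-a:
    image: busybox
    ports:
      - target: 80
        published: 8000
    environment:
      CONNECTION_STRING: postgresql://...   # resolved from the db resource properties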
04

Run your workload

docker-compose -f ./compose.yaml up
helm install --values ./values.yaml hello ./hello

Loved by developers

I love the idea of being able to describe everything my workload needs in one file.
Marius Tolzmann
CTO at Mineiros
It’s just so simple. Score allows you to describe what a workload needs to run and can be used throughout the entire development lifecycle.
Min Kim
Cloud Architect at Frontside Software
Running my first transform was such a fun “aha” moment, it actually worked! I ran everything locally on Docker and pushed it to Humanitec, really cool.
Marius Raesener
Tech Lead at BAUHAUS Deutschland

Configure once. Deploy anywhere. From local to prod.