Score - One YAML to rule them all
The Score Specification enables developers to run their workloads across different technology stacks without risking configuration inconsistencies. As a platform-agnostic declaration file, <span class="c">score.yaml</span> presents the single source of truth on how to run a workload and works with any container orchestration platform and tool - be it Docker Compose or Kubernetes.
A score sheet is musical notation describing the notes a musician plays on their instrument. A conductor uses it to see at a glance what each performer should be playing and how the ensemble should sound. In the same way, the <span class="c">score.yaml</span> specification makes developers the conductors of a workload running across a symphony orchestra of technology and tooling.
The problem of configuration mismatch in application development
In modern software development, workloads are typically deployed as microservices, with each component packaged into its own container. Containerised workloads let teams run their code across different environments without a single worry on their mind - or so the thinking goes. Wishful thinking meets reality once containers need to be managed at scale and teams adopt container orchestration platforms along with a wide range of tooling that supports application development. Suddenly there's a lot more for a developer to consider when preparing a workload for its journey towards production. The variety of tooling involved in successfully deploying code brings developers back to the "but it works on my machine" problem.
As a developer you might be using Docker Compose for local development and deploy to remote environments that are based on systems such as Kubernetes, Google Cloud Run, Amazon ECS or HashiCorp Nomad. To successfully develop, test, deploy and run a workload, you not only have to be familiar with the platform and related tooling your team makes use of, but also keep each workload’s specification in sync. If entities are configured differently across platforms, teams risk configuration inconsistencies. For example: a workload with a dependency on a database might point to a Postgres image or mock server in lower environments. On its way to production, however, a database has to be provisioned and allocated via Terraform. Such ‘translation gaps’ between environments exist for all kinds of items: volumes, external services (e.g. Vault or RabbitMQ), ports, DNS records, routes and so on.
From a developer’s point of view, you successfully test and run a workload locally and even pass the isolated tests embedded in your CI pipeline. How things are then appropriately reflected in the next environment - which might be running on Kubernetes and managed via Helm charts - is answered differently in every team and depends on the complexity of the task at hand. A variable change is easier to keep in sync than a database dependency declared across different platforms.
In practice, an ops engineer might jump in to review configuration changes. You might compare the workload specification for each platform yourself. A colleague might be working on a policy definition for YAML files. Either way, if a property is overlooked or accidentally misspecified, the team will end up with a failed deployment or a workload that isn’t running the way it was intended to.
To address this problem, we created a tool that simplifies application development for developers and ensures a standardised, consistent and transparent approach to configuration management for teams.
As shown in the graphic above, there are three core components to consider in the context of Score:
- The Score Specification: A developer-centric workload definition that describes how to run a workload. As a platform-agnostic declaration file, <span class="c">score.yaml</span> presents the single source of truth on a workload’s profile of requirements and works with any container orchestration platform and tool.
- A Score Implementation: A CLI tool which the Score Specification can be executed against. It is tied to a platform such as Docker Compose (score-compose), Helm (score-helm) or Humanitec (score-humanitec) and will take care of generating a platform-specific configuration file (such as <span class="c">docker-compose.yaml</span>, <span class="c">chart.yaml</span>) from the Score spec.
- A platform specific configuration file: By running the Score Specification against a Score Implementation such as score-compose, a platform specific configuration file, such as <span class="c">docker-compose.yaml</span> can be generated. This file can then be combined with environment specific parameters to run the workload in the target environment.
With Score the same workload can be run on completely different technology stacks without the developer needing to be an expert in any of them. For example, the same Score Specification can be used to generate a docker-compose file for local development, Kubernetes manifests for deployment to a shared development environment and to a serverless platform such as Google Cloud Run for integration tests.
The Score Specification
The Score Specification allows you to specify which containers to use, whether resource or service dependencies exist, if ports are to be opened or which data volumes to reference - whatever a workload requires to run is captured in <span class="c">score.yaml</span>. Structurally, the specification consists of three top-level items:
- containers: defines how the workload’s tasks are executed.
- resources: defines dependencies needed by the workload.
- service: defines how a workload can expose its resources when executed.
The <span class="c">example-service</span> workload defined in the <span class="c">score.yaml</span> file below comprises one <span class="c">busybox</span> container, has a dependency on a <span class="c">postgres</span> database and advertises two public ports <span class="c">80</span> and <span class="c">8080</span>:
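An illustrative sketch of that file, following the Score spec’s <span class="c">v1b1</span> format (the container command and the resource property names referenced in the placeholders are assumptions for illustration):

```yaml
apiVersion: score.dev/v1b1

metadata:
  name: example-service

# Two public ports advertised by the workload
service:
  ports:
    www:
      port: 80
      targetPort: 8080
    admin:
      port: 8080

containers:
  busybox:
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo example-service; sleep 5; done"]
    variables:
      # Placeholders are resolved by the Score Implementation per environment
      CONNECTION_STRING: postgresql://${resources.db.host}:${resources.db.port}/${resources.db.name}

# Dependency on a database, abstracted away from how it is provisioned
resources:
  db:
    type: postgres
```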
Explore the Score Specification reference in full detail on docs.score.dev.
In this example we are working with a simple Docker Compose service that is based on a <span class="c">busybox</span> image. The <span class="c">score.yaml</span> we created for it looks as follows:
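A minimal version along these lines (the workload name and the loop command shown are illustrative):

```yaml
apiVersion: score.dev/v1b1

metadata:
  name: hello-world

containers:
  hello:
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo Hello World!; sleep 5; done"]
```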
To convert the <span class="c">score.yaml</span> file into an executable <span class="c">compose.yaml</span> file, we simply need to run the <span class="c">score-compose</span> CLI tool:
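For example (flags as documented for early <span class="c">score-compose</span> releases; the exact invocation may differ between versions):

```shell
# Translate the Score file into a Docker Compose file
score-compose run -f ./score.yaml -o ./compose.yaml
```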
The generated <span class="c">compose.yaml</span> will contain a single service definition:
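Roughly along these lines (the exact output depends on the <span class="c">score-compose</span> version in use):

```yaml
services:
  hello-world:
    entrypoint:
      - /bin/sh
    command:
      - -c
      - while true; do echo Hello World!; sleep 5; done
    image: busybox
```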
The service can now be run with <span class="c">docker-compose</span> as usual:
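For example, assuming the generated file is named <span class="c">compose.yaml</span>:

```shell
# Start the generated service in the foreground
docker-compose -f ./compose.yaml up
```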
The same <span class="c">score.yaml</span> can be run with any other Score Implementation CLI. Working with Helm? Check out score-helm next.
With all platform configuration files being generated from the same Specification, the risk of configuration inconsistencies between environments significantly decreases. A change to <span class="c">score.yaml</span> will automatically be reflected in all environments, without the developer needing to manually intervene.
With the Score Specification we’re not only aiming to speed up and advance application development for engineering teams but also foster focus, flow and joy for developers in their day to day work.
If you want a world with lower cognitive load on developers, less config drift or mismanagement, and frankly just more productive, happier days for all of us, go check out the repo, contribute, reach out. This is just the beginning — we’d love to build the future of development together.
The team behind Score
The idea for Score formed organically from looking at hundreds of delivery setups across engineering orgs of all sizes, from startups to the Fortune 100. We have all been involved in platform engineering and developer tooling for the past decade, in one way or another. Some of us built Internal Developer Platforms at the likes of Google, Apple or IBM; some come from the IaC side of things and played leading roles at companies like HashiCorp.
We all share the vision of developer and workload-centric development. We want to reduce cognitive load on developers and make sure engineers can spend their time coding and shipping features, instead of fighting config files.