Today we are going to provide an overview of a theory proposed by Kevin Hoffman in his book Beyond the Twelve-Factor App, in which he posits there are fifteen factors. We at GrapeUp think this is interesting food for thought for the Cloud Foundry community and wanted to share it!
- Codebase
There is one codebase per application, tracked in revision control. Code shared between applications should not be duplicated; it can live in a separate codebase provided to the application as a component or, even better, as a versioned dependency. The same codebase may be deployed to multiple environments and will produce the same build. A single repository should not contain multiple applications or multiple entry points: it should have a single responsibility and a single execution point for the code, ideally a single microservice.
- Dependencies
Dependency management and isolation address two problems. First, it is important to explicitly declare dependencies, to avoid a surprise API change in a dependency rendering the library useless from the point of view of the existing code and failing the build or release process.
Second, isolation enables repeatable deployments. In an ideal world, all dependencies would be isolated and bundled with the release artifact of the application. On a platform like Cloud Foundry, this is managed by buildpacks, which clearly isolate the application from, for example, the server that runs it.
- Configuration
At no time can credentials or configuration be part of the source code! Configuration can be passed to the container via environment variables or through files mounted as a volume. It is recommended to think about your application as if it were open source: if you could push all the code to a publicly accessible repository, you have probably already separated the configuration and credentials from the code. An even better way to provide configuration is a configuration server, such as Consul, Spring Cloud Config or (for credentials) Cloud Foundry CredHub.
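A minimal sketch of environment-driven configuration in Python; the variable names `DATABASE_URL` and `DEBUG` are illustrative, not platform conventions:

```python
import os

def load_config() -> dict:
    # Read everything from the environment; nothing sensitive lives in source.
    # The defaults are sensible for local development only.
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

# Simulate the platform injecting configuration at deploy time.
os.environ["DATABASE_URL"] = "postgres://user:secret@db:5432/app"
config = load_config()
print(config["database_url"])
```

Because the credentials arrive from outside the process, the same code can be promoted through environments unchanged.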
- Backing services
Treat backing services like attached resources. Bound services include, for example, databases and storage, but also configuration, credentials, caches or queues. When a specific service or resource is bound, it can be easily attached, detached or replaced if required. This adds flexibility to using external services and, as a result, may allow you to easily switch to a different service provider. When using Cloud Foundry, this is easily accomplished using the platform’s service marketplace, where backing services expose their capabilities via the Open Service Broker API.
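In Cloud Foundry, bound service credentials are injected through the `VCAP_SERVICES` environment variable as JSON. A small Python sketch of looking up a bound service by name; the `orders-db` service and its payload are made up for illustration:

```python
import json
import os

def get_service_credentials(name: str) -> dict:
    # VCAP_SERVICES maps a service label to a list of bound instances.
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instances in services.values():
        for instance in instances:
            if instance.get("name") == name:
                return instance.get("credentials", {})
    raise KeyError(f"service {name!r} is not bound")

# Example payload a platform might inject (shortened for illustration).
os.environ["VCAP_SERVICES"] = json.dumps({
    "postgres": [{"name": "orders-db",
                  "credentials": {"uri": "postgres://db.internal:5432/orders"}}]
})
print(get_service_credentials("orders-db")["uri"])
```

Swapping the provider then only changes the injected credentials, never the code.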
- Build, release, run
All parts of the deployment process should be strictly separated. First, the artifact is created in the build process. The build artifact should be immutable, which means it can be deployed to any environment, as the configuration is separated and applied in the release process. The release artifact is unique per environment, as opposed to the build artifact, which is identical across all environments. In other words, there is one build artifact for the dev, test and prod environments, but three release artifacts (one per environment), each with its specific configuration included. Finally, the release artifact is run in the cloud. (Hoffman adds yet another part of the process in his book: the design that happens before the build process and includes, but is not limited to, selecting dependencies for the component or the user story.) The Cloud Foundry Application Runtime makes this factor easy for a developer: by pushing code (the build artifact), the platform handles creating and managing the release and run stages.
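The relationship between the single immutable build artifact and the per-environment releases can be sketched in Python; the artifact fields here are illustrative:

```python
def make_release(build: dict, config: dict) -> dict:
    # Combine the immutable build artifact with environment-specific
    # configuration; the build itself is copied, never mutated.
    release = dict(build)
    release["config"] = dict(config)
    return release

# One build artifact ...
build = {"version": "1.4.2", "checksum": "abc123"}

# ... yields one release per environment.
releases = {env: make_release(build, {"env": env})
            for env in ("dev", "test", "prod")}
print(len(releases))
```

The build dict stays untouched, so the exact bits that passed testing are what runs in production.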
- Stateless processes
Execute the application as one or more stateless processes. A frequent question is: how can a service be stateless if it needs to preserve user data, identities or sessions? In fact, all of this stateful data should be saved to backing services, like databases or filesystems (for example, Amazon S3, Azure Blob Storage, or storage managed by services like Ceph). The standard filesystem provided by the container to the application can be ephemeral and should be treated as such.
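A sketch of the idea in Python, with an in-memory class standing in for a real shared backing store such as Redis; the request handler itself keeps no state between calls:

```python
class SessionStore:
    """Stand-in for an external backing store; in production, every
    application instance would talk to the same shared service."""
    def __init__(self):
        self._data = {}

    def save(self, session_id, payload):
        self._data[session_id] = payload

    def load(self, session_id):
        return self._data.get(session_id)

def handle_request(session_id, store):
    # The process holds nothing between requests; all state round-trips
    # through the backing store.
    session = store.load(session_id) or {"visits": 0}
    session["visits"] += 1
    store.save(session_id, session)
    return session["visits"]

store = SessionStore()
print(handle_request("u1", store), handle_request("u1", store))
```

Because the handler is stateless, any instance can serve any request, which is what makes horizontal scaling safe.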
- Port binding
Expose your services via port binding and avoid specifying ports in the code. Port selection should be left for the platform to assign. While some platforms allow for user-specified ports, ideally ports are automatically bound by the platform to all microservices that communicate with each other.
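A Python sketch of port binding using the `PORT` environment variable convention that Cloud Foundry and similar platforms follow; falling back to 0 here simply lets the OS pick a free port when run locally:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# The platform injects the port via the PORT environment variable;
# the code never hardcodes a port number.
port = int(os.environ.get("PORT", "0"))
server = HTTPServer(("0.0.0.0", port), Handler)
bound_port = server.server_address[1]
print(f"listening on port {bound_port}")
# server.serve_forever() would start handling requests here
server.server_close()
```

The application stays ignorant of which port it gets, so the platform is free to schedule many instances on one host.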
- Concurrency
Services should scale out by adding more instances, as opposed to scaling vertically. When load approaches the application’s limits, the service should scale horizontally, either manually or automatically.
- Disposability
Applications should start and stop rapidly, which avoids problems like an application that does not respond to health checks. Why? Because fast startup and shutdown allow the platform to more accurately assess the health of an application and to respond rapidly to both health issues and scaling requests.
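A graceful-shutdown sketch in Python: the process reacts to SIGTERM quickly so the platform can reschedule the instance. The signal is raised in-process here only to demonstrate the handler:

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Stop accepting new work, let in-flight requests finish,
    # then exit quickly so the platform can reschedule the instance.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the platform asking this instance to stop.
signal.raise_signal(signal.SIGTERM)
print("draining" if shutting_down else "running")
```

A real service would use the flag to drain its request loop; the key point is that the reaction to the signal is immediate.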
- Development, test and production environment parity
Keeping all environments the same, or at least very similar, can be a complex task, with difficulties ranging from VM and licensing costs to the complexity of deployment. The latter problem can be avoided by a properly configured and managed underlying platform. The advantage of this approach is that it avoids the “works for me” problem, and it is absolutely required to achieve automated deployments and continuous delivery.
- Logs
Logs should be treated as event streams entirely independent from the application. The application’s only responsibility is to write its logs to the stdout and stderr streams; everything else, including aggregation and forwarding to log management systems, should be handled by the platform.
- Admin processes
Administrative and management processes should be run as one-off tasks. This is self-explanatory and can be achieved by creating Concourse pipelines for these processes or by writing an Azure Function or AWS Lambda for that purpose.
The first twelve factors were originally created by Heroku. The following three were added by Hoffman in his aforementioned book:
- Telemetry
Applications can be deployed as multiple instances, which means it is no longer viable to attach a debugger to the application to find out whether it works. Application performance should be automatically monitored, and it must be possible to check the application’s health using automatic health checks. Also, for specific business domains, business metrics can be very useful and should be included in the metrics emitted by the software.
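A sketch of the health payload an automatic check could poll; the field names are illustrative, not a standard format:

```python
import time

START_TIME = time.monotonic()
REQUEST_COUNT = 0

def record_request():
    # Hypothetical counter a request handler would bump on each call.
    global REQUEST_COUNT
    REQUEST_COUNT += 1

def health() -> dict:
    # Payload the platform's health check endpoint could return.
    return {
        "status": "UP",
        "uptime_seconds": round(time.monotonic() - START_TIME, 3),
        "requests_served": REQUEST_COUNT,
    }

record_request()
print(health()["status"])
```

Exposing this over HTTP lets the platform restart unhealthy instances without any human attaching a debugger.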
- Authentication and authorization
Authentication and authorization should be an integral part of application design and development, as well as of the configuration and management of the platform. RBAC or ABAC should be used on each endpoint of the application to make sure the user is authorized to make that specific request to that specific endpoint.
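An RBAC check applied per endpoint can be sketched as a Python decorator; the role names and the `Forbidden` error are illustrative, and a real application would integrate with the platform’s identity provider:

```python
import functools

class Forbidden(Exception):
    pass

def require_role(role):
    # Guard an endpoint so only users holding the given role may call it.
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise Forbidden(f"{role!r} role required")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_order(user, order_id):
    return f"order {order_id} deleted"

print(delete_order({"name": "alice", "roles": ["admin"]}, 42))
```

Attaching the check to every endpoint, rather than only at the edge, keeps authorization decisions next to the code they protect.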
- API First
The API should be designed and discussed prior to implementation. This enables rapid prototyping, allows the use of mock servers and moves the team’s focus to the way services integrate. Just as a product is consumed by clients, services are consumed by other services through their public APIs, so collaboration between the provider and the consumer is necessary to create a useful product. Even excellent code can be useless when hidden behind a poorly written and badly documented interface. For more details about the tools and the concept, visit the API Blueprint website.
Is that all?
I would like to propose an additional sixteenth factor, below:
- Agile company and project processes
The way to succeed in the rapidly changing and evolving cloud world is not only to create great code and beautiful, stateless services. The key is to adapt to changes and market needs in order to build a product the business actually requires. To do this, you must adopt an agile or lean process, extreme programming, and/or pair programming. This allows rapid growth in short development cycles, which translates to a quick market response. When each team member believes their commit is a candidate for a production release, and when they work in pairs, the quality of the product improves. The trick is to apply these processes as widely as possible because, as Conway’s law suggests, the design of your system is only as good as the organizational structure of your company.
That’s the end of our journey through the perfect cloud native application process. Next time you design your (hopefully cloud native) application, keep these guidelines in mind!