This blog post was authored by Utobong Frankson, Product Owner for Klutch at anynines, and is being posted ahead of Cloud Foundry Day Europe 2025.
Introduction
When Cloud Foundry first emerged, it solved a problem many organizations didn’t yet know they had: the growing complexity of managing services at scale. By introducing the Open Service Broker API (OSBAPI), Cloud Foundry offered one of the earliest standards for connecting applications to external services like PostgreSQL, RabbitMQ, and Redis. Developers could provision and bind to a service with a single command, while platform operators managed the underlying automation, security, and policies.
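The workflow behind those commands follows the broker contract: the platform asks a broker to provision a service instance, then to create a binding that hands credentials back to the application. As a rough sketch of that lifecycle (an in-memory stand-in with hypothetical service names, not the actual HTTP-based OSBAPI, which defines REST endpoints such as PUT /v2/service_instances/{id}):

```python
import uuid

# Minimal sketch of an Open Service Broker-style lifecycle. An in-memory
# "broker" stands in for a real OSBAPI implementation; it only models the
# provision/bind steps described in the text.
class InMemoryBroker:
    def __init__(self):
        self.catalog = {"postgresql": {"small", "large"}}  # service -> plans
        self.instances = {}   # instance_id -> (service, plan)
        self.bindings = {}    # (instance_id, binding_id) -> credentials

    def provision(self, instance_id, service, plan):
        if plan not in self.catalog.get(service, set()):
            raise ValueError(f"unknown service/plan: {service}/{plan}")
        self.instances[instance_id] = (service, plan)

    def bind(self, instance_id, binding_id):
        if instance_id not in self.instances:
            raise KeyError(f"no such instance: {instance_id}")
        # A real broker would return credentials for an actual database.
        creds = {"uri": f"postgres://u{uuid.uuid4().hex[:8]}@db.internal/{instance_id}"}
        self.bindings[(instance_id, binding_id)] = creds
        return creds

# Developer-facing flow: one call to provision, one to bind.
broker = InMemoryBroker()
broker.provision("orders-db", "postgresql", "small")
credentials = broker.bind("orders-db", "app-1")
print(credentials["uri"].startswith("postgres://"))  # True
```

The division of labor is the point: the developer sees only provision and bind, while everything behind those two calls belongs to the operator.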
This approach reduced cognitive load for developers and provided consistency for operators. In many ways, Cloud Foundry’s early focus on services anticipated the challenges we now see with Kubernetes: how to deliver reliable, consistent data services in a sprawling, multi-cluster environment.
The Uniqueness of Stateful Services
Kubernetes was designed with stateless workloads in mind. Containers can be spun up, scaled out, and terminated with little consequence because no persistent data is tied to them. Databases and message queues behave very differently. They demand durable storage, carefully orchestrated replication, and the ability to recover gracefully from node failures.
Running these systems inside Kubernetes adds layers of complexity. For example, a PostgreSQL cluster cannot simply be redeployed like a web service without risking data corruption. Backups must be consistent across replicas, schema migrations need to be coordinated, and scaling involves more than adding replicas: it requires balancing connections, storage, and replication lag. In short, while Kubernetes abstracts many operational challenges for stateless workloads, it leaves much of the burden of stateful services to operators and developers.
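To make the contrast concrete: a stateless scale-out just adds a pod, but a stateful one has to consult cluster health first. The sketch below (hypothetical field names and thresholds, not any real operator's logic) models a guard that refuses to add a replica while replication lag or connection pressure is too high:

```python
from dataclasses import dataclass

# Hypothetical model of a stateful scale-out decision. Thresholds and
# field names are illustrative assumptions, not taken from any operator.
@dataclass
class ClusterState:
    replicas: int
    max_replication_lag_s: float   # worst-case lag across replicas
    connections_used: int
    connections_limit: int

def can_add_replica(state, lag_budget_s=5.0, conn_headroom=0.2):
    """A new replica must initial-sync from the primary, so the cluster
    must be healthy first: lag within budget and enough connection
    headroom left for the replication stream."""
    if state.max_replication_lag_s > lag_budget_s:
        return False, "replication lag too high for a safe initial sync"
    free = 1 - state.connections_used / state.connections_limit
    if free < conn_headroom:
        return False, "not enough connection headroom on the primary"
    return True, "ok"

healthy = ClusterState(replicas=3, max_replication_lag_s=0.4,
                       connections_used=120, connections_limit=200)
lagging = ClusterState(replicas=3, max_replication_lag_s=30.0,
                       connections_used=120, connections_limit=200)
print(can_add_replica(healthy)[0], can_add_replica(lagging)[0])  # True False
```

A stateless deployment needs none of these checks, which is exactly the asymmetry the paragraph above describes.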
Kubernetes Sprawl Means More Clusters, More Problems
According to DZone’s 2024 Kubernetes in the Enterprise Trend Report, three-quarters of organizations now use Kubernetes, and nearly 80 percent rely on it in production or development. The challenge, then, is no longer running a single cluster but managing many. Enterprises often operate dozens of clusters across clouds, geographies, or business units.
Each of these clusters tends to evolve its own stack of operators, manifests, and policies. Some teams lean on hyperscaler-managed databases, others deploy open-source operators, and still others rely on VM-based solutions for workloads not yet migrated to Kubernetes. The result is what many call Kubernetes sprawl: an environment where every cluster looks different, workflows are inconsistent, and platform teams find themselves stitching together a patchwork of tools and practices.
The cost of this fragmentation is significant. Developers spend more time learning how to request or connect to services in different environments than writing features. Platform operators are left managing a tangle of upgrades, backups, and scaling processes across heterogeneous systems. Inconsistent policies create gaps in security and observability, and enterprises end up spending more money supporting multiple overlapping approaches instead of consolidating on a unified model.
Lessons from Cloud Foundry
The Cloud Foundry community encountered a similar problem years ago. Developers wanted to move quickly, but provisioning a database or message queue often meant filing a ticket, waiting days for manual setup, and navigating inconsistent processes across environments. The Open Service Broker API addressed this by creating a consistent abstraction layer. Developers could declare a dependency on a service, and the platform handled the rest whether the service ran on VMs, containers, or external infrastructure.
What made this model powerful was not just the simplicity it offered developers but the control it gave operators. They could standardize automation for backups and scaling, enforce security policies uniformly, and deliver a predictable developer experience across environments. In effect, Cloud Foundry abstracted away the infrastructure while maintaining enterprise-grade governance.
That combination of developer productivity and operational consistency is precisely what organizations running Kubernetes need today.
Platform Engineering for Data Services
This is where platform engineering comes in. Rather than expecting each development team to master the intricacies of stateful services in Kubernetes, platform engineering applies the same principle that Cloud Foundry pioneered: hide complexity behind an opinionated, automated platform layer.
For data services, this means developers should be able to declare the need for a database without worrying about the mechanics of provisioning storage volumes, configuring replication, or scheduling backups. Operators should automate lifecycle tasks (backups, restores, scaling, and upgrades) so they become part of the platform rather than one-off manual interventions. And at the organizational level, policies around security, compliance, and cost should be expressed as code and applied consistently across all environments.
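The "policies as code" idea can be sketched in a few lines: the organization expresses its rules as plain data, and every service request is validated against them the same way in every environment. The policy fields and request shape below are hypothetical, invented for illustration rather than drawn from any specific platform:

```python
# Illustrative "policy as code" check. Policies are plain data evaluated
# against a developer's service request; all field names are assumptions.
POLICY = {
    "allowed_engines": {"postgresql", "rabbitmq"},
    "require_backups": True,
    "max_storage_gi": 500,
}

def validate_request(request, policy=POLICY):
    violations = []
    if request.get("engine") not in policy["allowed_engines"]:
        violations.append(f"engine {request.get('engine')!r} not allowed")
    if policy["require_backups"] and not request.get("backup_schedule"):
        violations.append("a backup schedule is required")
    if request.get("storage_gi", 0) > policy["max_storage_gi"]:
        violations.append("requested storage exceeds the policy limit")
    return violations

ok = {"engine": "postgresql", "backup_schedule": "0 2 * * *", "storage_gi": 100}
bad = {"engine": "mysql", "storage_gi": 1000}
print(validate_request(ok))   # []
print(validate_request(bad))  # three violations
```

Because the policy is data, it can be versioned, reviewed, and applied identically across every cluster, which is the consistency argument made above.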
Another emerging practice is the use of ephemeral services tied to short-lived environments such as feature branches or pull requests. This allows developers to test against real databases without carrying the operational burden of long-lived instances. Finally, as multi-cluster environments become the norm, there is growing recognition that managing data services requires a control-plane perspective, not to orchestrate every Kubernetes resource, but to deliver consistency for the subset of services that developers depend on most.
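The ephemeral-service pattern mentioned above is mostly a naming-and-lifecycle convention: derive an instance identity from the branch or pull request, attach an expiry, and let the platform garbage-collect anything past its deadline. A minimal sketch, with assumed naming rules and TTL:

```python
import re
from datetime import datetime, timedelta, timezone

# Sketch of ephemeral, branch-scoped database instances. The platform
# derives an instance name and expiry from the branch; the naming rule
# and 24-hour TTL are illustrative assumptions.
def ephemeral_instance(branch, ttl_hours=24, now=None):
    now = now or datetime.now(timezone.utc)
    # Kubernetes-style names: lowercase alphanumerics and dashes only.
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")[:40]
    return {
        "name": f"pg-{slug}",
        "expires_at": now + timedelta(hours=ttl_hours),
    }

inst = ephemeral_instance("feature/ADD-login",
                          now=datetime(2025, 1, 1, tzinfo=timezone.utc))
print(inst["name"])  # pg-feature-add-login
```

Tying the expiry to the instance at creation time is what keeps short-lived environments from quietly accumulating into the long-lived operational burden the paragraph warns about.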
Emerging Approaches
Several open-source projects are exploring how to address these challenges. One example is Klutch, which provides a centralized control plane for provisioning and managing data services across Kubernetes clusters. Instead of forcing developers to navigate different operators or cloud services in each environment, Klutch offers a consistent interface that abstracts away the details of where and how a service is run.
While the implementation details differ, the philosophy is familiar to anyone who has worked with Cloud Foundry. Abstraction, automation, and opinionated workflows make developers more productive while allowing operators to maintain control and consistency at scale. In this sense, Kubernetes is arriving at the same realization Cloud Foundry reached years ago: stateful services need a platform-oriented approach.
What’s Still Missing?
Despite promising progress, the Kubernetes ecosystem still lacks a unifying standard like OSBAPI. Each platform operator defines its own custom resources, and interoperability between them is limited. This creates friction when organizations want to mix and match services or move workloads across clusters and clouds. It also raises the risk of vendor lock-in, as developers become tied to a specific operator or managed service.
There is also an open question about how much standardization is desirable. One of Kubernetes’ strengths is its flexibility, and overly rigid abstractions risk constraining innovation. The challenge for the community will be to strike a balance: standardize the developer-facing interface while allowing room for diverse implementations underneath.
Looking Ahead
The Cloud Foundry community has long championed the idea that developer experience and operational consistency are not mutually exclusive; they are two sides of the same coin. As Kubernetes adoption matures, the same challenge has re-emerged: how to provide developers with fast, self-service access to the data services they need while ensuring operators can enforce standards at scale.
The lesson from Cloud Foundry is that the answer lies in abstraction and automation. By lifting complexity out of the developer’s path and embedding operational best practices into the platform, organizations can reduce toil, improve security, and accelerate delivery.
Just as OSBAPI transformed how developers consumed services in the Cloud Foundry ecosystem, the next generation of platform engineering efforts will shape how data services are managed in Kubernetes. And just as before, the communities that succeed will be the ones that keep developers focused on building value, while giving operators the tools to deliver reliable services behind the scenes.