Cloud-Native Speed and Performance Matters

In November 2020, an industry research report often cited by Microsoft projected that 500,000,000 new software applications would be created over the next five years.

In an increasingly crowded ecosystem of software applications across every conceivable market, it is no longer viable to compete solely on functional differentiators — the features and capabilities that make a software application unique. In a cloud-native world, future software applications will need to compete on speed, reliability, and security in addition to their core functional differentiators.

Software applications designed and built in 2021 should be different from those built only a few years ago. Cloud service providers and their Kubernetes and serverless execution environments have matured. Old and new programming languages alike are innovating to make security, runtime durability (primarily through increased type safety), and performance complementary.

Most importantly, a large body of knowledge shows the financial benefits of software systems designed with the customer's performance in mind, rather than primarily for the convenience of the developer.

Zectonal builds with purpose — software designed to meet the enterprise cloud-native performance, durability, and security requirements that customers should demand from all of their software suppliers.

Setting the Stage for Performance — Utility Billing

I was fortunate to have a bird's-eye view of a new way for enterprise customers to purchase compute and storage resources via metered utility billing.

An important and sometimes stressful part of my job at AWS was explaining their monthly bills to those early customers, many of whom were used to the predictable flat-rate billing of years past.

More than a few were surprised when they received their first metered bill. Utility computing wasn’t necessarily new, but AWS made it mainstream and disruptive.

Utility billing set the stage for a long-game competition in which speed in a cloud-native world will be a fundamental differentiator. The faster a software program runs and completes a unit of work, the less it costs per unit of work. The more times a program can run, the more units of work get executed. Slower-running programs will cost more and do less than their faster equivalents. Conquering the much-espoused paradox — increased speed, increased reliability, and lower cost — is possible, but it certainly comes with an asterisk attached. The playing field for this long-game competition, where utility billing and scale are most pronounced, is already visible in Big Data analytics, Kubernetes-hosted environments, and serverless computing.

Distributed Systems & Big Data

Apache Spark is one of the more commonly used and mature frameworks for processing large amounts of data (i.e. "Big Data"). For most of Apache Spark's decade-long history, there was a performance discrepancy between Python and Java Virtual Machine ("JVM") languages such as Scala or Java. The latest major release of Apache Spark (version 3 at the time of writing) promises to close that gap.

Python is generally understood to be an easier language to learn, allowing more people to develop software applications. In Apache Spark's case, the benefit of introducing Python was that more developers started writing Spark applications, which is generally a very good thing. When operating a large Apache Spark cluster in the cloud, though, speed matters.

Not all Spark workloads are the same, but consider a computationally intensive job such as geospatial analytics running on an optimally utilized cluster (i.e. always a job queued for immediate processing) of ten m5.xlarge EC2 nodes with on-demand utility pricing. In real-world examples like this, it was found that using a JVM language instead of Python could lead to cost savings of upwards of ten thousand dollars a month. At any scale — whether a mid-sized firm operating a single Spark cluster or a large enterprise operating dozens — the financial imperative is obvious. An analytic application written in Python that costs more and runs slower is never as advantageous as one written for the JVM that runs faster and can process more data in the same period of time.
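The arithmetic behind that claim can be sketched in a few lines. The numbers below are purely illustrative (an approximate on-demand m5.xlarge hourly rate and hypothetical job runtimes), not measured benchmarks; the point is only that cost per unit of work scales linearly with how long a job holds the cluster.

```rust
// Sketch: cost per job for two functionally equivalent Spark jobs.
// All figures are illustrative assumptions, not measured benchmarks.
fn cost_per_job(nodes: u32, hourly_rate_per_node: f64, job_hours: f64) -> f64 {
    nodes as f64 * hourly_rate_per_node * job_hours
}

fn main() {
    let nodes = 10;
    let rate = 0.192; // approximate on-demand m5.xlarge USD/hour (region-dependent)

    // Hypothetical runtimes: suppose the JVM job finishes the same unit of
    // work in 1.0 hour that the Python job needs 2.5 hours to complete.
    let jvm = cost_per_job(nodes, rate, 1.0);
    let py = cost_per_job(nodes, rate, 2.5);

    println!("JVM:    ${:.2} per job", jvm);
    println!("Python: ${:.2} per job", py);

    // On a fully utilized cluster the slower job also completes fewer units
    // of work per month, so the gap compounds with job volume and with the
    // number of clusters an enterprise operates.
}
```

The same formula applies whether the unit of work is a single geospatial query or a nightly batch: halve the runtime and the per-job cost halves with it.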


This financial imperative is even more pronounced with the adoption of Kubernetes. Kubernetes orchestrates containerized software application workloads over underlying compute and storage infrastructure with the expectation that doing so increases both utilization and reliability. Hosting is more efficient when disparate workloads can run on the same shared-infrastructure Kubernetes cluster rather than on virtualized or bare-metal hosts. As more applications are designed and built to run on Kubernetes, maintaining a high degree of utilization within a Kubernetes cluster will be a top financial priority.

Functionally equivalent application workloads (different software applications that essentially perform the same function) that consume more CPU (because they run slower) and more memory (due to garbage collection or other underlying factors) will be viewed as a "financial drain" on the enterprise. These less-than-optimal workloads take CPU and memory away from the other workloads sharing the underlying cluster.
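In Kubernetes terms, the claim a workload makes on shared cluster capacity is expressed through its container resource requests and limits; the scheduler reserves requested capacity whether or not the application uses it efficiently. The fragment below is a sketch with hypothetical names and values, showing where a heavier implementation's footprint becomes visible:

```yaml
# Sketch of a Deployment resource stanza (names and values are illustrative).
# Requested CPU/memory is reserved by the scheduler and unavailable to other
# workloads on the cluster, so a slower, GC-heavy implementation that must
# request more is a direct cost to cluster utilization.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-worker        # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: analytics-worker
  template:
    metadata:
      labels:
        app: analytics-worker
    spec:
      containers:
        - name: worker
          image: example.com/analytics-worker:1.0   # hypothetical image
          resources:
            requests:
              cpu: "500m"       # a leaner implementation might request 100m
              memory: "256Mi"   # a GC-heavy one might need 1Gi or more
            limits:
              cpu: "1"
              memory: "512Mi"
```

Multiply the gap between a lean and a heavy request across hundreds of workloads and the "financial drain" becomes measurable in node count.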

Assuming even a fraction of the 500,000,000 new applications developed over the next five years make it to a production Kubernetes cluster, chances are significant that many of them will be sub-optimal from a financial cost perspective.

Application workloads built with purpose, like Zectonal's, that optimize for speed, reliability, and security will beat sub-optimal applications that compromise for convenience.


Serverless computing represents the quintessential and most transparent utility billing platform — customers pay for the number of requests and the duration for which their software application performs units of work. Everything but the software code is fully managed and standardized. Serverless is where the performance knobs of programming languages really come into play. Recent announcements related to GraalVM's native image capabilities and AWS Lambda runtimes for C++ and Rust are clear indicators that application developers intend to build with performance in mind for this long-game competition.
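That request-plus-duration billing model can be sketched directly. The rates and memory sizes below are illustrative placeholders, not a quote of any provider's current pricing; what matters is that cost multiplies invocation count by memory by duration, so a slower handler pays on every single request:

```rust
// Sketch of serverless (Lambda-style) utility billing: cost scales with
// request count, memory allocation, and per-request duration.
// Rates below are illustrative assumptions, not current published pricing.
fn invocation_cost(requests: u64, memory_gb: f64, seconds_per_req: f64) -> f64 {
    let per_million_requests = 0.20; // USD per 1M requests, illustrative
    let per_gb_second = 0.0000166667; // USD per GB-second, illustrative

    let request_cost = requests as f64 / 1_000_000.0 * per_million_requests;
    let compute_cost = requests as f64 * memory_gb * seconds_per_req * per_gb_second;
    request_cost + compute_cost
}

fn main() {
    let requests = 10_000_000;

    // Two functionally equivalent handlers: one finishes in 120 ms on a
    // small memory allocation, the other (slower runtime, heavier GC)
    // needs 600 ms and four times the memory.
    let fast = invocation_cost(requests, 0.128, 0.120);
    let slow = invocation_cost(requests, 0.512, 0.600);

    println!("fast handler: ${:.2}", fast);
    println!("slow handler: ${:.2}", slow);
}
```

Unlike a fixed-size cluster, there is no idle capacity to hide in: every millisecond of handler runtime is metered and billed.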

Enter Rust

Zectonal believes speed, performance, and security matter when building enterprise software applications. These characteristics, when implemented correctly, save customers money, are inherently more secure (which ultimately saves customers money as well), and simply perform better than the alternatives. Rust is a maturing programming language whose adoption grows year over year, and recent news related to the Rust Foundation is very encouraging for Rust's long-term stability. Zectonal has embraced Rust as a primary language for developing our class of enterprise software applications to provide Situational Awareness to an AI-Centric World.
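The durability-through-type-safety point made earlier can be illustrated with a minimal, generic example (the function below is hypothetical, not Zectonal code): Rust's `Option` type forces absent or invalid values to be handled at compile time, turning a whole class of runtime crashes into compile errors.

```rust
// Minimal illustration of durability through type safety: parsing that may
// fail returns Option<u16>, and the compiler refuses any code path that
// uses the value without first handling the failure case.
fn parse_port(input: &str) -> Option<u16> {
    // `ok()` converts the parse Result into an Option, discarding the error.
    // Out-of-range values (e.g. "70000" for a u16) also yield None.
    input.trim().parse::<u16>().ok()
}

fn main() {
    // The match forces both outcomes to be handled explicitly; forgetting
    // the None arm is a compile error, not a production crash.
    match parse_port("8080") {
        Some(port) => println!("listening on port {port}"),
        None => eprintln!("invalid port, refusing to start"),
    }
}
```

The same guarantee costs nothing at runtime — the check is resolved by the type system before the binary ever ships.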

Are you a developer who likes to use Rust, or a cloud-native technologist ready to take on a significant challenge to provide situational awareness for AI? Reach out to learn more about what situational awareness really means to us. We are hiring.
