
Six key design considerations for stateful applications at the Edge

by D F

The edge is where things start getting interesting. It’s not just about making sure applications can scale and handle traffic spikes. You also have to consider how they behave when things go wrong. For example, what happens if an application goes down? How do you recover quickly? How do you make sure users don’t lose data? These questions all relate to statefulness.

Stateless applications are easy to deploy and scale: because no instance holds data of its own, any server can handle any request, and a failed instance can simply be replaced.

Stateful applications are more complex to deploy, but they store information locally, which makes access and recovery faster. The tradeoff is added complexity, such as keeping multiple copies of the data consistent.


Here are six key considerations for stateful applications:


Scalability

Edge locations typically lack the compute and storage capacity to run deep analytics over vast amounts of data. However, they often have large amounts of local memory (RAM) available.

To take full advantage of this resource, you must be able to scale your application horizontally. Horizontal scaling allows you to add additional servers as needed. In addition, you can use load balancing techniques such as round-robin DNS and sticky sessions to distribute requests evenly across multiple servers.
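The combination of round-robin distribution and sticky sessions can be sketched as follows; this is a minimal illustration, and the server names and session IDs are assumptions, not details from the article:

```python
import itertools

# Hypothetical edge server pool; the names are illustrative only.
SERVERS = ["edge-a:8080", "edge-b:8080", "edge-c:8080"]

class StickyRoundRobin:
    """Round-robin balancing with sticky sessions: new sessions are
    assigned servers in rotation, and an existing session keeps
    hitting the server it was first pinned to."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
        self._sessions = {}  # session_id -> pinned server

    def pick(self, session_id):
        # Reuse the pinned server if this session already has one.
        if session_id not in self._sessions:
            self._sessions[session_id] = next(self._cycle)
        return self._sessions[session_id]

lb = StickyRoundRobin(SERVERS)
print(lb.pick("alice"))  # first session -> first server in rotation
print(lb.pick("bob"))    # next session -> next server
print(lb.pick("alice"))  # sticky: same server as before
```

In production the same behavior typically comes from DNS records or the load balancer itself; the sketch just shows the two policies side by side.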

You may also need to think about vertical scaling, which means adding CPU, memory, or storage to existing nodes rather than adding more nodes. Resizing a node usually requires moving its workloads elsewhere first, so plan for that disruption.

Latency and Throughput

It’s often not feasible to send telemetry or transaction data back to cloud applications and wait for a decision. An autonomous vehicle that detects a traffic jam, for example, cannot afford a round trip to the cloud before deciding whether to turn left or right at the next intersection. Instead, real-time analytics runs at the edge of the network to detect anomalies and act on them locally.

This type of analysis collects sensor data and processes it on the edge node itself; only summaries or flagged events need to be sent to the cloud, and any results that come back can refine what runs at the edge.
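A simple way to detect anomalies locally is a rolling-window threshold. The sketch below is an assumption about one possible approach, not the article's method: it keeps recent readings in RAM and flags values far from the recent mean, so only flagged readings would need to leave the edge node.

```python
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Keep a rolling window of recent sensor readings in local RAM and
    flag values more than `k` standard deviations from the window mean."""

    def __init__(self, window=50, k=3.0):
        self.window = deque(maxlen=window)  # bounded local memory
        self.k = k

    def observe(self, value):
        is_anomaly = False
        if len(self.window) >= 2:  # need at least two points for stdev
            m, s = mean(self.window), stdev(self.window)
            is_anomaly = s > 0 and abs(value - m) > self.k * s
        self.window.append(value)
        return is_anomaly

detector = EdgeAnomalyDetector(window=100, k=3.0)
for reading in [10.0, 10.2, 9.8, 10.1, 9.9]:
    detector.observe(reading)      # normal readings build the baseline
print(detector.observe(120.0))     # a spike well outside the baseline -> True
```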

Network Partitions

Depending on the quality of the network between the edge and the cloud, the system may operate in different modes. A partition occurs when two or more parts of the network can no longer communicate with each other.

When this happens, edge nodes cannot coordinate with the cloud to reach consensus on decisions. To tolerate partitions, design for them up front: replicate data across multiple edge nodes, or store data redundantly on both sides of a potential partition so that each side can keep operating independently.
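One common partition-tolerance pattern is to buffer writes locally while the link is down and replay them when it returns. This is a minimal sketch of that idea, assuming a `send` callable that stands in for a real replication client:

```python
class BufferedReplicator:
    """Queue records locally while the link to the cloud side is down,
    then replay them in order once connectivity returns."""

    def __init__(self, send):
        self._send = send
        self._pending = []

    def write(self, record, link_up):
        # Always buffer first so nothing is lost if the link drops mid-write.
        self._pending.append(record)
        if link_up:
            self.flush()

    def flush(self):
        # Replay buffered writes in arrival order.
        while self._pending:
            self._send(self._pending.pop(0))

sent = []
replicator = BufferedReplicator(send=sent.append)
replicator.write({"sensor": 7, "reading": 21.5}, link_up=False)  # buffered
replicator.write({"sensor": 7, "reading": 22.1}, link_up=True)   # both flushed
```

A real system would also need durable buffering and conflict resolution for writes made on both sides of the partition; the sketch only covers ordering and replay.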

Other Failures

At the near edge, node and pod failures are common, but applications can spread across racks and zones for higher resiliency. Even with this fault isolation in place, regional outages can still occur.

For example, if power were to go out in one region, all edge nodes within that region would likely experience similar issues. As a result, you might see increased latency or even loss of connectivity.

Software Stack

When choosing components for the software stack, it is important to consider how agile and easy to use they are. Engineering teams need to iterate on applications quickly. One way to achieve this is to use well-established frameworks that make developers productive immediately, alongside a mature, feature-rich database (such as MySQL) that developers already know well.

In addition, you should look for open-source tools that provide a rich set of features.



Security

Security is extremely important for applications running at the edge. The inherently distributed nature of edge computing creates a large attack surface, so it is important to apply least privilege, zero trust, and zero-touch provisioning to all service and component deployments.

zero trust: Zero trust starts from the assumption that no user, device, or service is trusted by default, even inside the network perimeter. Every request must be authenticated and authorized, which isolates accounts from one another and limits what a malicious actor can reach.

least privilege: Least privilege means granting each user or service only the minimum set of permissions required to do its job, and nothing more.

zero-touch provisioning: Zero-touch provisioning lets administrators deploy new services without manually installing packages on each node. Instead, they write a configuration file describing what should be installed, and the provisioning system applies it automatically.
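The least-privilege principle above boils down to default-deny authorization. The sketch below illustrates it; the role names and permission strings are made up for the example:

```python
# Hypothetical role-to-permission map; the names are illustrative only.
ROLE_PERMISSIONS = {
    "sensor-reader": {"telemetry:read"},
    "deploy-agent":  {"config:read", "service:install"},
}

def is_allowed(role, permission):
    """Least privilege with default deny: a request succeeds only if the
    role was explicitly granted that exact permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("sensor-reader", "telemetry:read"))   # True
print(is_allowed("sensor-reader", "service:install"))  # False: not granted
print(is_allowed("unknown-role", "config:read"))       # False: default deny
```

The key design choice is that an unknown role or permission falls through to a denial rather than an error path that might be mishandled.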


Edge computing is a paradigm shift from traditional centralized server-based computing to distributed computing. There isn’t one single database reference architecture that works well for every application in this environment. However, depending on the requirements of an application and the tradeoffs involved in meeting them, enterprises may make different design choices to satisfy their needs, and then adapt those choices when needs evolve.

