
4 things you need to understand about edge computing

Edge computing has claimed a spot in the technology zeitgeist as one of the topics that signals novelty and cutting-edge thinking. For a few years now, it has been assumed that this way of doing computing is, one way or another, the future. But until recently the discussion has been mostly hypothetical, because the infrastructure required to support edge computing has not been available.

That is now changing as a variety of edge computing resources, from micro data centers to specialized processors to necessary software abstractions, are making their way into the hands of application developers, entrepreneurs, and large enterprises. We can now look beyond the theoretical when answering questions about edge computing’s usefulness and implications. So, what does the real-world evidence tell us about this trend? In particular, is the hype around edge computing deserved, or is it misplaced?

Below, I’ll outline the current state of the edge computing market. Distilled down, the evidence shows that edge computing is a real phenomenon born of a burgeoning need to decentralize applications for cost and performance reasons. Some aspects of edge computing have been over-hyped, while others have gone under the radar. The following four takeaways attempt to give decision makers a pragmatic view of the edge’s capabilities now and in the future.

1. Edge computing isn’t just about latency

Edge computing is a paradigm that brings computation and data storage closer to where they are needed. It stands in contrast to the traditional cloud computing model, in which computation is centralized in a handful of hyperscale data centers. For the purposes of this article, the edge can be anywhere that is closer to the end user or device than a traditional cloud data center. It could be 100 miles away, one mile away, on-premises, or on-device. Whatever the approach, the traditional edge computing narrative has emphasized that the power of the edge is to minimize latency, either to improve user experience or to enable new latency-sensitive applications. This does edge computing a disservice. While latency mitigation is an important use case, it is probably not the most valuable one. Another use case for edge computing is to minimize network traffic going to and from the cloud, or what some are calling cloud offload, and this will probably deliver at least as much economic value as latency mitigation.

The underlying driver of cloud offload is immense growth in the amount of data being generated, be it by users, devices, or sensors. “Fundamentally, the edge is a data problem,” Chetan Venkatesh, CEO of Macrometa, a startup tackling data challenges in edge computing, told me late last year. Cloud offload has arisen because it costs money to move all this data, and many would rather not move it if they don’t have to. Edge computing provides a way to extract value from data where it is generated, never moving it beyond the edge. If necessary, the data can be pruned down to a subset that is more economical to send to the cloud for storage or further analysis.
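To make the pruning idea concrete, here is a minimal sketch in TypeScript of what edge-side reduction might look like: raw, high-frequency sensor readings are summarized locally, and only the compact summary is sent upstream. All names and the ingest URL are hypothetical, not any particular vendor’s API.

```typescript
// Edge-side pruning: summarize a window of raw readings locally and ship
// only the compact summary to the cloud. All names and the ingest URL are
// hypothetical placeholders.

interface Reading {
  sensorId: string;
  value: number;
  timestamp: number; // epoch milliseconds
}

interface Summary {
  sensorId: string;
  windowStart: number;
  windowEnd: number;
  count: number;
  mean: number;
  max: number;
}

// Collapse a window of raw readings into a single summary record.
function summarize(sensorId: string, readings: Reading[]): Summary {
  const values = readings.map((r) => r.value);
  const times = readings.map((r) => r.timestamp);
  return {
    sensorId,
    windowStart: Math.min(...times),
    windowEnd: Math.max(...times),
    count: readings.length,
    mean: values.reduce((a, b) => a + b, 0) / values.length,
    max: Math.max(...values),
  };
}

// Only the summary crosses the network; the raw readings never leave the edge.
async function flushWindow(sensorId: string, readings: Reading[]): Promise<void> {
  await fetch("https://ingest.cloud.example.com/summaries", { // placeholder URL
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(summarize(sensorId, readings)),
  });
}
```

A window of thousands of raw readings collapses into one small JSON record, which is the economic argument for cloud offload in miniature.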


A typical use of cloud offload is to process video or audio data, two of the most bandwidth-hungry data types. A retailer in Asia with 10,000+ locations is processing both, using edge computing for video surveillance and in-store language translation services, according to a contact I spoke to recently who was involved in the deployment. But there are other sources of data that are similarly expensive to transmit to the cloud. According to another contact, a large IT software vendor is analyzing real-time data from its customers’ on-premises IT infrastructure to preempt problems and optimize performance. It uses edge computing to avoid backhauling all this data to AWS. Industrial equipment also generates an immense amount of data and is a prime candidate for cloud offload.

2. The edge is an extension of the cloud

Despite early proclamations that the edge would displace the cloud, it is more accurate to say that the edge expands the reach of the cloud. It will not put a dent in the ongoing trend of workloads migrating to the cloud. But there is a flurry of activity underway to extend the cloud formula of on-demand resource availability and abstraction of physical infrastructure to locations increasingly distant from traditional cloud data centers. These edge locations will be managed using tools and approaches evolved from the cloud, and over time the line between cloud and edge will blur.

The fact that the edge and the cloud are part of the same continuum is evident in the edge computing initiatives of public cloud providers like AWS and Microsoft Azure. If you are an enterprise looking to do on-premises edge computing, Amazon will now send you an AWS Outpost – a fully assembled rack of compute and storage that mimics the hardware design of Amazon’s own data centers. It is installed in a customer’s own data center and monitored, maintained, and upgraded by Amazon. Importantly, Outposts run many of the same services AWS users have come to rely on, like the EC2 compute service, making the edge operationally similar to the cloud. Microsoft has a similar aim with its Azure Stack Edge product. These offerings send a clear signal that the cloud providers envision cloud and edge infrastructure unified under one umbrella.
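To illustrate the operational similarity, here is a hedged sketch using the AWS SDK for JavaScript (v3): to a first approximation, launching an EC2 instance on an Outpost is the same RunInstances call as in the cloud, just targeted at a subnet that lives on the Outpost rack. All IDs below are placeholders.

```typescript
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

// Same control plane, different location: launching into a subnet that
// lives on an Outpost places the instance on the on-premises rack rather
// than in a cloud availability zone. All IDs are placeholders.
const ec2 = new EC2Client({ region: "us-west-2" });

async function launchOnOutpost(): Promise<void> {
  await ec2.send(
    new RunInstancesCommand({
      ImageId: "ami-0123456789abcdef0",     // placeholder AMI
      InstanceType: "m5.xlarge",
      MinCount: 1,
      MaxCount: 1,
      SubnetId: "subnet-0123456789abcdef0", // subnet created on the Outpost
    })
  );
}

launchOnOutpost().catch(console.error);
```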

3. Edge infrastructure is arriving in phases

While some applications are best run on-premises, in many cases application owners would like to reap the benefits of edge computing without having to support any on-premises footprint. This requires access to a new kind of infrastructure, something that looks a lot like the cloud but is much more geographically distributed than the few dozen hyperscale data centers that comprise the cloud today. This kind of infrastructure is just now becoming available, and it’s likely to evolve in three phases, with each phase extending the edge’s reach by means of a wider and wider geographic footprint.

Phase 1: Multi-Region and Multi-Cloud

The first step toward edge computing for a large swath of applications will be something that many might not consider edge computing, but which can be seen as one end of a spectrum that includes all the edge computing approaches. This step is to leverage multiple regions offered by the public cloud providers. For example, AWS has data centers in 22 geographic regions, with four more announced. An AWS customer serving users in both North America and Europe might run its application in both the Northern California region and the Frankfurt region, for instance. Going from one region to multiple regions can drive a big reduction in latency, and for a large set of applications, this will be all that’s needed to deliver a good user experience.
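As a sketch of how an application might exploit such a multi-region deployment, the snippet below times a round trip to each region’s endpoint and routes the client to the fastest one. The URLs are placeholders for a hypothetical app running in the two regions mentioned above; real deployments usually delegate this to DNS-based latency routing, but the principle is the same.

```typescript
// Probe each regional deployment and route to the fastest. The URLs are
// placeholders for a hypothetical app running in two AWS regions.

const regions: Record<string, string> = {
  "us-west-1": "https://us-west-1.app.example.com/health",       // N. California
  "eu-central-1": "https://eu-central-1.app.example.com/health", // Frankfurt
};

async function probe(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: "HEAD" });
  return performance.now() - start;
}

async function nearestRegion(): Promise<string> {
  const timings = await Promise.all(
    Object.entries(regions).map(async ([region, url]) => ({
      region,
      ms: await probe(url),
    }))
  );
  timings.sort((a, b) => a.ms - b.ms);
  return timings[0].region;
}

nearestRegion().then((region) => console.log(`route traffic to ${region}`));
```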

At the same time, there is a trend toward multi-cloud approaches, driven by an array of considerations including cost efficiencies, risk mitigation, avoidance of vendor lock-in, and desire to access best-of-breed services offered by different providers. “Doing multi-cloud and getting it right is a very important strategy and architecture today,” Mark Weiner, CMO at distributed cloud startup Volterra, told me. A multi-cloud approach, like a multi-region approach, marks an initial step toward distributed workloads on a spectrum that progresses toward more and more decentralized edge computing approaches.

Phase 2: The Regional Edge

The second phase in the edge’s evolution extends the edge a layer deeper, leveraging infrastructure in hundreds or thousands of locations instead of hyperscale data centers in just a few dozen cities. It turns out there is a set of players who already have an infrastructure footprint like this: Content Delivery Networks. CDNs have been engaged in a precursor to edge computing for two decades now, caching static content closer to end users in order to improve performance. While AWS has 22 regions, a typical CDN like Cloudflare has 194 locations.

What’s different now is that these CDNs have begun to open up their infrastructure to general-purpose workloads, not just static content caching. CDNs like Cloudflare, Fastly, Limelight, StackPath, and Zenlayer all offer some combination of container-as-a-service, VM-as-a-service, bare-metal-as-a-service, and serverless functions today. In other words, they are starting to look more like cloud providers. Forward-thinking cloud providers like Packet and Ridge are also offering up this kind of infrastructure, and in turn AWS has taken an initial step toward offering more regionalized infrastructure, introducing the first of what it calls Local Zones in Los Angeles, with additional ones promised.
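To give a flavor of what a general-purpose workload on a CDN looks like, here is a minimal Cloudflare Workers-style handler in TypeScript (service-worker syntax; the FetchEvent and Request types come from Cloudflare’s workers-types package). Each CDN’s programming model differs, so treat this as an illustration rather than a survey.

```typescript
// A minimal Cloudflare Worker in the service-worker style. The same script
// runs in every point of presence, so each request is handled at the PoP
// nearest the user. Types are provided by @cloudflare/workers-types.

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/hello") {
    // Arbitrary per-request logic, not just static caching.
    return new Response("Hello from the edge!", {
      headers: { "content-type": "text/plain" },
    });
  }
  return fetch(request); // fall through to the origin
}
```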

Phase 3: The Access Edge

The third phase of the edge’s evolution drives the edge even further outward, to the point where it is just one or two network hops away from the end user or device. In traditional telecommunications terminology this is called the Access portion of the network, so this type of architecture has been labeled the Access Edge. The typical form factor for the Access Edge is a micro data center, which could range from a single rack to roughly the size of a semi-trailer, and could be deployed on the side of the road or at the base of a cellular network tower, for example. Behind the scenes, innovations in things like power and cooling are enabling higher and higher densities of infrastructure to be deployed in these small-footprint data centers.

New entrants such as Vapor IO, EdgeMicro, and EdgePresence have begun to build these micro data centers in a handful of US cities. 2019 was the first major buildout year, and 2020 – 2021 will see continued heavy investment in these buildouts. By 2022, edge data center returns will be in focus for those who made the capital investments in them, and ultimately these returns will reflect the answer to the question: are there enough killer apps for bringing the edge this close to the end user or device?

We are very early in the process of getting an answer to this question. A number of practitioners I’ve spoken to recently have been skeptical that the micro data centers in the Access Edge are justified by enough marginal benefit over the regional data centers of the Regional Edge. The Regional Edge is already being leveraged in many ways by early adopters, including for a variety of cloud offload use cases as well as latency mitigation in user-experience-sensitive domains like online gaming, ad serving, and e-commerce. By contrast, the applications that need the super-low latencies and very short network routes of the Access Edge tend to sound further off: autonomous vehicles, drones, AR/VR, smart cities, remote-guided surgery. More crucially, these applications must weigh the benefits of the Access Edge against doing the computation locally with an on-premises or on-device approach. However, a killer application for the Access Edge could certainly emerge – perhaps one that is not in the spotlight today. We will know more in a few years.

4. New software is needed to manage the edge

I’ve outlined above how edge computing describes a variety of architectures and that the “edge” can be located in many places. However, the ultimate direction of the industry is one of unification, toward a world in which the same tools and processes can be used to manage cloud and edge workloads regardless of where the edge resides. This will require the evolution of the software used to deploy, scale, and manage applications in the cloud, which has historically been architected with a single data center in mind.

Startups such as Ori, Rafay Systems, and Volterra, and big company initiatives like Google’s Anthos, Microsoft’s Azure Arc, and VMware’s Tanzu are evolving cloud infrastructure software in this way. Virtually all of these products have a common denominator: They are based on Kubernetes, which has emerged as the dominant approach to managing containerized applications. But these products move beyond the initial design of Kubernetes to support a new world of distributed fleets of Kubernetes clusters. These clusters may sit atop heterogeneous pools of infrastructure comprising the “edge,” on-premises environments, and public clouds, but thanks to these products they can all be managed uniformly.
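As a rough sketch of the primitive these products build on, the snippet below uses the official Kubernetes JavaScript client (@kubernetes/client-node, assuming its pre-1.0 API in which list calls return a { response, body } pair) to walk every cluster context in a kubeconfig, whether the cluster lives in a cloud region, on-premises, or at an edge site.

```typescript
import * as k8s from "@kubernetes/client-node";

// Walk every cluster context in the local kubeconfig (cloud regions,
// on-prem sites, edge locations) and count the nodes in each.
async function surveyFleet(): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // reads ~/.kube/config

  for (const ctx of kc.getContexts()) {
    kc.setCurrentContext(ctx.name);
    const core = kc.makeApiClient(k8s.CoreV1Api);
    const { body } = await core.listNode(); // pre-1.0 client API
    console.log(`${ctx.name}: ${body.items.length} nodes`);
  }
}

surveyFleet().catch(console.error);
```

The commercial platforms layer placement policy, fleet-wide rollout, and lifecycle management on top of this kind of per-cluster access.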

Initially, the biggest opportunity for these offerings will be in supporting Phase 1 of the edge’s evolution, i.e. moderately distributed deployments that leverage a handful of regions across one or more clouds. But this puts them in a good position to support the evolution to the more distributed edge computing architectures beginning to appear on the horizon. “Solve the multi-cluster management and operations problem today and you’re in a good position to address the broader edge computing use cases as they mature,” Haseeb Budhani, CEO of Rafay Systems, told me recently.

On the edge of something great

Now that the resources to support edge computing are emerging, edge-oriented thinking will become more prevalent among those who design and support applications. Following an era in which the defining trend was centralization in a small number of cloud data centers, there is now a countervailing force in favor of increased decentralization. Edge computing is still in the very early stages, but it has moved beyond the theoretical and into the practical. And one thing we know is this industry moves quickly. The cloud as we know it is only 14 years old. In the grand scheme of things, it will not be long before the edge has left a big mark on the computing landscape.

James Falkoff is an investor with Boston-based venture capital firm Converge.
