Edge Computing Overview and Some Applications

Understanding computing at the edge and some of the capabilities of what is considered to be the next step in the evolution of cloud computing.

Karim Awd
AWS in Plain English

--

I have been trying to understand what exactly edge computing is, and I found out that everybody has a different opinion about what it means.

For example, if you are speaking to someone who works at a telecom, they’re going to talk about 5G and how it needs servers inside cell towers for edge computing. Meanwhile, a person working with IoT will talk about moving logic to local places, such as the IoT device itself or a nearby edge server, and the list goes on and on.

So in this article, I will try to explain, from a general point of view, what I learned about edge computing and what the word edge means, and list some existing as well as proposed applications of edge computing.

What is edge computing?

Edge computing is a networking philosophy focused on bringing computation logic as close to the client (end-user/device) as possible in order to reduce latency and bandwidth usage.

In simpler terms, edge computing means moving some of the computation from the cloud to places in the middle, between the cloud and the client, but as close to the client as possible. In some cases, such as IoT and deep learning, some of the cloud’s computation even moves onto the client device itself.

What is the network edge?

For an end device, the edge of the network is where that device, or the local network containing it, meets the wider internet. The important takeaway is that the edge is geographically close to the end device, unlike origin servers and cloud servers, which can be very far from the devices they communicate with.

Applications

There are many existing applications of edge computing, and we will talk about some of them. I encourage you to look up more applications and tools; it will be worth your time!

1- Voice assistants:

When a voice assistant such as Amazon Alexa or Apple Siri is awakened and you ask it something, the voice recording is sent to the cloud for parsing, interpretation, and answering. This explains the latency you notice before getting your response, and it also explains why your assistant asks for an internet connection before processing your request.

But have you noticed that your assistant still wakes up when you say the wake-word, even if you are not connected to the internet? This is because the voice assistant’s wake-word detector is an application of speech recognition at the edge, on the device itself.

The wake-word detector is a specialized, on-device speech recognizer that is always listening just for its wake-up phrase. That is why the assistant wakes up so quickly, and why the detector can recognize that you said the wake-word even without an internet connection.
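
To make the split concrete, here is a minimal Python sketch of the flow described above. It is not a real assistant SDK; all the function names are hypothetical and the detector is stubbed out:

```python
# Minimal sketch: a tiny always-on detector runs locally, and audio is only
# sent to the cloud after the wake-word is heard. All names are hypothetical.

def detect_wake_word(frame: bytes) -> bool:
    # In a real assistant this is a small on-device speech model;
    # here it is stubbed so the example runs anywhere, fully offline.
    return frame.lower().startswith(b"alexa")

def send_to_cloud(utterance: bytes) -> str:
    # Placeholder for the full speech recognition and query answering
    # that actually happens in the cloud and needs an internet connection.
    return f"(cloud answer for {len(utterance)} bytes of audio)"

def assistant_loop(audio_frames):
    for frame in audio_frames:            # always listening, locally
        if detect_wake_word(frame):       # cheap check at the edge
            print(send_to_cloud(frame))   # only now is the network needed

# Only the second "frame" triggers a round trip to the cloud.
assistant_loop([b"background chatter", b"Alexa, what's the weather?"])
```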

2- Wireless video surveillance systems:

Applications like Vigil and commercial devices like Amazon DeepLens follow an edge-based approach: object detection is performed locally to reduce latency, and scenes of interest are uploaded to the cloud for remote viewing only when an interesting object is detected.

This saves bandwidth compared to the naive approach of uploading every frame to the cloud for analysis. The naive approach also scales poorly: if a large number of cameras each upload a full video stream, the uplink bandwidth to the cloud server quickly becomes a bottleneck.
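
As a rough illustration, here is what that filter-at-the-edge loop might look like in Python. The detector and the uploader are hypothetical stubs, not Vigil’s or DeepLens’s actual APIs:

```python
# Sketch of edge-side filtering for a surveillance camera: run detection
# locally and spend uplink bandwidth only on frames that matter.
# detect_objects() and upload_to_cloud() are hypothetical placeholders.

def detect_objects(frame: bytes) -> bool:
    # Stands in for an on-device vision model (person / car / ... detector).
    return b"person" in frame

def upload_to_cloud(frame: bytes) -> None:
    print(f"uploading {len(frame)} bytes for remote viewing")

def surveillance_loop(frames: list[bytes]) -> int:
    uploaded = 0
    for frame in frames:              # every frame is analyzed locally
        if detect_objects(frame):     # interesting scene?
            upload_to_cloud(frame)    # only then is the uplink used
            uploaded += 1
    return uploaded

# Three frames captured, only one is worth uploading.
print(surveillance_loop([b"empty street", b"a person at the door", b"empty street"]))
```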

Proposed applications

1- Gaming at the edge:

To explain this proposed application, we need to talk about cloud gaming first. Take Google Stadia, a cloud-based game streaming service that streams games to you much like you already stream TV shows or music. No installs, no downloads, no waiting for updates: you pay for the game and get instant access to play it in the web browser of a compatible phone or tablet, on any TV, or on any laptop or desktop, regardless of its specs and operating system!

This isn’t magic, it’s cloud computing. The game is stored, and all of its graphics are processed and rendered, in the cloud; the player sends key presses as inputs to the cloud while simultaneously receiving video and audio streams back. This makes cloud gaming heavily dependent on the quality and speed of the gamer’s internet connection, no matter how fast Google’s servers are.

One proposed evolution of cloud gaming is edge gaming, where hyper-local network edges bring processing closer to the gamers themselves. This can reduce latency and eliminate the need to send information to and from a few cloud servers thousands of miles away. But it comes at the cost of building edge compute infrastructure with enough resources and GPUs to store and render games at the edge.

2- In-network caching:

If we look at a CDN, for example AWS CloudFront, with its points of presence all over the world, what is happening is that we are moving content closer to the end users.

In edge computing, on the other hand, we move some of the compute logic, along with the content it generates, closer to the users. For example, on AWS you can customize the content that CloudFront delivers using Lambda@Edge; Cloudflare lets you customize its CDN deliveries using Cloudflare Workers; and there are many more providers you can find.
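
To give a feel for what that looks like, here is a minimal Lambda@Edge-style handler, assuming the Python runtime and a viewer-request trigger. The /lite rewrite rule is just an illustration of this sketch, not built-in CloudFront behavior:

```python
# Minimal sketch of a CloudFront Lambda@Edge viewer-request function.
# It runs at the point of presence closest to the viewer, before the cache
# is checked, and can modify the request (or answer it outright).

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # Header keys in the CloudFront event are lowercased.
    user_agent = headers.get("user-agent", [{"value": ""}])[0]["value"]

    # Illustrative rule: send obvious mobile clients to a lightweight
    # variant of the page (the /lite prefix is an assumption of this sketch).
    if "Mobile" in user_agent:
        request["uri"] = "/lite" + request["uri"]

    # Returning the request lets CloudFront continue to the cache / origin;
    # returning a response dict instead would reply directly from the edge.
    return request
```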

Attaching compute logic to CDNs unlocks a lot of really powerful use cases. For example:

  • We know that end devices in the same geographical region may request the same content many times from a remote server, which is why we use content delivery networks (CDNs). But imagine the possibilities that integrating edge computing with the CDN will offer. For example, we can prefetch into the CDN the next page for a user browsing paginated content, or predict which content will become popular in a geographical region and cache it before it’s even popular, reducing response time and network traffic. (A toy sketch of the prefetching idea follows this list.)
  • Deep insights into media performance tell you where you can optimize assets further and increase website speed. This is where Cloudinary’s Media Optimizer comes in: it automates rich-media format and quality optimizations to make your websites and apps fast and visually engaging. You don’t need to create multiple asset versions or rewrite all your media URLs; instead, you offload format and quality optimization to the Media Optimizer, and it automatically delivers images, videos, and other rich media in the format and quality suited to the end user’s device. All of this happens at the edge, as close to the end users as possible.
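
Here is the toy sketch of the prefetching idea from the first bullet. Nothing here is a real CDN API: edge_cache is just a dict standing in for a point of presence, and fetch_from_origin() for the origin server.

```python
# Toy model of edge-side prefetching for paginated content: while serving
# page N from the edge cache, warm the cache for page N+1 so the user's
# next click is a cache hit. All names here are hypothetical.

edge_cache: dict[str, bytes] = {}

def fetch_from_origin(key: str) -> bytes:
    return f"content of {key}".encode()        # placeholder origin server

def handle_request(path: str, page: int) -> bytes:
    key = f"{path}?page={page}"
    if key not in edge_cache:                  # cache miss: go to the origin
        edge_cache[key] = fetch_from_origin(key)

    # Edge logic: prefetch the next page before the user asks for it.
    next_key = f"{path}?page={page + 1}"
    if next_key not in edge_cache:
        edge_cache[next_key] = fetch_from_origin(next_key)

    return edge_cache[key]

print(handle_request("/articles", 1))          # serves page 1, prefetches page 2
print("cached:", sorted(edge_cache))           # both page 1 and page 2 are cached
```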

Finally

With all that said, this is just the tip of the iceberg, and you can read much more about edge computing and its applications. In the end, none of this means the cloud will disappear. It means parts of the cloud are coming closer to you!
