The Razor’s Edge of Edge Computing – From My Vantage Point

There is a tremendous amount of discussion in the industry around Edge Computing. The hype has reached the point that some pundits forecast that the “Edge Cloud” will become larger and more important than the hyperscale cloud. This talk about Edge Computing can be confusing because it is such a broad term. It can mean classic servers or hyper-converged solutions sitting in a large retail location, which is clearly an Edge Computing use case. It can also mean a very small embedded appliance acting as an IoT gateway. Another exciting example is the “mini-cloud” sitting in a 5G base station, and there are many others.

What has not been discussed very broadly as yet is another form of Edge Computing which offers simplicity, a superior software model, and a unique form of low latency and programmability. This new model consists of a modest multi-core x86-based Ubuntu Linux server and associated NIC card under the covers of an Ethernet access switch that may be sitting in a wiring closet. This Ubuntu Linux server would include a direct 20G connection to the switch fabric and would be capable of running one or more VMs or containers. Moreover, this form of edge compute server can not only execute classic Edge Computing use cases, but can also run network remediation applications (such as packet capture) and augment the capabilities of the network operating system. This type of edge solution benefits from the cooling, power, and internal connectivity already present inside the switch by sharing those subsystems, not to mention the benefits of the simplified packaging and deployment model.

Let’s explore this model a bit more. Some early motivating use cases are naturally very network-centric – like placing a firewall VNF running as a VM on the internal server and sending traffic from a selected set of ports through this NGFW application. An example use case might be the following: ports N through N+7 might be set aside on an access switch to allow a third-party vendor that has equipment in a retail store to connect into the retailer’s own network. Another example might be to mirror some ports to the server for packet capture.
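To make the steering idea concrete, here is a minimal sketch of the policy involved: a per-port table that decides whether ingress traffic is forwarded normally or diverted to a service VM on the internal server. The port numbers, service names, and the table itself are illustrative assumptions, not a real switch API.

```python
# Hypothetical sketch: steering a contiguous block of tenant-facing access
# ports through an NGFW VNF hosted on the switch's internal server, plus one
# port mirrored to a packet-capture VM. All names/numbers are illustrative.

FIRST_TENANT_PORT = 8      # "port N" from the example above (assumed value)
TENANT_PORT_COUNT = 8      # ports N through N+7

# Map each physical ingress port to the internal service it should traverse.
service_chain = {}
for port in range(FIRST_TENANT_PORT, FIRST_TENANT_PORT + TENANT_PORT_COUNT):
    service_chain[port] = "ngfw-vm"            # third-party tenant traffic
service_chain[1] = "mirror-to-capture-vm"      # mirrored for packet capture

def next_hop(ingress_port: int) -> str:
    """Return the internal service a frame is steered to, or 'fabric'
    to forward normally through the switch fabric."""
    return service_chain.get(ingress_port, "fabric")
```

For example, `next_hop(9)` lands on the NGFW VM, while a port outside the tenant block, say `next_hop(20)`, forwards straight through the fabric. The real enforcement would happen in the switch ASIC; this only models the policy decision.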

An Edge Computing application could be any of a number of things. These edge applications tend to be time-sensitive (such as 5G MEC applications with 5 ms latency requirements), or the application’s data volume is so large that it isn’t feasible to send it all to the cloud; instead, it needs to be pre-processed at the edge. Two examples might be IoT gateways, such as a LoRa gateway application, or AWS Greengrass serverless IoT applications. Another example might be a video analytics application accelerated by an accompanying AI video analytics adapter connected to the CPU’s PCIe bus. The data would be pre-processed, and the associated metadata would then be packaged up and sent on to a cloud-based application.
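The pre-process-then-forward pattern can be sketched in a few lines: bulky raw data stays at the edge, and only a compact metadata record leaves for the cloud. The frame format, the detection stub (standing in for an accelerator-backed inference call), and the output schema are all illustrative assumptions.

```python
# Hypothetical sketch: reduce bulky edge data (e.g. video frames) to a small
# JSON metadata record before anything is sent upstream. The "frames" here
# are tiny lists of pixel values standing in for real camera data.
import json
from statistics import mean

def detect_objects(frame: list) -> list:
    """Stand-in for accelerator-backed inference: flag bright pixels."""
    return [i for i, px in enumerate(frame) if px > 200]

def summarize(frames: list) -> str:
    """Collapse raw frames (large) into a JSON metadata record (small)."""
    detections = [detect_objects(f) for f in frames]
    record = {
        "frames_processed": len(frames),
        "total_detections": sum(len(d) for d in detections),
        "mean_brightness": round(mean(px for f in frames for px in f), 1),
    }
    return json.dumps(record)  # this record is all that leaves the edge

frames = [[10, 250, 30], [220, 40, 60], [5, 5, 5]]
metadata = summarize(frames)
```

Three "frames" of raw data go in; a single short JSON string comes out, which is the kind of payload that is cheap to ship to a cloud-based application.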

An important benefit of this form of Edge Computing is that there is a true virtualization-capable x86 CPU in the switch with its own vNIC running Ubuntu. To the operator of this server, it is simply a Linux server that can run any application that fits within X GB of main memory, fits within the CPU’s capabilities, and doesn’t need more than 128 GB of flash storage. The benefit, of course, is that any reasonable application that can run on a VM in the cloud can also run at the edge. This, in turn, leads to some interesting speculation about the future.
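That "fits within the limits" test is a simple capacity check. Here is a minimal sketch of it; the 128 GB flash figure comes from the text above, while the RAM and vCPU limits and the sample workloads are assumptions for illustration.

```python
# Hypothetical sketch: does a candidate workload fit the switch-hosted server?
# Only the 128 GB flash limit is from the text; RAM/vCPU figures are assumed.

SERVER_CAPACITY = {"ram_gb": 16, "vcpus": 4, "flash_gb": 128}

def fits_on_edge_server(app: dict, capacity: dict = SERVER_CAPACITY) -> bool:
    """True only if every stated requirement is within the matching limit."""
    return all(app.get(resource, 0) <= limit
               for resource, limit in capacity.items())

analytics_vm = {"ram_gb": 8, "vcpus": 2, "flash_gb": 40}    # fits
database_vm = {"ram_gb": 64, "vcpus": 8, "flash_gb": 500}   # does not fit
```

Anything that passes this check is, to the operator, just another Linux workload; anything that fails belongs in the cloud or on expanded hardware.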

Clearly, with a true Edge Computing application running inside a switch, issues such as diagnostics and orchestration become interesting. A simple mechanism could be a CLI command sequence that pushes the desired app out to the Linux OS and fires it up. A more interesting approach would be to use something like an Ansible playbook. But perhaps an even more intriguing model would be to run a small Kubernetes node on this server and then network together a set of these servers to cooperate in a highly available and dynamic manner. Hmmm.
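The "push the app out and fire it up" sequence can be modeled as the list of shell commands an orchestrator would run on the switch's internal Linux server. This is a dry-run sketch: the registry URL, image name, and container runtime choice are assumptions, and nothing is actually executed here.

```python
# Hypothetical sketch: generate the command sequence an orchestrator (CLI
# push, Ansible task, etc.) would run to deploy a containerized edge app on
# the switch-hosted Linux server. Image names and ports are illustrative.

def deploy_commands(image: str, name: str, ports: list) -> list:
    """Build the pull / replace / run sequence for one edge app container."""
    publish = " ".join(f"-p {p}:{p}" for p in ports)
    return [
        f"docker pull {image}",
        f"docker rm -f {name} || true",   # replace any previous instance
        f"docker run -d --restart unless-stopped "
        f"--name {name} {publish} {image}",
    ]

cmds = deploy_commands("registry.example.com/pktcap:1.2", "pktcap", [8080])
```

An Ansible playbook would express the same three steps as tasks against an inventory of switches; a Kubernetes node would replace the whole sequence with a declarative pod spec and let the kubelet reconcile it.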

Now let’s take this new Razor’s Edge model of edge computing in a different direction. Let’s say the Edge Computing application deployed on the server becomes a big hit, but the use case now requires additional CPU, memory, and storage, and, for our example, also a video AI accelerator chip such as the Intel Movidius, the Google TPU, or the new nVidia XYZ chip. Given that this is a Razor’s Edge model and we have a switch involved, let’s connect a vertically oriented, rack-mounted, small-form-factor mini-server blade to one of the 1G/2.5G/5G PoE Ethernet ports. The switch provides the power and the connectivity for the mini-server, which retains the same model as the server under the covers. Voila: instant capacity expansion, 30 W (or 60 W) at a time.

In summary, edge computing is here and is growing, given the plethora of use cases. The Razor’s Edge is just another way to deliver computing to the edge, one with unique advantages over some of the models otherwise being deployed in the industry. It can be utilized not only for edge computing applications, but also to improve network security, network remediation, and more.

This blog was originally authored by Chief Technology Officer, Eric Broockman.
