Purpose built: 5G and the machines that would move the edge

Where the realities of maintaining a multi-terabit-per-second network mandate a new way of approaching the problem of orchestrating network services. By the time we're done, will we still have servers as we know them? And where will the edge actually be?
Written by Scott Fulton III, Contributor

5G wireless is not really about downlink speeds becoming faster, or frequencies becoming more diverse and distributed. These are phenomena brought about by the actual technology that inspired its existence in the first place -- one which may not have been feasible at the time 4G was conceived. There's nothing about 4G that would have prevented it from being sped up, even by way of the millimeter-wave system with which gigabit Internet service would be made available in dense, downtown areas.

Must read: Part one: The biggest switch: 5G and the race to replace the future | Part two: Wiring for wireless: 5G and the tower in your backyard | Part three: Backhand slice: 5G and the surprise for the wireless cloud at the edge | Part four: Over the edge: 5G and the internet of very different things

Wireless Transmitter Facilities (WTF) are too costly to maintain, and run too hot. 5G metamorphosed from a pipe dream to an urgent need by proposing that the software run by base stations be moved to cloud data centers. This would eliminate the need for high-speed processors in the base stations and the antennas, and dramatically reduce cooling costs. For many telcos throughout the world, it could make their networks profitable again.

The virtualization of wireless networks' Evolved Packet Core (EPC) is already taking place with 4G LTE. There's no single way to do this -- indeed, EPC is a competitive market with a variety of vendors. Next, 5G would add to that the virtualization of the Radio Access Network (RAN). Open source vendors remain very adamant about enabling one and only one way to accomplish this. But consortia, coalitions, and associations of vendors sharing a market together have never been about openness. So just how 5G will pull off this mode of virtualization, even now, remains very much just an idea.

The Way to Nova Gra

In the previous edition of Scale, we jumped over the edge of our last map and found ourselves in a completely different world.

There, we introduced you to Dr. Andreas Mueller, Head of Communication and Network Technology at Bosch GmbH, and the Chairman of the 5G Alliance for Connected Industries and Automation (5G-ACIA). As you may recall, Dr. Mueller introduced attendees of the recent Brooklyn 5G Summit to the radical notion that network functions virtualization (NFV) for the customer side of 5G networks be partitioned, or "sliced," in such a way that individual customer applications be given their own complete virtual networks. We've heard of "vertical slicing" before (customer-driven network segmentation), although this step forward in the art is being called deep slicing.

An engineer with Sprint asked Dr. Mueller directly whether he believes that 3GPP -- the association of commercial wireless stakeholders -- has fully specified the industry requirements for the level of network slicing that Bosch requires. "Well, the answer is no," he responded.

"We are working on that. There is an initiative, not just from Bosch but also from other players, to make sure that industrial use case requirements are considered in 3GPP. As we are sitting here, there are discussions ongoing in SA1 [3GPP's Services working group] and SA2 [its architecture group] about what we really require to do that. But I would not go that far as saying that everything has already been written down and specified. Because also, we are still learning. I mean, it's the learning phase that all of us probably needs. So the ITC industry needs to understand the domain and what is actually required; and we also have to understand the capabilities, and so on. And we also have to find a common language."

Read also: How 5G will impact the future of farming and John Deere's digital transformation

It's not as though the search for a common language among data center and telco professionals hasn't already begun.

Ildiko Vancsa is an ecosystem technical lead with the OpenStack Foundation. OpenStack, you'll recall, is a hybrid cloud platform originally devised for enterprises, as a way for them to stand up their own services like they would on a public cloud platform, but on a vendor-neutral stack. It was telcos such as AT&T that approached the OpenStack community in 2012, she reminded us, with a request to participate in that platform's development. The effort which followed brought together the relevant parties in the creation of the OPNFV collaborative body, enabling server vendors such as Cisco, Dell, and IBM to work together toward the common goal of virtualizing carrier-grade network functions on everyday x86 servers.

"We had more and more telecommunications companies showing up," Vancsa told ZDNet Scale, "and saying, 'Yea, this whole cloud technology thing looks really great, and we think that we could use that as well for our services and running the VNFs on top.' For our community, it was a little bit of a challenge, just from the pure perspective of the different vocabularies that enterprise data center people and telecommunications people are using, and just getting on the same page with the requirements and why telecommunications people are caring more about things that data center people care about less, like all these five-9's, and the more advanced challenges and requirements in the networking space."

There's precedent for addressing the very problem that the Brooklyn 5G Summit folks were looking to resolve. In fact, some of the organizations belonging to both the OpenStack Foundation and 3GPP may be represented by the very same people. But with all the various masks and personae that engineers in both industries carry with them nowadays, it's still conceivable that communications gaps may persist. Vancsa said she believes her team's experience bridging those gaps is helping it do a better job championing the necessary definitions for edge computing.

But as the pioneering SDN architect Tom Nadeau, now with Linux distributor Red Hat, told us at Waypoint 3, he believes it should not be the role of any industry consortium to attempt to standardize -- to seal and affix the definitions and specifications -- of any technology that is being forged by an open source community. What would 3GPP accomplish, in other words, by coming along and redefining deep slicing after the fact, if the OpenStack and OPNFV folks came up with an industry-wide solution of their own?

Read also: 5G mobile networks: A cheat sheet (TechRepublic)

"Being overly pedantic and prescriptive about these architectures that these organizations push out -- they're rapidly becoming less and less relevant," Nadeau told us, referring to the class of association to which 3GPP belongs. "I've talked to different operators over the last year, and they have a different view of what they want to do. They, increasingly, are understanding that not only do they have cost pressures, but they also need to figure out how to innovate. Because the over-the-top guys are eating their lunch."

Reflux

If there is any artificially created force more sensitive to mood swings than fashion, it's economics. The conditions which led to cataclysmic shifts in technology are themselves fickle things with limited life spans.

Of the four forces which Gartner analysts declared in 2012 were transforming the modern world -- social, mobile, cloud, and information -- none seems to be in anything approaching stable condition. Two years ago, the mobile device market was declared stuck in a rut, and it has yet to escape. The stagnation of social media growth has led to an uptick in questions and debates over its waning influence upon society, and whether the multitude of polarized sources are merely cancelling each other out amid the noise and waste.

Many large enterprises have already begun pulling back their digital assets from the public cloud, as their campaigns to build big data systems on virtual platforms collide head-on with break-even points and total costs of ownership. And information itself has already become a weapon in the U.S. and several other countries, with the principal casualty being the health and well-being of democracy.

If those four forces were truly responsible for a conspiracy to bring about a technological renaissance in 2012, then by that same logic, we should be experiencing at least a technological recession in 2018, and these forces should be tracked down, arrested, and charged with neglect of duty.

Read also: 5G adoption: The first 3 industries that will be at the forefront (TechRepublic)

The mistake here is one of perception. Rather than four forces per se, acting as agents unto themselves, these items Gartner isolated are actually by-products of one larger force at work: the global effort toward connectivity, and the fuel of our metaphorical "Union Strategic Pathway" locomotive. Those by-products are actually resources -- the payback for a successful connectivity effort. Think of these resources like commodities in a massive, global data center. What is changing is not the health of these commodities in and of themselves, but how they're being configured together. Enterprises, organizations, and industries are shifting their bets.

The decision by the world's telcos to move their base stations' baseband unit functionality and radio access network cores to the cloud is a shifting of bets. They tried it one way, with mixed results at first, waning results later. 5G is a realignment of their business interests. It's not that mobility as an ideal suddenly decided to off itself. It's that the most important mobile element in a mobile technology framework is not the device, but the customer.

"Even then, we have to drill down to try to explain, what is a customer service?" remarked Igal Elbaz, AT&T's vice president of ecosystem and innovation. "Are we talking about a unified communications service? Or an AR [augmented reality] platform that allows me to render at the edge? Or are we talking about access to a user, that allows me to get data services? You understand, these are different questions for different services.

"We are thinking about investing at the edge," Elbaz continued, "because the convergence of computation and network is important to unleash some of the use cases and the business models that people are talking about -- AR, content distribution, autonomous cars. Some of the characteristics of 5G has to do with network slicing, which will allow us to serve certain customers in a different class, in a certain way -- apply some policies to certain customers. But how, where is the service, and how do you define a service -- I want to make sure we are accurate there."

The word that stands out in Elbaz' comment here is class. His use cases for network slicing involve categories of applications, not the specific instances of applications walled off for exclusive use by a particular customer -- as Bosch's Mueller would prefer. AT&T knows its cloud data centers, when and if they come online, will initially lack the clout to compete against Amazon AWS, Microsoft Azure, and Google Cloud. So it must spend those early years peeling specific customer classes away from the public cloud -- classes that may desire low latency, fast rendering, ease of access, or anything else that may give a carrier a strategic advantage, on account of operating services at the edge. Evidently there's a marketing reason, as well as a security reason, behind AT&T's assertion that the issue of what's sliceable has already been decided.
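
To make the distinction concrete, here is a minimal sketch -- every name, tenant, and policy number in it is hypothetical, supplied by us rather than by AT&T or Bosch -- of the difference between slicing by class of application, as Elbaz describes it, and the per-customer, per-application deep slicing Mueller is asking for:

```python
# Illustrative only: class-based slicing vs. "deep" per-application slicing.
# All slice names, tenants, and latency targets are invented for this sketch.

from dataclasses import dataclass, field

@dataclass
class Slice:
    name: str
    max_latency_ms: float
    tenants: list = field(default_factory=list)   # who shares this slice

# Class-based slicing: many customers share one slice per application class
ar_class_slice = Slice("class:augmented-reality", max_latency_ms=10,
                       tenants=["retailer-A", "gaming-B", "museum-C"])

# Deep slicing: one slice per customer application, with nobody else on it
bosch_welding_slice = Slice("bosch:factory-7:welding-control",
                            max_latency_ms=1, tenants=["bosch"])

for s in (ar_class_slice, bosch_welding_slice):
    print(f"{s.name}: <= {s.max_latency_ms} ms, "
          f"shared by {len(s.tenants)} tenant(s)")
```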

The type of exclusive, tailored service Mueller describes might only be feasible if the Bosches of the world were to unite, perhaps to pool their purchasing power. On the other hand, as he told the Brooklyn 5G Summit, customers at Bosch's level cannot be lumped together into classes. They all have very particular use case requirements, and would rather be treated exclusively or not at all. Bosch is actively considering owning and operating its own data center once again, if it means its IT personnel can have granular control of the company's operating configurations.

Read also: 5G could widen the gap between haves and have-nots (CNET)

"We have very high requirements on latency, on reliability," said Mueller, "and we need something like deep slicing as well. The slice doesn't end at a communication interface; we also have to take the operating system into account, the scheduling of the operating system, and maybe everything underneath the application. That's also something that, in our opinion, needs to be further discussed. Of course, it's also about the pricing model. There's a willingness in the manufacturing industry to pay a little bit more for 5G and everything that is coming up here, but is not that there are no limits. It has to be attractive, and it has to be a simple pricing model."

Revenge of the hard core

Several times in recent years, we've seen that the market force responsible for a trend later becomes responsible for the reversal of that trend. Enterprises' decisions to pull back their digital assets from the public cloud have been driven by the same logic that sent them to the cloud to begin with. The resources that underlie a technology are fluid things. They shift and redistribute themselves. The public cloud once seemed like the perfect place to build a data lake. Not now -- especially since local storage in large bulk has become cheaper and, with solid-state storage, faster. The fluidity of the resources changes how the same trend plays out, in this case reversing the flow.

When Facebook launched the Open Compute Project, it was with the understanding at the time that inexpensive, perhaps less-than-perfectly-reliable, x86 servers could perform the same jobs in a hyperscale data center as a smaller number of more expensive, premium servers. If Facebook could publish the specs of these servers and get enough potential purchasers onto the bandwagon, their collective purchasing volume could drive manufacturers to build these servers in bulk, and then to sell them for minimum markups. But along comes hyperconvergence, changing the role of servers from providing compute capability in set quantities, to supplying vital resources in fluid amounts.

Though the telco industry isn't exactly investigating hyperconvergence per se, it is allowing itself to reopen the investigation into the identity and the role of the core component of the modern data center: the server. Specifically, if the infrastructure that supports both customer- and network-facing functions is virtualized anyway, then could a simpler server be devised particularly for NFV?

Read also: Who's most ready for 5G? China, not the US, leads all (CNET)

"Everybody has said, if you've got cloud, you've got to run everything on servers. Yea, okay, agreed with that," remarked Nick Cadwgan, Nokia's director of IP mobile networking. "But hmmm, what happens when the bandwidth grows exponentially, or latency, or latency and bandwidth, or we want local communication? We at Nokia said, 'Let's look at the user plane function. What is that function?' That function now is basically a data forwarding function. Because the control plane is compute / memory / processing -- it's designed to go onto a compute platform. But the interesting point now comes to the user plane. What's the optimal platform, or platforms, or technology for the user plane?"

One of the key functions and benefits of software-defined networking is the separation of the control plane (traffic related to the control of the network itself) from the user plane (traffic related to the application). Each runs on its own network path, and that path can be altered and rerouted independently of the other. Network functions virtualization is a higher form of SDN. It perceives a complete service chain, or traffic flow pattern. Whereas SDN pertains to the configuration of the platform on which the software-based network resides, NFV adds the concept of a service component residing on that network with an explicit ingress and egress point, and a pattern through the addressable points within that component. Imagine NFV as an electric Lego block that can plug into the SDN base platform, and you'll get the basic idea.
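
For readers who think better in code than in Lego, here is an illustrative sketch of that relationship -- the class names, path labels, and "vFirewall" component are invented for the purpose, not drawn from any vendor's platform. The SDN fabric is a programmable set of paths, and an NFV component plugs into it with an explicit ingress, egress, and internal traffic pattern:

```python
# Illustrative sketch: SDN as a reprogrammable base of paths, NFV as a
# component with explicit ingress/egress that plugs into that base.
# All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class SdnFabric:
    """The software-defined base platform: a set of reroutable paths."""
    paths: dict = field(default_factory=dict)   # path name -> ordered hops

    def program_path(self, name, hops):
        # Control-plane action: (re)route a path independently of user traffic
        self.paths[name] = list(hops)


@dataclass
class NfvComponent:
    """A virtualized network function with explicit ingress and egress."""
    name: str
    ingress: str
    egress: str
    internal_hops: list            # the traffic pattern inside the component

    def service_chain(self):
        # The complete flow a packet would follow through this component
        return [self.ingress, *self.internal_hops, self.egress]


if __name__ == "__main__":
    fabric = SdnFabric()
    firewall = NfvComponent("vFirewall", ingress="gw-in", egress="core-out",
                            internal_hops=["filter", "inspect"])
    # Plug the component's chain into the fabric as a programmable path
    fabric.program_path(firewall.name, firewall.service_chain())
    print(fabric.paths["vFirewall"])
    # Rerouting the user plane is purely a control-plane change:
    fabric.program_path(firewall.name, ["gw-in", "filter", "edge-out"])
    print(fabric.paths["vFirewall"])
```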

What Nokia's Cadwgan is suggesting here is that an NFV component need not necessarily be software. Ridiculous though this may seem on the surface, he is suggesting a kind of hardware-defined software-defined component. Why? Because the rules are changing, and the fluidity of the resources involved is shifting.

Cadwgan reminded us that Nokia is now a chip maker in its own right, having released its 2.4 terabit-per-second FP4 network processor a year ago. While that chip is primarily designed to run in router appliances such as its 7750 SR-s, he said there's ample precedent for the notion that software can be imputed through hardware.

"We're not committing cloud heresy here," he remarked. "We're actually following a trend in data centers. They already have function-specific silicon. They have general processors, but guess what: For certain functions, they have things like graphics accelerators, which are function-specific silicon to do a specific job. All we're saying is, hmmm, this is good. Bandwidth's going up, latency [down]. There is going to be a point where function-specific silicon in the user plane makes sense."

Read also: Why Estonia finds itself in the middle of a 5G arms race

Just as the rising cost of maintaining a database in the public cloud has a break-even point -- where what you pay the cloud provider equals what you would have spent to store the data locally -- Cadwgan says there's a similar break-even point here: When an edge cloud data center conducts its heavy workload entirely on software-based infrastructure, the rising cost will inevitably cross the point where purpose-built silicon becomes the cheaper option. We haven't faced that dilemma very much up to now, but 5G could drive us to discover it quite soon.
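
The arithmetic behind that threshold is simple enough to sketch. The figures below are ours and entirely hypothetical -- neither Nokia nor Cadwgan supplied any numbers -- but they show the shape of the crossover: a software user plane scales its cost roughly with throughput, while purpose-built silicon front-loads its cost and then scales cheaply:

```python
# A back-of-the-envelope sketch of the break-even point described above.
# Every cost figure is hypothetical; only the shape of the math matters.

def software_cost(tbps, cost_per_tbps=80_000):
    # Assumed: commodity x86 servers plus power/cooling, priced per Tbps served
    return tbps * cost_per_tbps

def silicon_cost(tbps, fixed_cost=400_000, cost_per_tbps=15_000):
    # Assumed: purpose-built forwarding silicon, high fixed cost, cheap per Tbps
    return fixed_cost + tbps * cost_per_tbps

if __name__ == "__main__":
    # Break-even where the curves cross:
    # tbps * 80k = 400k + tbps * 15k  ->  tbps = 400k / 65k ≈ 6.2 Tbps
    for tbps in (1, 5, 6.2, 10, 50):
        print(f"{tbps:>5} Tbps  software ${software_cost(tbps):>10,.0f}"
              f"  silicon ${silicon_cost(tbps):>10,.0f}")
```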

What this may mean is, whether we eventually split the network-facing edge from the customer-facing edge as AT&T would prefer, or converge the two as the OPNFV engineers suggest, the platform we currently expect to handle throughputs of hundreds of terabits per second won't be up to the job. Arguably, the network-facing functions would be most likely to require ultra-high-speed throughput. But even if a fraction of that throughput were devoted to customer functions, the speed boost could give an edge cloud network a value proposition the public cloud services, with their x86 server platforms, can't match.

"It's not for everything," Cadwgan warned. "What you actually need is a hybrid. There are some services and applications -- or, dare I say it, there are some services and applications that are in certain core slices, because we can slice the core -- that are best served by virtualized user plane functions. By the way, those functions, by way of the control / user plane separation, we can put them down wherever you want them -- centralized, distributed. There are going to be other services and applications, or core slices carrying those services and applications, that may be better suited having physical user plane function, that may be laid down wherever you need them, centralized or distributed, or edge cloud. What we are saying is, you've got to think beyond one thing for everything. You've got to look at the services and the applications, their characteristics, and what is the optimal way of supporting them in the network?"

As we've come to define "the edge" up to this point, we've assumed it would be the location best suited for running an application with minimum latency. In the network that Nick Cadwgan is envisioning, there may be no discrete edge -- at least, not geographically, and perhaps not physically. NFVs may be creating and re-creating dynamic slices of 5G network infrastructure from pools of both physical and virtual resources.

Read also: Stingray spying: 5G will protect you against surveillance

This is the stark difference between 5G visions: One describes a physical boundary between user and network functions, and places manageable network slices in discrete physical locations. The other is hybridized, fully distributed, and without clear borders anywhere, much more like a containerized data center with microservices. The fact that both outcomes are equally possible speaks volumes as to how much further out than 19 months we may be from finally declaring 5G a complete design.

Central intelligence

We might call this the "elephant in the room" if it weren't so obvious that nobody sees it, more like a mosquito or an inefficient head of state: If multi-tenancy on a cloud data center platform could be maintained -- in other words, if you could have one network slice where a whole class of customers would reside, partitioned from the network core -- then Bosch's concept of deep slicing could theoretically cut even deeper. It was the question asked in the auditorium at Brooklyn 5G Summit, tossed around a bit, but left unanswered: As long as it's possible for a cloud data center to absorb the functionality of an IoT device, rendering the embedded devices industry a relic of history, why not enable the same technology to absorb the functionality of the user equipment (UE)?

That term "UE," by the way, refers to your smartphone.

In such an environment, a mobile phone would act more like a virtual desktop infrastructure (VDI) client, with radio access serving as its tether to the data center. All the device would need to do is present a rendering of the application its user would be running, but that application would run on the server, not the phone. Such a system could increase radio traffic, but for telcos that charge by the gigabyte anyway, that might not be a problem.
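
A toy sketch of that division of labor might look like the following -- with an in-process queue standing in for the radio link, and everything about it (function names, frame format, timing) invented for illustration rather than describing any real protocol:

```python
# Toy sketch of the "virtual UE" idea: the application runs server-side,
# the handset only displays the frames it is sent. Entirely illustrative.

import queue
import threading
import time

radio_link = queue.Queue()   # stands in for the 5G air interface

def server_side_app(frames=5):
    # The application itself runs in the edge data center...
    for n in range(frames):
        rendered_frame = f"frame {n}: app state rendered server-side"
        radio_link.put(rendered_frame)
        time.sleep(0.1)
    radio_link.put(None)     # end of session

def thin_handset():
    # ...while the handset does nothing but present what it receives.
    while (frame := radio_link.get()) is not None:
        print("display:", frame)

if __name__ == "__main__":
    t = threading.Thread(target=server_side_app)
    t.start()
    thin_handset()
    t.join()
```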

Read also: Samsung and KDDI complete 5G trial in baseball stadium

It's difficult to imagine a smartphone ecosystem without Apple or Samsung producing premium devices. But perhaps we don't have to: A virtual smartphone could be the next generation of the feature phone, giving so-called "value-class" customers a lower-cost option. That cost could be rock bottom if the device class could be produced in very large quantities, for very large markets that are yearning for a ramp up to the present century.

In our final stop on this roller coaster ride, we'll introduce you to one of these nations -- a place known for its technological potential, but whose infrastructure up to now has weighed it down to the point of economic collapse. This is a place where a virtual smartphone could be the catalyst that makes the rest of 5G happen. Until then, hold faith.

"Gargoyles" appearing in the map of Septentrionalis were created by Katerina Fulton.
