Bengaluru: Elon Musk’s Tesla cars make timely and autonomous driving decisions. The reason: the vehicles are embedded with powerful on-board computers that allow for near real-time, low-latency processing of the data collected by the vehicle’s numerous sensors.
Intel estimates that autonomous cars will generate 40 terabytes of data for every eight hours of driving, which makes it impractical, and potentially unsafe, to send such enormous volumes of data to the cloud.
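Some back-of-the-envelope arithmetic makes the point. The sketch below only restates Intel's 40 TB per eight hours estimate; the 100 Mbit/s cellular uplink used for comparison is an illustrative assumption:

```python
# Back-of-the-envelope: why streaming all autonomous-car data to the cloud
# is impractical. Based on Intel's estimate of 40 TB per 8 hours of driving.

TB = 10**12                       # bytes in a terabyte (decimal convention)
data_per_shift = 40 * TB          # bytes generated in 8 hours of driving
seconds = 8 * 3600                # seconds in 8 hours

rate_bytes_per_s = data_per_shift / seconds
rate_gbit_per_s = rate_bytes_per_s * 8 / 10**9

print(f"Sustained data rate: {rate_gbit_per_s:.1f} Gbit/s")  # ~11.1 Gbit/s

# Assumed cellular uplink of 100 Mbit/s, for scale:
uplink_gbit_per_s = 0.1
print(f"Generated data outpaces the uplink by "
      f"{rate_gbit_per_s / uplink_gbit_per_s:.0f}x")
```

In other words, the car produces data roughly two orders of magnitude faster than an assumed mobile uplink could carry it away, before latency is even considered.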
But what if some of the computing can be done in the car itself, making the vehicle a mini data centre? Would that make autonomous cars much more reliable and secure while keeping the consumer’s data private? Further, what if wearables and wireless medical devices need to process complex data in real time? Would cloud computing, with its bandwidth and related latency issues, suffice?
Also, regulatory and compliance issues may dictate that not all data can be sent to the cloud. Similarly, in the consumer segment, would online multiplayer games—where milliseconds can mean the difference between winning and losing—not work better if their latency issues are solved?
For decades, computing was done solely on servers housed on companies’ own premises. Over the last two decades, however, businesses gradually began shifting their workloads to the cloud. This trend, better known as cloud computing, has helped companies reduce capital expenditure and increase return on investment. As a result, private cloud (on-premises), public cloud (on a network—typically the internet) and hybrid cloud (a combination of both public and private) are terms that are well understood by companies today, even if not fully implemented.
However, even as companies are talking about a “multi-cloud” approach—one that envisages the use of multiple cloud vendors—the term edge computing (Cisco Systems Inc.’s “fog computing” has a similar goal) is gaining ground with good reason.
At VMworld 2018 in Las Vegas, for instance, VMware Inc.’s chief executive, Pat Gelsinger, was insistent that as billions of devices get connected as part of the Internet of Things (IoT) trend, computing will increasingly be done at the so-called “edge”—at, or near, the source of the data. Technology vendors like VMware—a listed unit of Dell Technologies Inc.—believe this trend will prompt companies to process and analyse data using artificial intelligence (AI) and machine learning (ML) in a hybrid cloud operating model.
VMware is making a good bet. According to market research firm International Data Corporation (IDC), in another four years, more than 30% of organizations’ cloud deployments in India alone will include edge computing to address bandwidth bottlenecks, reduce latency and process data for decision support in real time.
There are a number of reasons for this. For one, the sheer number of phones in the global market will lead to an explosion of data, enabling more personalized services from both B2B (business-to-business) and B2C (business-to-consumer) perspectives, which in turn will increase the need for analytics tools, ML models and the like.
A lot of computing is already shifting to the edge, to phones themselves and even to microchips embedded in light bulbs (read: light fidelity, or Li-Fi). And, “with 5G, you could do very interesting things. Edge computing has tremendous number of use cases in cities, shipping and logistics, etc., because of 5G,” Rick Harshman, managing director (Asia-Pacific) at Google Cloud, said in a recent interview.
However, while cloud computing has traditionally served as a reliable and cost-effective way to manage these data streams, the sheer growth of data will put increasing strain on network bandwidth, which is where edge computing comes into play. This data will need to be processed, which is why Nvidia Corp. is betting that its edge servers (like the ones used in the Nvidia EGX platform) will prove handy and process data in real time, reducing the amount of data that must be sent to the cloud. Nvidia has reportedly roped in Dell EMC, Cisco Systems Inc., Fujitsu Ltd and Lenovo Group Ltd as EGX server partners, while in the IoT space, it has tied up with Microsoft and Amazon Web Services.
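The core idea—process locally, send only what matters upstream—can be sketched in a few lines. The sensor readings and the two-sigma anomaly threshold below are illustrative assumptions and do not reflect any vendor's actual software:

```python
# Sketch: an edge node filters a raw sensor stream and forwards only
# anomalies plus a compact summary to the cloud, instead of every reading.
from statistics import mean, stdev

def process_at_edge(readings, sigma=2.0):
    """Keep readings more than `sigma` standard deviations from the mean."""
    mu, sd = mean(readings), stdev(readings)
    anomalies = [r for r in readings if abs(r - mu) > sigma * sd]
    summary = {"count": len(readings), "mean": mu, "stdev": sd}
    return summary, anomalies  # only this crosses the network

# Hypothetical temperature stream with one spike:
readings = [20.1, 20.3, 19.9, 20.0, 35.7, 20.2, 20.1, 19.8]
summary, to_cloud = process_at_edge(readings)
print(summary["count"], "raw readings reduced to", len(to_cloud), "anomaly")
```

Here eight raw readings become one anomaly and a three-field summary, which is the bandwidth saving edge servers are designed to capture at far larger scale.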
Edge computing is not a new concept but several trends have come together to create an opportunity to help industrial organizations turn massive amounts of machine-based data into actionable intelligence closer to the source of the data, according to General Electric Co. (GE)—a company that understands industrial IoT (IIoT) very well.
For instance, “edge” in IIoT refers to the computing infrastructure that exists close to the sources of data. These include industrial machines (wind turbines, magnetic resonance scanners, etc.), industrial controllers such as SCADA (supervisory control and data acquisition) systems, and time series databases aggregating data from a variety of equipment and sensors. These edge computing devices typically reside away from the centralized computing available in the cloud.
Cloud computing, according to GE, will take a more dominant position when actions require significant computing power—managing data volumes from across plants, asset health monitoring, ML and the like. On the other hand, edge computing will work well where low latency is required or where bandwidth is constrained, at places such as mines or offshore oil platforms, where it is neither practical nor affordable, and in some cases simply impossible, to send all data from machines to the cloud.
GE recommends that for industrial companies to fully realize the value of the massive amounts of data being generated by machines, edge computing and cloud computing must work together.
But edge computing needs an ecosystem too. According to CB Insights, the hierarchy for the edge ecosystem comprises four broad tiers. The first is edge sensors and chips—where the data is initially collected. The second is edge devices, which include the edge sensors and chips that collect the data as well as the computational resources to process and analyse it (edge devices range from smartwatches to autonomous vehicles).
The third component is edge infrastructure—data centres and microdata centres that “offer far more data processing and storage capacity than edge devices as well as extremely low latency compared to the centralized cloud (which could be located states away)”.
The fourth component is the centralized cloud, which has become a primary location for storing, analysing and processing large-scale data sets (but not in real time). “This is where edge data will go to end its journey and be added to other, relevant, historic data,” according to CB Insights.
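One way to read the four tiers is as a ladder of latency: a workload runs at the most centralized tier whose response time still fits its budget. The sketch below follows that reading; the tier names come from the description above, while the per-tier latency figures are illustrative assumptions, not measured values:

```python
# Sketch: placing a workload on the most centralized tier of the edge
# hierarchy that still meets its latency budget. Latencies are assumed.

TIERS = [                                 # ordered from edge to cloud
    ("edge sensors and chips", 0.001),    # ~1 ms: on-chip processing
    ("edge devices", 0.01),               # ~10 ms: smartwatch, vehicle
    ("edge infrastructure", 0.05),        # ~50 ms: nearby micro data centre
    ("centralized cloud", 0.5),           # ~500 ms: large-scale, not real time
]

def place_workload(latency_budget_s):
    chosen = None
    for name, tier_latency in TIERS:
        if tier_latency <= latency_budget_s:
            chosen = name  # prefer the most centralized tier that still fits
    return chosen or "no tier meets this budget"

print(place_workload(0.02))  # a 20 ms budget lands on edge devices
print(place_workload(1.0))   # a relaxed budget can go all the way to the cloud
```

The design choice mirrors the article's point: the cloud remains the cheapest, most capable tier, so work flows there whenever the latency budget allows, and drops toward the edge only when it does not.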
According to the CB Insights Market Sizing tool, the global edge computing market is estimated to reach $6.72 billion by 2022. This prediction is reasonable: with billions of devices connected to the internet, faster and more reliable data processing will become crucial. Moreover, the rise of IoT and mobile computing has put further strain on network bandwidth.
That said, edge computing is definitely not without its challenges. For instance, all edge devices in a company would ideally have to run the same version of the software—not very practical. Further, as nodes are added, monitoring the usage and performance of edge software will become increasingly difficult, and so will remote debugging and troubleshooting.
This implies that it is not an edge computing versus cloud computing battle. Both these technologies will complement each other—at least, for quite some time to come.