Edge Computing vs Cloud Computing

Learn how edge computing differs from cloud computing and how it overcomes challenges in areas such as data quality, latency, security, and scalability.



Edge Computing vs Cloud Computing

Cloud computing abstracts away the application infrastructure that enterprises traditionally managed by placing server hardware in their own private data centers. Instead, teams consume an infrastructure-as-a-service (IaaS) offering, such as a remote virtual machine, or a platform-as-a-service (PaaS) model, such as a managed database service. Edge computing complements cloud computing by bringing cloud services close to end-user devices for data-intensive applications that require a fast round-trip response time, something a cloud service centralized in one geographic region can’t guarantee.

The following table summarizes how the two technologies compare. This free educational guide offers primers on the technologies covered in this article to help readers who are less familiar with distributed stream processing concepts.

| | Cloud computing | Edge computing |
|---|---|---|
| Time to insight | Delayed: network latency between data centers and devices adds time before data is processed and insights are available. | Near real-time: eliminates the slow round trip and provides near-real-time results. |
| Cost | High: moving large amounts of data to data centers incurs high data transfer costs. | Low: data is processed locally at the edge node, reducing backhaul costs. |
| Security | High: cloud providers adhere to high security standards. | High: edge providers adhere to high security standards. |
| Compliance | Complex: complying with new regulations (geofencing or depersonalizing) requires due diligence and may incur additional expense. | Easy: edge computing provides geofencing and processes the data on local networks. |
| Data quality | High: data is backhauled to a central location, so it requires less data synchronization. | High (conditional): only if the edge provider offers conflict-free replicated data types (CRDTs). |
| Scalability | High: the cloud has virtually unlimited resources, can handle complex processing, and can scale up or scale out as needed. | High (conditional): as long as the edge provider offers many points of presence. |
| Storage | Unlimited: the cloud provides virtually unlimited storage. | Unlimited: a qualified edge provider can offer virtually unlimited storage. |

Table 1. Comparison of Cloud and Edge computing

What Is Cloud Computing?

Cloud computing is the on-demand delivery of computing resources while abstracting the complexities of the underlying infrastructure from end-users. Cloud computing systems are software-defined environments that offer computing services, including servers, storage, networking, databases, software intelligence, and analytics solutions, and much more. The cloud is implemented on the internet and created on top of data centers or server farms. Instead of buying and maintaining hardware, one can use services from a cloud provider as needed. 

Amazon EC2 is one of the best-known cloud services; it lets users create a virtual machine with their choice of processor, storage, networking, operating system, and more. It takes only a few seconds to create the virtual machine and start using it. Other well-known cloud services include Google Kubernetes Engine, Google BigQuery, Amazon RDS, Azure IoT Hub, and Azure Databricks. Amazon, Google, and Microsoft are the three major cloud vendors, but other offerings are available from Alibaba, IBM, Oracle, SAP, DigitalOcean, and more.

Some of the significant benefits of cloud computing include the following:

  • Cost: Cloud computing is cheaper because it follows a pay-for-usage model rather than requiring organizations to maintain their own data centers. 
  • Productivity: Data centers require a lot of maintenance, such as hardware setup and frequent software patches, to keep them up and running. With cloud computing, the team can focus on more important business goals and save the cost of having specialized personnel.
  • Speed: Computing services in the cloud are self-service and on-demand, which means you can be up and running in a few seconds; for example, setting up a new server in a cloud requires just a few clicks.
  • Scalability: Cloud computing resources are elastic and easy to scale, whether that means adding more compute power, extra storage, or bandwidth. Furthermore, one can scale up close to customer bases across the globe. These days, major cloud providers even offer to scale out applications without any downtime. 
  • Performance: Cloud vendors are typically connected across the globe by proprietary networks and regularly upgrade to the latest hardware, so they can provide top-notch performance. 

There are various “as a service” models in the cloud, such as IaaS, PaaS, and SaaS. Infrastructure as a service (IaaS) refers to renting IT infrastructure such as servers, storage, and virtual machines; it is one of the most commonly used models in cloud computing. Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure are examples of IaaS. Platform as a service (PaaS) adds another abstraction layer, the operating system or runtime, on top of IaaS, offering a software platform as well as hardware, as shown in Figure 1. Heroku, Windows Azure, Google App Engine, and SAP Cloud are examples of PaaS. Finally, software as a service (SaaS), also known as cloud application services, delivers a complete application from the cloud, as shown in Figure 1. With SaaS, the cloud provider manages the hardware, operating system, and application, and the application is usually accessible via a web browser. In addition, the cloud provider handles all software updates. Some well-known examples are Gmail, web-based Outlook, Dropbox, and Salesforce.

Figure 1. IaaS, PaaS, and SaaS compared to a custom stack. Source

There are various types of cloud: public, private, and hybrid. The public cloud is the most common type, where computing resources are owned by a third party and can be used over the internet. Multiple organizations share all the resources (hardware, storage, and network devices) simultaneously. A private cloud is a set of computing resources owned and used exclusively by a specific organization. It can be hosted on-premises or by a third-party vendor but will be accessible only on that private network. Private clouds are often used by financial institutions, government agencies, and other organizations having custom requirements to set up the cloud environment. Finally, a hybrid cloud is a combination of both public and private clouds. The organization moves the data between the public and private cloud using some middleware or a virtual private network (VPN). 

Challenges with Cloud Computing

Cloud computing has been designed with centralized architecture in mind, where all the data is brought into a centralized data center for processing. As a result, it provides disaster recovery, scalability, unlimited storage, and computation, enabling application development. However, there are use cases where such centralized architecture doesn’t perform well, and the network becomes a bottleneck. 

The cloud’s centralized approach simplifies the processing architecture, but the network is the cloud’s Achilles’ heel. Centralizing data processing is counterbalanced by the need to transfer that data over the network, especially when an application is scaled across geographies, and it can introduce synchronization issues between different data centers. Devices can generate terabytes of data to be moved over the network, which incurs cost and adds network delays. 

The other challenge is response time: the rate at which the cloud returns results based on the input data. Data is first uploaded to a centralized cloud, then processed, and finally, a result is sent back to the device. Each step takes time.
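The effect of those steps on response time can be sketched with back-of-the-envelope arithmetic; the latency figures below are illustrative assumptions, not measurements of any real deployment.

```python
# Compare round-trip response time for cloud vs. edge processing.
# All millisecond figures are illustrative assumptions.
def round_trip_ms(uplink_ms, processing_ms, downlink_ms):
    """Total time for: upload data -> process -> return result."""
    return uplink_ms + processing_ms + downlink_ms

# Cloud: device -> distant regional data center (assume ~60 ms each way).
cloud = round_trip_ms(uplink_ms=60, processing_ms=20, downlink_ms=60)

# Edge: device -> node on the local network (assume ~5 ms each way).
edge = round_trip_ms(uplink_ms=5, processing_ms=20, downlink_ms=5)

print(f"cloud: {cloud} ms, edge: {edge} ms")  # cloud: 140 ms, edge: 30 ms
```

Even with identical processing time, the transfer legs dominate the cloud round trip, which is why moving computation closer to the device helps.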

Imagine a smart car connected to the cloud and making decisions based on data transferred from its sensors. Suppose the vehicle has to make a critical decision: if it relies on the cloud, it has to transfer a great deal of data for object recognition and then wait for the computed response. Many real-time applications like this are safety-critical and require answers within a fraction of a second, which means it makes more sense to process the data locally. 

Other use cases where cloud computing isn’t the optimal solution include content delivery networks, real-time safety monitoring, smart cities, and most importantly, the Internet of Things (IoT).  

IoT is a set of physical devices or sensors that communicate and transfer data over the network without human-to-human or human-to-computer interaction. IoT growth has enabled data collection from connected devices and allows businesses to derive value from that data; it has enhanced business decision-making, helped businesses proactively mitigate risks, and as a result grown exponentially. However, it faces the same challenge as the cloud: a massive amount of data is moved from the “things” (devices) to data centers, increasing cost, latency, and response time. 

There was a dire need for an architecture that could quickly analyze data and provide better response time cost-effectively. This has led to various ways to tackle the cloud’s challenges, such as edge computing, fog computing, and mist computing.

Edge computing is one architecture that addresses the limitations of the centralized cloud and provides quick results for computing, more immediate insights, lower risk, more trust, and better security. 

What Is Edge Computing?

Edge computing is a distributed framework that brings computation and storage close to the geographical location of the data source. The idea is to offload less compute-intensive processing from the cloud onto an additional layer of computing nodes within the devices’ local network, as shown in Figure 2. Edge computing is often confused with IoT even though edge computing is an architecture while IoT is one of its most significant applications. 

Figure 2. Edge computing infrastructure. Source

Edge solutions provide low latency, high bandwidth, device-level processing, data offload, and trusted computing and storage. In addition, they use less bandwidth because data is processed locally: only aggregated results are uploaded to the cloud, whereas in cloud computing all the raw data is transferred to a centralized data center. Edge computing also provides better data security because only depersonalized data moves out of the local network.
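The data-offload idea can be sketched as follows: a hypothetical edge node reduces a window of raw sensor samples to a small summary record and uploads only that. The field names and payload shape are assumptions for illustration, not any particular provider's format.

```python
# Sketch: an edge node summarizes raw readings locally and uploads only
# the aggregate, instead of shipping every sample to a central data center.
from statistics import mean

def aggregate(readings):
    """Reduce a window of raw samples to a small summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

raw = [21.1, 21.4, 22.0, 21.8, 21.5]  # e.g., a minute of temperature samples
summary = aggregate(raw)              # only this small dict leaves the local network
print(summary)
```

Five floats become one fixed-size record here; with thousands of devices sampling many times per second, the backhaul savings compound quickly.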

Figure 3. Edge computing in a nutshell. Source 

Edge computing exists in different forms including device edge and cloud edge. Device edge is when processing happens on a machine with limited processing power next to the devices. Cloud edge uses a micro data center for data processing locally and communicating with the cloud. In some cases, endpoint devices are also capable of processing natively and communicating directly with the cloud.

Examples

Autonomous cars generate four terabytes of data every few hours. In such a use case, cloud computing will not be a viable solution as the network will become a bottleneck, and cars need to act in a split second. Edge computing can come to the rescue here and complement cloud computing, with significant data processing happening at the edge nodes. 

Similarly, edge computing is being used widely in augmented reality (AR) and virtual reality (VR) applications. A good example is a Pokémon game, where the phone does a lot of processing while acting as an edge node. 

Machine learning can benefit from the edge as well. For example, machine learning models are trained on the cloud using massive amounts of data, but once trained, they are deployed at the edge for real-time predictions. 
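The train-in-the-cloud, predict-at-the-edge split can be sketched like this; the one-variable least-squares model and the data are purely illustrative stand-ins for a real training pipeline.

```python
# Sketch of the cloud/edge split for machine learning:
# heavy training happens centrally; only small model parameters
# are shipped to the edge, where inference runs locally.

def train(xs, ys):
    """'Cloud' step: fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return {"a": a, "b": my - a * mx}  # tiny artifact shipped to the edge

def predict(model, x):
    """'Edge' step: real-time inference using only the shipped parameters."""
    return model["a"] * x + model["b"]

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # trained centrally on bulk data
print(predict(model, 5))                   # runs locally on the edge node
```

The point of the sketch is the asymmetry: training touches all the data, but the deployed artifact is small enough to live on a resource-constrained edge node.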

The Apple iPhone is an excellent example of an edge device taking care of privacy and security. It does encryption and stores the user’s biometric information on the device itself, so it isn’t uploaded to the cloud or any other central repository. In addition, it takes care of all the authentication on the devices, and only depersonalized information is shared to the cloud. 
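A minimal sketch of this on-device pattern using Python's standard library is below. The names and salting scheme are assumptions for illustration, not Apple's actual design: the raw template and salt stay on the device, and only an opaque one-way digest could ever be shared.

```python
# Sketch: sensitive data is handled on-device; only a salted, one-way
# digest ever leaves, and the raw biometric template stays local.
import hashlib
import secrets

def enroll(biometric_template: bytes):
    """Keep the raw template and salt locally; derive an opaque token."""
    salt = secrets.token_bytes(16)  # generated and kept on the device
    token = hashlib.sha256(salt + biometric_template).hexdigest()
    return salt, token              # the token alone reveals nothing personal

salt, token = enroll(b"fingerprint-minutiae-data")
# Verification recomputes the digest locally and compares on-device.
assert token == hashlib.sha256(salt + b"fingerprint-minutiae-data").hexdigest()
```

Because the digest is one-way and salted per device, even identical biometric data on two devices produces unrelated tokens, which is the depersonalization property the paragraph above describes.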

Voice assistants still use cloud computing, and it takes a noticeable amount of time for the end-user to get a response after sending a command. Usually, the voice command is compressed, sent to the server, uncompressed, processed, and the results sent back. Wouldn’t it be amazing if the device itself or an edge node nearby could process those commands and respond to the queries in real-time? It’s possible to achieve such low latency using edge computing. 

5G is also being rolled out, offering higher wireless network bandwidth than older technologies. Telcos need to deploy data centers close to their towers to complement their infrastructure with edge computing and avoid bottlenecks while processing the vast amounts of data generated by new 5G phones and tablets.

Finally, edge computing can be implemented inside enterprise networks or in factory buildings, trains, planes, or private homes. In that scenario, all the sensors are connected to a local edge node that processes the data from the connected devices before sending it to the cloud servers. Such a network is more secure and privacy-compliant because it sends only aggregated data with the personal information taken out. 

Usually, it’s an edge server on a local network that receives data from different devices and processes it in real-time. However, endpoint devices don’t have a great deal of processing power, and they have minimal battery capacity, so conducting any intensive processing on them can deplete their resources. 

Challenge

Edge computing moves the compute and storage to edge nodes, which offers geographically distributed data storage, state management, and data manipulation across multiple devices. Edge locations must perform stateful computing and reconcile copies of data asynchronously to scale, but synchronizing local data copies with peer edge locations is complex and requires specialized technology. Another challenge in developing applications capable of taking advantage of edge computing is the need to integrate various technologies such as a NoSQL database, a graph database, application messaging, and event streaming processing.
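The CRDT approach referred to above can be illustrated with the simplest example, a grow-only counter (G-Counter). This is a minimal sketch of the idea, not any vendor's implementation: each location only ever increments its own slot, and merging takes the per-slot maximum, so replicas converge no matter what order updates arrive in.

```python
# Minimal G-Counter CRDT: conflict-free, coordination-free replication.
class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> that node's local count

    def increment(self, n=1):
        """Apply an update locally, with no coordination."""
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        """Commutative, associative, idempotent: safe in any order."""
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())

us, eu = GCounter("us-east"), GCounter("eu-west")
us.increment(3)          # each edge location updates locally...
eu.increment(2)
us.merge(eu)             # ...then reconciles asynchronously with peers
eu.merge(us)
assert us.value() == eu.value() == 5
```

Because merges are idempotent and order-independent, a replica can sync with any peer at any time, which is exactly the property that makes asynchronous reconciliation across edge locations tractable.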

Solutions

Different technologies provide geo-replication capabilities, including MongoDB, Redis CRDB, and Macrometa. MongoDB is a document-oriented NoSQL database that stores JSON-like documents and provides eventual consistency for geo-replication. The eventual consistency model guarantees that nodes will eventually synchronize if there are no new updates.

Similarly, Redis is an in-memory cache that offloads reads from the database to fast in-memory storage. CRDB is an extension that enables Redis replication across different regions. However, it is limited by the amount of data that can be stored in the database, so it is not ideal for use cases with frequently changing big data. It also offers a maximum of only five regions for replication. 

Macrometa is a purpose-built hosted platform that offers an edge-native architecture for building multi-region, multi-cloud, and edge computing applications. Macrometa provides virtually unlimited edge nodes with a coordination-free approach and can be used with existing architecture without significant architectural changes. In addition, it automates data synchronization across multiple data centers allowing users to develop applications without requiring a specialized knowledge of data synchronization techniques. 

Macrometa provides a modern NoSQL multi-model interface supporting key-value, document, graph, and stream data models.

Conclusion

The idea of edge computing is to get closer to devices to reduce the amount of data that needs to be transferred, which results in better response time. It is not a replacement for the cloud, but it complements cloud computing by addressing some of its shortcomings for specific use cases.  Edge computing systems only transfer relevant data to the cloud, reducing network bandwidth and latency and providing near-real-time results for business-critical applications.

Edge computing is evolving rapidly, and some in the industry believe that the cloud will be used only for massive computations and storage in the future, while all other data will be processed in edge data centers. 

Macrometa offers a free guide to event stream processing for those interested in learning more about the technologies discussed in this article.
