Faster Apps At The Edge With A Geo-Distributed & Replicated Cache

Enterprises struggle to deliver fast response times when data requests must travel to and from a faraway cloud. Bringing your data to the edge and caching it closer to where it originates and where it is needed can lower latency and reduce costs. This blog outlines key sections from our cache whitepaper: why and what you should cache, the key components of a real-time, geo-distributed, and replicated cache, and how the Macrometa Global Data Network (GDN) supports caching.

Why cache?

With edge computing, information can be cached at points of presence (PoPs) worldwide, so applications can access data faster and with lower latency.

Caching can significantly drive down costs by reducing your database load and the need to overprovision instances, since many databases charge for throughput. By caching data at the PoPs, processing and computing can be done closer to the originating device, with far less backhaul traffic to the cloud.

What should you cache?

The question should be: what does the PoP need in order to reply to a user request without first sending a request to a central hub? A few common choices are specific rows of recent data from databases, API results for globally shared data, and configuration files for user applications. Taking food delivery as an example, you may want to cache static map information, route calculations, and any frequently accessed metadata about the deliveries.
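The reasoning above is the classic cache-aside pattern: check the PoP-local cache first, and only go upstream on a miss. A minimal sketch in Python, using route calculations as the cached item (the `fetch_route` helper and its key scheme are illustrative, not a Macrometa API):

```python
def fetch_route(cache, origin, dest, compute_route):
    """Cache-aside read: serve from the PoP-local cache when possible,
    fall back to the expensive upstream call only on a miss."""
    key = f"route:{origin}->{dest}"
    if key in cache:
        return cache[key]                 # hit: answered at the edge
    result = compute_route(origin, dest)  # miss: call the central hub
    cache[key] = result                   # populate for later readers
    return result
```

With a shared cache at the PoP, repeated requests for the same route are answered locally, and the upstream service is called only once per route.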

Key components of a real-time, geo-distributed & replicated cache

It makes sense that a real-time, geo-distributed, and replicated cache has to be fast and have a wide geographic reach, but let’s go over all the key attributes in detail.

  1. A rich data model that handles data in any format, from traditional database records to unstructured data streams, in a unified manner.
  2. The ability to operate at memory speed for real-time information that needs to be transmitted almost immediately - an edge cache needs to allow a PoP to enable quick responses for time-critical tasks.
  3. The ability to work efficiently with serverless applications - since the stateless and distributed concepts of serverless computing complement the edge's enhanced reachability and networking capabilities.
  4. It must be part of a global network that can serve a geographical subset of clients anywhere in the world.
  5. It needs to have geographic boundaries and limit where data travels, thus conforming to regulations like GDPR.

How the Macrometa GDN supports caching

The Macrometa Global Data Network (GDN) was designed to support a real-time, geo-distributed, and replicated edge cache. The image below shows how two PoPs communicate through streams, with conflicts handled by conflict-free replicated data type (CRDT) resolution. At the top, various queries enter in different formats and protocols. The in-memory cache in each PoP responds quickly to client requests. Let’s walk through the other features that help enable a real-time, geo-distributed, and replicated cache.

[Image: two PoPs exchanging CRDT-resolved replication streams, each serving client queries from its in-memory cache]

Multi-model operation

The Macrometa GDN offers multi-model operation, letting users access the same copy of data via multiple interfaces. You simply use different databases at the same endpoint - so it is much easier to integrate different data streams into an application.

In-memory side cache with disk-based persistence

Applications can retrieve data immediately without even needing to access a disk, much less send a request over the network. The data is also written to disk so that it survives a hardware failure and the cache can be restored to a new or restarted node.
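A toy sketch of this side-cache idea: an in-memory dict for reads, backed by an append-only log on disk that a restarted node replays on startup. The file format and class names are illustrative, not how the GDN actually persists data:

```python
import json
import os

class PersistentCache:
    """In-memory cache backed by an append-only log on disk.

    Reads never touch the disk; every write is also appended to the
    log, so a new or restarted node can rebuild the cache by replaying it.
    """
    def __init__(self, log_path):
        self.log_path = log_path
        self._data = {}
        if os.path.exists(log_path):
            with open(log_path) as f:          # replay the log on startup
                for line in f:
                    entry = json.loads(line)
                    self._data[entry["k"]] = entry["v"]

    def get(self, key):
        return self._data.get(key)             # memory-speed read, no disk I/O

    def set(self, key, value):
        self._data[key] = value
        with open(self.log_path, "a") as f:    # persist for crash recovery
            f.write(json.dumps({"k": key, "v": value}) + "\n")
```

A production cache would batch and fsync these writes and compact the log, but the recovery principle is the same: the disk exists only to rebuild memory.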

Hyper-distribution across several regions

Macrometa maintains more than 175 PoPs scattered across many different regions, and also partners with telecom companies, using their facilities to provide geographic flexibility to its clients. Two key features in the GDN that enable rich hyper-distribution of the cache across several regions are:

  • CRDT model: The Macrometa GDN uses a CRDT update model. Each PoP logs all the changes it makes to its cache and sends these changes out to every other PoP at regular intervals. These data transfers are called streams, and instead of a hub-and-spoke topology, CRDTs make the PoPs peers. PoPs send out these compressed and encrypted streams frequently, and each stream contains just a few seconds’ worth of data, so it is not a network burden.
  • Coordination-free garbage collection: Each PoP can independently send, receive, and process data at its own rate without acknowledging or waiting for other PoPs. This coordination-free architecture can be leveraged for efficient garbage collection - the timely truncation of logs to remove old entries.
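To illustrate why CRDT-style merging lets peers converge without coordination, here is a toy last-writer-wins map in Python. Each write carries a (clock, node_id) version, and merging keeps the higher version per key, so replicas agree no matter the order in which streams arrive. Real CRDT implementations, including the GDN's, are considerably richer than this sketch:

```python
class LWWMap:
    """Last-writer-wins map: one of the simplest CRDTs.

    Merging two replicas keeps the highest (clock, node_id) version per
    key, so all replicas converge regardless of stream arrival order.
    """
    def __init__(self, node_id):
        self.node_id = node_id
        self.entries = {}  # key -> (value, (clock, node_id))
        self.clock = 0     # logical clock stands in for wall time

    def set(self, key, value):
        self.clock += 1
        self.entries[key] = (value, (self.clock, self.node_id))

    def changes(self):
        """The change log a PoP would stream to its peers."""
        return dict(self.entries)

    def merge(self, remote_changes):
        for key, (value, version) in remote_changes.items():
            local = self.entries.get(key)
            if local is None or version > local[1]:
                self.entries[key] = (value, version)
            # keep our clock ahead of anything we have seen
            self.clock = max(self.clock, version[0])

    def get(self, key):
        entry = self.entries.get(key)
        return entry[0] if entry else None
```

Because `merge` is commutative and idempotent, no PoP ever has to wait for, or acknowledge, another: applying the same streams in any order yields the same cache.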

Compliance with flexibility

The Macrometa GeoFabrics feature allows you to use a subset of regions based on your own criteria rather than pre-generated groupings (such as US, Europe, or Asia). GeoFabrics helps you comply with local data regulations (such as GDPR) and reduces the read/write costs associated with global replication. GeoFabrics offers more options for storing or accessing data - with a simple on/off switch.

Standardized “Redis-like” cache interface

The GDN cache interface is similar to common caching solutions, so you can quickly get started and implement real-time, geo-replicated caches for your applications. If your application presently uses a cloud-backed cache to serve users, you can easily extend your use cases with the GDN to make your apps ready for the edge.
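To give a feel for what a Redis-like command surface looks like, here is an in-process sketch of the familiar SET (with optional expiry), GET, and DEL commands. It mimics common Redis semantics and is not the actual GDN client API:

```python
import time

class RedisLikeCache:
    """In-process sketch of a Redis-style command surface."""
    def __init__(self):
        self._data = {}    # key -> value
        self._expiry = {}  # key -> absolute expiry time

    def _expired(self, key):
        exp = self._expiry.get(key)
        return exp is not None and time.monotonic() >= exp

    def set(self, key, value, ex=None):
        """SET key value [EX seconds] -> "OK", as in Redis."""
        self._data[key] = value
        if ex is not None:
            self._expiry[key] = time.monotonic() + ex
        else:
            self._expiry.pop(key, None)
        return "OK"

    def get(self, key):
        """GET key -> value, or None if missing or expired."""
        if key not in self._data or self._expired(key):
            self._data.pop(key, None)
            self._expiry.pop(key, None)
            return None
        return self._data[key]

    def delete(self, key):
        """DEL key -> number of keys removed (0 or 1)."""
        existed = key in self._data and not self._expired(key)
        self._data.pop(key, None)
        self._expiry.pop(key, None)
        return 1 if existed else 0
```

An application written against commands like these can be pointed at any cache that speaks the same surface, which is what makes a "Redis-like" interface an easy migration path to the edge.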

Learn more today!

A geo-distributed and replicated cache can support use cases like real-time dynamic advertising, or any app with real-time user expectations. Download this cache whitepaper for more details about caching data closer to your customers, partners, suppliers, and staff at the edge.

Photo by Robynne Hu on Unsplash
