
Computing On The Edge: Screaming In The Cloud Podcast Recap


Originally published 11/1/2022 by Corey Quinn, Chief Cloud Economist at The Duckbill Group, as Screaming in the Cloud podcast episode 405, “Computing on the Edge with Macrometa’s Chetan Venkatesh.”

Chetan Venkatesh, Macrometa CEO and Co-founder, chatted with Corey Quinn to define the edge and explain how it is a “new kind of distributed cloud.” With Macrometa’s Global Data Network, developers can build apps in a distributed way that you “just can’t do in the cloud anymore.” In his chat with Corey, Chetan runs through the need for a consistent data view, the layers of the Macrometa Global Data Network, typical use cases, and how Macrometa is helping developers lower their carbon footprint with more efficient coding and tools.

What is the edge anyway?

The term “edge” has been applied across many aspects of IT, especially around 5G technologies. The recent hype is not surprising when you consider that IDC has predicted edge-related annual revenue will grow to nearly $274B in 2025. During the podcast, Chetan clarifies the term edge with a specific definition and indicates there are three different types.

Chetan indicated the edge is “very different from the cloud” when you consider “...that the cloud is defined by centralization, i.e., you’ve got a giant hyperscale data center somewhere far, far away...where you run things at scale and somebody else manages them for you.”

In contrast, “the edge is actually defined by location.” Chetan describes three types of edge, all of which “are now becoming important as a place for you to build and run enterprise apps.”

The first is the Content Delivery Network (CDN) edge and Chetan describes how this is historically “where we’ve been trying to make things faster with the internet and make the internet scale.” Akamai created the CDN “that allowed the web to scale….that’s the first location that defines the edge…where a lot of the peering happens between different network providers and the on-ramp around the cloud happens.” Akamai and Macrometa are working closely together to expand the edge and Akamai made quite a few appearances during our recent Developer Week.

“The second edge is the telecom edge. That’s actually right next to you…because every time you do something on your computer, it goes through that telecom layer. And now we have the ability to actually run web services, applications, data, directly from that telecom layer,” Chetan relayed to Corey. Macrometa works closely with telecom providers like Cox Edge and DISH Wireless to expand edge capabilities.

The third edge is something that is ready and available at your fingertips and you take it everywhere: your mobile device! Per Chetan, “it’s your internet gateway” and “where you have some compute power, but it’s very restricted and it only deals with things that are interesting or important to you as a person.”

Consistent view of data anywhere in the world

The cloud was originally designed for economies of scale: data lives in a centralized location, and it is easy to add or change technologies. However, the cloud does present some challenges, which Chetan describes: how do you “take data and chop it into 50 copies and keep it in 50 different places on Earth… then keep all those copies in sync?… So you start to deal with some very basic computer science problems like distributed state and how do you build applications that have a consistent view of that distributed state?”

Chetan goes on to describe how an edge solution for this issue has to cover three areas: “a way for programmers to do this in a way that doesn’t blow their heads with complexity, a way to do this cheaply and effectively enough where you can build real-world applications that serve billions of users concurrently at a cost point that actually is economical and makes sense, and third, a way to do this with adequate levels of performance where you don’t die waiting for the spinning wheel on your screen to go away.”

Introduction to the Macrometa Global Data Network

The Macrometa Global Data Network (GDN) is based on cutting-edge computer science, incorporating new ideas and research in the areas of messaging, event processing, data consistency, and replication. The Macrometa GDN has three layers: a Global Data Mesh that is a real-time, geo-distributed storage layer, an Edge Compute layer that processes data and runs real-time apps close to your customers, and a Data Protection layer that helps organizations comply with regulatory and legal standards.

Global Data Mesh

The Global Data Mesh has all the benefits of a NoSQL database with KV, doc, and graph stores - plus Macrometa Streams with Pub/Sub and messaging queues. What separates Macrometa from other data storage solutions is that it brings compute closer to people and where data is generated, regardless of where that is. 
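To make the “multiple models on one dataset” idea concrete, here is a minimal, in-memory Python sketch. It is not Macrometa code and does not use the Macrometa SDK; every name in it (orders, placed, order_events, upsert_order) is invented purely to illustrate how a single write can be surfaced as a key-value lookup, a graph edge, and a pub/sub-style event.

```python
# Illustrative, in-memory sketch of the "one dataset, many models" idea behind the
# Global Data Mesh. Not Macrometa code; all names and structures are invented.
from collections import defaultdict, deque

orders = {}                      # key-value / document view
placed = defaultdict(list)       # graph view: customer -> orders they placed
order_events = deque()           # stream view: change events for subscribers

def upsert_order(key: str, doc: dict) -> None:
    """Write once; each 'model' of the same data sees the change."""
    orders[key] = doc                                   # KV / document store
    placed[doc["customer"]].append(key)                 # graph edge
    order_events.append({"op": "upsert", "key": key})   # stream event

upsert_order("order:1001", {"customer": "cust:42", "sku": "sku:9", "qty": 2})
upsert_order("order:1002", {"customer": "cust:42", "sku": "sku:3", "qty": 1})

print(orders["order:1002"])      # document lookup by key
print(placed["cust:42"])         # graph traversal: orders placed by one customer
print(order_events.popleft())    # consume the next change event from the "stream"
```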

Organizations need a consistent view of data, even if multiple copies are sitting all around the world. Say you are a large retail distribution company and customers are ordering from your site; you need real-time answers from many different locations.

Chetan explains to Corey that programmers need to be able to operate, modify, and replicate that data with consistency guarantees and high levels of performance. Edge computing and the Macrometa platform “literally puts data and applications within 50 milliseconds of 90% of humans and devices on Earth.” This may seem like a new paradigm, but Chetan indicates it is more of a continuum. The Macrometa platform “feels very much like Lambdas, and a poly-model database.”

Data comes in different forms and developers don’t have the time to reformat data. Macrometa “provides continuity by looking like a key-value store like Redis” and has “a streaming graph engine built into it that kind of looks and behaves like a graph database, like Neo4j.” Macrometa can distribute and sync data across hundreds of locations but “it looks like a conventional NoSQL database.”

Plus, Macrometa provides ACID-like consistency and can support changes (like read and update requests) when you have apps running in hundreds of locations and each of those places is modifying the same record.

Chetan explains a typical example of organizations with multiple locations. “We have the ability to connect data sources across all kinds of boundaries; you’ve got some data in Germany and you’ve got some data in New York. How do you put these things together and get them streaming so that you can start to do interesting things with correlating this data?”

Another example is when organizations need to collaborate with supply chain or other partners in real-time to address a supply shortage or distribution issue. With the Global Data Mesh you can, “very quickly connect data wherever it might be in legacy systems, in flat files, in streaming databases, in data warehouses,” per Chetan. Macrometa has “500-plus types of connectors.”

But it is not only about getting the data streaming; it is also about making the data fungible with APIs. “Because the minute you put an API on it and it’s become fungible now that data has actually got a lot of value. And so the data mesh is a way to very quickly connect things up and put an API on it. And that API can now be consumed by front-ends, it can be consumed by other microservices, things like that,” Chetan says, describing the instant benefits of building APIs on top of the Global Data Mesh.
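As a rough illustration of what “consumed by front-ends and other microservices” looks like in practice, the snippet below calls a hypothetical REST endpoint exposed on top of the data mesh. The URL, header name, query parameters, and response shape are all placeholders, not Macrometa’s actual API.

```python
# Hypothetical consumer of a REST API exposed on top of the data mesh.
# Endpoint, header, and response shape are placeholders for illustration only.
import requests

GDN_ENDPOINT = "https://api.example-gdn.com/orders/recent"   # placeholder URL
API_KEY = "replace-with-your-key"                            # placeholder credential

resp = requests.get(
    GDN_ENDPOINT,
    headers={"Authorization": f"apikey {API_KEY}"},
    params={"region": "eu-west", "limit": 10},
    timeout=5,
)
resp.raise_for_status()
for order in resp.json():
    print(order)   # any front-end or microservice can treat the mesh as plain HTTP
```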

Edge Compute

As Chetan relays to Corey, the GDN also “provides an [Edge] Compute layer that’s deeply integrated directly with the data layer itself. So think of it as Lambdas running stored procedures inside the database… We’ve built a very, very specialized compute engine that exposes containers [and] functions as stored procedures directly on the database.” You can easily build apps in Python, Go, or the language of your choice.

The Edge Compute layer has advanced tools such as Query Workers, which let developers create simple REST APIs on top of the data mesh, and Stream Workers, which enable Complex Event Processing (CEP) functions and stream processing workloads.
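The sketch below mimics only the shape of the Query Worker idea, under stated assumptions: a parameterized query is saved once under a name and later invoked as if it were a REST endpoint. The registry, decorator, and query code are invented for illustration and are not Macrometa’s Query Worker or Stream Worker syntax; Stream Workers apply the same “define once, run continuously” pattern to event streams.

```python
# Minimal, self-contained sketch of the Query Worker idea: a named, parameterized
# query saved once and then invoked like a REST endpoint. Invented for illustration.
from typing import Callable

ORDERS = [                                   # stand-in "database"
    {"id": 1, "region": "eu-west", "total": 120},
    {"id": 2, "region": "us-east", "total": 80},
    {"id": 3, "region": "eu-west", "total": 45},
]

QUERY_WORKERS: dict[str, Callable[..., list[dict]]] = {}

def register_query_worker(name: str):
    """Save a parameterized query under a name (stand-in for a saved query + REST route)."""
    def wrap(fn):
        QUERY_WORKERS[name] = fn
        return fn
    return wrap

@register_query_worker("ordersByRegion")
def orders_by_region(region: str, min_total: int = 0) -> list[dict]:
    return [o for o in ORDERS if o["region"] == region and o["total"] >= min_total]

# A caller would hit something like POST /queryworkers/ordersByRegion with bind
# parameters; here we simply look the worker up by name and call it.
print(QUERY_WORKERS["ordersByRegion"](region="eu-west", min_total=50))
```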

Macrometa has a compute runtime “that is Docker compatible, so it runs containers… it’s Lambda-like… so it runs functions.” You can build stateful microservices that can interact with data locally and “then schedule them along with the data on our network of 175 regions.” Chetan goes on to provide some examples of building distributed apps on the GDN.

With a cloud provider, your microservices back-end for a banking, HR SaaS, or ecommerce app may run in a specific region or two, such as us-east-1 in Northern Virginia. In the Macrometa GDN, it can potentially run in 15, 18, or even 25 cities - wherever your end-users are located. If you are ingesting data from an electricity grid in 15-20 cities for industrial IoT, you can do all of that locally with Macrometa.

“So that’s what the edge functions does, it flips the cloud model around because instead of sending data to where the compute is in the cloud, you’re actually bringing compute to where the data is originating, or the data is being consumed, such as through a mobile app,” Chetan explains, summing up the key benefit of Macrometa versus a typical cloud model.
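Here is a small, hedged sketch of that “bring compute to the data” pattern: a function that runs alongside a regional copy of the data, handles requests entirely in-region, and ships only a compact summary upstream. The region name, the in-memory store, and the commented-out ship_to_cloud call are illustrative placeholders, not Macrometa APIs.

```python
# Sketch of the "bring compute to the data" pattern: handle requests next to the
# regional data copy and only send a small summary upstream. Names are placeholders.
from statistics import mean

LOCAL_REGION = "frankfurt"                 # where this instance of the function runs
local_readings: list[float] = []           # stand-in for the local data copy

def ingest_reading(kwh: float) -> dict:
    """Handle an IoT reading entirely in-region; no round trip to a central cloud."""
    local_readings.append(kwh)
    return {"region": LOCAL_REGION, "ack": True, "count": len(local_readings)}

def hourly_rollup() -> dict:
    """Aggregate locally, then send only the compact summary for long-term storage."""
    summary = {
        "region": LOCAL_REGION,
        "readings": len(local_readings),
        "avg_kwh": round(mean(local_readings), 3) if local_readings else 0.0,
    }
    # ship_to_cloud(summary)  # placeholder for the occasional upstream call
    return summary

for r in (1.2, 0.9, 1.4):
    ingest_reading(r)
print(hourly_rollup())
```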

Data Protection

The Data Protection layer lets you have fine-grained control of your data. Chetan describes the region-specific benefits of this layer to Corey. With Macrometa, “you can build an app once, and we’ll provide all the necessary localization for region processing with the tokenization of data so you can exfiltrate data without violating potentially PII-sensitive data exfiltration laws.”

Macrometa helps organizations comply with different privacy and regulatory frameworks and helps keep data secure in each region with geo-pinning or geo-fencing. Developers and operations can easily set up localized data fabrics for regional and global data, and adjust locations with just a toggle. Macrometa is SOC 2 certified and offers user, token-based, and API key authentication.
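To show the general shape of geo-pinning plus tokenization (without claiming this is Macrometa’s configuration format), here is a toy Python sketch: a data fabric description pinned to a few EU regions, and a tokenize helper that replaces PII fields before a record leaves the fabric. All keys, region names, and the hashing scheme are assumptions made for illustration.

```python
# Illustrative sketch only: geo-pinning plus field-level tokenization.
# Keys, regions, and tokenize() are invented, not Macrometa's configuration or API.
import hashlib

DATA_FABRIC = {
    "name": "eu-customers",
    "pinned_regions": ["frankfurt", "paris", "dublin"],   # data stays within these regions
    "pii_fields": ["email", "phone"],                     # tokenized before any export
}

def tokenize(value: str) -> str:
    """Replace a PII value with a non-reversible token (toy example: salted hash)."""
    return "tok_" + hashlib.sha256(("demo-salt:" + value).encode()).hexdigest()[:16]

record = {"id": 7, "email": "ana@example.com", "phone": "+49-30-1234567", "plan": "pro"}
exportable = {
    k: (tokenize(v) if k in DATA_FABRIC["pii_fields"] else v) for k, v in record.items()
}
print(exportable)   # safe to move across regions; raw PII stays pinned in the fabric
```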

Plug and play with APIs on the edge

Of course, almost all customers have an infrastructure already in place and the question may be how to begin with Macrometa. With Macrometa, you can simply plug a set of serverless APIs into your existing applications. 

Chetan explained how customers can get started, depending on the application: “Some of your applications work great in the cloud. Maybe there are just parts of that app that should be on our edge. And that’s usually where most customers start; they take a single web service or two that’s not doing so great in the cloud because it’s too far away; it has data sensitivity, location sensitivity, time sensitivity, and so they use us as a way to just deal with that on the edge.

“And there are other applications where it’s completely what I call edge native, i.e., no dependency on the cloud and runs completely distributed across our network and consumes primarily the edge’s infrastructure, and just maybe send some data back on the cloud for long-term storage or long-term analytics.”

Chetan provides further detail and an example: “The edge is a speciation… of the cloud, into a fast tier where interactions happen, i.e., the edge.” This may be credit card companies checking for fraud patterns to shut down transactions before they lose money, or other time-sensitive transactions. Chetan describes the Macrometa GDN as a “system of interaction, not a system of record.” The three layers of the GDN - the Global Data Mesh, Edge Compute, and Data Protection - “are the way that our APIs are shaped to help our enterprise customers solve these problems,” per Chetan.

One way to understand the full capabilities across Macrometa is to “imagine ten years from now what DynamoDB and global tables with a really fast Lambda and Kinesis with actual Event Processing built directly into Kinesis might be like. That’s Macrometa today, available in 175 cities,” Chetan elaborates on Macrometa functionality.

A network independent of cloud providers

The GDN is available in 175 cities - or points of presence (PoPs) around the world. With cloud computing, latency and cost are issues, as data has to travel to and from a centralized location and the cost of exporting data can be high. Macrometa, in contrast, is set up to reach 2B users and 10B devices in 50ms, at up to 70% less than typical cloud costs.*

Chetan describes the unique way the PoPs network was created: “What we have built is a network that is completely independent of the cloud providers. We’re built on top of five different service providers. Some of them are cloud providers, some of them are telecom providers, some of them are CDNs.”

“And so we’re building our Global Data Network on top of routes and capacity provided by transfer providers who have different economics than the cloud providers do. So our cost for egress falls somewhere between two and five cents, for example, depending on which edge locations, which countries, and things that you’re going to use over there. We’ve got a pretty generous egress fee where, you know, for certain thresholds, there’s no egress charge at all,” Chetan explains when Corey asks about egress charges.

Macrometa has different pricing offerings based on the distribution, scale, and support you need for your organization.

Democratizing latency with the Macrometa GDN

“We stop this whole ping-pong effect of data going around and help create deterministic guarantees around latency, around location, around performance,” Chetan says, providing more detail about latency. “We’re trying to democratize latency and these types of problems in a way that programmers shouldn’t have to worry about all this stuff. You write your code, you push publish, it runs on a network, and it all gets there with a guarantee that 95% of all your requests will happen within 50 milliseconds round-trip time, from any device, in these population centers around the world.”

An Edge as A Service Platform delivering value

The Macrometa platform reduces complexity and accelerates developer velocity. Chetan walks Corey through different use cases. Macrometa can support the targeted and personalized approach required by today’s subscription economy with low latency and in-region processing, while helping you comply with data regulation laws. For example, one customer is “a SaaS company in marketing [that] uses Macrometa to inject offers while people are on their website browsing. Literally, you hit their website, you do a few things, and then boom, there’s a customized offer for them,” as Chetan describes to Corey.

Chetan provides another use case where low latency is required to deliver real-time personalized ads and offers: “in banking you’re making your minimum payments on your credit card, but you have a good payment history and you’ve got a decent credit score, well, let’s give you an offer to give you a short-term loan. So those types of new applications, you know, are really at this intersection where you need low latency, you need in-region processing, and you also need to comply with data regulation.”

Lowering your carbon footprint by being conscious of code efficiency

Chetan talks about how Macrometa “cares a lot about helping make developers more aware of what kind of carbon footprint their code tangibly has on the environment.” Macrometa has “started a foundation that we call the Carbon Conscious Computing Consortium—the four C’s. We’re going to announce that publicly next year, we’re going to invite folks to come and join us and be a part of it.”

Macrometa is also “building a completely open-source, carbon-conscious computing platform that is built on real data that we’re collecting about, to start with, how Macrometa’s platform emits carbon in response to different types of things you build on it,” according to Chetan. “So for example, you wrote a query that hits our database and queries, maybe 20 billion objects inside of our database. It’ll tell you exactly how many micrograms or how many milligrams of carbon—it’s an estimate,” per Chetan.

The cost of carbon will vary across locations and technologies, but the estimate will be a great starting point. Developers can decide when to throttle down, and they can know how many micrograms of carbon a specific query will emit. Per Chetan, “there’s a cost to run code - whether it is a function, container, a query, every operation has a carbon cost. And we’re on a mission to measure that and provide accurate tooling directly in our platform…” Stay tuned and be sure to follow us on LinkedIn to get the most up-to-date information on our Carbon Conscious Computing Consortium.
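As a back-of-envelope illustration of the kind of per-query estimate described above, the sketch below multiplies an assumed energy cost per million objects scanned by an assumed regional grid carbon intensity. Every number here is invented for the arithmetic; it is not data from Macrometa’s platform.

```python
# Every number below is an assumption made for illustration, not measured data.
ENERGY_PER_MILLION_OBJECTS_KWH = 1e-6        # assumed energy for an indexed scan of 1M objects
GRID_INTENSITY_G_PER_KWH = {                 # assumed grid carbon intensity by region
    "frankfurt": 350.0,
    "stockholm": 40.0,
}

def estimated_carbon_mg(objects_scanned: int, region: str) -> float:
    """Rough CO2 estimate for one query, in milligrams."""
    kwh = (objects_scanned / 1_000_000) * ENERGY_PER_MILLION_OBJECTS_KWH
    grams = kwh * GRID_INTENSITY_G_PER_KWH[region]
    return grams * 1000.0  # grams -> milligrams

# The same query "costs" very different amounts on different grids.
for region in GRID_INTENSITY_G_PER_KWH:
    print(region, round(estimated_carbon_mg(50_000_000, region), 2), "mg CO2")
```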

Learn more about Macrometa

Are you ready to dive deeper into the edge and Macrometa? Your organization may be considering the edge because it is challenged by latency and high costs in cloud applications and workloads. Check out this whitepaper written by Durga Gokina, CTO and Co-founder of Macrometa, that discusses how to reduce write costs and the importance of data models and platforms. The paper explains how applying multiple data models on a single copy of data reduces storage and processing costs, and how Macrometa's coordination-free architecture works for data-in-motion and data-at-rest use cases.

You can also get started with Macrometa in minutes with the many tutorials, QuickStarts, and easy-to-follow examples in the documentation. I mean, you can actually build a streaming app in 10 minutes or less! Start by requesting a 30-day trial, or schedule a demo with one of our experts.

*Results may vary.

Photo by ANIRUDH on Unsplash
