Macrometa's Global Data Network (GDN) is built on current computer science research in messaging, event processing, data consistency, and replication. Macrometa's team of scientists and engineers has combined leading ideas from distributed systems and concurrent databases, including Conflict-free Replicated Data Types (CRDTs), to create a highly secure, high-performance, low-latency data infrastructure.
At its heart, Macrometa's architecture is a geo-distributed event source with a materialized view engine and CRDT-based replication. For replication consistency, Macrometa provides an adaptive model that lets developers choose the consistency level at fine granularity, achieving different degrees of isolation and consistency depending on their needs.
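As an illustration of what tunable consistency means in practice, the sketch below lets a caller choose, per request, how many replicas must acknowledge a write before it completes. All names here are hypothetical and the replication is simulated; this is not Macrometa's actual API.

```python
# Illustrative sketch of tunable consistency: the client picks, per request,
# how many replica acknowledgements a write must collect before returning.
# All names are hypothetical; replication is simulated with a plain dict.

REPLICAS = ["us-west", "eu-central", "ap-south"]

def write(key, value, store, consistency="one"):
    """Write to replicas; 'one' returns after the first ack (lowest latency),
    'quorum' waits for a majority (stronger consistency)."""
    needed = 1 if consistency == "one" else len(REPLICAS) // 2 + 1
    acks = 0
    for region in REPLICAS:
        store.setdefault(region, {})[key] = value  # simulate a replica write
        acks += 1
        if acks >= needed:
            break  # remaining replicas converge asynchronously
    return acks

store = {}
assert write("user:1", {"tier": "gold"}, store, consistency="quorum") == 2
```

The trade-off is the familiar one: waiting for fewer acknowledgements lowers latency, waiting for more strengthens the guarantee a subsequent read observes the write.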
Unlike eventually consistent databases, Macrometa manages changes at the atomic field level and uses CRDTs to merge changes made across the network into a single version of the truth.
This CRDT-native (Conflict-free Replicated Data Type) data platform exposes the power and versatility of CRDTs through a regular JSON database and API.
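To make the field-level merging concrete, here is a minimal sketch of one common CRDT technique, a per-field last-writer-wins (LWW) register. The encoding below is illustrative only and does not reflect Macrometa's internal CRDT representation; real LWW registers also break timestamp ties with a replica ID, which is omitted here.

```python
# Minimal sketch of field-level last-writer-wins (LWW) merging, one CRDT
# technique that lets concurrent updates converge without coordination.
# Each field is stored as (value, timestamp); the later write wins.
# (Real implementations break timestamp ties with a replica ID.)

def merge(doc_a, doc_b):
    """Merge two replicas of a document, field by field.

    Because the later timestamp always wins, every replica converges to
    the same state regardless of the order in which updates arrive.
    """
    merged = dict(doc_a)
    for field, (value, ts) in doc_b.items():
        if field not in merged or ts > merged[field][1]:
            merged[field] = (value, ts)
    return merged

# Two regions update different fields of the same record concurrently.
us_replica = {"name": ("Ada", 1), "city": ("Austin", 5)}
eu_replica = {"name": ("Ada", 1), "city": ("Berlin", 3), "tier": ("gold", 4)}

# Both merge orders yield the same document (commutativity).
assert merge(us_replica, eu_replica) == merge(eu_replica, us_replica)
```

Because the merge is commutative and idempotent, replicas can exchange updates in any order and still agree on a single version of the truth, without a coordinator.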
The GDN responds to read and update requests while maintaining consistent views of data worldwide. The platform improves latencies (global P99 of 50ms) and scales elastically to meet web-scale applications' demands.
The GDN accepts reads and writes locally, in parallel, in all locations. Users do not need to decide which data belongs in which location, nor redesign their schema each time they add or remove a location.
Macrometa GDN can serve queries, reads, and writes to your apps with less than 50 milliseconds of total round-trip time from the client to our edge database and back.
Use your data for multiple purposes while maintaining a single point of truth. Store, query, and modify data as key/value pairs, JSON documents, graphs, and streams on our multi-master architecture. Respond with local latencies to read and update requests while maintaining consistent views of fast data worldwide.
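The multi-model idea can be sketched with a single record served through two access patterns: fetched by key like a key/value store, or filtered by field like a document database. The collection and record names below are illustrative, not Macrometa's implementation.

```python
# Sketch: one record, multiple data models. The same JSON document can be
# addressed as a key/value pair (by key) or queried document-style (by field
# predicate), so there is a single point of truth behind both access paths.
# Names are illustrative only.

collection = {
    "user:42": {"name": "Ada", "city": "Berlin", "tier": "gold"},
}

# Key/value access: fetch the whole record by its key.
assert collection["user:42"]["tier"] == "gold"

# Document-style query: filter records by a field predicate.
golds = [key for key, doc in collection.items() if doc.get("tier") == "gold"]
assert golds == ["user:42"]
```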
Real-time stream processing allows you to ingest, transform, and process data instantly without needing to send it to the cloud.
Event-driven model - respond to events in the real world as they happen, from within the region.
Stateful stream processing - look up state in the database to process events and build powerful new ways to analyze data.
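A minimal sketch of stateful stream processing: each incoming event is enriched with state looked up from a table, and the state is updated as events flow through. The event shape, the in-memory "table", and the VIP threshold are all assumptions for illustration.

```python
# Sketch of stateful stream processing: each purchase event looks up and
# updates per-user state, and is emitted enriched with that state.
# The in-memory table stands in for a database; names are illustrative.

user_table = {"u1": {"total_spend": 90.0}}  # state held in the database

def process(event):
    """Enrich a purchase event with running spend and flag big spenders."""
    state = user_table.setdefault(event["user"], {"total_spend": 0.0})
    state["total_spend"] += event["amount"]
    return {**event,
            "total_spend": state["total_spend"],
            "vip": state["total_spend"] >= 100.0}  # assumed threshold

out = process({"user": "u1", "amount": 15.0})
assert out["vip"] and out["total_spend"] == 105.0
```

The key point is that processing is not stateless filtering: the stream processor consults and mutates durable state per event, which is what enables aggregations, joins, and threshold alerts over live data.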