This section describes the sink types supported in Macrometa stream workers.
📄️ Google Pub-Sub
The Google PubSub sink publishes messages to a topic in the Google PubSub server. If the required topic does not exist, the Google PubSub sink creates the topic and publishes messages to it.
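A minimal sketch of a Google PubSub sink annotation; the project ID, topic ID, credential path, and stream name below are placeholder assumptions:

```sql
@sink(type='googlepubsub', project.id='my-gcp-project', topic.id='topicA',
      credential.path='/path/to/credentials.json',
      @map(type='text'))
define stream PubSubStream (message string);
```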
📄️ HTTP Call
The http-call sink publishes messages to endpoints via HTTP or HTTPS protocols using methods such as POST, GET, PUT, and DELETE, in text or JSON format, and consumes responses through its corresponding http-call-response source. It also supports calling endpoints protected with basic authentication or OAuth 2.0.
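A sketch of the call/response pairing, assuming placeholder URLs and stream names; the two sides are linked by a shared `sink.id`:

```sql
-- Publish a request to the endpoint.
@sink(type='http-call', publisher.url='http://localhost:8005/validate',
      method='POST', sink.id='validation-call',
      @map(type='json'))
define stream ValidationStream (payload string);

-- Consume the response through the matching http-call-response source.
@source(type='http-call-response', sink.id='validation-call',
        @map(type='json'))
define stream ValidationResponseStream (result string);
```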
📄️ HTTP Service Response
The http-service-response sink sends responses to the requests consumed by its corresponding http-service source, mapping the response messages to formats such as text and JSON.
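A hedged sketch of the request/response pairing; the `source.id` ties the sink back to its http-service source, and the URL, stream, and attribute names are assumptions:

```sql
-- Source side: receives requests, each tagged with a message ID.
@source(type='http-service', source.id='adder',
        receiver.url='http://localhost:8080/add',
        @map(type='json'))
define stream AddRequestStream (messageId string, value long);

-- Sink side: routes the result back to the waiting request.
@sink(type='http-service-response', source.id='adder',
      message.id='{{messageId}}',
      @map(type='json'))
define stream AddResponseStream (messageId string, result long);
```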
📄️ HTTP
The HTTP sink publishes messages via HTTP or HTTPS protocols using methods such as POST, GET, PUT, and DELETE, in text or JSON format. It can also publish to endpoints protected by basic authentication or OAuth 2.0.
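A minimal sketch, assuming a placeholder endpoint URL and stream definition:

```sql
@sink(type='http', publisher.url='http://localhost:8009/events',
      method='POST',
      @map(type='json'))
define stream EventStream (message string);
```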
📄️ JMS
The JMS sink allows users to connect to a JMS broker and publish JMS messages to it.
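A sketch against an ActiveMQ broker as an illustrative assumption; the destination, factory class, and provider URL are placeholders for your broker's values:

```sql
@sink(type='jms', destination='order-queue',
      factory.initial='org.apache.activemq.jndi.ActiveMQInitialContextFactory',
      provider.url='tcp://localhost:61616',
      @map(type='text'))
define stream OrderStream (message string);
```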
📄️ Kafka
A Kafka sink publishes events processed by a GDN stream worker to a topic with a partition in a Kafka cluster. If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the GDN stream worker event.
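A minimal sketch with a static topic and partition; the topic name, broker address, and stream attributes are assumptions:

```sql
@sink(type='kafka', topic='sensor-readings', partition.no='0',
      bootstrap.servers='localhost:9092',
      @map(type='json'))
define stream SensorStream (deviceId string, temperature double);
```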
📄️ Kafka MultiDC
A Kafka MultiDC sink publishes events processed by a GDN stream worker to a topic with a partition in a Kafka cluster. The events can be published in text, JSON, or binary format. If the topic is not already created in the Kafka cluster, the sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the stream worker event. To configure a sink that uses two Kafka brokers to publish events to the same topic, the type parameter must have kafkaMultiDC as its value.
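A sketch of the two-broker variant described above; the topic, broker addresses, and stream attributes are placeholder assumptions:

```sql
@sink(type='kafkaMultiDC', topic='sensor-readings',
      bootstrap.servers='broker1:9092, broker2:9092',
      @map(type='json'))
define stream SensorStream (deviceId string, temperature double);
```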
📄️ Log
The log sink can be used as a logger. It logs the output events in the output stream with a user-specified priority and prefix.
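A minimal sketch; the prefix text, priority level, and stream definition are assumptions:

```sql
@sink(type='log', prefix='Processed event:', priority='INFO')
define stream ResultStream (item string, total long);
```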
📄️ MQTT
The MQTT sink publishes messages to a topic in the MQTT server. If the required topic does not exist, then the MQTT sink creates the topic and publishes messages to it.
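A minimal sketch; the broker URL, topic, and stream attributes are placeholder assumptions:

```sql
@sink(type='mqtt', url='tcp://localhost:1883', topic='device/commands',
      @map(type='json'))
define stream CommandStream (deviceId string, command string);
```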
📄️ Prometheus
This sink publishes events processed by the stream worker as Prometheus metrics and exposes them to the Prometheus server at the specified URL. The created metrics can be published to Prometheus via a server or a pushGateway, depending on your preference. The metric types supported by the Prometheus sink are counter, gauge, histogram, and summary. The values and labels of the Prometheus metrics can be updated through the events.
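A hedged sketch of a counter metric exposed in server mode; the job name, help text, and stream attributes are assumptions:

```sql
@sink(type='prometheus', job='order-service', publish.mode='server',
      metric.type='counter', metric.help='Total orders processed',
      @map(type='keyvalue'))
define stream OrderCountStream (symbol string, value double);
```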
📄️ S3
The S3 sink publishes events as objects in Amazon AWS S3 buckets.
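A minimal sketch; the bucket name, region, object path, and stream definition are placeholder assumptions (credentials are typically supplied separately):

```sql
@sink(type='s3', bucket.name='event-archive', region='us-east-1',
      object.path='events',
      @map(type='json'))
define stream ArchiveStream (payload string);
```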
📄️ HTTP SSE
The HTTP SSE sink sends events to all subscribers within the GDN only.
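A rough sketch; the type name and the absence of further parameters here are assumptions, and the stream definition is a placeholder:

```sql
@sink(type='sse', @map(type='json'))
define stream NotificationStream (message string);
```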
📄️ TCP
A stream worker application can be configured to publish events via the TCP transport by adding the type='tcp' annotation at the top of an event stream definition.
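The annotation described above can be sketched as follows; the URL and stream definition are placeholder assumptions:

```sql
@sink(type='tcp', url='tcp://localhost:8080/events',
      @map(type='binary'))
define stream TcpStream (message string);
```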