Process millions of observability events with Apache Flink and write directly to Prometheus


AWS recently announced support for a new Apache Flink connector for Prometheus. The new connector, contributed by AWS to the Flink open source project, adds Prometheus and Amazon Managed Service for Prometheus as a new destination for Flink.

In this post, we explain how the new connector works. We also show how you can manage the cardinality of your Prometheus metrics data by preprocessing raw data with Flink, to build real-time observability with Amazon Managed Service for Prometheus and Amazon Managed Grafana.

Amazon Managed Service for Prometheus is a secure, serverless, scalable, Prometheus-compatible monitoring service. You can use the same open source Prometheus data model and query language that you use today to monitor the performance of your workloads, without having to manage the underlying infrastructure. Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to send processed data to an Amazon Managed Service for Prometheus destination starting with Flink version 1.19. With Amazon Managed Service for Apache Flink, you can transform and analyze data in real time. There are no servers or clusters to manage, and there is no compute or storage infrastructure to set up.

Observability beyond compute

In an increasingly connected world, the boundary of systems extends beyond compute assets, IT infrastructure, and applications. Distributed assets such as Internet of Things (IoT) devices, connected cars, and end-user media streaming devices are an integral part of business operations in many sectors. The ability to observe every asset of your business is critical to detecting potential issues early, improving the experience of your customers, and protecting the profitability of the business.

Metrics and time series

It's helpful to think of observability as three pillars: metrics, logs, and traces. The most relevant pillar for distributed devices, like IoT, is metrics. This is because metrics can capture measurements from sensors or counts of specific events emitted by the device.

Metrics are series of samples of a given measurement taken at specific times. For example, in the case of a connected vehicle, they can be the readings from the electric motor RPM sensor. Metrics are usually represented as time series, or sequences of discrete data points in chronological order. Metric time series are usually associated with dimensions, also called labels or tags, that help with classifying and analyzing the data. In the case of a connected vehicle, labels might be something like the following:

  • Metric name – For example, "Electric Motor RPM"
  • Vehicle ID – A unique identifier of the vehicle, such as the Vehicle Identification Number (VIN)

Prometheus as a specialized time series database

Prometheus is a popular solution for storing and analyzing metrics. Prometheus defines a standard interface for storing and querying time series. Commonly used in combination with visualization tools like Grafana, Prometheus is optimized for real-time dashboards and real-time alerting.

Often considered primarily for observing compute resources, like containers or applications, Prometheus is actually a specialized time series database that can effectively be used to observe different types of distributed assets, including IoT devices.

Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service. See What is Amazon Managed Service for Prometheus? to learn more about Amazon Managed Service for Prometheus.

Effectively processing observability events at scale

Handling observability data at scale becomes harder due to the number of assets and unique metrics, especially when observing massively distributed devices, for the following reasons:

  • High cardinality – Each device emits multiple metrics or types of events, each to be tracked independently.
  • High frequency – Devices might emit events very frequently, multiple times per second. This can result in a large volume of raw data. This aspect in particular represents the main difference from observing compute resources, which are usually scraped at longer intervals.
  • Events arrive at irregular intervals and out of order – Unlike compute assets that are usually scraped at regular intervals, we often see transmission delays or temporarily disconnected devices, which cause events to arrive at irregular intervals. Concurrent events from different devices might follow different paths and arrive at different times.
  • Lack of contextual information – Devices often transmit over channels with limited bandwidth, such as GPRS or Bluetooth. To optimize communication, events seldom contain contextual information, such as the device model or customer details. However, this information is required for effective observability.
  • Derive metrics from events – Devices often emit specific events when something notable happens. For example, when the vehicle ignition is turned on or off, or when a warning is emitted by the onboard computer. These aren't direct metrics. However, counting and measuring the rates of these events are valuable metrics that can be inferred from them.

Effectively extracting value from raw events requires processing. Processing might happen on read, when you query the data, or upfront, before storing.

Storing and analyzing raw events

The common approach with observability events, and with metrics in particular, is "storing first." You can simply write the raw metrics into Prometheus. Processing, such as grouping, aggregating, and calculating derived metrics, happens "on query," when data is extracted from Prometheus.

This approach can become particularly inefficient when you're building real-time dashboards or alerting and your data has very high cardinality or high frequency. As the time series database is continuously queried, a large volume of data is repeatedly extracted from storage and processed. The following diagram illustrates this workflow.

Process on query

Preprocessing raw observability events

Preprocessing raw events before storing shifts the work left, as illustrated in the following diagram. This increases the efficiency of real-time dashboards and alerts, allowing the solution to scale.

Pre-process

Apache Flink for preprocessing observability events

Preprocessing raw observability events requires a processing engine that allows you to do the following:

  • Enrich events efficiently, looking up reference data and adding new dimensions to the raw events. For example, adding the vehicle model based on the vehicle ID. Enrichment adds new dimensions to the time series, enabling analysis that would otherwise be impossible.
  • Aggregate raw events over time windows, to reduce frequency. For example, if a vehicle emits an engine temperature measurement every second, you can emit a single sample with the average over 5 seconds. Prometheus can efficiently aggregate frequent samples on read. However, ingesting data at a frequency much higher than what is useful for dashboarding and real-time alerting is not an efficient use of Prometheus ingestion throughput and storage.
  • Aggregate raw events over dimensions, to reduce cardinality. For example, aggregating a measurement per vehicle model.
  • Calculate derived metrics by applying arbitrary logic. For example, counting the number of warning events emitted by each vehicle. This also enables analysis that would otherwise be impossible using only Prometheus and Grafana.
  • Support event-time semantics, to aggregate over time events coming from different sources.

Such a preprocessing engine must also be able to scale to the large volume of incoming raw events, and to process data with low latency (typically subsecond or single-digit seconds) to enable real-time dashboards and alerting. To address these requirements, we see many customers using Flink.

Apache Flink meets these requirements. Flink is a framework and distributed stream processing engine, designed to perform computations at in-memory speed and at scale. Amazon Managed Service for Apache Flink offers a fully managed, serverless experience, allowing you to run your Flink applications without managing infrastructure or clusters.
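To illustrate the windowing and event-time requirements listed above, the following minimal sketch uses the Flink DataStream API to downsample a stream of raw samples to one output per series every 5 seconds. The tuple layout (series key, value, event timestamp in milliseconds), the 10-second out-of-orderness bound, and the max aggregation are illustrative assumptions, not code from the example application described later in this post.

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class Downsampler {
        // Reduces frequency: emits one sample per series key every 5 seconds of event time
        public static DataStream<Tuple3<String, Double, Long>> downsample(
                DataStream<Tuple3<String, Double, Long>> raw) {
            return raw
                // Event-time semantics: tolerate events arriving up to 10 seconds out of order
                .assignTimestampsAndWatermarks(
                    WatermarkStrategy.<Tuple3<String, Double, Long>>forBoundedOutOfOrderness(
                            Duration.ofSeconds(10))
                        .withTimestampAssigner((event, ts) -> event.f2))
                // One logical time series per key (for example, metric name plus device ID)
                .keyBy(event -> event.f0)
                .window(TumblingEventTimeWindows.of(Time.seconds(5)))
                // Keep the maximum value observed in each window; an average would be
                // a small AggregateFunction instead
                .maxBy(1);
        }
    }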

Amazon Managed Service for Apache Flink can process the ingested raw events. The resulting metrics, with lower cardinality and frequency and with additional dimensions, can be written to Prometheus for more effective visualization and analysis. The following diagram illustrates this workflow.

Amazon Managed Service for Apache Flink, Amazon Managed Prometheus and Grafana

Integrating Apache Flink and Prometheus

The new Flink Prometheus connector allows Flink applications to seamlessly write preprocessed time series data to Prometheus. No intermediate component is required, and there is no need to implement a custom integration. The connector is designed to scale, using Flink's ability to scale horizontally and optimizing writes to the Prometheus backend through the Remote-Write interface.

Example use case

AnyCompany is a car rental company managing a fleet of hundreds of thousands of hybrid connected vehicles, in multiple regions. Each vehicle continuously transmits measurements from multiple sensors. Each sensor emits a sample every second or more frequently. Vehicles also communicate warning events when something wrong is detected by the onboard computer. The following diagram illustrates the workflow.

Example use case: connected cars

AnyCompany is planning to use Amazon Managed Service for Prometheus and Amazon Managed Grafana to visualize vehicle metrics and set up custom alerts.

However, building a real-time dashboard based on the raw data, as transmitted by the vehicles, would be complicated and inefficient. Each vehicle might have hundreds of sensors, each of them resulting in a separate time series to display. Additionally, AnyCompany wants to monitor the behavior of different vehicle models. Unfortunately, the events transmitted by the vehicles only contain the VIN. The model can be inferred by looking up (joining with) reference data.

To overcome these challenges, AnyCompany has built a preprocessing stage based on Amazon Managed Service for Apache Flink. This stage has the following capabilities:

  • Enrich the raw data by adding the vehicle model, looking up reference data based on the vehicle identification.
  • Reduce the cardinality, aggregating the results per vehicle model, available after the enrichment step.
  • Reduce the frequency of the raw metrics to reduce write bandwidth, aggregating over time windows of a few seconds.
  • Calculate derived metrics based on multiple raw metrics. For example, determine whether a vehicle is in motion when either the internal combustion engine or the electric motor is rotating, as sketched in the example following this list.
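The enrichment and derived-metric steps can be expressed with a simple Flink map function. The sketch below is a minimal illustration under assumed names: the VehicleEvent and EnrichedEvent classes, their fields, and the in-memory VIN-prefix lookup table are hypothetical. A production application would typically load reference data through async I/O, a broadcast stream, or a lookup join rather than a hardcoded map.

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;

    // Hypothetical enrichment step: adds the vehicle model (looked up from the VIN prefix) and a
    // derived "in motion" flag, true when either the IC engine or the electric motor is rotating.
    public class EnrichWithModel extends RichMapFunction<VehicleEvent, EnrichedEvent> {

        private transient Map<String, String> modelByVinPrefix;

        @Override
        public void open(Configuration parameters) {
            // Illustrative in-memory reference data; real applications load it from a data store
            modelByVinPrefix = new HashMap<>();
            modelByVinPrefix.put("WVWZZ", "model-a");
            modelByVinPrefix.put("1HGCM", "model-b");
        }

        @Override
        public EnrichedEvent map(VehicleEvent event) {
            String model = modelByVinPrefix.getOrDefault(event.vin.substring(0, 5), "unknown");
            boolean inMotion = event.icRpm > 0 || event.electricRpm > 0; // derived metric input
            return new EnrichedEvent(event, model, inMotion);
        }
    }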

The result of preprocessing is a set of more actionable metrics. A dashboard built on these metrics can, for example, help determine whether the latest software update released over-the-air to all cars of a specific model in specific regions is causing issues.

Using the Flink Prometheus connector, the preprocessor application can write directly to Amazon Managed Service for Prometheus, without intermediate components.

Nothing prevents you from also choosing to write raw metrics with full cardinality and frequency to Prometheus, allowing you to drill down to the single vehicle. The Flink Prometheus connector is designed to scale by batching and parallelizing writes.

Solution overview

The following GitHub repository contains a fictional end-to-end example covering this use case. The following diagram illustrates the architecture of this example.

Example architecture

The workflow consists of the following steps:

  1. Cars, radio transmission, and ingestion of IoT events have been abstracted away and replaced by a data generator that produces raw events for 100,000 fictional vehicles. For simplicity, the data generator is itself an Amazon Managed Service for Apache Flink application.
  2. Raw vehicle events are sent to a stream storage service. In this example, we use Amazon Managed Streaming for Apache Kafka (Amazon MSK).
  3. The core of the system is the preprocessor application, running in Amazon Managed Service for Apache Flink. We dive deeper into the details of the preprocessor in the following sections.
  4. Processed metrics are written directly to the Prometheus backend, in Amazon Managed Service for Prometheus.
  5. Metrics are used to generate real-time dashboards in Amazon Managed Grafana.

The following screenshot shows a sample dashboard.

Grafana dashboard

Raw vehicle events

Each vehicle transmits three metrics approximately every second:

  • Internal combustion (IC) engine RPM
  • Electric motor RPM
  • Number of reported warnings

The raw events are identified by the vehicle ID and the region where the vehicle is located.
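For illustration, one raw event can be represented by a simple POJO like the following. The class and field names are assumptions made for this post, not the exact types used in the repository.

    // Hypothetical shape of one raw vehicle event (names are illustrative)
    public class VehicleEvent {
        public String vin;            // vehicle ID (VIN)
        public String region;         // region where the vehicle is located
        public int icRpm;             // internal combustion engine RPM
        public int electricRpm;       // electric motor RPM
        public int warnings;          // number of reported warnings
        public long timestampMillis;  // event time assigned by the onboard device

        public VehicleEvent() {}      // no-argument constructor, required for Flink POJO serialization
    }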

Preprocessor application

The following diagram illustrates the logical data flow of the preprocessing application running in Amazon Managed Service for Apache Flink.

Flink application logical data flow

The workflow consists of the following steps:

  1. Raw events are ingested from Amazon MSK using the Flink Kafka source.
  2. An enrichment operator adds the vehicle model, which is not contained in the raw events. This additional dimension is then used to aggregate the raw events. The resulting metrics have only two dimensions: vehicle model and region.
  3. Raw events are then aggregated over time windows (5 seconds) to reduce frequency. In this example, the aggregation logic also generates a derived metric: the number of vehicles in motion. A new metric can be derived from raw metrics with arbitrary logic. For the sake of the example, a vehicle is considered "in motion" if either the IC engine or the electric motor RPM metric is not zero.
  4. The processed metrics are mapped into the input data structure of the Flink Prometheus connector, which maps directly to the time series records expected by the Prometheus Remote-Write interface. Refer to the connector documentation for more details.
  5. Finally, the metrics are sent to Prometheus using the Flink Prometheus connector (a minimal sketch of this step follows the list). Write authentication, required by Amazon Managed Service for Prometheus, is seamlessly enabled using the Amazon Managed Service for Prometheus request signer provided with the connector. Credentials are automatically derived from the AWS Identity and Access Management (IAM) role of the Amazon Managed Service for Apache Flink application. No additional secret or credential is required.
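The following minimal sketch shows the last two steps: mapping a processed metric into a PrometheusTimeSeries record and configuring the sink with the Amazon Managed Service for Prometheus request signer. Class and method names follow the connector documentation at the time of writing (package org.apache.flink.connector.prometheus.sink); the remote write URL, Region, metric name, and labels are placeholder assumptions, so check the connector documentation for the exact API before relying on this.

    import org.apache.flink.connector.prometheus.sink.PrometheusSink;
    import org.apache.flink.connector.prometheus.sink.PrometheusTimeSeries;
    import org.apache.flink.connector.prometheus.sink.aws.AmazonManagedPrometheusWriteRequestSigner;
    import org.apache.flink.streaming.api.datastream.DataStream;

    public class PrometheusSinkExample {

        public static void attachSink(DataStream<PrometheusTimeSeries> timeSeries) {
            // Placeholder values: replace with your workspace remote write URL and Region
            String remoteWriteUrl =
                "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write";
            String awsRegion = "us-east-1";

            PrometheusSink sink = (PrometheusSink) PrometheusSink.builder()
                .setPrometheusRemoteWriteUrl(remoteWriteUrl)
                // Signs Remote-Write requests with credentials from the application's IAM role
                .setRequestSigner(new AmazonManagedPrometheusWriteRequestSigner(remoteWriteUrl, awsRegion))
                .build();

            timeSeries.sinkTo(sink);
        }

        // Example of mapping one processed metric into the connector's input data structure
        public static PrometheusTimeSeries toTimeSeries(
                String model, String region, double vehiclesInMotion, long timestampMillis) {
            return PrometheusTimeSeries.builder()
                .withMetricName("vehicles_in_motion")    // illustrative metric name
                .addLabel("model", model)
                .addLabel("region", region)
                .addSample(vehiclesInMotion, timestampMillis)
                .build();
        }
    }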

In the GitHub repository, you can find step-by-step instructions to set up the working example and create the Grafana dashboard.

Flink Prometheus connector key features

The Flink Prometheus connector allows Flink applications to write processed metrics to Prometheus, using the Remote-Write interface.

The connector is designed to scale write throughput by:

  • Parallelizing writes, using Flink's parallelism capability
  • Batching multiple samples in a single write request to the Prometheus endpoint

Error handling complies with the Prometheus Remote-Write 1.0 specification, which is particularly strict about how writes rejected by Prometheus as malformed or out of order must be handled.

When a malformed or out-of-order write is rejected, the connector discards the offending write request and continues, preferring data freshness over completeness. However, the connector makes data loss observable, emitting WARN log entries and exposing metrics that measure the volume of discarded data. In Amazon Managed Service for Apache Flink, these connector metrics can be automatically exported to Amazon CloudWatch.

Responsibilities of the user

The connector is optimized for efficiency, write throughput, and latency. Validating incoming data would be particularly expensive in terms of CPU utilization. Additionally, different Prometheus backend implementations enforce constraints differently. For these reasons, the connector doesn't validate incoming data before writing to Prometheus.

The user is responsible for making sure that the data sent to the Flink Prometheus connector complies with the constraints enforced by the particular Prometheus implementation they are using.

Ordering

Ordering is particularly relevant. Prometheus expects that samples belonging to the same time series (samples with the same metric name and labels) are written in time order. The connector makes sure ordering is not lost when data is partitioned to parallelize writes.

However, the user is responsible for retaining the ordering upstream in the pipeline. To achieve this, the user must carefully design data partitioning within the Flink application and the stream storage. Only key-based partitioning must be used, and the partitioning key must be a compound of the metric name and all labels that will be used in Prometheus.
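As a minimal sketch of such upstream partitioning, keying the stream by the compound of metric name and all Prometheus labels keeps every time series within a single Flink partition. The EnrichedMetric type and its metricName, model, and region fields are hypothetical names used only for illustration.

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.datastream.KeyedStream;

    public class OrderingPartitioner {
        // Hypothetical compound key: the metric name plus every label used in Prometheus
        public static KeyedStream<EnrichedMetric, String> partitionByTimeSeries(
                DataStream<EnrichedMetric> enrichedMetrics) {
            return enrichedMetrics.keyBy(m -> m.metricName + "|" + m.model + "|" + m.region);
        }
    }

Windowing, the mapping to the connector input structure, and the sink then operate per key, so samples of the same time series are not reordered across parallel subtasks.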

Conclusion

Prometheus is a specialized time series database, designed for building real-time dashboards and alerting. Amazon Managed Service for Prometheus is a fully managed, serverless backend compatible with the Prometheus open source standard. Amazon Managed Grafana allows you to build real-time dashboards, seamlessly interfacing with Amazon Managed Service for Prometheus.

You can use Prometheus for observability use cases beyond compute resources, to observe IoT devices, connected cars, media streaming devices, and other highly distributed assets providing telemetry data.

Directly visualizing and analyzing high-cardinality and high-frequency data can be inefficient. Preprocessing raw observability events with Amazon Managed Service for Apache Flink shifts the work left, greatly simplifying the dashboards and alerting you can build on top of Amazon Managed Service for Prometheus.

For more information about running Flink, Prometheus, and Grafana on AWS, see the documentation for these services.

For more information about the Flink Prometheus integration, see the Apache Flink documentation.


About the authors

Lorenzo Nicora works as a Senior Streaming Solutions Architect at AWS, helping customers across EMEA. He has been building cloud-centered, data-intensive systems for over 25 years, working across industries both through consultancies and product companies. He has used open source technologies extensively, has contributed to several projects including Apache Flink, and is the maintainer of the Flink Prometheus connector.

Francisco Morillo is a Senior Streaming Solutions Architect at AWS. Francisco works with AWS customers, helping them design real-time analytics architectures using AWS services, supporting Amazon MSK and Amazon Managed Service for Apache Flink. He is also a main contributor to the Flink Prometheus connector.
