Troubleshoot and optimize data processing workloads with Data Jobs Monitoring
Data Jobs Monitoring (DJM) is a solution that helps data platform teams and engineers detect and debug failing or long-running jobs, while offering insight into job cost and optimization opportunities. DJM gathers performance telemetry from Spark and Databricks jobs across all accounts and environments, providing the full context needed to understand the health and efficiency of data pipelines. It enables users to identify issues in their data processing workloads, pinpoint and resolve them faster, and reduce costs by right-sizing overprovisioned clusters and tuning inefficient jobs. DJM is now available for Databricks jobs, as well as Spark jobs running on Amazon EMR or Kubernetes.
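As a rough sketch of what this looks like in practice (not taken from the article itself): on self-managed Spark deployments such as EMR or Kubernetes, telemetry collection is typically enabled by attaching Datadog's Java tracer to the job at submit time. The jar path, service name, and the dd.data.jobs.enabled flag below are assumptions based on Datadog's Java agent conventions, not details confirmed by this summary.

```python
"""
Hypothetical DJM instrumentation sketch. Spark requires driver JVM options
to be set before the driver launches, so the tracer is attached at submit
time rather than inside the application (paths and flags are assumptions):

  spark-submit \
    --driver-java-options "-javaagent:/opt/datadog/dd-java-agent.jar \
        -Ddd.data.jobs.enabled=true -Ddd.service=daily_orders_rollup" \
    daily_orders_rollup.py
"""
from pyspark.sql import SparkSession

# Job name as it would surface in DJM's job list.
spark = SparkSession.builder.appName("daily_orders_rollup").getOrCreate()

# A trivial workload standing in for the pipeline DJM would observe:
# bucket a million rows and count per bucket.
df = spark.range(1_000_000).selectExpr("id", "id % 7 AS bucket")
df.groupBy("bucket").count().show()

spark.stop()
```

On Databricks, the equivalent wiring is usually handled through a cluster init script that installs the Datadog Agent, rather than per-job spark-submit flags.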
Company
Datadog
Date published
June 20, 2024
Author(s)
Fionce Siow, Ryan Warrier
Word count
1356
Language
English