Nvidia's Run:ai is now supported by the OpenMeter Collector, enabling accurate billing and invoicing for GPU-intensive workloads. Run:ai pools and allocates GPU resources across teams and environments; with the Collector, organizations can meter what is actually used, attribute it to customers or teams, and bill accordingly.

The OpenMeter Collector captures usage data from multiple sources in your infrastructure: Run:ai workloads and pods, Kubernetes pods, storage and network usage, Prometheus metrics, and databases such as PostgreSQL and ClickHouse. It provides detailed resource metrics covering GPU allocation time, memory usage, and bandwidth, with multi-tenant attribution that supports cost visibility, chargebacks, and metered billing.

OpenMeter supports pricing models such as per GPU type, by allocation, by workload type, and SLA-based pricing, so organizations can generate invoices automatically from recorded usage and set per-tenant limits and usage thresholds.
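To make the flow concrete, here is a minimal sketch of the kind of CloudEvents-style usage event a collector might emit for GPU allocation time, with the tenant as the billing subject. The event type, source name, and data fields below are illustrative assumptions, not OpenMeter's documented schema for Run:ai.

```python
import json
import uuid
from datetime import datetime, timezone

def gpu_usage_event(tenant_id, workload, gpu_type, allocation_hours):
    """Build a CloudEvents-shaped usage event (hypothetical field names)."""
    return {
        "specversion": "1.0",                       # CloudEvents spec version
        "type": "gpu_allocation",                   # assumed event type mapped to a meter
        "id": str(uuid.uuid4()),                    # unique id, useful for deduplication
        "time": datetime.now(timezone.utc).isoformat(),
        "source": "runai-collector",                # assumed collector identifier
        "subject": tenant_id,                       # customer/team the usage is attributed to
        "data": {
            "workload": workload,
            "gpu_type": gpu_type,
            "allocation_hours": allocation_hours,
        },
    }

event = gpu_usage_event("team-ml", "training-job-42", "a100", 2.5)
print(json.dumps(event, indent=2))
```

Keying the event on `subject` is what enables the multi-tenant attribution described above: each tenant's GPU hours can be summed independently and fed into the chosen pricing model.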