The PyTorch team has introduced a new Ray scheduler for TorchX, which lets developers run scalable, distributed PyTorch workloads without setting up infrastructure or changing their training scripts. The scheduler is built on top of the Ray framework, which pairs a general-purpose core for building distributed applications with a rich set of native libraries for ML workloads.

With TorchX's Ray scheduler, users can take their PyTorch machine learning applications from R&D to production while leveraging the ecosystem of libraries and integrations around PyTorch Distributed and PyTorch Lightning. Because Ray's libraries cover hyperparameter optimization, model serving, and distributed data-parallel training, developers can compose complex pipelines while keeping their training scripts decoupled from the underlying infrastructure.

Users submit jobs to a cloud of their choice with the TorchX CLI or SDK, and monitor job status and progress through the TorchX SDK or the Ray Jobs API, as sketched below. This feature is the result of a joint engineering effort between the Meta AI PyTorch and Anyscale ML teams.
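As a rough illustration of the SDK submission path described above, the sketch below builds a distributed data-parallel job with TorchX's built-in `dist.ddp` component and hands it to the Ray scheduler. It assumes TorchX is installed with Ray support, a Ray cluster dashboard is reachable at the address shown, and `train.py` is a placeholder for your own training script; the `cfg` keys are the Ray scheduler's run options as I understand them, not verified against every TorchX release.

```python
# Minimal sketch: submit a DDP job to a Ray cluster via the TorchX SDK.
# Assumptions: torchx installed with Ray support, a Ray dashboard at
# 127.0.0.1:8265, and a local train.py (all placeholders, not from the post).
from torchx.components.dist import ddp
from torchx.runner import get_runner

# Build an AppDef for a distributed job: "2x2" means 2 nodes with
# 2 processes per node, wrapping an existing training script unchanged.
app = ddp(script="train.py", j="2x2")

runner = get_runner()
app_handle = runner.run(
    app,
    scheduler="ray",
    # cfg keys assumed from the Ray scheduler's documented run options.
    cfg={"dashboard_address": "127.0.0.1:8265", "working_dir": "."},
)
print(app_handle)  # an app handle like ray://torchx/<job id>

# Block until the job reaches a terminal state, then report its status.
status = runner.wait(app_handle)
print(status)
```

The same submission can be done from the shell with the `torchx run` command using the `ray` scheduler; the SDK form is shown here because it also exposes `runner.status()` and `runner.wait()` for programmatic monitoring.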
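For the Ray Jobs API side of monitoring, a short sketch using Ray's `JobSubmissionClient` follows. The dashboard address is again an assumption, and the `list_jobs()`/`get_job_status()` calls reflect the Ray 2.x job submission client; older Ray versions shaped these APIs differently.

```python
# Minimal sketch: inspect job status directly through the Ray Jobs API.
# Assumes a Ray 2.x cluster with its dashboard at 127.0.0.1:8265.
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")

# Enumerate the jobs the cluster knows about and print each one's status,
# e.g. PENDING, RUNNING, SUCCEEDED, or FAILED.
for job in client.list_jobs():
    print(job.submission_id, client.get_job_status(job.submission_id))
```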