LightGBM-Ray is a new framework that integrates LightGBM with the distributed computing platform Ray, letting users easily scale LightGBM training and prediction workloads across large clusters or cloud providers. The framework provides seamless integration with Ray Tune for hyperparameter search, multi-node and multi-GPU training, and LightGBM's native handling of categorical features. Compared with XGBoost-Ray, it can offer faster training times, better accuracy on some workloads, and more efficient performance on larger datasets. LightGBM-Ray does not change the underlying LightGBM code; instead it uses Ray to shard the data across training actors and to restart failed actors, providing fault-tolerant distributed training and prediction. The library currently depends on XGBoost-Ray as a hard dependency, from which it reuses much of its distributed machinery, though efforts are underway to remove this requirement in future releases.
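To make the workflow concrete, here is a minimal training sketch based on the `lightgbm_ray` package's documented API (`RayDMatrix`, `RayParams`, `train`). It assumes a Ray cluster is available and that `lightgbm-ray` and scikit-learn are installed; the import is guarded so the snippet degrades gracefully where they are not. Note that the parameter dict is plain LightGBM configuration passed through unchanged, which is the point of the design: Ray handles distribution, LightGBM stays as-is.

```python
# Plain LightGBM parameters; LightGBM-Ray forwards these unchanged.
params = {
    "objective": "binary",
    "eval_metric": ["binary_logloss", "binary_error"],
}

try:
    from lightgbm_ray import RayDMatrix, RayParams, train
    from sklearn.datasets import load_breast_cancer

    X, y = load_breast_cancer(return_X_y=True)
    # RayDMatrix lazily shards the data across the training actors.
    train_set = RayDMatrix(X, y)

    bst = train(
        params,
        train_set,
        evals=[(train_set, "train")],
        # Two actors, two CPUs each; scale these to the cluster size.
        ray_params=RayParams(num_actors=2, cpus_per_actor=2),
    )
    bst.booster_.save_model("model.lgbm")
except ImportError:
    # lightgbm-ray or scikit-learn not installed; the params above
    # still illustrate the pass-through configuration.
    pass
```

`RayDMatrix` can also point at distributed sources (e.g. Parquet files on S3), in which case each actor loads only its own shard rather than materializing the full dataset on one node.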