| Title | Author | Date |  |
|---|---|---|---:|
| We Pull Our Socks (SOC) Up For Security | Thomas Bordes | Feb 27, 2025 | 172 |
| All You Need Is One GPU: Inference Benchmark for Stable Diffusion | Eole Cervenka | Oct 05, 2022 | 1248 |
| Will YOU win Lambda’s Golden Ticket in October?! | Robert Brooks IV | Oct 03, 2024 | 284 |
| What's New - 2025 | Doug Pan | Jan 13, 2025 | 85 |
| DeepChat 3-Step Training At Scale: Lambda’s Instances of NVIDIA H100 SXM5 vs A100 SXM4 | Chuan Li | Oct 12, 2023 | 371 |
| Tips for implementing SSD Object Detection (with TensorFlow code) | Chuan Li | Jan 06, 2019 | 2037 |
| Setting up Horovod + Keras for Multi-GPU training | Chuan Li | Aug 28, 2019 | 736 |
| Choosing the Best GPU for Deep Learning in 2020 | Michael Balaban | Feb 18, 2020 | 1142 |
| Lambda Echelon – a turn key GPU cluster for your ML team | Stephen Balaban | Oct 06, 2020 | 431 |
| NVIDIA GH200 Grace Hopper Superchips Now on Lambda and Available On-Demand | Nick Harvey | Nov 14, 2024 | 639 |
| Introducing ML Times: Your Destination For Digestible AI News And Insights | David Hartmann | Apr 09, 2024 | 998 |
| Set up a GPU accelerated Docker container using Lambda Stack + Lambda Stack Dockerfiles on Ubuntu 20.04 LTS | Stephen Balaban | Feb 10, 2019 | 337 |
| TensorFlow 2.0 Tutorial 03: Saving Checkpoints | Chuan Li | Jun 06, 2019 | 674 |
| September 2022 Lambda GPU Cloud Release Notes | Cody Brownstein | Oct 11, 2022 | 316 |
| Text Generation: Char-RNN Data preparation and TensorFlow implementation | Chuan Li | Feb 08, 2019 | 1569 |
| Lambda Demos: Simplifying ML Demo Hosting | Kathy Bui | May 24, 2023 | 447 |
| TensorFlow 2.0 Tutorial 02: Transfer Learning | Chuan Li | Jun 05, 2019 | 1017 |
| Lambda launches new Hyperplane Server with NVIDIA H100 GPUs and AMD EPYC 9004 series CPUs | Maxx Garrison | Sep 07, 2023 | 298 |
| Partner Spotlight: Testing Llama 3.3 70B inference performance on NVIDIA GH200 with Baseten | Baseten | Feb 07, 2025 | 1086 |
| Lambda Teams Up With Razer to Launch the World’s Most Powerful Laptop for Deep Learning | Rick | Apr 12, 2022 | 787 |
| How to serve DeepSeek-R1 & v3 on NVIDIA GH200 Grace Hopper Superchip (400 tok/sec throughput, 10 tok/sec/query) | Luke Miles | Feb 24, 2025 | 710 |
| TensorFlow 2.0 Tutorial 01: Basic Image Classification | Chuan Li | Oct 01, 2019 | 1886 |
| Install TensorFlow & PyTorch for the RTX 3090, 3080, 3070 | Michael Balaban | Aug 10, 2021 | 333 |
| Lambda Cloud Clusters now available with NVIDIA GH200 Grace Hopper Superchip | Maxx Garrison | Nov 13, 2023 | 454 |
| Tesla A100 Server Total Cost of Ownership Analysis | Chuan Li | Sep 22, 2021 | 2221 |
| GPT-3 A Hitchhiker's Guide | Michael Balaban | Jul 20, 2020 | 1898 |
| How to Run OpenAI's GPT-2 Text Generator on Your Computer | Stephen Balaban | Feb 16, 2019 | 1014 |
| NVIDIA H100 Tensor Core GPU - Deep Learning Performance Analysis | Chuan Li | Oct 05, 2022 | 1655 |
| Lambda and Scale Nucleus: Empowering Your Model Training with Better Data | Justin Pinkney | Oct 19, 2021 | 805 |
| Getting Started Guide — Lambda Cloud GPU Instances | Remy Guercio | May 03, 2020 | 1088 |
| Lambda Cloud Clusters to support NVIDIA H200 Tensor Core GPUs | Maxx Garrison | Nov 13, 2023 | 373 |
| Lambda at NVIDIA GTC 2025: Accelerating AI with NVIDIA Blackwell GPU Clusters | Maxx Garrison | Mar 13, 2025 | 584 |
| Lambda's Machine Learning Infrastructure Playbook and Best Practices | Stephen Balaban | Feb 23, 2022 | 78 |
| Deep Learning Hardware Deep Dive – RTX 3090, RTX 3080, and RTX 3070 | Michael Balaban | Sep 14, 2020 | 1556 |
| Lambda selected as 2024 NVIDIA Partner Network AI Excellence Partner of the Year | Robert Brooks IV | Mar 19, 2024 | 446 |
| StyleGAN 3 | Justin Pinkney | Nov 29, 2021 | 1967 |
| How to fine tune stable diffusion: how we made the text-to-pokemon model at Lambda | Justin Pinkney | Sep 28, 2022 | 1294 |
| Training Neural Networks in Record Time with the Hyperplane-16 | Chuan Li | Dec 19, 2019 | 1490 |
| Lambda Selected as 2023 Americas NVIDIA Partner Network Solution Integration Partner of the Year | Jaimie Renner | Apr 04, 2023 | 305 |
| ShadeRunner: Chrome plugin for enhanced on-page research | David Hartmann | Feb 13, 2024 | 742 |
| Cutting the cost of deep learning — Lambda Cloud 8-GPU V100 instances | Remy Guercio | May 13, 2020 | 776 |
| Unveiling Hermes 3: The First Full-Parameter Fine-Tuned Llama 3.1 405B Model is on Lambda’s Cloud | Mitesh Agrawal | Aug 15, 2024 | 556 |
| Lambda among first NVIDIA Cloud Partners to deploy NVIDIA Blackwell-based GPUs | Maxx Garrison | Mar 18, 2024 | 610 |
| Putting the NVIDIA GH200 Grace Hopper Superchip to good use: superior inference performance and economics for larger models | Thomas Bordes | Nov 22, 2024 | 870 |
| Kubernetes cluster deployment made easy with Lambda and SkyPilot | Mitesh Agrawal | Sep 12, 2024 | 182 |
| Fine-tuning Falcon LLM 7B/40B | Xi Tian | Jun 29, 2023 | 664 |
| Voltron Data Case Study: Why ML teams are using Lambda Reserved Cloud Clusters | Lauren Watkins | Nov 01, 2022 | 1445 |
| Setting up environments: Anaconda | Mark Dalton | Dec 31, 2021 | 591 |
| Lambda's Deep Learning Curriculum | Stephen Balaban | Nov 01, 2021 | 327 |
| Keeping an eye on your GPUs - GPU monitoring tools compared | Justin Pinkney | Mar 29, 2022 | 1720 |
| Reproduce Fast.ai/DIUx imagenet18 with a Titan RTX server | Chuan Li | Jan 15, 2019 | 959 |
| Hugging Face x Lambda: Whisper Fine-Tuning Event | Chuan Li | Dec 01, 2022 | 2034 |
| NVIDIA A100 GPU Benchmarks for Deep Learning | Stephen Balaban | May 22, 2020 | 1270 |
| How To Fine Tune Stable Diffusion: Naruto Character Edition | Eole Cervenka | Nov 02, 2022 | 403 |
| Install CUDA 10 on Ubuntu 18.04 | Stephen Balaban | Feb 10, 2019 | 120 |
| NVIDIA NGC Tutorial: Run a PyTorch Docker Container using nvidia-container-toolkit on Ubuntu | Stephen Balaban | Jul 19, 2021 | 271 |
| A Gentle Introduction to Multi GPU and Multi Node Distributed Training | Stephen Balaban | May 31, 2019 | 566 |
| Host Stable Diffusion with Lambda Demos in just a few clicks! | Cody Brownstein | May 18, 2023 | 340 |
| Exploring AI's Role in Summarizing Scientific Reviews | Xi Tian | Sep 14, 2023 | 2405 |
| TensorFlow 2.0 Tutorial 04: Early Stopping | Chuan Li | Jun 06, 2019 | 452 |
| How a Golden Ticket Could Transform Medicine Forever | Thomas Bordes | Feb 28, 2025 | 407 |
| NVIDIA GeForce RTX 4090 vs RTX 3090 Deep Learning Benchmark | Chuan Li | Oct 31, 2022 | 934 |
| Lambda raises $24.5M to build GPU cloud and deep learning hardware | Stephen Balaban | Jul 16, 2021 | 585 |
| A100 vs V100 Deep Learning Benchmarks | Michael Balaban | Jan 28, 2021 | 383 |
| How FlashAttention-2 Accelerates LLMs on NVIDIA H100 and A100 GPUs | Chuan Li | Aug 24, 2023 | 934 |
| NVIDIA Hopper: H100 and FP8 Support | Jeremy Hummel | Dec 07, 2022 | 1245 |
| Be First, Scale Fast - NVIDIA Blackwell GPU Clusters Now Live on Lambda | Maxx Garrison | Mar 18, 2025 | 1105 |
| Deep learning is the future of gaming. | Stephen Balaban | Jan 04, 2022 | 90 |
| Chat with a PDF using Falcon: Unleashing the Power of Open-Source LLMs! | Xi Tian | Jul 24, 2023 | 512 |
| Titan V Deep Learning Benchmarks with TensorFlow | Michael Balaban | Mar 12, 2019 | 1025 |
| Introducing NVIDIA RTX™ A6000 GPU Instances on Lambda Cloud | Remy Guercio | Apr 23, 2021 | 468 |
| Benchmarking ZeRO-Inference on the NVIDIA GH200 Grace Hopper Superchip | Chuan Li | Dec 20, 2023 | 434 |
| V100 server on-prem vs AWS p3 instance cost comparison | Chuan Li | Feb 11, 2019 | 1170 |
| Lambda Cloud Storage is now in open beta: a high speed filesystem for our GPU instances | Kathy Bui | Apr 19, 2022 | 319 |
| Lambda Raises $480M to Expand AI Cloud Platform | Stephen Balaban | Feb 19, 2025 | 409 |
| Lambda Raises $320M to Build a GPU Cloud for AI | Stephen Balaban | Feb 15, 2024 | 500 |
| Lambda Cloud Deploys On-Demand NVIDIA HGX H100 with 8x H100 SXM Instances | Kathy Bui | Aug 02, 2023 | 284 |
| TensorFlow 2.0 Tutorial 05: Distributed Training across Multiple Nodes | Chuan Li | Jun 07, 2019 | 691 |
| How To Use mpirun to Launch a LLaMA Inference Job Across Multiple Cloud Instances | Chuan Li | Mar 14, 2023 | 891 |
| On-prem GPU Training Infrastructure for Deep Learning - Slides | Stephen Balaban | Jan 25, 2019 | 69 |
| RTX 2080 Ti Deep Learning Benchmarks with TensorFlow | Stephen Balaban | Mar 04, 2019 | 1021 |
| Introducing the Lambda Inference API: Lowest-Cost Inference Anywhere | Nick Harvey | Dec 12, 2024 | 1211 |
| lambdalabs.com is now lambda.ai | Thomas Bordes | Mar 25, 2025 | 527 |
| Lambda launches Vector One, a new single-GPU desktop PC | Samuel Park | Dec 12, 2023 | 493 |
| Get Into The ARMs Race: Future-Proof Your Workloads Now With Lambda | Thomas Bordes | Dec 19, 2024 | 570 |
| Lambda Cloud Deploys NVIDIA H100 Tensor Core GPUs | Kathy Bui | May 10, 2023 | 580 |
| Lambda Cloud accounts now support teams! | Kathy Bui | Jan 13, 2023 | 350 |
| Considerations for Large-Scale NVIDIA H100 Cluster Deployments | David Hall | Jul 13, 2023 | 845 |
| RTX A6000 Deep Learning Benchmarks | Michael Balaban | Jan 04, 2021 | 514 |
| Perform GPU, CPU, and I/O stress testing on Linux | Stephen Balaban | Feb 17, 2019 | 272 |
| OpenAI's GPT-3 Language Model: A Technical Overview | Chuan Li | Jun 03, 2020 | 2669 |
| Persistent storage for Lambda Cloud is expanding! | Kathy Bui | Sep 20, 2023 | 363 |
| RTX A6000 vs RTX 3090 Deep Learning Benchmarks | Chuan Li | Aug 09, 2021 | 465 |
| Persistent storage now available for on-demand NVIDIA H100 GPU instances | Kathy Bui | Dec 19, 2023 | 288 |
| Tracking system resource (GPU, CPU, etc.) utilization during training with the Weights & Biases Dashboard | Chuan Li | Aug 12, 2019 | 2040 |
| Partner Spotlight: Evaluating NVIDIA H200 Tensor Core GPUs for AI Inference with Baseten | Baseten | Oct 25, 2024 | 1618 |
| Lambda is a Diamond Sponsor at NVIDIA GTC! | Maxx Garrison | Mar 12, 2024 | 917 |
| Multi node PyTorch Distributed Training Guide For People In A Hurry | Chuan Li | Aug 26, 2022 | 3043 |
| Multi-GPU enabled BERT using Horovod | Chuan Li | Feb 06, 2019 | 1022 |
| Lambda Cloud Adding NVIDIA H100 Tensor Core GPUs in Early April | Mitesh Agrawal | Mar 21, 2023 | 426 |
| More Options for AI Developers: New On-Demand 1x, 2x and 4x NVIDIA H100 SXM Tensor Core GPU Instances in Lambda’s Cloud | Mitesh Agrawal | Oct 07, 2024 | 528 |
| Training YoloV5 face detector on Lambda Cloud | Cooper L | Aug 15, 2022 | 2098 |
| NVIDIA A40 Deep Learning Benchmarks | Chuan Li | Nov 30, 2021 | 478 |
| Lambda Selected as 2021 NVIDIA Partner Network Solutions Integration Partner of the Year | Rick | Apr 05, 2022 | 327 |
| ResNet9: train to 94% CIFAR10 accuracy in 100 seconds with a single Turing GPU | Chuan Li | Jan 07, 2019 | 668 |
| Unleashing the power of Transformers with NVIDIA Transformer Engine | Chuan Li | Nov 21, 2023 | 532 |
| How to Transfer Data to Lambda Cloud GPU Instances | Remy Guercio | May 03, 2020 | 1394 |
| Setting Up A Kubernetes Run:AI Cluster on Lambda Cloud | Chuan Li | Jun 03, 2022 | 1908 |
| Fine tuning Meta's LLaMA 2 on Lambda GPU Cloud | Corey Lowman | Jul 20, 2023 | 621 |
| Lambda Honored to Accelerate AI Innovation in Healthcare with NVIDIA | Sam Khosroshahi | Mar 19, 2025 | 554 |
| 1, 2 & 4-GPU NVIDIA Quadro RTX 6000 Lambda GPU Cloud Instances | Remy Guercio | Oct 29, 2020 | 732 |
| MLPerf Inference v5.0: Lambda’s Clusters Prove Ready for Today and Tomorrow’s AI Inference Demands | Amit Kumar | Apr 02, 2025 | 540 |
| Lambda Named NVIDIA Partner Network Solution Integration Provider of the Year | Tejas Mehrotra | Jun 21, 2021 | 421 |
| Setting up a Mellanox InfiniBand Switch (SB7800 36-port EDR) | Stephen Balaban | Oct 30, 2019 | 271 |
| Best GPU for Deep Learning in 2022 (so far) | Chuan Li | Feb 28, 2022 | 2082 |
| Lambda Raises $44M to Build the World’s Best Cloud for Training AI | Stephen Balaban | Mar 21, 2023 | 310 |
| Hyperplane-16 InfiniBand Cluster Total Cost of Ownership Analysis | Stephen Balaban | Apr 07, 2020 | 1329 |
| Introducing Lambda 1-Click Clusters, a new way to train large AI models | Mitesh Agrawal | Jun 03, 2024 | 762 |