| Title | Author | Date | Word count | HN points |
|---|---|---|---|---|
| Achieving 50x Faster Inference than HuggingFace with MonsterDeploy | Sparsh Bhasin | Jan 01, 2025 | 1117 | - |
| Achieving 62x Faster Inference than HuggingFace with MonsterDeploy | Sparsh Bhasin | Jan 01, 2025 | 1127 | 2 |
| How Neural Networks Work: A Beginner's Guide | Sparsh Bhasin | Jan 02, 2025 | 657 | - |
| Role of AI in Personalized Marketing | Sparsh Bhasin | Jan 04, 2025 | 950 | - |
| Difference Between Upstream & Downstream in Microservice | Sparsh Bhasin | Jan 08, 2025 | 718 | - |
| Using ORPO to Improve LLM Fine-tuning with MonsterAPI | Sparsh Bhasin | Jan 12, 2025 | 955 | - |
| CPU vs. GPU: Key Differences & Uses Explained | Sparsh Bhasin | Jan 17, 2025 | 1261 | - |
| Cloud vs. On-Premises: Choosing the Best Deployment Option for LLMs | Sparsh Bhasin | Jan 21, 2025 | 952 | - |