Score | Title | Date
586 | Uncensor any LLM with abliteration | 2024-06-13
323 | MonadGPT – What would have happened if ChatGPT was invented in the 17th century? | 2023-11-24
252 | LLM in a Flash: Efficient LLM Inference with Limited Memory | 2023-12-20
240 | Microsoft Phi-2 model changes licence to MIT | 2024-01-06
238 | Falcon 180B | 2023-09-06
229 | OpenLLaMA 13B Released | 2023-06-18
214 | Hugging Face Releases Agents | 2023-05-10
197 | Space secrets leak disclosure | 2024-06-01
185 | BigCode Project Releases StarCoder: A 15B Code LLM | 2023-05-04
181 | Best 7B LLM on leaderboards made by an amateur following a medium tutorial | 2024-01-05
168 | Llama 3 8B is almost as good as Wizard 2 8x22B | 2024-04-19
167 | Nvidia releases NVLM 1.0 72B open weight model | 2024-10-02
165 | StackLlama: A hands-on guide to train LlaMa with RLHF | 2023-04-06
163 | Explaining the SDXL Latent Space | 2024-02-05
152 | Hugging Face and Google partner for AI collaboration | 2024-01-25
131 | Mistral-8x7B-Chat | 2023-12-10
131 | A CC-By Open-Source TTS Model with Voice Cloning | 2024-11-04
127 | FineWeb: Decanting the web for the finest text data at scale | 2024-06-02
115 | Yi-34B-Chat | 2023-11-24
107 | GPT-3.5 and Wolfram Alpha via LangChain | 2023-01-18
105 | The Falcon has landed in the Hugging Face ecosystem | 2023-06-05
103 | HuggingChat: Chat with Open Source Models | 2024-02-21
102 | Hugging Face and AWS partner to make AI more accessible | 2023-02-21
101 | HuggingFace Training Cluster as a Service | 2023-09-05
95 | More than 80 AI models from Qualcomm | 2024-02-28
95 | Segmind Stable Diffusion – A smaller version of Stable Diffusion XL | 2023-10-25
94 | LLaMA-Pro-8B | 2024-01-06
93 | HuggingChat | 2023-04-25
88 | Yarn-Mistral-7B-128k | 2023-11-11
82 | Apple/OpenELM: Efficient Open-Source Family Language Models | 2024-04-24
78 | Sparse LLM Inference on CPU: 75% fewer parameters | 2023-10-19
75 | YouTube-Commons: Audio transcripts of 2,063,066 YouTube videos, CC-By license | 2024-04-18
73 | Switch Transformers C – 2048 experts (1.6T params for 3.1 TB) (2022) | 2023-11-20
66 | Multimodal Neurons in Pretrained Text-Only Transformers | 2023-08-04
66 | Show HN: Simply Reading Analog Gauges – GPT4, CogVLM Can't | 2024-01-22
61 | HuggingChat – ChatGPT alternative with open source models | 2023-12-15
58 | MSFT's WizardLM2 models have been taken down | 2024-04-16
58 | OpenLLaMA 7B Training Completed to 1T Tokens | 2023-06-07
57 | Phi-2 | 2023-12-13
56 | Dolphin-2_6-Phi-2 | 2023-12-24
55 | Alibaba releases 72B LLM with 32k context length | 2023-11-30
54 | LiteLlama-460M-1T has 460M parameters trained with 1T tokens | 2024-01-07
52 | Fine-Tuning LLMs to 1.58bit | 2024-09-18
51 | LLaMA 3 70B Llamafiles | 2024-04-19
425 | Llama-3.3-70B-Instruct | 2024-12-06
348 | A Replacement for BERT | 2024-12-19
52 | Train faster static embedding models with sentence transformers | 2025-01-15
394 | Open-R1: an open reproduction of DeepSeek-R1 | 2025-01-28
227 | Kokoro WebGPU: Real-time text-to-speech 100% locally in the browser | 2025-02-07