Dan Fu and Tri Dao have developed Hungry Hungry Hippos (H3), a language modeling architecture that performs comparably to transformers while supporting much longer context lengths, making it suitable for tasks such as audio processing and biological applications. Their approach uses state space models, which draw on long-standing ideas from control theory adapted for deep learning.

H3 achieves impressive results on standard language modeling benchmarks, often rivaling or surpassing transformer-based models, and a hybrid architecture that combines it with one or two attention layers performs better still. The researchers believe that state space methods could be more efficient at inference time, a crucial concern for deploying these models in products.

Potential applications of H3 include code generation, video processing, and biology, as well as interactive AI workflows and automatic slide generation. These emerging architectures will require sustained interaction between users and the system, making long-range context increasingly important.
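To make the inference-efficiency intuition concrete, here is a minimal sketch of the discrete state space recurrence underlying such models. This is an illustrative toy, not the actual H3 implementation: the matrices `A`, `B`, `C` are hand-picked, and H3 itself uses learned parameters and more elaborate mechanisms. The key property shown is that the hidden state is updated once per token, so each new output costs time proportional to the state size regardless of how much context has been seen, unlike attention, whose per-token cost grows with sequence length.

```python
# Toy discrete state space model: x' = A x + B u, y = C x'.
# Illustrative only -- not the H3 architecture itself.

def ssm_step(A, B, C, x, u):
    """Advance the state one step for a scalar input u and emit an output."""
    n = len(x)
    x_new = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * u
             for i in range(n)]           # state update: x' = A x + B u
    y = sum(C[j] * x_new[j] for j in range(n))  # readout: y = C x'
    return x_new, y

def ssm_scan(A, B, C, inputs):
    """Run the recurrence over a whole input sequence."""
    x = [0.0] * len(B)
    ys = []
    for u in inputs:
        x, y = ssm_step(A, B, C, x, u)
        ys.append(y)
    return ys

# Hand-picked 2-dimensional state (purely illustrative values).
A = [[0.5, 0.0], [0.0, 0.5]]
B = [1.0, 1.0]
C = [1.0, 0.0]
print(ssm_scan(A, B, C, [1.0, 0.0, 0.0]))  # [1.0, 0.5, 0.25]
```

The impulse response decays geometrically because the state transition here is a simple contraction; what matters for deployment is that generating each token only touches the fixed-size state, so no growing key-value cache is needed.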