Company
Together AI
Date Published
June 11, 2024
Author
Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou
Word count
1422
Language
English
Hacker News points
2

Summary

Together MoA (Mixture of Agents) introduces a novel approach that harnesses the collective strengths of multiple LLMs, leveraging their diverse capabilities and insights to improve state-of-the-art quality. By adopting a layered architecture in which each layer comprises several LLM agents, MoA effectively integrates diverse models into a more robust and versatile combined system. The reference implementation, Together MoA, surpasses the prior leader GPT-4o on AlpacaEval 2.0, achieving a score of 65.1% using only open-source models. The approach builds on the collaborativeness of LLMs: a model tends to generate better responses when presented with outputs from other models. MoA divides models into two roles, proposers and aggregators: proposers generate initial reference responses, and aggregators synthesize them into higher-quality responses across multiple layers. The study shows that integrating a wider variety of inputs from different models significantly enhances the output, highlighting the value of the diverse perspectives and capabilities that different models offer. The Together MoA method significantly outperforms strong closed-source models in accuracy and quality, making it an exciting approach for enhancing AI systems.
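
To make the proposer/aggregator flow concrete, below is a minimal sketch of the layered architecture described in the summary. It assumes a hypothetical query_model(model, system, user) helper that wraps whatever chat-completion client you use; the model names and the aggregation prompt wording are illustrative approximations, not the exact prompt or configuration used by Together MoA.

```python
# Minimal sketch of a layered Mixture-of-Agents loop, under the assumptions
# stated above. `query_model` is a hypothetical stand-in for any
# chat-completion call (e.g. an OpenAI-compatible client).

from typing import Callable, List

# Paraphrased aggregation instruction: ask the model to synthesize the
# reference responses rather than simply pick one.
AGGREGATE_PROMPT = (
    "You have been provided with responses from various models to the user "
    "query below. Synthesize them into a single, high-quality response, "
    "critically evaluating and correcting any errors in the provided answers."
)


def mixture_of_agents(
    user_query: str,
    proposer_models: List[str],
    aggregator_model: str,
    query_model: Callable[[str, str, str], str],  # (model, system, user) -> text
    num_layers: int = 3,
) -> str:
    """Run a simple MoA loop: each layer's proposers see the previous
    layer's answers, and a final aggregator produces the output."""
    reference_responses: List[str] = []

    for _ in range(num_layers):
        layer_outputs = []
        for model in proposer_models:
            if reference_responses:
                # Inject the previous layer's answers as auxiliary context.
                system = AGGREGATE_PROMPT + "\n\n" + "\n\n".join(
                    f"Response {i + 1}: {r}"
                    for i, r in enumerate(reference_responses)
                )
            else:
                # First layer: proposers answer the query directly.
                system = "You are a helpful assistant."
            layer_outputs.append(query_model(model, system, user_query))
        reference_responses = layer_outputs

    # Final aggregation: one model synthesizes the last layer's proposals.
    final_system = AGGREGATE_PROMPT + "\n\n" + "\n\n".join(
        f"Response {i + 1}: {r}" for i, r in enumerate(reference_responses)
    )
    return query_model(aggregator_model, final_system, user_query)
```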