Company:
Date Published:
Author: Sarah Welsh
Word count: 6763
Language: English
Hacker News points: None

Summary

- The paper presents a study of the composability of various interventions applied to large language models (LLMs).
- Composability matters for practical deployment because it allows multiple modifications to be applied without retraining from scratch.
- The authors find that aggressive compression composes poorly with other interventions, while editing and unlearning can be quite composable depending on the technique used.
- They recommend expanding the scope of interventions studied and investigating scaling laws for composability as future work.