Composable Interventions for Language Models
- The paper studies how well different interventions applied to large language models (LLMs) compose with one another.
- Composability matters for practical deployment because it lets multiple modifications be applied to a model without retraining it from scratch (a minimal sketch of such a pipeline follows this list).
- The authors find that aggressive compression composes poorly with other interventions, while editing and unlearning can compose well depending on the technique used.
- They recommend expanding the scope of interventions studied and investigating scaling laws for composability as future work.
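To make the composability idea concrete, here is a minimal sketch of chaining two interventions on a single model. The helper names are hypothetical (not from the paper or its codebase): `apply_knowledge_edit` stands in for an editing method such as ROME or MEMIT, and `quantize_weights` is a deliberately naive round-to-grid quantizer rather than a production compression method.

```python
# Illustrative sketch only: hypothetical helpers, not the paper's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def quantize_weights(model, bits=8):
    """Naive post-training quantization: snap each weight tensor to a
    uniform grid. Real compression methods are more careful, but the
    shape of the pipeline is the same."""
    for p in model.parameters():
        scale = p.detach().abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
        p.data = torch.round(p.data / scale) * scale
    return model


def apply_knowledge_edit(model, tokenizer, prompt, target):
    """Placeholder for a model-editing step; a real method would locate
    and rewrite specific weights. Returns the model unchanged here so
    the sketch stays short and runnable."""
    return model


model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Compose: edit first, then compress. Each step takes and returns the
# same model object, so no retraining from scratch is needed. Whether
# the edit survives depends on how aggressive the compression is,
# which is the kind of interaction the paper measures.
model = apply_knowledge_edit(model, tokenizer, "The capital of France is", " Paris")
model = quantize_weights(model, bits=8)
```

The design point is that each intervention is a function from a model to a model, so order and interaction effects (for example, whether compression erases an earlier edit) become the quantities of interest.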
Company: Arize
Date published: Sept. 11, 2024
Author(s): Sarah Welsh
Word count: 6763
Language: English
Hacker News points: None found.