Can Conversational Feature Transfer in LLMs Help Detect Deception?

What's this blog post about?

Large Language Models (LLMs) have shown impressive capabilities in sentiment analysis and emotion detection. However, the way they learn and interpret language differs significantly from human language acquisition. This study explores whether LLMs trained with multimodal features effectively utilize those features when processing data from a single modality. The research compares a general LLM against one specialized for multimodal data, specifically Llama-2-70B and a version fine-tuned on human conversation data (Llama-2-70B-conversation), using deceptive communication as a challenging test case for the multimodal transfer of skills in LLMs.

The results show that conversation+text models outperform unimodal text models at identifying deceptive communication such as sarcasm, irony, and condescension. Emphasizing conversational features in prompts yields mixed results: slight improvements in accuracy and precision, but a decline in recall.

The findings suggest that multimodal feature transfer does occur in LLMs, improving their performance on tasks that may require multimodal training. Further research is underway to investigate how other modalities associated with human conversation data affect this feature transfer phenomenon and the overall accuracy of LLMs on such challenging tasks.
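To make the prompting comparison concrete, here is a minimal Python sketch of the kind of experiment the post describes: the same utterance is classified once with a plain text prompt and once with a prompt that emphasizes conversational features. The model checkpoint, utterance, prompt wording, and labels are illustrative assumptions, not the study's exact setup; the study itself used Llama-2-70B and a conversation-tuned variant.

    # Hypothetical sketch of the study's prompting comparison.
    # The checkpoint below is a placeholder; the study used Llama-2-70B
    # and a conversation fine-tuned variant (Llama-2-70B-conversation).
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-2-7b-chat-hf",  # smaller stand-in for local runs
    )

    # Illustrative sarcastic utterance, not taken from the study's data.
    UTTERANCE = "Oh sure, because that plan worked out SO well last time."

    PLAIN_PROMPT = (
        "Classify the following utterance as DECEPTIVE (sarcasm, irony, "
        "condescension) or SINCERE. Answer with one word.\n\n"
        f"Utterance: {UTTERANCE}\nAnswer:"
    )

    # The variant foregrounds dialogue-level cues of the kind a
    # conversation-tuned model was exposed to during fine-tuning.
    CONVERSATIONAL_PROMPT = (
        "You are analyzing one turn in a spoken conversation. Pay attention "
        "to conversational features such as turn-taking context, emphasis, "
        "and exaggeration. Classify the utterance as DECEPTIVE (sarcasm, "
        "irony, condescension) or SINCERE. Answer with one word.\n\n"
        f"Utterance: {UTTERANCE}\nAnswer:"
    )

    for name, prompt in [("plain", PLAIN_PROMPT),
                         ("conversational", CONVERSATIONAL_PROMPT)]:
        result = generator(prompt, max_new_tokens=5, do_sample=False)
        # generated_text includes the prompt, so slice off the prefix.
        completion = result[0]["generated_text"][len(prompt):].strip()
        print(f"{name}: {completion}")

In the study's terms, a systematic gap between the two prompt styles, or between the base and conversation-tuned models, on examples like this is the signal that conversational features transfer into text-only processing.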

Company
Symbl.ai

Date published
July 16, 2024

Author(s)
Kartik Talamadupula

Word count
815

Hacker News points
None found.

Language
English

