Long-Context LLMs Meet RAG
Oct 18, 2024 · 15m 35s
Description
📈 Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG
This paper explores the challenges and opportunities of using long-context large language models (LLMs) in retrieval-augmented generation (RAG) systems. While increasing the number of retrieved passages initially improves performance, the authors find that it eventually degrades it, because longer retrievals pull in more "hard negatives": passages that look relevant to the query but do not actually answer it. To address this, the paper proposes three methods for making RAG with long-context LLMs more robust: retrieval reordering, RAG-specific implicit LLM fine-tuning, and RAG-oriented LLM fine-tuning with intermediate reasoning. The paper also investigates how data distribution, retriever selection, and training context length affect the effectiveness of RAG-specific tuning.
📎 Link to paper
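Of the three methods, retrieval reordering is the easiest to illustrate. Below is a minimal Python sketch of this kind of reordering heuristic, assuming the retriever returns passages sorted best-first and that the goal is to place the strongest passages at the beginning and end of the prompt, where long-context models attend most reliably. The function name and the exact interleaving rule are illustrative assumptions, not the paper's exact implementation.

# Hypothetical sketch: interleave retrieved passages so the highest-scoring
# ones sit at the edges of the prompt and the weakest end up in the middle,
# where long-context models tend to pay the least attention.

from typing import List


def reorder_passages(passages: List[str]) -> List[str]:
    """Reorder passages (sorted best-first) so top hits occupy the start
    and end of the context and weaker hits fall in the middle."""
    front: List[str] = []
    back: List[str] = []
    for i, passage in enumerate(passages):
        if i % 2 == 0:
            front.append(passage)   # ranks 1, 3, 5, ... go to the front
        else:
            back.append(passage)    # ranks 2, 4, 6, ... go to the back
    return front + back[::-1]       # weakest passages land in the middle


if __name__ == "__main__":
    ranked = [f"passage_{rank}" for rank in range(1, 9)]  # best first
    print(reorder_passages(ranked))
    # ['passage_1', 'passage_3', 'passage_5', 'passage_7',
    #  'passage_8', 'passage_6', 'passage_4', 'passage_2']

With this ordering, the top-ranked passage opens the prompt and the second-ranked passage closes it, so the passages most likely to contain the answer avoid the middle of the context.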
Information
Author: Shahriar Shariati
Organization: Shahriar Shariati
Website: -
Tags: -
Copyright 2024 - Spreaker Inc. an iHeartMedia Company