QADLM: Combines QA Pairs and Doc-Enhanced QA System with Human Preferences

EasyChair Preprint 15621
7 pages • Date: December 23, 2024

Abstract

Recent advancements in LLMs such as GPT-4 and PaLM have significantly improved QA systems, yet their application in customer service poses challenges such as slow response times and hallucinations. Traditional NLP methods, while more cost-effective, struggle with sustainability and with maintaining knowledge bases. This paper introduces QADLM, a two-stage QA system that integrates LLMs with traditional NLP techniques to overcome these limitations. In the first stage, a funnel-shaped matching model leverages a domain-specific FAQ corpus to enhance user intent recognition. In the second stage, a fine-tuned RAG model retrieves relevant knowledge documents and generates high-quality responses. Extensive experiments on a dataset from a new energy vehicle company show that the proposed system outperforms conventional approaches in response speed and quality: the optimized model's hallucination rate decreased by 29.7%, and semantic similarity improved by 19.5%. These results demonstrate the system's robustness and applicability in customer service scenarios.

Keyphrases: Large Language Model, Retrieval Augmented Generation, two-stage question answering
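The abstract describes a two-stage design: a cheap FAQ-matching stage that answers recognized intents directly, with a retrieval-augmented generation stage as the fallback. The toy sketch below illustrates that control flow only; the function names, token-overlap scoring, and the 0.6 threshold are placeholder assumptions and do not reproduce the paper's funnel-shaped matcher or fine-tuned RAG model.

```python
# Minimal sketch of a two-stage QA pipeline in the spirit of QADLM.
# All names (faq_match, retrieve_docs, generate_answer), the threshold, and the
# token-overlap scoring are illustrative placeholders, not the paper's method.
from dataclasses import dataclass


@dataclass
class FAQEntry:
    question: str
    answer: str


def overlap_score(a: str, b: str) -> float:
    """Toy lexical similarity: Jaccard overlap of tokens, standing in for the
    paper's funnel-shaped matching model."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)


def faq_match(query: str, faq: list[FAQEntry], threshold: float = 0.6):
    """Stage 1: answer directly from the FAQ corpus when a confident match exists."""
    best = max(faq, key=lambda e: overlap_score(query, e.question), default=None)
    if best and overlap_score(query, best.question) >= threshold:
        return best.answer
    return None


def retrieve_docs(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Stage 2a: retrieve candidate knowledge documents (placeholder retriever)."""
    return sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)[:top_k]


def generate_answer(query: str, context: list[str]) -> str:
    """Stage 2b: placeholder for the fine-tuned generator; here it only echoes context."""
    return f"Answer to '{query}' grounded in: " + " | ".join(context)


def answer(query: str, faq: list[FAQEntry], docs: list[str]) -> str:
    """Try the FAQ stage first; fall back to retrieval-augmented generation."""
    return faq_match(query, faq) or generate_answer(query, retrieve_docs(query, docs))


if __name__ == "__main__":
    faq = [FAQEntry("how do I schedule a charging session", "Use the app's Charging tab.")]
    docs = ["Battery warranty covers 8 years or 160,000 km.",
            "Public charging stations support the CCS2 connector."]
    print(answer("how do I schedule a charging session", faq, docs))  # Stage 1 hit
    print(answer("what does the battery warranty cover", faq, docs))  # Stage 2 fallback
```

The ordering mirrors the abstract's motivation: resolving high-frequency intents against the FAQ corpus first keeps response times low, and the RAG fallback handles long-tail questions that the FAQ cannot cover.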