
QADLM: Combines QA Pairs and Doc-Enhanced QA System with Human Preferences

EasyChair Preprint 15621

7 pages · Date: December 23, 2024

Abstract

Recent advancements in LLMs such as GPT-4 and PaLM have significantly improved QA systems, yet their application in customer service poses challenges such as slow response times and hallucinations. Traditional NLP methods, while more cost-effective, struggle with sustainability and with maintaining knowledge bases. This paper introduces QADLM, a two-stage QA system that integrates LLMs with traditional NLP techniques to overcome these limitations. In the first stage, a funnel-shaped matching model leverages a domain-specific FAQ corpus to enhance user intent recognition. In the second stage, a fine-tuned RAG model retrieves relevant knowledge documents and generates high-quality responses. Extensive experiments conducted on a new energy vehicle company's dataset show that the proposed system outperforms conventional approaches in response speed and quality. The optimized model's hallucination rate decreased by 29.7%, and semantic similarity improved by 19.5%. This demonstrates the system's robustness and applicability in customer service scenarios.
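The two-stage flow described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names, the threshold, and the lexical Jaccard similarity (a cheap stand-in for the funnel-shaped matching model and for RAG retrieval) are all assumptions.

```python
# Illustrative sketch of a two-stage QA pipeline in the spirit of QADLM.
# All names and thresholds are hypothetical, not taken from the paper.

def jaccard(a: str, b: str) -> float:
    """Lexical word-overlap similarity; a placeholder for the paper's
    funnel-shaped matching model and retrieval scoring."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def answer(query: str, faq: dict, docs: list, threshold: float = 0.5):
    # Stage 1: match the query against the domain-specific FAQ corpus.
    # A confident hit returns the curated answer directly (fast path,
    # no generation needed).
    best_answer, best_score = None, 0.0
    for q, a in faq.items():
        s = jaccard(query, q)
        if s > best_score:
            best_answer, best_score = a, s
    if best_score >= threshold:
        return ("faq", best_answer)

    # Stage 2: fall back to retrieval; the most relevant knowledge
    # document would then be passed to a fine-tuned LLM to generate
    # the final response (generation step omitted here).
    best_doc = max(docs, key=lambda d: jaccard(query, d), default="")
    return ("rag", best_doc)
```

For example, a query that closely matches an FAQ entry is answered in stage 1, while an unmatched query falls through to document retrieval in stage 2.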

Keyphrases: Large Language Model, Retrieval Augmented Generation, two-stage question answering

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:15621,
  author    = {Xuewen Zhang and Juyi Qiao and Junming Jiao},
  title     = {QADLM: Combines QA Pairs and Doc-Enhanced QA System with Human Preferences},
  howpublished = {EasyChair Preprint 15621},
  year      = {EasyChair, 2024}}