Enhancing Abstractive Summarization with Extracted Knowledge Graphs and Multi-Source Transformers
Blog Article
As the popularity of large language models (LLMs) has risen over the past year, led by GPT-3/4 and especially its productization as ChatGPT, we have witnessed the extensive application of LLMs to text summarization. However, LLMs cannot intrinsically verify the correctness of the information they generate. This research introduces a novel approach to abstractive summarization, aiming to address this key limitation of LLMs: their lack of a grounded notion of factual truth.
The proposed method leverages extracted knowledge-graph information and structured semantics as a guide for summarization. Building upon BART, one of the state-of-the-art sequence-to-sequence pre-trained language models, we develop multi-source transformer modules as an encoder capable of processing both textual and graph inputs. Decoding is then performed on this enriched encoding to enhance summary quality.
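To make the architecture concrete, here is a minimal sketch of what such a multi-source encoder could look like. This post does not spell out the fusion mechanism, so the design below (a small transformer over linearized knowledge-graph triples, fused into BART's text states via cross-attention) and all names such as `GraphFusionEncoder` and `graph_ids` are assumptions for illustration, not the paper's documented implementation.

```python
# A minimal sketch of a multi-source encoder: BART encodes the source text,
# a lightweight transformer encodes linearized knowledge-graph triples
# (head, relation, tail), and cross-attention fuses the two. Hypothetical
# design; the paper's exact architecture may differ.
import torch
import torch.nn as nn
from transformers import BartModel

class GraphFusionEncoder(nn.Module):
    def __init__(self, bart_name="facebook/bart-base"):
        super().__init__()
        self.bart = BartModel.from_pretrained(bart_name)
        d = self.bart.config.d_model
        # Small transformer over graph tokens, reusing BART's shared embeddings
        # (i.e., triples are tokenized with the same vocabulary as the text).
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.graph_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Cross-attention: text states query the graph states.
        self.cross_attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, input_ids, attention_mask, graph_ids):
        # (batch, text_len, d): contextual states for the source document
        text = self.bart.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # (batch, graph_len, d): contextual states for the linearized triples
        graph = self.graph_encoder(self.bart.shared(graph_ids))
        # Let every text token attend to the graph, then add residually.
        fused, _ = self.cross_attn(query=text, key=graph, value=graph)
        return self.norm(text + fused)  # enriched encoding for the decoder
```

In a full model, BART's decoder would then cross-attend to these enriched states (e.g., by passing them as `encoder_hidden_states`), so that generation is conditioned on both the source text and the extracted graph.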
For evaluation, we introduce the Wiki-Sum dataset, derived from Wikipedia text dumps (a sketch of one plausible construction recipe appears at the end of this post). Comparative experiments with baseline models demonstrate the strengths of the proposed approach in generating informative and relevant summaries. We conclude by presenting our insights into augmenting LLMs with external graph information, which we believe will become a powerful aid toward the goal of factually correct and verifiable LLMs.
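As a closing aside on Wiki-Sum: the construction details are not given in this post, but a common recipe for Wikipedia-derived summarization data pairs an article's lead section (as the reference summary) with the remaining body (as the source document). The sketch below follows that assumed recipe; the word-count thresholds and the `split_lead_and_body` helper are hypothetical, not the documented Wiki-Sum pipeline.

```python
# A rough sketch of one plausible Wikipedia-dump pairing recipe: treat the
# lead section as the abstractive summary target and the remaining body as
# the source document. Illustrative assumption, not the official pipeline.
import re

def split_lead_and_body(article_text: str):
    """Split plain article text at the first section heading (== ... ==)."""
    match = re.search(r"\n==[^=].*?==\n", article_text)
    if match is None:
        return None  # no section structure; skip this article
    lead = article_text[: match.start()].strip()
    body = article_text[match.start():].strip()
    return lead, body

def build_pairs(articles, min_lead_words=40, min_body_words=300):
    """Yield (source, summary) pairs, filtering out stubs."""
    for text in articles:
        parts = split_lead_and_body(text)
        if parts is None:
            continue
        lead, body = parts
        if len(lead.split()) >= min_lead_words and len(body.split()) >= min_body_words:
            yield body, lead  # body is the model input, lead is the target
```

Filtering out stubs keeps only pairs where the lead is long enough to serve as a meaningful abstractive target and the body is long enough to require genuine summarization.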