

[2402.07616] Anchor-based Large Language Models




Authors: Jianhui Pang and 5 other authors


Abstract: Large language models (LLMs) predominantly employ decoder-only transformer architectures, necessitating the retention of keys/values information for historical tokens to provide contextual information and avoid redundant computation. However, the substantial size and parameter volume of these LLMs demand massive GPU memory. This memory demand increases with the length of the input text, leading to an urgent need for more efficient methods of information storage and processing. This study introduces Anchor-based LLMs (AnLLMs), which utilize an innovative anchor-based self-attention network (AnSAN) together with an anchor-based inference strategy. This approach enables LLMs to compress sequence information into an anchor token, reducing the keys/values cache and enhancing inference efficiency. Experiments on question-answering benchmarks show that AnLLMs maintain similar accuracy levels while achieving up to 99% keys/values cache reduction and up to 3.5 times faster inference. Despite a minor compromise in accuracy, the substantial improvements of AnLLMs employing the AnSAN technique in resource utilization and computational efficiency underscore their potential for practical LLM applications.
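The two ideas the abstract describes lend themselves to a short illustration: an attention mask that routes cross-sequence attention through anchor tokens only, and a cache-pruning step that keeps just the anchors' keys/values. The sketch below is not the authors' implementation; the function names, the (heads, seq, dim) cache layout, and the assumption that each sequence's final token serves as its anchor are illustrative choices, not details taken from the paper.

    import torch

    def ansan_mask(seq_lens: list[int]) -> torch.Tensor:
        """Boolean attention mask (True = may attend) for concatenated
        sequences, where the last token of each sequence acts as its anchor.
        Tokens see a causal prefix of their own sequence plus the anchor
        tokens of earlier sequences, which stand in for the full prefixes."""
        total = sum(seq_lens)
        mask = torch.zeros(total, total, dtype=torch.bool)
        anchors = []
        start = 0
        for n in seq_lens:
            end = start + n
            # Ordinary causal attention within the current sequence.
            mask[start:end, start:end] = torch.tril(torch.ones(n, n)).bool()
            # Cross-sequence attention is routed only through prior anchors.
            for a in anchors:
                mask[start:end, a] = True
            anchors.append(end - 1)  # assumed convention: last token = anchor
            start = end
        return mask

    def prune_kv_cache(keys: torch.Tensor, values: torch.Tensor,
                       anchor_positions: list[int]):
        """Keep only the anchor tokens' entries in a (heads, seq, dim) cache,
        discarding all other keys/values once a sequence is processed."""
        idx = torch.tensor(anchor_positions)
        return keys[:, idx, :], values[:, idx, :]

    # Example: two sequences of lengths 4 and 3. Tokens of the second sequence
    # attend to position 3 (the first sequence's anchor) rather than to all of
    # positions 0-3.
    mask = ansan_mask([4, 3])
    print(mask.int())

With a mask like this, anything a later sequence needs from an earlier one must flow through that sequence's anchor, which is what permits dropping the keys/values of all non-anchor tokens after processing; this is the mechanism behind the cache reduction the abstract reports.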

Submission history

From: Jianhui Pang [view email]
[v1] Mon, 12 Feb 2024 12:48:02 UTC (7,774 KB)
[v2] Fri, 16 Feb 2024 16:58:04 UTC (7,782 KB)
[v3] Sat, 1 Jun 2024 04:52:17 UTC (7,799 KB)
