A team of engineers at AI inference technology company BitEnergy AI reports a method to reduce the energy needs of AI applications by 95%. The group has published a paper describing their new technique on the arXiv preprint server.
As AI applications have gone mainstream, their use has risen dramatically, leading to a notable rise in energy needs and costs. LLMs such as ChatGPT require a lot of computing power, which in turn means a lot of electricity is needed to run them.
As just one example, ChatGPT now requires roughly 564 MWh daily, or enough to power 18,000 American homes. As the technology continues to advance and such apps become more popular, critics have suggested that AI applications might be using around 100 TWh annually within just a few years, on par with Bitcoin mining operations.
In this new effort, the team at BitEnergy AI claims to have found a way to dramatically reduce the amount of computing required to run AI apps without reducing performance.
The new technique is simple: instead of using complex floating-point multiplication (FPM), the method uses integer addition. Apps use FPM to handle extremely large or small numbers, allowing applications to carry out calculations on them with high precision. It is also the most energy-intensive part of AI number crunching.
The researchers call their new method Linear-Complexity Multiplication (L-Mul); it works by approximating FPMs using integer addition. They claim that testing, thus far, has shown that the new approach reduces electricity demand by 95%.
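To give a flavor of how a floating-point multiply can be approximated with a single integer addition, here is a minimal Python sketch. Note this is the classic Mitchell-style logarithmic approximation on IEEE-754 bit patterns, shown only as an illustration of the general idea; it is not the exact L-Mul algorithm from the paper, and the function names are our own.

```python
import struct

def float_to_bits(x: float) -> int:
    # Reinterpret a 32-bit float's bytes as an unsigned integer.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    # Reinterpret an unsigned 32-bit integer as a float.
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    """Approximate a * b (a, b > 0) with one integer addition.

    Adding the IEEE-754 bit patterns adds the exponents exactly and
    the mantissas approximately (treating the mantissa as a rough
    logarithm). Subtracting 0x3F800000, the bit pattern of 1.0,
    removes the doubled exponent bias.
    """
    BIAS = 0x3F800000
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - BIAS)
```

For example, `approx_mul(2.0, 4.0)` is exactly 8.0 (the mantissa bits are zero), while `approx_mul(3.0, 5.0)` returns about 14.0 against a true product of 15.0; the worst-case relative error of this style of approximation is roughly 11%, which is why the paper's contribution lies in making such approximations accurate enough for LLM inference.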
The one drawback is that it requires different hardware than that currently in use. But the research team also notes that the new type of hardware has already been designed, built and tested.
How such hardware would be licensed, however, is still unclear; currently, GPU maker Nvidia dominates the AI hardware market. How the company responds to this new technology could have a significant impact on the pace at which it is adopted, if the claims are verified.
More information:
Hongyin Luo et al, Addition is All You Need for Energy-efficient Language Models, arXiv (2024). DOI: 10.48550/arxiv.2410.00907
© 2024 Science X Network
Citation:
Integer addition algorithm could reduce energy needs of AI by 95% (2024, October 12)
retrieved 13 October 2024
from https://techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.