Boosting Llama 3.1 405B Throughput by Another 1.5x on NVIDIA H200 Tensor Core GPUs and NVLink Switch

The continued growth in LLM capability, fueled by increasing parameter counts and support for longer contexts, has led to their use in a wide variety of applications, each with diverse deployment requirements. For example, a chatbot supports a small number of users at very low latencies for good interactivity. Meanwhile, synthetic data generation requires high throughput to process many items at once. Delivering optimal inference performance across a wide range of use cases with one platform requires optimization across the entire technology stack.

Cutting-edge LLMs, like Llama 3.1 405B, require multiple GPUs working together for peak performance. To effectively use multiple GPUs for processing inference requests, an inference software stack must provide developers with optimized implementations of key parallelism techniques, including tensor, pipeline, and expert parallelism. These parallelism techniques require that GPUs be able to transfer data quickly and efficiently, necessitating a robust GPU-to-GPU interconnect fabric for maximum performance.

In this post, we explain two of these parallelism techniques and show, on an NVIDIA HGX H200 system with NVLink and NVSwitch, how the right parallelism increases Llama 3.1 405B performance by 1.5x in throughput-sensitive scenarios. We also show how the use of pipeline parallelism enabled a 1.2x speedup in the MLPerf Inference v4.1 Llama 2 70B benchmark on HGX H100 compared to our results published in August. These improvements are possible due to recent software improvements in TensorRT-LLM with NVSwitch.

Choosing parallelism for deployment

Both tensor parallelism (TP) and pipeline parallelism (PP) increase compute and memory capacity by splitting models across multiple GPUs, but they differ in how they impact performance. Pipeline parallelism is a low-overhead mechanism for efficiently increasing overall throughput, while tensor parallelism is a higher-overhead mechanism for reducing latency. In some scenarios, TP can also increase throughput relative to a single GPU. More details on these techniques are in the following sections.
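As a concrete illustration of the choice, the sketch below shows how a deployment might select one mapping or the other using TensorRT-LLM's high-level LLM API. The parameter names (tensor_parallel_size, pipeline_parallel_size) and the model identifier are assumptions based on recent releases; treat this as a hedged sketch rather than a definitive recipe.

    # Hedged sketch: choosing a parallelism mapping with TensorRT-LLM's high-level
    # LLM API. Parameter names and the model identifier are assumptions; check them
    # against the TensorRT-LLM version you have installed.
    from tensorrt_llm import LLM, SamplingParams

    # Latency-oriented mapping: all eight GPUs cooperate on every token (TP-8, PP-1).
    # For a throughput-oriented deployment, flip the two values (TP-1, PP-8).
    llm = LLM(model="meta-llama/Llama-3.1-405B-Instruct",
              tensor_parallel_size=8,
              pipeline_parallel_size=1)

    outputs = llm.generate(["Summarize NVLink in one sentence."],
                           SamplingParams(max_tokens=64))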

To illustrate the trade-offs between tensor and pipeline parallelism, we investigate the Llama 2 and Llama 3.1 families of models in two scenarios: minimum latency for best interactivity, and maximum throughput for best efficiency. This comparison focuses on total output tokens per second, which is representative of interactivity at small concurrencies (minimum latency) and efficiency at large concurrencies (maximum throughput).

Llama 3.1 405B output tokens/second (higher is better)

Scenario             Tensor Parallelism    Pipeline Parallelism
Minimum latency      56                    10
Maximum throughput   506                   764

Table 1. Llama 3.1 405B inference throughput (tokens/second) using tensor and pipeline parallelism, in both minimum latency and maximum throughput scenarios.

NVIDIA HGX H200 | Measured on internal TensorRT-LLM based on v0.14a | FP8 PTQ | 2048:128 | Minimum latency: concurrency 1 | Maximum throughput: maximum concurrency that fits in memory

In the table above, tensor parallelism is compared to pipeline parallelism, each across eight GPUs, on Llama 3.1 405B, the largest and most capable open source LLM available today. In the minimum latency scenario, TP allows more of the available GPU compute to be used to generate each token, leading to 5.6x faster performance than pipeline parallelism. However, for maximum throughput, pipeline parallelism improves maximum system throughput by 1.5x by reducing overhead and leveraging the additional bandwidth available with NVLink Switch.
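The two headline ratios follow directly from Table 1, as the short check below shows (Python used purely as a calculator).

    # Speedups quoted above, computed from the Table 1 numbers.
    tp_min_latency, pp_min_latency = 56, 10            # output tokens/s at concurrency 1
    tp_max_throughput, pp_max_throughput = 506, 764    # output tokens/s at max concurrency

    print(tp_min_latency / pp_min_latency)             # 5.6   -> TP wins at minimum latency
    print(pp_max_throughput / tp_max_throughput)       # ~1.51 -> PP wins at maximum throughput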

Pipeline parallelism delivers a 1.2x increase on MLPerf on H100

The TensorRT-LLM software improvements also benefit smaller models. When the recent pipeline parallelism improvements in TensorRT-LLM were applied to the MLPerf Llama 2 70B scenario, throughput on an HGX H100 8-GPU system increased by 21% compared to our MLPerf Inference v4.1 results published in August.

MLPerf Inference output tokens/second (higher is better)

Scenario        Tensor Parallelism    Pipeline Parallelism
Llama 2 70B     24,525                29,741

Table 2. Llama 2 70B inference throughput (tokens/second) using tensor and pipeline parallelism.

Results obtained for the available category of Closed Division on the OpenORCA dataset using NVIDIA H100 Tensor Core GPUs. Official numbers from the 4.1-0043 submission were used for tensor parallelism; pipeline parallelism results are based on scripts provided in submission ID 4.1-0043 and TensorRT-LLM version 0.12.0.
Result not verified by MLCommons Association. The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See www.mlcommons.org for more information.

Tensor and pipeline parallelism are both valuable techniques. Individually, they are suited to different use cases; however, developers can combine them in various ways to maximize inference throughput within a given interactivity target. We will dive into how to find this balance in a future blog.

Tensor and pipeline parallelism explained

Tensor parallelism (TP) splits the execution of each model layer across multiple GPUs. Every calculation is distributed across the available GPUs, and each GPU performs its own portion of the calculation. Then, every GPU broadcasts its individual results, known as partial sums, to every other GPU using an AllReduce operation. This process creates substantial data traffic between the GPUs.

Figure 1. Applying tensor parallelism to a neural network.
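To make the AllReduce of partial sums concrete, here is a minimal PyTorch sketch of a row-sharded tensor-parallel linear layer. This is an illustration of the general technique, not TensorRT-LLM's implementation, and it assumes one process per GPU launched with torchrun.

    # Minimal tensor-parallelism sketch (illustrative, not TensorRT-LLM internals).
    # Assumed launch: torchrun --nproc_per_node=8 tp_sketch.py  (hypothetical filename)
    import torch
    import torch.distributed as dist

    dist.init_process_group("nccl")
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(rank)

    hidden = 8192
    shard = hidden // world

    torch.manual_seed(0)                                # identical replicated input on every rank
    x = torch.randn(1, hidden, device="cuda")
    torch.manual_seed(rank)                             # each rank holds its own weight shard
    w_shard = torch.randn(shard, hidden, device="cuda")

    # Each GPU multiplies its input slice by its weight shard, producing a
    # partial sum of the full layer output.
    partial = x[:, rank * shard:(rank + 1) * shard] @ w_shard

    # AllReduce combines the partial sums from all GPUs; this collective is the
    # all-to-all traffic that makes TP sensitive to interconnect bandwidth.
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)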

Pipeline parallelism (PP) functions by splitting groups of model layers, or stages, across the available GPUs. A request begins on one GPU and continues execution across subsequent stages on subsequent GPUs. With PP, communication only occurs between adjacent stages, rather than between all GPUs as with TP execution. While communication is less frequent, very high-bandwidth communication between stages is critical to ensure that execution does not stall, degrading performance.

Figure 2. Applying pipeline parallelism to a neural network.
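The following sketch shows the corresponding pipeline-parallel pattern: each rank owns one group of layers and exchanges activations only with its immediate neighbors. Again, this is a hedged illustration with placeholder layers, not the TensorRT-LLM implementation.

    # Minimal pipeline-parallelism sketch (illustrative, not TensorRT-LLM internals).
    import torch
    import torch.distributed as dist

    dist.init_process_group("nccl")
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(rank)

    # Each stage owns a contiguous group of layers (placeholder Linear layers here).
    stage = torch.nn.Sequential(*[torch.nn.Linear(8192, 8192) for _ in range(4)]).cuda()

    hidden = torch.empty(1, 8192, device="cuda")
    if rank == 0:
        hidden = torch.randn(1, 8192, device="cuda")   # first stage starts from the input
    else:
        dist.recv(hidden, src=rank - 1)                # receive activations from the previous stage

    hidden = stage(hidden)                             # run this stage's layers

    if rank < world - 1:
        dist.send(hidden, dst=rank + 1)                # pass activations to the next stage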

For minimum latency use cases, the communication traffic created during tensor parallel execution does not saturate the available interconnect bandwidth. This means that multiple GPUs can work in tandem to generate each token, increasing interactivity. Meanwhile, with pipeline parallel execution, a request can only use the GPU compute available within a given stage. This means that compute per token does not increase with additional GPUs under pipeline parallelism.

Figure 3. GPU-to-GPU bandwidth with and without NVSwitch.

For scenarios where high throughput is required, the all-to-all communication pattern of TP can become a bottleneck, limiting performance. If link bandwidth is fixed regardless of the number of available connections, then in high-throughput use cases PP can improve throughput somewhat, as communication overhead is reduced; however, execution can still be link limited. With a high-bandwidth interconnect like NVLink with NVSwitch, communication overhead can be reduced, and throughput can scale well with additional GPUs.

Each NVIDIA Hopper architecture GPU integrates 18 NVLinks, each providing 50 GB/s of bandwidth per direction, for a total of 900 GB/s of NVLink bandwidth. Each HGX H100 8-GPU or HGX H200 server features four NVLink Switches. During TP model execution across eight GPUs, each GPU transmits to every other GPU using seven equivalent-bandwidth connections. This means that communication across any connection happens at 1/7th of NVLink bandwidth, or about 128 GB/s.

PP execution, however, only requires connections to the previous and next stages. This means that communication can happen over two higher-bandwidth connections providing 450 GB/s each. With NVLink and NVLink Switch, the effective connection bandwidth between stages is therefore 3.5x higher than would be possible without NVLink Switch, enabling PP to deliver meaningfully higher performance than TP in maximum throughput scenarios.
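The bandwidth figures above reduce to simple arithmetic, reproduced below for clarity.

    # Per-connection NVLink bandwidth for TP-8 versus PP, from the figures above.
    total_bw = 18 * 50        # 18 NVLinks x 50 GB/s per direction = 900 GB/s per GPU

    tp_bw = total_bw / 7      # TP-8: traffic split across 7 peer GPUs -> ~128.6 GB/s each
    pp_bw = total_bw / 2      # PP: only previous and next stage -> 450 GB/s each

    print(pp_bw / tp_bw)      # 3.5 -> the 3.5x effective bandwidth advantage quoted above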

Choosing parallelism is about finding the right balance between compute and capacity for the target scenario. NVLink Switch provides developers with the flexibility to pick the optimal parallelism configuration, leading to better performance than is possible with either a single GPU or with tensor parallelism alone across multiple GPUs.

When considering production deployments, for which LLM service operators may seek to maximize throughput within a fixed latency constraint, the ability to combine both tensor parallelism and pipeline parallelism to achieve the desired interactivity while maximizing server throughput for optimal cost is critical. TensorRT-LLM is capable of efficiently combining these techniques. In a future blog post, we will take a deep dive into picking latency thresholds and GPU configurations to maximize throughput under the desired threshold, and show how NVSwitch improves performance in these online scenarios.
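As a hedged illustration of combining the two techniques, the sketch below maps eight GPUs as two tensor-parallel GPUs per stage across four pipeline stages. The 2x4 split and the parameter names are assumptions for illustration only; the right balance depends on the latency target and memory budget, as discussed above.

    # Hedged sketch: a combined mapping (TP-2 x PP-4 across 8 GPUs) with the
    # TensorRT-LLM LLM API. Values are illustrative, not a recommendation.
    from tensorrt_llm import LLM

    llm = LLM(model="meta-llama/Llama-3.1-405B-Instruct",
              tensor_parallel_size=2,      # two GPUs cooperate within each stage
              pipeline_parallel_size=4)    # four stages span the remaining GPUs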

The NVIDIA platform is advancing at the speed of light

The NVIDIA platform provides developers with a full technology stack to optimize generative AI inference performance. NVIDIA Hopper architecture GPUs, available from every major cloud and server maker, combined with the high-bandwidth NVLink and NVLink Switch AI fabric and running TensorRT-LLM software, deliver remarkable performance for the latest LLMs. And, through continuous optimization, we continue to increase performance, reduce total cost of ownership, and enable the next wave of AI innovation.
