GitHub – deepseek-ai/FlashMLA

FlashMLA is an efficient MLA decoding kernel for Hopper GPUs, optimized for variable-length sequence serving.

Currently released:

  • BF16
  • Paged kvcache with block size of 64 (see the indexing sketch below)
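
Conceptually, a paged kvcache stores key/value entries in a pool of fixed-size pages, and a per-sequence block table maps token positions to pages. Below is a minimal illustrative sketch of that indexing scheme with block size 64; the function name, argument names, and shapes are hypothetical and not part of FlashMLA's API:

import torch

BLOCK_SIZE = 64  # page size in tokens, matching the released kernel

def lookup_kv_slot(k_cache, block_table, seq_idx, token_pos):
    # k_cache:     (num_pages, BLOCK_SIZE, h_kv, head_dim) pool of pages (assumed layout)
    # block_table: (batch, max_pages_per_seq) int32 page indices per sequence
    page = block_table[seq_idx, token_pos // BLOCK_SIZE]  # physical page holding this token
    offset = token_pos % BLOCK_SIZE                       # slot within that page
    return k_cache[page, offset]                          # (h_kv, head_dim) entry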

Benchmark:

python tests/test_flash_mla.py

It achieves up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in computation-bound configurations on an H800 SXM5, using CUDA 12.6.

Usage:

from flash_mla import get_mla_metadata, flash_mla_with_kvcache

# cache_seqlens: (batch,) int32 cached sequence lengths;
# s_q, h_q, h_kv: query length and query/KV head counts
tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    # Run the MLA decoding kernel for layer i against its paged KV cache;
    # dv is the head dimension of the values (and of the output o_i).
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
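
For context, here is a self-contained sketch showing how the inputs to the calls above might be constructed. The concrete sizes (one query token, 128 query heads over 1 KV head, QK/V head dims of 576/512, a cache length of 1024) are assumptions modeled on the repo's test script, not prescribed values:

import torch
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

b, s_q, h_q, h_kv = 4, 1, 128, 1      # batch, query length, query/KV head counts (assumed)
d, dv, block_size = 576, 512, 64      # QK head dim, V head dim, kvcache page size (assumed)

cache_seqlens = torch.full((b,), 1024, dtype=torch.int32, device="cuda")
max_pages = (1024 + block_size - 1) // block_size
block_table = torch.arange(b * max_pages, dtype=torch.int32, device="cuda").view(b, max_pages)
kvcache = torch.randn(b * max_pages, block_size, h_kv, d, dtype=torch.bfloat16, device="cuda")
q = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)
o, lse = flash_mla_with_kvcache(
    q, kvcache, block_table, cache_seqlens, dv,
    tile_scheduler_metadata, num_splits, causal=True,
)
# o: (b, s_q, h_q, dv) attention output; lse: log-sum-exp of the attention scores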

Requirements:

  • Hopper GPUs
  • CUDA 12.3 and above
  • PyTorch 2.0 and above
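
A quick environment check along these lines can confirm the requirements before installing (a sketch; Hopper corresponds to CUDA compute capability 9.0):

import torch

assert torch.cuda.is_available(), "CUDA device required"
major, _ = torch.cuda.get_device_capability()
assert major >= 9, "Hopper (compute capability 9.x) GPU required"
print("CUDA:", torch.version.cuda, "| PyTorch:", torch.__version__)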

FlashMLA is inspired by the FlashAttention 2&3 and cutlass projects.

@misc{flashmla2025,
      title={FlashMLA: Efficient MLA decoding kernel}, 
      author={Jiashi Li},
      year={2025},
      publisher = {GitHub},
      howpublished = {\url{https://github.com/deepseek-ai/FlashMLA}},
}
