~tastem-pacper
1 month ago
Thread: Weekly Tech Thread - ~2024.9.25

This week in LLMs: "Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction", from Salesforce.

From the abstract: "Our research demonstrates that LLMs can identify relevant tokens in the early layers before generating answers to a query. Leveraging this insight, we propose an algorithm that uses early layers of an LLM as filters to select and compress input tokens, significantly reducing the context length for subsequent processing. Our method, GemFilter, achieves a 2.4× speedup and 30% reduction in GPU memory usage compared to SOTA methods. Evaluation on the Needle in a Haystack task shows that GemFilter significantly outperforms standard attention and SnapKV, and demonstrates comparable performance on the LongBench challenge. GemFilter is simple, training-free, and broadly applicable across different LLMs."
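The core idea (use an early layer's attention as a token filter, then rerun the model on only the selected tokens) can be sketched in a few lines. This is a toy illustration with NumPy, not the paper's implementation: the attention vector here is synthetic, and `gemfilter_select` is a hypothetical helper name; the real method works on actual attention matrices from the first few transformer layers.

```python
import numpy as np

def gemfilter_select(attn_last_row, k):
    """Pick the k input positions the final query token attends to most,
    keeping them in their original order (a stand-in for GemFilter's
    early-layer token selection step)."""
    top = np.argsort(attn_last_row)[-k:]  # indices of the k highest scores
    return np.sort(top)                   # preserve original token order

# Synthetic "early-layer" attention from the query token over 10 input tokens.
rng = np.random.default_rng(0)
attn = rng.random(10)
attn[3] = 5.0  # pretend token 3 is the "needle" the query cares about
attn[7] = 4.0

selected = gemfilter_select(attn, k=3)
print(selected)
```

The compressed context (here, 3 of 10 tokens, which would then be fed back through the full model) always retains the high-attention positions, which is why the method does well on Needle-in-a-Haystack-style retrieval.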