AI researchers found a way to run LLMs at a lightbulb-like 13 watts with no loss in performance

Jun 26, 2024

Researchers from UC Santa Cruz have demonstrated that eliminating matrix multiplication from LLM inference can massively increase performance per watt, given the right optimizations. The key idea is to constrain weights to the ternary values -1, 0, and +1, so that every matrix multiplication collapses into additions and subtractions, which are far cheaper to compute in hardware. It remains to be seen how broadly the approach applies to AI in general.
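To make the trick concrete, here is a minimal Python sketch of the general ternary-weight idea, not the researchers' actual implementation: when a weight matrix contains only -1, 0, and +1, a matrix-vector product reduces to summing and subtracting input entries. The function name `ternary_matvec` and the toy matrix sizes are illustrative assumptions.

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product where W holds only {-1, 0, +1}.

    Because every weight is -1, 0, or +1, each output element is just
    a sum of some inputs minus a sum of others -- no multiplications.
    (Illustrative sketch only, not the UC Santa Cruz implementation.)
    """
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out

# Hypothetical demo: a random ternary weight matrix and an input vector.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))      # entries drawn from {-1, 0, 1}
x = rng.standard_normal(8).astype(np.float32)

print(ternary_matvec(W, x))
print(W @ x)  # reference: an ordinary matmul gives the same numbers
```

The reference line `W @ x` confirms the addition-only version matches a conventional matrix multiplication; the efficiency gain comes from specialized hardware that exploits the absence of multipliers, which is how such low power draw becomes plausible.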
