LLMs just got 10 times faster and 10 times more efficient with a new technique.
How over is it for codecels right now?
Bernd
Sat, 08 Mar 2025 18:21:17 GMT
No. 25595056
>>25595126
so you can run local models much easier now?
Bernd
Sat, 08 Mar 2025 18:21:51 GMT
No. 25595061
I just want something I can run on my piece-of-shit computer
Bernd
Sat, 08 Mar 2025 18:23:57 GMT
No. 25595071
Turns out you can just predict the entire text in parallel instead of one next word at a time, and as you would expect, that's much faster.
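Rough idea in code. This is a toy sketch of the general masked-denoising scheme, not whatever these guys actually shipped: fake_model, VOCAB, and the unmask schedule are all made up stand-ins for a real network. The point is you spend a handful of forward passes instead of one per token.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]  # toy vocab
MASK = "<mask>"

def fake_model(tokens):
    """Stand-in for a neural net: guesses a (token, confidence)
    for every position at once. A real model would condition on
    the already-committed tokens."""
    return [(random.choice(VOCAB), random.random()) for _ in tokens]

def parallel_decode(length=16, steps=4):
    """Start fully masked, predict ALL positions each step, and
    commit the most confident guesses. That's `steps` forward
    passes total, vs `length` passes for token-by-token
    autoregressive decoding."""
    tokens = [MASK] * length
    for step in range(steps):
        guesses = fake_model(tokens)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        # Made-up schedule: unmask a growing fraction each step so
        # everything is committed by the final step.
        keep = max(1, len(masked) * (step + 1) // steps)
        masked.sort(key=lambda i: guesses[i][1], reverse=True)
        for i in masked[:keep]:
            tokens[i] = guesses[i][0]
    return tokens

print(" ".join(parallel_decode()))
```

With length=16 and steps=4 that's 4 model calls instead of 16, which is where the speedup comes from.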
Bernd
Sat, 08 Mar 2025 18:25:46 GMT
No. 25595083
Why are these guys always so jewey?
Bernd
Sat, 08 Mar 2025 18:32:45 GMT
No. 25595126
>>25595153
>14 iterations vs 75 iterations
<10x faster
75/14 is barely 5.4x. This is your brain on AI
>>25595056
Local models are hard to run primarily because of memory requirements. This only claims to decrease processing time.
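Quick napkin math on why. Weights only, and real usage stacks KV cache, activations, and runtime overhead on top. None of this shrinks just because sampling got faster:

```python
# Back-of-envelope VRAM needed just to hold the weights.
def weight_gb(params_billions, bytes_per_param):
    return params_billions * 1e9 * bytes_per_param / 2**30

for params in (7, 13, 70):
    for name, nbytes in (("fp16", 2), ("int8", 1), ("int4", 0.5)):
        print(f"{params}B @ {name}: {weight_gb(params, nbytes):.1f} GiB")
```

A 7B model is ~13 GiB at fp16 and still ~3.3 GiB at int4, so a shitbox GPU is out of luck no matter how few iterations the sampler takes.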
Bernd
Sat, 08 Mar 2025 18:52:51 GMT
No. 25595273
>10 times faster and 10 times more efficient
so that means they're 100 times more better?