Bernd Sat, 08 Mar 2025 18:17:47 GMT No. 25595035 [Kohl] [Report thread]
ClipboardImage-1741457761.png
2.05 MB, 1200x675
llm-generates-the-entire-output-at-once-worlds-first-diffusion-llm-x1rd3nhlice.mp4
27.43 MB, 1280x720
LLMs just got 10 times faster and 10 times more efficient with a new technique: diffusion-based generation instead of next-token prediction. How over is it for codecels right now?
Total posts: 7, files: 0 (Drowned at Sun, 09 Mar 2025 13:53:27 GMT)
Bernd Sat, 08 Mar 2025 18:21:17 GMT No. 25595056 >>25595126
so you can run local much easier nao ?
Bernd Sat, 08 Mar 2025 18:21:51 GMT No. 25595061
I just want something I can run on my piece a shit computer
Bernd Sat, 08 Mar 2025 18:23:57 GMT No. 25595071
Turns out you can just predict the entire text at once instead of one next word at a time, and as you would expect, it's much faster.
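The speedup claim boils down to step counts: an autoregressive model spends one forward pass per token, while a diffusion-style model starts from a fully masked sequence and fills in several positions per denoising step. A minimal toy sketch (not the actual model from the video; the target string and tokens-per-step number are made up for illustration):

```python
# Toy comparison: autoregressive decoding (one token per step) vs
# diffusion-style decoding (unmask several tokens per denoising step).
# A real model would predict tokens; here we just reveal a fixed target.

TARGET = "the cat sat on the mat".split()  # pretend model output, 6 tokens
MASK = "<mask>"

def autoregressive_decode(target):
    """One token per forward pass: len(target) steps total."""
    out, steps = [], 0
    while len(out) < len(target):
        out.append(target[len(out)])  # stand-in for argmax over next-token logits
        steps += 1
    return out, steps

def diffusion_decode(target, tokens_per_step=3):
    """Start fully masked, commit several positions per denoising step."""
    out, steps = [MASK] * len(target), 0
    while MASK in out:
        masked = [i for i, t in enumerate(out) if t == MASK]
        # stand-in for the denoiser: commit the k "most confident" positions
        for i in masked[:tokens_per_step]:
            out[i] = target[i]
        steps += 1
    return out, steps

ar_out, ar_steps = autoregressive_decode(TARGET)
df_out, df_steps = diffusion_decode(TARGET)
print(ar_steps, df_steps)  # 6 steps vs 2 steps for the same 6 tokens
```

Fewer steps means less wall-clock time per output, which is where the "10x faster" figure in the video (14 vs 75 iterations) comes from.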
Bernd Sat, 08 Mar 2025 18:25:46 GMT No. 25595083
Why are these guys always so jewey?
Bernd Sat, 08 Mar 2025 18:32:45 GMT No. 25595126 >>25595153
>14 iterations vs 75 iterations
<10x faster
This is your brain on AI
>>25595056 Local models are hard to run primarily because of memory requirements. This only claims to decrease processing time.
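The memory point can be back-of-enveloped: the weights have to fit in RAM/VRAM regardless of how fast decoding is. A quick sketch, assuming a hypothetical 7B-parameter model and common quantization widths:

```python
def weight_gib(params, bytes_per_param):
    """GiB needed just to hold the model weights (ignores KV cache etc.)."""
    return params * bytes_per_param / 2**30

# hypothetical 7B model at common quantization levels
for name, bpp in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"{name}: {weight_gib(7e9, bpp):.1f} GiB")  # ~13.0 / 6.5 / 3.3 GiB
```

None of those numbers shrink because decoding takes fewer iterations, which is the point being made.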
Bernd Sat, 08 Mar 2025 18:36:35 GMT No. 25595153 SÄGE!
>>25595126 /thread
Bernd Sat, 08 Mar 2025 18:52:51 GMT No. 25595273
>10 times faster and 10 times more efficient
so that means they're 100 times more better?