Bernd Thu, 20 Feb 2025 17:41:18 GMT No. 25486255
ClipboardImage-1740073228.png
183.95 kB, 814x798
Everyone that used it deserves the consequences.
Total posts: 24, files: 5 (Drowned at Fri, 21 Feb 2025 22:18:51 GMT)
Bernd Thu, 20 Feb 2025 17:43:16 GMT No. 25486263 SÄGE! >>25486285
copypasted twitter 89iq ragebait thread #72616381917
Bernd Thu, 20 Feb 2025 17:45:55 GMT No. 25486272 >>25486279
Heh, isn't that something you run locally? I mean, they open-sourced it, right? Not like the company that calls itself open but doesn't give you anything.
Bernd Thu, 20 Feb 2025 17:46:43 GMT No. 25486277 >>25486279
Just self-host?
Bernd Thu, 20 Feb 2025 17:47:17 GMT No. 25486278
Of course you're giving data to China with the one hosted by DeepSeek itself
Bernd Thu, 20 Feb 2025 17:47:52 GMT No. 25486279 >>25486293 >>25486310 >>25486313
>>25486255
>*turns on VPN*
problems weren't
>>25486272 >>25486277
There is an online API, which is the full thing, and some cope local models that aren't actually DeepSeek but just other models fine-tuned on DeepSeek output (the distills). The actual full-sized model is too large for anyone who isn't a millionaire to self-host.
Bernd Thu, 20 Feb 2025 17:49:07 GMT No. 25486285
>>25486263 I actually got this from a news article
Bernd Thu, 20 Feb 2025 17:49:42 GMT No. 25486286
dd7895c93b8d9d4abddf242057f65e092d87a780b45dfc1bc34aac86035635c3.gif
38.19 kB, 618x640
Not my problem, I use the local models only. Thank you based China and France for best local models
Bernd Thu, 20 Feb 2025 17:49:49 GMT No. 25486287
oh no! not my data!
Bernd Thu, 20 Feb 2025 17:50:06 GMT No. 25486293 >>25486296
>>25486279
>groundbreaking chiner "technology" was a nothingburger/scam
Disappointed but not surprised.
Bernd Thu, 20 Feb 2025 17:51:04 GMT No. 25486296 >>25486345
>>25486293 The groundbreaking part is that it takes far less compute to get similar results compared to other LLMs, but it's still much more than you can handle with a home PC.
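Quick sanity check on the size claim: a back-of-the-envelope sketch in Python. The 671B parameter count is quoted later in the thread; the bytes-per-weight figures for each format are assumptions, not from the post.

    # Rough weight-memory estimate for the full 671B model.
    # Assumption: weights dominate; KV cache and runtime overhead ignored.
    PARAMS = 671e9  # total parameters, figure quoted later in the thread

    bytes_per_param = {
        "FP16": 2.0,        # 16-bit floats, shown for scale
        "FP8": 1.0,         # 8-bit floats
        "Q4 (4-bit)": 0.5,  # aggressive llama.cpp-style quantization
    }

    for fmt, b in bytes_per_param.items():
        print(f"{fmt:>11}: ~{PARAMS * b / 1e9:,.0f} GB of weights")

    # FP16 ~1,342 GB, FP8 ~671 GB, Q4 ~336 GB: even heavily quantized,
    # this is far beyond the 32-64 GB of a typical home PC.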
Bernd Thu, 20 Feb 2025 17:52:31 GMT No. 25486301
ClipboardImage-1740073906.png
163.88 kB, 1378x1058
I only used it to test it on math and physics
Bernd Thu, 20 Feb 2025 17:53:59 GMT No. 25486310 >>25486327
>>25486279
>The actual full sized model is too large for anyone who isn't a millionaire to self-host.
Why even post if you have no clue? You can run the full model on 2-3k hardware.
Bernd Thu, 20 Feb 2025 17:54:57 GMT No. 25486313
bernd_shrugged.jpg
32.12 kB, 560x407
>>25486279 Self-hosted model is good enough. I built a 500 GB swap partition and ran it on my computer. It just works. Although, to be honest, it's not that useful. It's not like you need privacy to ask an AI how the fuck you can code some shit in Python. I do understand, however, that there are morons throwing a fuckton of sensitive documents from their work into it and asking the thing to do stuff like reducing the size of PDFs, but that was a problem before AI too.
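For anyone wanting to reproduce the swap-partition setup, a minimal sketch using the llama-cpp-python bindings; the model path, quantization, and thread count are placeholders, not from the post. It works because llama.cpp memory-maps the weights, so pages that don't fit in RAM spill to disk/swap, slowly.

    # Minimal local-inference sketch (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="/models/deepseek-r1-q4_k_m.gguf",  # placeholder path
        n_ctx=4096,     # context window; larger costs more memory
        n_threads=16,   # roughly match your physical core count
        use_mmap=True,  # default: OS pages weights in lazily, swap catches the rest
    )

    out = llm("How do I shrink a PDF in Python?", max_tokens=256)
    print(out["choices"][0]["text"])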
Bernd Thu, 20 Feb 2025 17:58:27 GMT No. 25486327 >>25486345
>>25486310 Post a 2-3k build that can fit a 685B model.
Bernd Thu, 20 Feb 2025 18:02:37 GMT No. 25486345 >>25486353
>>25486296 The groundbreaking part is that the full model, R1 (the 671B one, which is as good as ChatGPT), can be run locally on very cheap hardware. You can't run it on a random PC, but with a home server that takes 200-500 GB of RAM, you can run it. You can buy such a machine used for 2-3k, or build it new for around 5-6k. You are 100% wrong.
>>25486327
Motherboard: Gigabyte MZ73-LM0 or MZ73-LM1. We want 2 EPYC sockets to get a massive 24 channels of DDR5 RAM, to max out memory size and bandwidth.
CPU: 2x any AMD EPYC 9004 or 9005 CPU. LLM generation is bottlenecked by memory bandwidth, so you don't need a top-end one. Get the 9115, or even the 9015 if you really want to cut costs.
RAM: This is the big one. We need 768GB (to fit the model) across 24 RAM channels (to get the bandwidth to run it fast enough). That means 24x 32GB DDR5 RDIMM modules. Example kits:
https://v-color.net/products/ddr5-ecc-rdimm-servermemory?variant=44758742794407
https://www.newegg.com/nemix-ram-384gb/p/1X5-003Z-01FM7
Case: You can fit this in a standard tower case, but make sure it has screw mounts for a full server motherboard, which most consumer cases won't. The Enthoo Pro 2 Server will take this motherboard.
PSU: The power use of this system is surprisingly low (<400W), but you will need lots of CPU power cables for 2 EPYC CPUs. The Corsair HX1000i has enough, but you might be able to find a cheaper option:
https://www.corsair.com/us/en/p/psu/cp-9020259-na/hx1000i-fully-modular-ultra-low-noise-platinum-atx-1000-watt-pc-power-supply-cp-9020259-na
Heatsink: This is the tricky bit. AMD EPYC is socket SP5, and most SP5 heatsinks assume a 2U/4U server chassis, which this build doesn't have. You'll probably have to go to eBay/AliExpress for it. I can vouch for this one:
https://www.ebay.com/itm/226499280220
Total cost: ~$6,000
https://threadreaderapp.com/thread/1884244369907278106.html
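A rough bandwidth calculation for why this build lands around the ~8 tokens/second quoted later in the thread. Assumptions not in the post: DDR5-4800 RDIMMs, R1 activating ~37B parameters per token (it's a mixture-of-experts model), Q8 weights at one byte per parameter, and a large efficiency loss from NUMA and software overhead.

    # Tokens/second is roughly memory bandwidth / bytes read per token.
    channels = 24                      # 2 EPYC sockets x 12 DDR5 channels
    per_channel = 4.8 * 8              # DDR5-4800: 4800 MT/s * 8 B = 38.4 GB/s
    total_bw = channels * per_channel  # ~921.6 GB/s combined

    active_params = 37e9               # MoE: only ~37B of 671B fire per token
    gb_per_token = active_params * 1.0 / 1e9  # Q8: ~1 byte per weight -> 37 GB

    peak = total_bw / gb_per_token
    print(f"theoretical peak: ~{peak:.0f} tok/s")          # ~25 tok/s
    print(f"at ~30% efficiency: ~{peak * 0.3:.0f} tok/s")  # ~7-8 tok/s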
Bernd Thu, 20 Feb 2025 18:04:19 GMT No. 25486353 >>25486371 >>25486509
>>25486345
>running it on CPU + RAM
This is going to be much slower than using a proper GPU cluster.
Bernd Thu, 20 Feb 2025 18:07:25 GMT No. 25486371 >>25486398
64uzj46ukj46uk7.mp4
1.17 MB, 320x342
>>25486353 So you're already moving the goalposts? You get about 8 tokens per second, which is enough for private use. See the video; that's the speed you get.
Bernd Thu, 20 Feb 2025 18:11:25 GMT No. 25486398 >>25486429
>>25486371 That is excruciatingly slow compared to the 4k+ tokens/second you'd get with GPUs. You won't get any work done compared to just using a VPN to access the API.
Bernd Thu, 20 Feb 2025 18:15:47 GMT No. 25486429
>>25486398 You claimed only millionaires can self-host the model. Now you shift the goalposts and say it's too slow for you. It's not just rich people who can host it. You can host it yourself in the cloud, on 5k hardware, or on more expensive hardware if you feel like it. The point is that you can host it yourself, unlike ChatGPT. You can host your own version encrypted in the cloud, where nobody can see what you type, and do it for literally cents.
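Rough numbers behind the "literally cents" claim, under assumptions that aren't in the post: a high-memory spot instance at about $2/hour and the ~8 tokens/second figure from above.

    # Cost per token for self-hosting on a rented machine.
    price_per_hour = 2.00          # USD, hypothetical spot-instance rate
    tokens_per_second = 8

    tokens_per_hour = tokens_per_second * 3600  # 28,800 tokens
    per_1k = price_per_hour / tokens_per_hour * 1000
    print(f"~${per_1k:.2f} per 1k tokens")      # ~$0.07

    # Cents per query if you shut the instance down when idle;
    # left running 24/7 it's closer to $1,500/month.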
Bernd Thu, 20 Feb 2025 18:29:45 GMT No. 25486509
>>25486353 I already did this. Check it out: https://1500chan.org/fvm/res/3111.html
Bernd Thu, 20 Feb 2025 19:58:09 GMT No. 25486873
no scandal when Western corps do the same things
Bernd Thu, 20 Feb 2025 20:41:30 GMT No. 25487053
In my few tests, DeepSeek beat Grok 3. If the Chinks collect my data, I don't care nearly as much as the NSA collecting it.
Bernd Thu, 20 Feb 2025 20:52:59 GMT No. 25487085
>>25486255 What consequences?
Bernd Thu, 20 Feb 2025 22:18:51 GMT No. 25487434
1677361466864102.png
19.69 kB, 191x255
I don't care if my data helps refine open source models and is protected from being seen by Western intelligence agencies