I don’t understand why everyone’s freaking out about this.
Saying you can train an AI for “only” $8 million is a bit like saying it’s cheaper to have a bunch of university professors do something than to teach a student how to do it. Sure, that’s true, as long as you forget about the expense of training the professors in the first place.
It’s a distilled model, so where are you getting the original data from, if not from the other LLMs?
If you can make a fast, low-power, cheap hardware AI, you can make terrifying tiny drone weapons that autonomously, with no network connection, seek out specific people by facial recognition, or target groups of people based on appearance or the presence of a token, like a flag on a shoulder patch, and kill them.
Unshackling AI from the data centre is incredibly powerful and dangerous.
They implied their lead was something no one could catch up to, in order to get funding. Now the people who believed that are finally realizing they were BSing; that’s what they’re freaking out over. Someone caught up at a way cheaper price, with an open-source model anyone can run.
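On the “anyone can run” point: since the weights are open, running a small distilled variant locally really is a few lines. Here’s a minimal sketch using Hugging Face transformers; the model id is my assumption of a distilled checkpoint name, and the prompt is just a placeholder.

```python
# Sketch of running a small distilled open-weight model locally with
# Hugging Face transformers. The model id below is an assumption;
# substitute whichever distilled checkpoint you actually want to run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Why did AI stocks drop this week?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```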
Right, but my understanding is you still need OpenAI’s models in order to have something to distill from. So presumably you still need 500 trillion GPUs and 75% of the world’s power-generating capacity.
The message that OpenAI, Nvidia, and the others that bet big on AI delivered was that no one else could run AI, because only they had the resources to do it. They claimed a physical monopoly: no one else would be able to compete. Enter DeepSeek, doing exactly what OpenAI and Nvidia said was impossible. Suddenly there is competition, and that scared investors, because their investments in AI are no longer guaranteed wins. It doesn’t matter that it’s derivative; it’s competition.
Yes, I know, but what I’m saying is they’re just repackaging something that OpenAI did; you still need OpenAI making advances if you want R1 to ever get any brighter.
They aren’t training on large data sets themselves; they’re training on the output of AIs that were trained on large data sets.
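To make the mechanism concrete: in distillation, the student is trained to match the teacher’s output distribution rather than the raw corpus. Below is a minimal sketch of the classic soft-target recipe in PyTorch; `student`, `teacher`, `batch`, and `optimizer` are hypothetical placeholders, not anyone’s actual training pipeline.

```python
# Minimal sketch of knowledge distillation (soft targets), assuming a
# generic PyTorch setup. The models and batch here are placeholders.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, temperature=2.0):
    """One training step: the student learns to match the teacher's
    output distribution instead of reading the raw training data."""
    with torch.no_grad():                    # teacher is frozen
        teacher_logits = teacher(batch)
    student_logits = student(batch)

    # Soften both distributions with a temperature, then minimize the
    # KL divergence between the teacher's and the student's outputs.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point being: the expensive part, training on the enormous corpus, was already paid for by the teacher; the student only ever sees the teacher’s outputs.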
The other LLMs also stole their data, so it’s just a last laugh kinda thing
Dead internet theory (now a reality) has become the dead AI theory.
’Tis true. I’m not a real person writing this but rather a dead AI.