ffhein@lemmy.world to LocalLLaMA@sh.itjust.works • How much gpu do i need to run a 90b model
9 days ago

You have to specify which quantization you find acceptable, and which context size you require. I think the most affordable option for running large models locally is still multiple RTX 3090 cards, and you'd probably need 3 or 4 of them depending on quantization and context.
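For a rough sense of why it lands at 3 or 4 cards, here's a back-of-the-envelope sketch. The layer/head numbers are made-up placeholders for a generic 90B dense model (not the spec of any particular model), and real loaders add buffer overhead on top:

```python
import math

GiB = 1024**3

def weights_gib(n_params: float, bits_per_weight: float) -> float:
    """Memory for quantized weights (ignores small quantization overhead)."""
    return n_params * bits_per_weight / 8 / GiB

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context: int, bytes_per_elem: int = 2) -> float:
    """KV cache: K and V per layer per token, fp16 elements by default."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / GiB

PARAMS = 90e9
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128  # assumed GQA layout, purely illustrative
CONTEXT = 8192
VRAM_PER_3090 = 24  # GiB; usable amount is a bit less in practice

for quant, bpw in [("~Q4", 4.8), ("~Q8", 8.5)]:
    total = weights_gib(PARAMS, bpw) + kv_cache_gib(LAYERS, KV_HEADS, HEAD_DIM, CONTEXT)
    print(f"{quant}: ~{total:.0f} GiB -> {math.ceil(total / VRAM_PER_3090)}x RTX 3090")
```

Under those assumptions you get roughly 53 GiB at ~4-bit (3 cards) and roughly 92 GiB at ~8-bit (4 cards); a longer context grows the KV cache and can push you up a card.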
I admit this is speculation, but I got the impression that Prusa is moving away from open source because they're salty about other companies cloning their products and selling the clones much cheaper than the "original" parts. Proprietary parts, patents, etc. are of course worse for the user than a fully open ecosystem, but he isn't necessarily going full anti-consumer.