• KingRandomGuy@lemmy.world
    10 hours ago

    Yeah, I agree that it helps for approaches that require a lot of VRAM. If you’re not on a tight schedule, this kind of machine might be good enough just to get a model running.

    I don’t personally work with anything that large; even the diffusion methods I’ve developed fit on a 24 GB card. But with all the hype around multimodal models, I know VRAM requirements can get pretty high.

    I suspect this machine will be popular with hobbyists for running really large open-weight LLMs.
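
    For a sense of scale, here’s a rough back-of-envelope sketch (my own numbers, assuming a hypothetical 70B-parameter model and ignoring KV-cache and activation overhead) of how weight memory alone scales with quantization:

        # Rough estimate: weight memory ~= parameter count * bytes per parameter.
        # KV cache and activations add more on top; these figures are weights only.
        def estimate_weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
            """Approximate memory needed just for the model weights, in GB."""
            return params_billions * 1e9 * (bits_per_param / 8) / 1e9

        for bits in (16, 8, 4):
            gb = estimate_weight_memory_gb(70, bits)
            print(f"70B model @ {bits}-bit: ~{gb:.0f} GB of weights")
        # ~140 GB at 16-bit, ~70 GB at 8-bit, ~35 GB at 4-bit -- even the
        # quantized versions overflow a single 24 GB card, which is where a
        # big unified-memory box comes in.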