• FooBarrington@lemmy.world
    1 year ago

    Sure, but that’s not how the kind of model this thread is about works (separate training and inference). You’re describing classical ML models with continuous updates, which you wouldn’t run on this kind of GPU infrastructure.
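
    The distinction can be sketched with a toy example (my own illustration, not from the thread): in online learning, each incoming mini-batch immediately updates the model, so training and inference are interleaved rather than split into a GPU training phase and a frozen inference deployment.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(4)   # weights of a tiny logistic-regression model
    lr = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Continuous updates: every new batch of data nudges the live model,
    # so there is no separate "training run" producing a frozen checkpoint.
    for _ in range(200):
        X = rng.normal(size=(16, 4))
        y = (X[:, 0] > 0).astype(float)           # toy label: sign of first feature
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad                            # incremental SGD step, CPU-friendly

    # Inference uses whatever the current weights are at that moment.
    X_test = rng.normal(size=(1000, 4))
    acc = np.mean((sigmoid(X_test @ w) > 0.5) == (X_test[:, 0] > 0))
    print(f"accuracy: {acc:.2f}")
    ```

    This kind of lightweight, always-updating model is exactly what you would not reserve a large GPU training cluster for, which is the point of the distinction above.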