
My employer asked me to choose a graphics card for deep learning work in the NLP and vision domains.

What is the difference between a single graphics card with 16 GB of VRAM and two cards with 8 GB of VRAM each in the same machine, cost aside?

That is, instead of using one GPU with 16 GB of VRAM, we would use two GPUs with 8 GB of VRAM each.

My employer wants to see whether there is any significant difference; if there is not, it would be better for them to buy two GPUs that can later be split across multiple machines.
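
To make the comparison concrete: with the most common multi-GPU setup, data parallelism, each card holds a complete copy of the model, so the model itself must still fit in 8 GB even though there are 16 GB in total. A minimal sketch, assuming PyTorch and a machine with two CUDA devices (the layer and batch sizes are invented for illustration):

```python
import torch
import torch.nn as nn

# Toy network standing in for a real NLP/vision model.
model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
)

# Data parallelism across two 8 GB cards: each GPU holds a FULL replica
# of the model and processes half of every batch, so per-GPU memory
# caps the model size. The two cards do not pool into one 16 GB space.
parallel_model = nn.DataParallel(model, device_ids=[0, 1]).to("cuda:0")

x = torch.randn(64, 4096, device="cuda:0")
y = parallel_model(x)   # the 64-sample batch is split 32/32 across the GPUs
print(y.shape)          # torch.Size([64, 4096]), gathered back on cuda:0
```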

  • There’s a dedicated AI Stack Exchange site, I think, where you might get an answer other than “it depends on your use case”. AFAIK you often need the complete model to fit into VRAM, and intuitively you can’t put half your model on one GPU and the other half on another and get the same results as when everything fits on one GPU twice the size. – HBruijn Mar 27 '24 at 15:05
  • 1
    Just some common sense: 2x8 means you use 2 pice - no chance to upgrade and unless you have some really low end requirements, 16gb is comically low to start with. – TomTom Mar 27 '24 at 15:05
  • 1
    @HBruijn You actually can split models - half the layers in one, half in the other, weights get moved. Standard in pretty much every runtime. – TomTom Mar 27 '24 at 15:05
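
A minimal sketch of the layer splitting described in the last comment, again assuming PyTorch and two CUDA devices (layer sizes invented for illustration): the first half of the network lives on one GPU, the second half on the other, and the intermediate activation is copied across.

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Naive model parallelism: first half on cuda:0, second half on cuda:1."""

    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # The intermediate activation hops between the cards here; this
        # per-step transfer is the overhead a single 16 GB card avoids.
        return self.part2(x.to("cuda:1"))

model = SplitModel()
out = model(torch.randn(8, 4096))
print(out.device)   # cuda:1 - the output lives on the second GPU
```

Whether the two setups end up "not significantly different" largely comes down to that `x.to("cuda:1")` line: the split works, but every forward pass pays a transfer over the bus between the cards that a single 16 GB card never pays.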

0 Answers