Because in reality, it’s only gimped for certain scenarios, and compared to the 2080 Ti, it’s a monster upgrade for ML!
Here’s a dirty secret: there is a ton of ML prototyping done on GeForce-level cards, and not just by enthusiasts or at scrappy startups. You’ll find GeForce-level cards used for ML development in workstations at Fortune 50 companies. NVIDIA would love everyone to be using A100s for their ML work (and V100s before that), but the market isn’t in sync with that wish. The 2080 Ti remains an incredibly popular card for ML even with only 11GB. Upping to 24GB, even with the artificial performance limitations for certain use cases, opens up new development opportunities and use cases to explore.
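To make the memory point concrete, here’s a back-of-envelope sketch of why 24GB vs. 11GB matters for training. The ~16 bytes/parameter rule (fp32 weights + gradients + Adam optimizer state, ignoring activations) and the model sizes are my own rough illustrative assumptions, not measured numbers:

```python
# Rough VRAM estimate for plain fp32 training with Adam:
# 4 bytes weights + 4 bytes gradients + 8 bytes optimizer state
# per parameter. Activations and framework overhead are ignored,
# so real usage will be higher -- this is a sketch, not a benchmark.

def train_vram_gib(n_params: float, bytes_per_param: int = 16) -> float:
    """Approximate GiB needed just for weights, grads, and Adam state."""
    return n_params * bytes_per_param / 2**30

for name, n in [("355M-param model", 355e6), ("1.5B-param model", 1.5e9)]:
    gib = train_vram_gib(n)
    print(f"{name}: ~{gib:.1f} GiB "
          f"(fits 11GB card: {gib < 11}, fits 24GB card: {gib < 24})")
```

The 1.5B case lands right around 22 GiB: comfortably inside a 24GB card, and nowhere close on an 11GB one without sharding or offloading tricks.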
When it comes to product stratification, the hard rule according to the driver EULA is that GeForce cards can’t be used in data centers. For serious ML development at scale, NVIDIA has its DGX lineup. In the middle are the Quadro cards, but they tend to be a poor value for ML: the cost differential is largely due to optimizations and driver certification for tools like Catia or Creo (CAD/CAM use), which don’t intersect with ML.
The Titan RTX may not have the gimped drivers, but the 3090 beats the Titan in many benchmarks nonetheless. Is the 3090 the best NVIDIA PCIe form-factor card for ML? No. The A100 is still king of the hill and is the only Ampere card with HBM memory, and even the A6000, with 48GB of VRAM, will outperform it for many use cases. Still, the 3090 will be the optimal card for many.
I’m one of the lucky few to have a 3090 in my rig. I lead a team of volunteers doing critical AI prototyping and POC work in an industry give-back initiative, and price was not a leading factor in my decision to procure a 3090 over a Quadro. I chose the 3090 principally because I didn’t want a loud blower card in my computer (and I don’t need 48GB). If someone donated an A100 to our efforts, I’d gladly take it, but it wouldn’t replace the 3090: the A100 isn’t a graphics card and won’t play games, and gaming is indeed an important value-added benefit of the 3090 :)