There’s a lot of confusion out there about deep learning performance. How do you measure it? What should you measure?
The simple answer: PLASTER.
The not-so-simple reality: “Hyperscale data centers are the most complicated computers the world has ever made — how could it be simple?” NVIDIA CEO Jensen Huang explained at NVIDIA’s GPU Technology Conference earlier this year, before distilling the factors that drive this performance into a single acronym.
Here’s what PLASTER stands for:
- Programmability
- Latency
- Accuracy
- Size of Model
- Throughput
- Energy Efficiency
- Rate of Learning
Read the white paper “PLASTER: A Framework for Deep Learning Performance,” from Tirias Research, which puts all of these factors into context.