Kinetica Helps U.S. Post Office Solve Big Data Problem, Wins Big HPC Award
Snow, rain and gloom of night are minor concerns compared to the big data problem the U.S. Postal Service faces when coordinating delivery of more than 150 billion pieces of mail each year.
After experiencing increased delays and instances of fraud, USPS turned to a high performance computing solution from NVIDIA and Kinetica. For its work with the post office, Kinetica just earned the HPC Innovation Excellence Award from IDC.
The GPU-powered offering from Kinetica, formerly called GPUdb, let USPS move from batch processing, which digests accumulated data in chunks at set intervals, to stream and complex event processing, which combine data from multiple sources in near real time.
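The difference between the two approaches can be sketched in a few lines. The example below is a hypothetical illustration, not Kinetica's implementation: batch processing summarizes a chunk of records after the fact, while a complex-event processor correlates events from multiple sources inside a sliding time window as they arrive. The "two facilities within one window" rule is invented here purely to show the pattern.

```python
from collections import deque
from datetime import datetime, timedelta

def batch_process(records):
    """Batch style: summarize an accumulated chunk of records after the fact."""
    return {"count": len(records)}

class StreamProcessor:
    """Stream/complex-event style: correlate events inside a sliding time window."""

    def __init__(self, window_seconds=60):
        self.window = timedelta(seconds=window_seconds)
        self.events = deque()  # (timestamp, source, payload) tuples, oldest first

    def ingest(self, timestamp, source, payload):
        """Ingest one event; return True if a complex-event rule fires."""
        self.events.append((timestamp, source, payload))
        # Evict events that have fallen out of the window.
        while self.events and timestamp - self.events[0][0] > self.window:
            self.events.popleft()
        # Invented rule for illustration: the same package scanned at two
        # different facilities within the window suggests a mis-route.
        scans = [p for t, s, p in self.events
                 if s == "scan" and p["package"] == payload.get("package")]
        facilities = {p["facility"] for p in scans}
        return len(facilities) > 1
```

A batch job would answer "how many scans happened yesterday?"; the stream processor flags the anomaly the moment the second conflicting scan arrives.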
The offering takes in data from over 213,000 scanning devices with 15,000-plus concurrent users at post offices and processing facilities around the country. USPS also uses geospatial technology and inferencing to accurately predict and report real-time events.
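One building block of geospatial event reporting of this kind is a distance test against a facility's location. The sketch below, with names and the geofence radius chosen purely for illustration, uses the standard haversine formula to turn a device's coordinates into an arrival event when it falls within range of a facility.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def in_geofence(point, facility, radius_km=1.0):
    """Report an arrival event when a scan lands inside a facility's geofence."""
    return haversine_km(point[0], point[1], facility[0], facility[1]) <= radius_km
```

Run over a stream of scanner coordinates, a check like this lets a system report "package arrived at facility X" the moment the scan happens rather than in a later batch report.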
With insights gleaned from this near-immediate analysis of its operations, USPS delivered more than 150 billion pieces of mail last year, while driving 70 million fewer miles, saving 7 million gallons of fuel and preventing 70,000 tons of carbon emissions.
GPUs for Big Data Analytics Workloads
Kinetica used NVIDIA Tesla GPU accelerators to deliver 100-1,000 times faster performance at a fraction of the cost of CPU-based relational database management systems. USPS achieved a 200x query performance improvement with Kinetica over its existing RDBMS.
The massive boost in performance from GPUs lets organizations take on the challenging task of managing real-time data while delivering accurate and complete reports. Massive datasets can now be gathered and presented with visual analytics in ways simply not possible with CPU-only processing.
With our DGX-1 system, launched in April, and our Tesla P100 for PCIe servers announced this week at ISC, partners like Kinetica will be able to process even more data in real time. The DGX-1 is a monster of a system, delivering throughput equal to 250 conventional servers in a single box with eight Tesla P100 GPUs and 128GB of GPU memory.
In addition to hardware, we’re driving new levels of performance with NVIDIA CUDA. The parallel computing platform and programming model simplifies development with unified memory, which gives CPU and GPU code a single shared address space, speeding the adoption of GPU-accelerated applications. We’re also accelerating new analytics workloads with nvGRAPH, a library for high-performance graph analytics.
Learn how to supercharge your deep learning applications with DGX-1 in our on-demand webinars.
Image credit: Brian Gaid