The developers behind MixPose, a yoga posture-recognizing application, aim to improve your downward dog and tree pose positions with a little nudge from AI.
MixPose enables yoga teachers to broadcast live-streaming instructional videos that include skeletal lines to help students better understand the angles of postures. It also enables students to capture their skeletal outlines and share them in live class settings for virtual feedback.
“Our goal is to create and enhance the connections between yoga instructors and students, and we believe using a Twitch-like streaming class is an innovative way to accomplish that,” said Peter Ma, 36, a co-founder of MixPose.
MixPose’s streaming video platform can be broadcast with Jetson Nano. The live stream content can then be viewed on Android TV and mobile phones.
On Tuesday, the group was among 10 teams awarded top honors in the AI at the Edge Challenge on Hackster.io, launched in October. Winners competed for a share of NVIDIA supercomputing prizes, as well as a trip to our Silicon Valley headquarters.
Hackster.io is an online community of developers, engineers and hobbyists who work on hardware projects. To date, it’s seen more than 1.3 million members across 150 countries working on more than 21,000 open source projects and 240 company platforms.
MixPose, based in San Francisco, runs PoseNet pose estimation networks on Jetson Nano to do inference on yoga positions, allowing teachers and students to engage remotely based on the AI pose estimation. It is developing networks for different yoga poses, utilizing the JetPack SDK, CUDA Toolkit and cuDNN.
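MixPose's own code isn't shown here, but the idea of turning pose-estimation output into posture feedback can be illustrated with a minimal sketch. PoseNet-style models return 2D keypoints for body joints; from any three keypoints, the angle at the middle joint can be computed and compared against a target range. The keypoint names and thresholds below are illustrative assumptions, not MixPose's actual values.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by segments b->a and b->c.

    Each argument is an (x, y) pixel coordinate, as a PoseNet-style
    model would produce for a detected body joint.
    """
    bax, bay = a[0] - b[0], a[1] - b[1]
    bcx, bcy = c[0] - b[0], c[1] - b[1]
    dot = bax * bcx + bay * bcy
    norm = math.hypot(bax, bay) * math.hypot(bcx, bcy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical keypoints for a single arm: shoulder, elbow, wrist.
shoulder, elbow, wrist = (0, 0), (1, 0), (1, 1)
angle = joint_angle(shoulder, elbow, wrist)  # 90.0 degrees

# Illustrative feedback rule: flag the elbow if it drifts from a
# straight-arm target range (tolerance chosen arbitrarily here).
needs_correction = not (160.0 <= angle <= 180.0)
```

A real pipeline would pull the keypoints from each video frame of the instructor's or student's stream and overlay the skeletal lines and angles the article describes.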
Four Prized Categories
MixPose took first place in the Artificial Intelligence of Things (AIoT) category, one of four project areas in a competition that drew 2,542 registrants from 35 countries and 79 completed submissions, each with code shared with the community.
MixPose demos its streaming app
The team also landed third place in AIoT for its Jetson Clean Water AI entry, using object detection for water contamination.
“It can determine whether the water is clean for drinking or not,” said 27-year-old MixPose co-founder Sarah Han.
Contest categories also included Autonomous Machines and Robotics, Intelligent Video Analytics and Smart Cities. First, second and third place winners in each took home awards.
RNNs for Reading
NVIDIA gave a single award in the category AI for Social Good, which Palo Alto High School junior Andrew Bernas took home for his Reading Eye for the Blind with NVIDIA Jetson Nano. It’s a text-to-voice device for the visually impaired that uses CNNs and RNNs to recognize both handwritten and printed text and then synthesize it into human-like speech.
“Part of the inspiration was creating a solution for my grandmother and other people with vision loss to be able to read,” said Bernas.
Andrew Bernas’ text-to-speech device for the visually impaired
AI Whacks Weeds
First-place winners also included a team from India behind the weed removal robot Nindamani, in the Autonomous Machines and Robotics category.
Nindamani’s AI-driven weed removal robot
Traffic Gets Moving
And a duo working on adaptive traffic controls took a top award for their networks used to help improve traffic flows, in the Intelligent Video Analytics and Smart Cities category.