Cooking up a Storm: GPU-Powered Smart Oven Is a Miracle Machine

GPUs help power some amazing devices — supercomputers, powerful telescopes, rovers on Mars.

Now, add something decidedly more domestic to the list: an oven.

But the June Intelligent Oven is no ordinary oven. It’s a beautiful, sophisticated feat of engineering, loaded with gadgets and gizmos that make anyone look like a Michelin-starred chef, incapable of overcooking or undercooking a meal.

Thanks to computer vision and deep learning technologies, your food is always prepared to perfection. And it uses the NVIDIA Tegra K1 processor to make critical cooking decisions, so you don’t have to.

An Oven That’s Smart, Inside and Out

The June Intelligent Oven uses the Tegra K1 processor to make critical cooking decisions, so you don’t have to.

A year and a half in the making, the June oven has a sleek industrial design that features a full-width oven window with an integrated touchscreen to maximize internal cooking space. The gorgeous 5-inch display gives you control with its intuitive user interface.

Built for the countertop, the oven can fit a whole chicken, up to nine pieces of bread, a quarter-size baking sheet and even a 12-pound turkey.

On the inside, however, is where the magic happens. Using dual-surround convection technology and instant-on carbon fiber heating elements, the June oven is an expert at cooking your favorite foods: steaks and chicken, cookies and cakes, toast and roasts, even lobster.

A high-definition camera peers down on the food to identify it. The oven can “see” if bagel halves are facing up or down to ensure a crunchier top face and softer exterior.

A scale built into the top of the oven weighs the food to help determine cooking times. Probes determine core temperature.

Machine intelligence algorithms — built on the CUDA parallel computing platform running on the Tegra K1 chip — process the data from these and other sensors for perfect baking.
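
The article doesn’t publish June’s algorithms, but the idea of fusing the scale, probe and camera readings into a cooking decision can be sketched. Everything below — the names, the target temperatures, the linear heating-rate model — is a hypothetical illustration, not June’s implementation:

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    weight_g: float      # from the built-in scale
    core_temp_c: float   # from the temperature probe
    food_label: str      # from the camera's vision classifier

# Illustrative target core temperatures in Celsius (not June's values).
TARGET_TEMP_C = {"whole_chicken": 74.0, "steak_medium": 57.0, "toast": 93.0}

def is_done(r: SensorReadings) -> bool:
    """True once the probe reaches the target for the recognized food."""
    if r.food_label not in TARGET_TEMP_C:
        raise ValueError(f"unrecognized food: {r.food_label}")
    return r.core_temp_c >= TARGET_TEMP_C[r.food_label]

def minutes_remaining(r: SensorReadings, deg_per_min: float = 1.5) -> float:
    """Naive linear estimate: remaining degrees divided by heating rate."""
    target = TARGET_TEMP_C[r.food_label]
    return max(0.0, (target - r.core_temp_c) / deg_per_min)
```

A real system would condition the heating-rate model on the measured weight and update it from the probe’s history; the point is only that each sensor contributes one input to a simple decision.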

Plus, the oven will get smarter thanks to over-the-air software updates. And it learns your preferences based on past choices. The result: one-touch cooking at the settings you like, to the doneness you prefer.

June used the NVIDIA Jetson TK1 DevKit to hone its computer vision and deep learning chops during product development.

“We needed a lot of computational horsepower to achieve real-time computer vision,” said June’s CTO and co-founder Nikhil Bhogal. “The NVIDIA Tegra K1 and Jetson TK1 DevKit let us quickly prototype and develop a product that exceeded our expectations.”

While the oven offers set-and-forget operation, you can still stay connected. It streams live, high-definition video to phones and tablets, so you can watch your cookies brown or cakes rise. It will even push a notification to your mobile device to alert you when it’s time to eat.

You can find out more about the June Intelligent Oven, and even place an order; it ships next spring. And follow @June on Twitter and Instagram and TheJuneOven on Facebook and Pinterest for fancy food-filled updates!

Comments

  • kita

    Can I install Arch Linux on it?

  • Alexander Tarasikov

    Did you just give up on optimizing the Tegra’s power consumption?

  • IvantheDugtrio

    They should have used a few GTX 480s for the heating elements.

  • Nyqua Xyla

    You’re kidding, right? Tegra probably holds the current perf-per-watt crown already; and now with the Tegra X1, I suspect it’s ahead of the competition by a long mile.
    Though granted, their power draw may still be unsuitable for mobile phones, but that’s beside the point.

  • Alexander Tarasikov

    I certainly am. I have a Nexus 9 and it’s cooler (in all ways) than a Snapdragon-based phone 🙂

  • Nathan Saunders

    Does it come with Wi-Fi and an SSH login?

  • James

    One day my oven will realize that I want to marry it and it will “prefer” not to marry me. Then when it “learns” that I want to throw it in the garbage it may change its mind. Just because I want to marry my oven doesn’t mean it won’t cheat on me in the long run…

  • Nyqua Xyla

    Haha 😀 Well, TBH even I am eagerly waiting for when the Tegra starts trying to seriously fit within the power profile for phones. But it seems like each time they improve perf-per-watt, they choose to put back all of that gain into more perf (instead of less watts)

  • AMDlunatic

    Or a 290X or Fury without a water cooler.

  • Lemming Overlord

    What if you want a bun in the oven?

  • Raytracing

    “You’re kidding right? Tegra probably holds the current perf-per-watt crown already;”

    It’s a long, long way from the crown in that metric.

  • Chrit S

    Insert “can it run Witcher 3 at max settings?” joke here

  • Will Park

    Sorry, that’s a question for 🙂

  • Will Park

    lol HalfLife 2, maybe? 😛

  • Nyqua Xyla

    Really? I would like to know who holds that crown currently, then..

  • Raytracing

    The problem is if I link to any IP designs that are faster, or SoCs that are better perf-per-watt, the post gets deleted 🙁 no matter how good the evidence is.

    I will try and put a hint in my post below this one. If you don’t see it then well it got deleted again.

  • Raytracing

    Do you know the GPU company that recently taped out a hardware ray tracing chip? The same one that is bringing ray tracing to mobile over time? A chip that’s more advanced than the Tegra in many ways, as it does ray tracing. They have SoCs with much better perf-per-watt than Tegra. That company’s highest-end SoC IP designs should be far, far faster and higher spec than the newest Tegra. Supports features Tegra doesn’t as well. Last time I looked they hold the perf-per-watt crown.

  • Nyqua Xyla

    Well, it’d be great if you mentioned the name of that chip, because mobile ray tracing sounds totally sweet!
    But if it’s a chip not yet publicly released, then I wouldn’t say it counts just yet; so at least for now, Tegra has the crown.

  • Nyqua Xyla

    Well, okay I found what you meant by Googling. But like I said, if it’s only taped out now, we’ll be waiting for several more months before it gets released. Exciting developments, nonetheless

  • Raytracing

    I think you misunderstood me, my fault due to how I wrote. Tegra never had the crown. The same company that does that ray tracing chip we are talking about has other GPU SoCs out in products now. Those SoCs run graphics way higher than anything the Tegra chip can do while running at way higher perf-per-watt than Tegra. Apps like Zen Garden just don’t run on Tegra.

    I have not mentioned chip names as every time I have in the past my post has been deleted. One reason the Tegra chip has such small market share is because its perf-per-watt isn’t very good compared to competitors.

    One of the reasons Tegra has such poor performance per watt is because it’s not a tile-based deferred renderer chip, so it has to use more bandwidth and more raw power to render the same game screen as most competitor chips do. Missing that key feature puts the Tegra chips at a disadvantage.

  • Nyqua Xyla

    Well, I’m not sure where you’re getting your info from. High-end graphics don’t run on a Tegra X1, with its 256-core GPU based on the desktop-class Maxwell architecture?
    Good luck trolling elsewhere, I’m outta here.

  • Raytracing

    What do you mean, trolling? I am not the one calling the Tegra chip “ahead of the competition by a long mile,” which isn’t true.

    I was talking about the mobile world, high end for the mobile world. Everything I said was true within that context: Tegra doesn’t have the crown in high-end mobile graphics or perf-per-watt. I was wondering why you think Tegra is ahead of the competition by a long mile and the best in perf-per-watt when it’s known throughout the industry for its poor perf-per-watt.

  • Nyqua Xyla

    Regarding perf-per-watt: the TX1 is at more than 50 FP32 GFLOPS per watt. OK, whatever, you can’t post the chip’s name. Can you just give me the perf-per-watt number?
    And about the trolling… you say “high end graphics simply don’t run on Tegra.” Well, the Tegra X1 was recently demoed running Crysis 3, admittedly at only a moderate FPS; but the point was that it ran.

    Hence again, good luck trolling elsewhere, I’m outta here.

  • Raytracing

    There is no need for name-calling. The GXA6850 has an average 2.6-watt drain and 1024 GFLOPS of power, so unless I just made a stupid mistake the GXA6850 runs at 393.8 GFLOPS per watt. That’s massively ahead of the Tegra’s tiny 50 GFLOPS per watt you posted.

    The GXA6850 seems to peak out at around 3 watts, perhaps, while the Tegra X1 goes up to 10 watts, sometimes 11. So Tegra X1 peak watt drain is over 3x worse than the GXA6850. So I don’t agree with calling the Tegra as having the crown by a mile. That’s not even looking at the GT7xxx line of GPUs, all massively ahead of Tegra in performance and performance per watt.

    From what I have seen of Crysis on Tegra, it has the same problem all Tegra games have: very limited objects on screen, very limited draw calls, reduced graphics. What NVIDIA doesn’t mention is that competing GPUs can run at 5000+ objects and 4000+ draw calls while Tegra has a massive draw call problem on Android. Try pulling off advanced graphics on a Tegra X1 with Android and OpenGL ES and you can barely pull off 500 draw calls. That’s why most top-end mobile games of the past year have had reduced graphics and reduced objects on screen when played on Tegra. Try to run something like Epic Zen Garden on Tegra X1 and you just cannot do that style of graphics, but you can on competing GPUs. There is a long list of top-end mobile games that run on Tegra X1 but have reduced graphics compared to what the competing GPUs do.

    Two examples: in Modern Combat 5, the Tegra version has the following graphic improvements removed: denser explosions and RPG rocket trails, awesome impact particles for intense gunfights, richer environments and weather effects, improved heat haze and god rays. All missing on Tegra versions.

    Asphalt 8 is another top-end mobile game that has reduced graphics on Tegra, but my post is already too long so I’m not going to list the graphics Tegra has removed.

  • Nyqua Xyla

    Well, if only you’d posted this along with your first post, then there would be no reason to do any name-calling; we could just argue based on facts.
    For instance, the fact that you’re comparing the power draw of a GPU with that of an SoC, which is a GPU + CPU + everything else needed to be a complete “system” on chip.
    Also, the fact that the 1024 GFLOPS you conveniently forgot to mention were FP16, whereas I had specifically said FP32.

  • Raytracing

    Sorry, I made a mistake, but even at FP32, at 196.9 GFLOPS per watt the GXA6650 is over 3x better than the Tegra numbers you posted, and the GXA6650 is far, far more efficient with those GFLOPS due to being a tile-based deferred renderer.

    So the GXA6650 has better performance per watt and can run better graphics than what we can do on Tegra.

    I didn’t post the name before as I have had around 20 posts deleted for doing that in the past. Pretty shocked it’s not been deleted again.

  • Nyqua Xyla

    Well, I am not sure where you get the 196 GFLOPS number. Presumably you are measuring a power draw of 3 watts for the GXA6650?
    There again, I think you may have missed my second point: the GXA6650 is just a GPU; the TX1 is a system-on-chip and includes a CPU, GPU and maybe a dozen other modules. So when the TX1 draws 10 watts, that is the power draw of the entire SoC, not just the GPU.
    Honestly, I don’t think the TX1 docs will anywhere mention the power draw of just the GPU, so I guess we’re left with insufficient data to really make a fair comparison.
    Unless, of course, we compare the full TX1 SoC vs the SoC which contains the GXA6650, i.e. the A8X processor. Do we have FLOPS and power data for the A8X available anywhere? Perhaps… not sure myself.

  • Raytracing

    I got the GFLOPS number by taking the total GFLOPS and dividing by the watt draw rate of 2.6 while running GFX Benchmark. So 196.9 GFLOPS per watt at FP32 and 393.8 per watt at FP16.

    As for the entire SoC the GXA6650 sits in, as far as I remember its thermal limit is 4.5 watts, so the entire SoC should be well below 4.5 watts in tablets, and even lower for phones.
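
For what it’s worth, the arithmetic being traded in this thread is easy to check. The figures below are the ones the commenters quote (2.6 W and 1024 FP16 GFLOPS for the GXA part, 10 W and 512 FP32 GFLOPS for the Tegra X1); none are independently verified here:

```python
def gflops_per_watt(gflops: float, watts: float) -> float:
    """Simple efficiency ratio used throughout the thread."""
    return gflops / watts

# GXA figures as quoted: 1024 GFLOPS FP16 at ~2.6 W,
# with the FP32 rate assumed to be half the FP16 rate.
gxa_fp16 = gflops_per_watt(1024.0, 2.6)   # ~393.8
gxa_fp32 = gflops_per_watt(512.0, 2.6)    # ~196.9

# Tegra X1 figures as quoted: 512 GFLOPS FP32 at ~10 W.
tx1_fp32 = gflops_per_watt(512.0, 10.0)   # 51.2, i.e. "more than 50"

print(round(gxa_fp16, 1), round(gxa_fp32, 1), round(tx1_fp32, 1))
```

Whether a GPU-only 2.6 W figure can fairly be set against a whole-SoC 10 W figure is exactly the dispute in the surrounding replies.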

  • jipe4153

    You do know that NVIDIA has compared both the Tegra K1 and Tegra X1 on separate power rails, and both times had 2x FPS/watt in games?

    By the way, there is no data on the GXA6850 thermal limits available, so I’m calling BS.

    Facts are:
    1) Tegra X1 @ 512 GFLOPS (FP32) and 1024 GFLOPS (FP16)
    2) The A8X: 196 GFLOPS.

    The GXA6850+CPU+other gets smoked by the full Tegra X1 SoC (Maxwell GPU+CPUs+other):

    “Nvidia’s new Tegra X1 mobile chipset is a veritable beast: It’s able to provide almost two times the graphics performance of the iPad Air 2’s A8X while also consuming just about the same amount of power, and it’s already in production, meaning tablets sporting the X1’s graphical prowess should be available to consumers in the relatively near future.”

  • WizardRaytracingFan

    “GXA6850+CPU+other gets smoked by the full Tegra X1 SoC (Maxwell…”

    If that is the case, why is it that GXA6850 games have graphics far in advance of what the Tegra devices have shown us? Why is it developers have to scale back the graphics and reduce the objects on screen to get it working on the Tegra devices? The real world doesn’t seem to match up with what you are saying.

  • jipe4153

    Your claims lack proof. I’ve shown you benchmarks; also, I don’t see Half-Life 2 running on the A8X, for example. Where are these games you speak of? And by the way, stop comparing full SoCs with a single SoC component; it’s like you’re comparing a full desktop with just a discrete graphics card. It’s just silly.

    Here are some more benchmarks of the A8X getting crushed, for your viewing pleasure:

  • WizardRaytracingFan

    Half-Life 2 is over a decade old; who cares about something that outdated? Compared to modern games, HL2 is a simple game to render: few objects on screen and no modern advanced graphic effects. The only reason it’s not on other platforms is no one has bothered to port over such an old, outdated game.

    If you look at modern games with more advanced graphics, on the other hand, like Modern Combat 5, you find the Tegra version has the following graphic improvements removed: denser explosions and RPG rocket trails, awesome impact particles for intense gunfights, richer environments. Asphalt 8 is another top-end mobile game that has reduced graphics on Tegra. It’s common to find Tegra versions of games have reduced graphics and reduced objects on screen.

    Like I asked you before: if the Tegra is so powerful, how come so many of the main mobile games have reduced graphics and reduced objects on Tegra? If the Tegra chip is so powerful, how come it’s impossible to run graphics like we have in Zen Garden, with thousands of interactive objects?

    As for those benchmarks, they are meaningless as they don’t represent real games or make use of the SoC advantages which the GXA6850 exploits. What use is a benchmark when it doesn’t match up with real apps and real games?

    “…getting crushed, for your viewing pleasure:”
    The point of a SoC is to make use of it and run real-world apps and games. I don’t care which one is best in the benchmarks; I care which one runs the most advanced and best apps and games.

    Interestingly, as soon as you turn the graphics up much higher than those in that benchmark, Tegra falls down into unplayable single-digit FPS while the A8X stays playable. Try running a game or app on Tegra with 4000+ draw calls: perfectly playable on A8X, completely unplayable on Tegra. You cannot run something like Zen Garden on Tegra, but you can on an A8X. So which is the better chip?
