How Machine Teaching Can Expand Reach, Effectiveness of Machine Learning

by Dave Cahill

Enterprises of all sizes are evaluating artificial intelligence for a range of use cases beyond business-to-consumer and data-centric applications.

In particular, there is a growing need for AI models that can inject greater intelligence — in the form of control and optimization — into sophisticated industrial systems. These systems take many different forms, including robotics, vehicles, factories, supply chains, logistics, warehouse operations, HVAC systems, oil exploration and resource planning.

Trying to program AI to improve control and enhance real-time decision support for multidimensional industrial systems quickly outstrips the capabilities of generic solutions. At the core of the issue is a lack of talent and tools that can combine an organization’s subject matter expertise with complex machine learning technologies to build application-specific AI models.

Subject matter expertise, in the form of data, models and simulations, is critical to understanding the different variables, behaviors and constraints that drive the efficient operation of industrial systems. Paired with powerful machine learning libraries and techniques, such as TensorFlow and reinforcement learning, specific domain expertise can significantly improve the efficiency and prediction accuracy of the resulting intelligence models, as well as the automation and operational efficiency of the targeted systems.

Programming AI for these systems requires a combination of human and machine intelligence:

  • Human intelligence provides critical subject matter expertise: an understanding of the variables that yield the most efficient operation of a specific system.
  • Machine intelligence is critical for helping systems learn faster and make better predictions, but on its own it is a brute-force approach that can be complex and inefficient, especially for industrial applications.

Throughout the history of AI, we’ve seen innovations at both extremes. Expert systems, based solely on subject matter expertise, gained popularity in the ’70s and ’80s. This approach worked for simple deductions and had strong explainability, but it struggled with more complex predictions and was rigid and difficult to scale.

With machine learning today, the pendulum has swung to the other side, focusing exclusively on finding patterns within huge piles of data without relying on any subject matter expertise. What’s still missing is an approach that combines subject matter expertise with the massive learning horsepower of machine intelligence, making the programming and management of AI models more accessible to developers and enterprises.

Fortunately for us, industry has tackled this problem before in other technology domains. Before databases were commonplace, it was very difficult to work with data in sophisticated ways. Databases solved this problem nicely, but they didn’t do it by providing a massive toolkit to tweak and tune all the low-level database mechanics. Instead, databases shifted up the level of abstraction, allowing developers to focus on the problem they were trying to solve.

AI suffers from a very similar problem today. The low-level machine learning libraries and algorithms are very difficult to work with. To make AI more accessible, the answer is not to expose these vast, complex toolkits to developers. Just like databases did for data, we need to shift the level of abstraction. This is where machine teaching comes in.

Machine teaching provides the abstraction and tooling for developers, data scientists and subject matter experts to program domain-specific intelligence into a system. Using a special purpose programming language like the one we have built at Bonsai, developers codify the specific concepts they want a system to learn, how to teach them and the training sources required (for example, simulations or data).
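Bonsai’s language itself is special-purpose, but the core idea — naming the concepts a system should learn, ordering them from simple to complex, and declaring each concept’s training source — can be sketched in plain Python. Everything below (the `Concept` and `Curriculum` names, the HVAC lessons) is a hypothetical illustration of the structure, not Bonsai’s actual API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Concept:
    """A named skill the system should learn, plus how to train it."""
    name: str
    train_source: str                    # e.g. "simulation" or "data"
    lessons: List[str] = field(default_factory=list)

@dataclass
class Curriculum:
    """Orders concepts from simple to complex, mirroring how a
    subject matter expert would teach a person."""
    concepts: List[Concept] = field(default_factory=list)

    def add(self, concept: Concept) -> "Curriculum":
        self.concepts.append(concept)
        return self

    def training_plan(self) -> List[str]:
        # Flatten the curriculum into an ordered list of training steps.
        return [f"{c.name}: {lesson} [{c.train_source}]"
                for c in self.concepts for lesson in c.lessons]

# Hypothetical example: teaching an HVAC controller in stages.
curriculum = (
    Curriculum()
    .add(Concept("hold_setpoint", "simulation",
                 ["steady outside temperature", "varying outside temperature"]))
    .add(Concept("minimize_energy", "simulation",
                 ["off-peak hours", "peak demand"]))
)
for step in curriculum.training_plan():
    print(step)
```

The point of the abstraction is that the developer declares *what* to learn and in what order; the platform decides *how* to train it.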

Using Bonsai’s AI Platform, programs developed with this approach can then be paired with state-of-the-art machine learning libraries such as TensorFlow and techniques such as reinforcement learning to more effectively generate and train the most appropriate high-level models for use in a specific hardware or software application.
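Under the hood, reinforcement learning trains a policy by trial and error against a simulation. A minimal, library-free sketch of the idea — tabular Q-learning on a toy one-dimensional control task; nothing here is Bonsai- or TensorFlow-specific — looks like this:

```python
import random

# Toy control task: move an agent from position 0 to a goal at position 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]            # step left or step right

def step(state, action):
    """Simulation: returns (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == GOAL
    return nxt, (1.0 if done else -0.01), done   # small cost per step

# Q-table: expected return for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the Q-table, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy: which action to take in each non-goal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

A deep-RL system replaces the Q-table with a neural network (this is where a library such as TensorFlow comes in), and machine teaching supplies the concepts and simulations that make that training tractable for a real industrial system.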

By combining state-of-the-art techniques in machine teaching and machine learning, enterprises can more efficiently build application-specific AI models that increase the automation and operational efficiency of sophisticated industrial systems.

See how we make GPU-accelerated AI models easy, at the O’Reilly AI conference in New York on June 28. Or learn more in our on-demand webinar.