Tools In Action • 4 minutes • May 22, 2023

Introducing Matcha: open source MLOps for Azure

Matt Squire
CTO

Last week we announced Matcha, an open source tool for provisioning machine learning infrastructure on Microsoft® Azure. Setting up infrastructure can be daunting if you’re not well-versed in cloud engineering, so we’ve set out to eliminate this complexity through an intuitive Python-based tool.

In this blog, I’ll introduce Matcha, explain how it works, and why we built it.

What is Matcha?

Taking a machine learning model all the way from experiment to production is a challenging endeavour. There are plenty of tools that can help at each stage of this journey, from experiment tracking through to model deployment, but setting these up requires a lot of time as well as cloud engineering knowledge.

The skills needed to set up infrastructure are very different from those needed to train good ML models. Because of this, we designed Matcha with the typical data scientist in mind. We want to take care of the infrastructure part for you, leaving you free to focus on models.

Matcha will provision a set of carefully chosen tools, all open source, to your Azure cloud environment. Right now it provisions four things:

  • MLflow for experiment and model tracking
  • Seldon for model deployment
  • ZenML for workflow orchestration (sketched briefly below)
  • Kubernetes for running training workloads, as well as for hosting everything else

And we plan to add model monitoring and data version control tooling in the near future.
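
To give a feel for how the orchestration piece fits in, here’s a minimal sketch of a ZenML pipeline. It’s illustrative rather than definitive: the step names and logic are placeholders, and it assumes a recent ZenML release with the @step/@pipeline decorator API.

    # A minimal, illustrative ZenML pipeline. The step names and logic
    # are placeholders, not part of Matcha itself.
    from zenml import pipeline, step

    @step
    def load_data() -> list:
        # Placeholder: load your training data from wherever it lives
        return [1.0, 2.0, 3.0]

    @step
    def train_model(data: list) -> float:
        # Placeholder: train a model and return a metric
        return sum(data) / len(data)

    @pipeline
    def training_pipeline():
        data = load_data()
        train_model(data)

    if __name__ == "__main__":
        # On Matcha-provisioned infrastructure, ZenML can run this
        # pipeline on the Kubernetes cluster instead of your laptop.
        training_pipeline()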

In addition to provisioning, there’s an important educational aspect to Matcha. If it were just a tool for provisioning infrastructure, without good examples of how to use that infrastructure, then its value would be limited. That’s why we’ve put a lot of emphasis on including a set of well-engineered examples covering common machine learning use cases. These examples will help you get the most out of Matcha.

How does it work?

Matcha is a command-line tool, written in Python, and installed as a pip package like so:

    pip install matcha-ml

The starting point for doing anything with Matcha is to provision your infrastructure, which you can do by running:

    matcha provision

When you do this, you’ll be asked a few questions, such as which region you want to provision to, and what name to give the resources that it creates. Matcha then gets to work setting up the core tools: Seldon, MLflow, ZenML, and Kubernetes.

Once provisioning is complete, you can use the matcha get command to query your infrastructure. This command is how you connect your model training code up with your provisioned infrastructure. For instance, you can get hold of MLflow’s tracking URL by running:

    matcha get experiment-tracker

And you’ll get back a reply that looks something like:

    Experiment tracker
       - flavour: mlflow
       - tracking-url: https://some-url

The tracking URL can then be used within your model training code to log experiments to MLflow.
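
As a minimal sketch of what that looks like, using MLflow’s standard Python API (the tracking URL is a placeholder for the one matcha get returned, and the experiment, parameter, and metric names are illustrative):

    # A minimal sketch of logging to the provisioned MLflow instance.
    # Replace the placeholder URL with the tracking URL returned by
    # `matcha get experiment-tracker`.
    import mlflow

    mlflow.set_tracking_uri("https://some-url")
    mlflow.set_experiment("my-first-matcha-experiment")  # illustrative name

    with mlflow.start_run():
        # Log whatever hyperparameters and metrics your training produces
        mlflow.log_param("learning_rate", 0.01)
        mlflow.log_metric("accuracy", 0.93)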

Give it a try

The best place to start using Matcha is to follow our guide to deploying your first model. In it, we walk you through provisioning your first machine learning infrastructure on Azure, then using that infrastructure to train and deploy a model from our examples repository.

For a deeper dive into using and contributing to Matcha, see the Matcha documentation.

What’s next

Matcha is currently in alpha release. We’re really grateful to our friends at ZenML and Seldon for their support and collaboration in this release, and now we’re opening it up to the world for feedback. If you encounter a bug or find a missing feature, please let us know by raising an issue on GitHub.

We’ve put a lot of thought into what our users (data scientists, ML engineers, and so on) need from their infrastructure, and we came up with five key pieces of functionality that are absolute musts:

  • A place to track, version, and manage datasets.
  • A place to track experiments and model assets.
  • Scalable compute for running training workloads, with the option to use GPUs.
  • Somewhere to deploy and serve models in a way that scales with your application needs.
  • The ability to monitor models for things like drift and bias.

Of those, we currently support experiment tracking, training, and deployment, with plans to add data versioning and monitoring later. Additionally, we’re looking to add a Python API as an alternative to the command-line tool, allowing Matcha to be integrated directly into ML workflows.

You can view our public roadmap here.

Over the coming weeks we’ll be making a number of releases, adding features and making improvements. At the same time, we’ll publish further content explaining how Matcha works and how to get the most out of it.
