Case study

Use case

Growing better MLOps: faster onboarding through unified experiment tracking

Automating experiment tracking to reduce onboarding times from weeks to days

Company

Fotenix helps farmers make smarter, more profitable decisions through computer vision – predicting yield, detecting disease, and optimising crop health from multi-spectral images.

https://www.fotenix.tech/
Headquarters
Salford, Greater Manchester
Industry
Agri-tech

Growing fruit and veg isn’t the first place that comes to mind for cutting-edge MLOps, but that’s exactly where our work with Fotenix took us...

The Challenge

Modern agriculture is a complex field. Every farm is a living system, and the smallest choices can ripple through a season, nudging crops toward success or trouble. Farmers have to decide when and where to apply chemicals, understand the yield potential and traits of what is growing, and spot stress or disease early enough to act.

To support those decisions, Fotenix needs to adapt its models to each new farm and crop quickly, which makes fast, trustworthy experimentation essential.

Their proprietary image-processing and machine learning pipeline was already producing results in production, but because every farm and crop looked slightly different, Fotenix’s models had to be re-tuned each time. Every client onboarding would include many tuning iterations. Every iteration meant running the full pipeline, manually collecting metrics from multiple sources, experimenting separately in notebooks, and repeating the process. It was slow, hard to track and reproduce, and difficult to see which changes actually improved results.

Fotenix’s engineers needed a faster, clearer way to understand how different parameters affected pipeline outcomes, without trawling through databases and notebooks. They also wanted to adopt wider MLOps practices to help them scale.

With experiment tracking in place, each tuning cycle becomes reproducible and easy to compare. Teams can see what changed, share what they learn, and avoid repeating work when a new farm comes onboard.

However, Fotenix’s use case was unique.

For a single model, MLflow would usually be enough for experiment tracking. Fotenix, however, needed something more flexible, because their work spans several stages of a complex pipeline. Their setup called for side-by-side comparisons of artefacts from different points in the process, each produced under different configurations: image masks, segmentation outputs, and the final ML classification all had to be lined up to see which changes genuinely improved the pipeline and which made little difference.

Our Solution

For Fotenix, we worked around their highly bespoke pipeline, taking a “brown-field development” approach.

Our work focused on three key areas:

  • Seamless integration: Plugging into the existing codebase with minimal disruption to current workflows.
  • Extensive coverage: From common ML performance metrics and hyperparameters to domain-specific visuals such as segmentation masks and task-specific graphs.
  • Extensible foundation: Enabling Fotenix’s engineers to build on the framework as their pipeline evolved.

After a few iterations, we reached a setup that reliably surfaces the information they need to make decisions.

The Results

With unified experiment tracking in place, Fotenix’s data scientists can now easily compare experiments, visualise outputs, and understand how tuning affects model performance — all in one place.

New client projects can now be onboarded in days instead of weeks, with less manual work and higher confidence in the results.

The new framework also laid the groundwork for future improvements: reproducibility, auditability, and better collaboration between engineers.

We also found that MLflow’s capabilities go well beyond what’s outlined in the basic tutorials; it can, in fact, handle very complex workflows.

How We Built It

Leaning into our Fuzzy Labs forward-deployed engineering principles, we ran regular pair-programming sessions with Fotenix’s engineers and data scientists right from the start, refining integrations and ensuring the tooling solved their real challenges.

Collaborating this way meant we could understand their current and intended workflows and see where the gaps were; after all, they are the ones who will use the solution. It also allowed us to share our MLOps knowledge and help Fotenix adopt its principles more quickly. Shaping the system together built a mutual understanding of what we were trying to achieve.

Technology-wise, we built the solution using the MLflow library, chosen for its flexibility and strong community support, which in turn highlights the power of open-source solutions.

Why It Matters

For AI teams in production, the ability to run fast, transparent experiments is critical.

By automating experiment tracking and centralising visibility, Fotenix can now iterate quicker, deploy more confidently, and focus their energy on advancing agricultural intelligence — not on manual tracking. What was once slow and hard to reproduce has become a reliable, scalable system that turns research pipelines into real business value.

They’re now free to focus on what they do best: transforming images of plants into decisions that shape harvests. And behind the scenes, they’ve got an experiment-tracking backbone that can now grow with them, season after season.