Growing fruit and veg isn’t the first thing people associate with MLOps. But that’s where our work with Fotenix took us.
Fotenix helps growers monitor crops using imaging and machine learning. They specialise in indoor farming and polytunnel environments - picture long rows of strawberries under plastic - with cameras that take daily “crop walks”. Those images are used to spot pest, disease, and quality issues early, so harvest time has fewer nasty surprises.
The Challenge
Fotenix’s image-processing pipeline was already in production. The problem was variability. Every grower has a different polytunnel setup, so models needed retuning for each new customer.
Their engineers needed a faster way to understand how parameter changes affected results. Standard experiment tracking is good at logging parameters and metrics. It’s less helpful when the “result” you care about is a chain of intermediate visual outputs.
Fotenix’s pipeline spans multiple stages of a computer vision workflow. To judge whether a change genuinely helped for a specific site, the team needed to compare intermediate artefacts across runs, side by side, and trace what they were looking at back to the exact settings that produced it.
Our Solution
We took a brownfield approach, integrating with what was already running rather than forcing a redesign.
We focused on three areas:
- Seamless integration: plugging into the existing codebase with minimal disruption.
- Wide coverage: tracking metrics and hyperparameters alongside domain-specific artefacts such as segmentation masks and task-specific graphs.
- An extensible base: making it easy for Fotenix to extend the framework as the pipeline evolves.
The result is a setup that makes it easy to compare runs and trace outcomes back to the parameters that produced them, giving engineers the information they need to tune with confidence.
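For illustration, here is a minimal sketch of what that kind of tracking can look like for a single pipeline stage, using MLflow (the tool we describe below). The stage, parameters, metric names, and artefact paths are placeholders, not Fotenix's actual code.

```python
import mlflow
import numpy as np

# Hypothetical vision-pipeline stage, for illustration only.
def segment_crop(image: np.ndarray, threshold: float) -> np.ndarray:
    """Return a binary mask of pixels above a brightness threshold."""
    return (image.mean(axis=-1) > threshold).astype(np.uint8) * 255

# Assumed convention: one experiment per customer site.
mlflow.set_experiment("polytunnel-site-a")

with mlflow.start_run(run_name="tune-segmentation-threshold"):
    params = {"segmentation_threshold": 0.42, "blur_kernel": 5}
    mlflow.log_params(params)

    image = np.random.rand(256, 256, 3)  # stand-in for a crop-walk frame
    mask = segment_crop(image, params["segmentation_threshold"])

    # Log the intermediate visual artefact next to the numbers, so the
    # mask can later be inspected alongside the settings that produced it.
    mlflow.log_image(mask, artifact_file="stages/segmentation/mask.png")
    mlflow.log_metric("mask_coverage", float(mask.mean() / 255))
```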
The Results
With tailored experiment tracking in place, Fotenix’s team can review key outputs in one place and understand how tuning choices affect behaviour across the pipeline, without manually gathering evidence from multiple sources.
New customer projects can now be onboarded in days rather than weeks. Less time is spent recreating past work, and more time is spent making informed decisions about what to change next.
The shared visibility also makes experiments easier to reproduce and simplifies how results are reviewed across engineering and data science, reducing friction as the team iterates.
How We Built It
We worked closely with Fotenix’s engineers and data scientists, pairing regularly to understand how results were actually reviewed and where decisions slowed down.
The solution was built on MLflow, chosen for its flexibility and ecosystem. By modelling intermediate visual artefacts alongside numerical measurements, we were able to adapt a familiar tool to fit a more complex, multi-stage pipeline.
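On the review side, runs logged this way can be queried and their artefacts pulled back for side-by-side comparison. A rough sketch, assuming the logging convention above (the experiment name, metric, and artefact path are illustrative):

```python
import mlflow

# Fetch the two most recent runs for a site that produced a usable mask.
runs = mlflow.search_runs(
    experiment_names=["polytunnel-site-a"],
    filter_string="metrics.mask_coverage > 0",
    order_by=["start_time DESC"],
    max_results=2,
)

# Download the same intermediate artefact from each run so the masks
# can be viewed next to each other, with parameters available per run.
for run_id in runs["run_id"]:
    local_path = mlflow.artifacts.download_artifacts(
        run_id=run_id, artifact_path="stages/segmentation/mask.png"
    )
    print(run_id, "->", local_path)
```

Keeping the artefact path stable across runs is what makes this comparison cheap: the same query works whichever parameters were being tuned.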
Why It Matters
When you’re running ML models in the real world, you can’t improve what you can’t measure. Fotenix can now iterate more quickly and deploy with more confidence: what was once slow and hard to reproduce is now repeatable and dependable.
That frees the team to focus on what they do best: turning images of plants into decisions that shape harvests. Behind the scenes, an experiment-tracking backbone now supports that work and can grow with them, season after season.