
Netflix Introduces Configurable Metaflow for Scalable ML Workflows

Netflix releases Metaflow 2.13 with Config objects, enabling teams to manage thousands of ML workflows through external configuration files instead of hardcoded values.


Decoded

Published Dec 23, 2025



Image by Outerbounds

Configuration Without Code Changes

Netflix has released Metaflow 2.13 with a new Config object that lets machine learning teams configure workflows through external files rather than by modifying code. The feature addresses a recurring pain point for teams managing thousands of unique ML flows across diverse use cases, from content recommendation systems to subtitle ranking algorithms. Configs are resolved and persisted at deployment time, so teams can adjust resource requirements, scheduling parameters, and application settings without touching the codebase.

The new system complements existing Metaflow artifacts and parameters by introducing a third timing model: artifacts persist at task completion, parameters resolve at run start, and configs lock in when flows are deployed to production. This deployment-time resolution makes configs particularly powerful for configuring decorators like @resources and @schedule, which earlier approaches could not easily reach. Teams previously relied on workarounds such as JSON-typed parameters or custom parsers, often at significant implementation cost.
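The deployment-time timing model can be illustrated with a stdlib-only Python sketch. This is not the Metaflow API itself (the `deploy` function and config keys below are hypothetical): the point is that the config file is read and frozen once, at "deployment", so later edits to the file do not affect what the deployed flow sees.

```python
import json
import os
import tempfile
from types import MappingProxyType

def deploy(config_path):
    """Resolve the config once, at 'deployment time', and freeze it.

    Mimics the timing model described above: artifacts persist at task
    completion, parameters resolve at run start, but configs lock in
    when the flow is deployed.
    """
    with open(config_path) as f:
        snapshot = json.load(f)
    # Freeze the snapshot so nothing can mutate the deployed config.
    return MappingProxyType(snapshot)

# An external config file: resource and scheduling settings live
# outside the flow code, analogous to feeding @resources / @schedule.
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump({"resources": {"cpu": 4, "memory": 16000},
               "schedule": "0 2 * * *"}, f)

deployed = deploy(path)  # config resolved and persisted here

# Editing the file afterwards does not change the deployed snapshot.
with open(path, "w") as f:
    json.dump({"resources": {"cpu": 32}}, f)

print(deployed["resources"]["cpu"])  # still 4
```

In real Metaflow the freezing and versioning happen automatically; the sketch only shows why deployment-time resolution decouples production flows from subsequent config edits.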

Real-World Impact at Netflix

Netflix's internal Metaboost tool demonstrates the practical value of configurable workflows. The CLI manages ETL workflows, ML pipelines, and data warehouse tables through a unified interface. Using Metaflow configs with a binding system, Netflix's Content ML team can now create model variants by simply swapping configuration files: when a new content metric emerges, practitioners build the first predictive model by changing the target column in a TOML file rather than rewriting pipeline code.

The configuration system supports advanced patterns, including validation with Pydantic, cascading file hierarchies with OmegaConf or Hydra, and dynamic generation from external services. Combined with Metaflow's Runner and Deployer APIs, teams can deploy hundreds of flow variants for large-scale experiments. All configurations are automatically versioned and stored as artifacts alongside data, models, and execution environments, ensuring reproducibility without manual packaging.
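The cascading-override pattern mentioned above can be sketched in a few lines of stdlib Python. This is a minimal stand-in for what OmegaConf or Hydra provide, with hypothetical config keys (Netflix's example uses TOML files; plain dicts are used here to keep the sketch self-contained): a base config plus a small per-variant override, so a new model variant means a new override file, not new pipeline code.

```python
def merge(base, override):
    """Recursively merge override into base: a minimal sketch of the
    cascading-config hierarchies that OmegaConf/Hydra offer."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], val)
        else:
            out[key] = val
    return out

# Base pipeline config plus a per-variant override: targeting a new
# content metric means swapping only the target column.
base = {"model": {"type": "gbdt", "target_column": "watch_time"},
        "resources": {"cpu": 4}}
variant = {"model": {"target_column": "completion_rate"}}

resolved = merge(base, variant)
print(resolved["model"]["target_column"])  # completion_rate
print(resolved["resources"]["cpu"])        # 4, inherited from base
```

The override touches a single key while everything else is inherited, which is what makes "hundreds of flow variants" tractable: each variant is a small diff against a shared base rather than a full copy of the configuration.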

Immediate Availability

The feature is available now through standard installation with pip install -U metaflow. Documentation and executable examples are available through the official Metaflow resources. Netflix credits Outerbounds for collaboration on testing and example development.

Decoded Take


This release signals a maturation of MLOps tooling toward treating configuration as a first-class concern rather than an afterthought. While competitors like Kubeflow and AWS SageMaker require complex YAML configurations or SDK abstractions, Netflix's approach keeps configuration human-readable while maintaining the flexibility to integrate with existing tools like Hydra. The timing is particularly relevant as organizations struggle with ML technical debt, where hardcoded parameters and brittle deployment processes create maintenance nightmares. By separating configuration from code at the infrastructure level, Netflix provides a pattern that addresses what Gartner identifies as a key MLOps challenge: managing model variants and experiment tracking at scale. The real innovation isn't the config file itself but the thoughtful integration with decorator systems and automatic versioning, which removes friction from the experimentation-to-production pipeline.
