Pre-Deployment Testing of MLFlow PyFuncs | Sriharsha Tikkireddy | Sep, 2024


Have you ever found yourself in a situation where the predefined flavors in MLflow just don’t cut it for your custom model deployment needs? Fear not: creating a custom “pyfunc” model in MLflow might be exactly the solution you’re looking for.

Let’s dive into the specifics of crafting a custom model:

    import mlflow.pyfunc
    from mlflow.pyfunc import PythonModelContext

    class CustomModel(mlflow.pyfunc.PythonModel):
        def load_context(self, context: PythonModelContext):
            # raise ValueError("some bug getting artifacts or importing stuff")
            return

        def predict(self, context: PythonModelContext, model_input, params=None):
            return model_input * 2

As you venture into the realm of custom models, there are common pitfalls to watch out for. Let’s explore four areas where things can go wrong.

  1. Dependency issues that cause failed container builds.
  2. Bugs in the load_context method leading to server crashes.
  3. Errors in handling input data types within the predict method.
  4. Missing environment variables crucial for Databricks serving deployments.

We’ll walk through ways to mitigate the first three issues; the fourth typically calls for Databricks-specific troubleshooting.

Let’s kick things off by creating a minimal model and logging it to MLflow.

    import mlflow.pyfunc

    class CustomModel(mlflow.pyfunc.PythonModel):
        def load_context(self, context):
            # raise ValueError("some bug getting artifacts or importing stuff")
            return

        def predict(self, context, model_input, params=None):
            return model_input * 2

    with mlflow.start_run() as run:
        model = CustomModel()
        mlflow.pyfunc.log_model(
            "model",
            python_model=model,
        )

    run_uri = f"runs:/{run.info.run_id}/model"
    run_uri
