Have you ever found yourself in a situation where the predefined flavors in MLflow just don't cut it for your custom model deployment needs? Well, fear not, because creating a custom "pyfunc" model in MLflow might just be the solution you're looking for.
Let's dive into the specifics of crafting a custom model:
import mlflow.pyfunc
from mlflow.pyfunc import PythonModelContext

class CustomModel(mlflow.pyfunc.PythonModel):
    def load_context(self, context: PythonModelContext):
        # raise ValueError("some bug getting artifacts or importing stuff")
        return

    def predict(self, context: PythonModelContext, model_input, params=None):
        return model_input * 2
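A quick note on the split between these two methods: `load_context` runs once when the model is loaded (at serving-container startup, which is where artifact fetching and heavyweight imports belong), while `predict` runs on every request. Keeping that distinction in mind makes the failure modes below much easier to reason about.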
As you venture into the realm of custom models, there are common pitfalls to watch out for. Let’s explore four areas where things can go wrong.
- Dependency issues that cause failed container builds.
- Bugs in the `load_context` method leading to server crashes.
- Errors in handling input data types within the `predict` method.
- Missing environment variables crucial for Databricks serving deployments.
Stay tuned as we work through solutions for the first three issues; the fourth may require some Databricks-specific troubleshooting.
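As a preview of the dependency story, one way to head off failed container builds is to pin requirements explicitly when logging the model. Here's a minimal sketch; the pinned `pandas` version is purely an assumption for illustration, so substitute whatever your `predict` method actually imports:

import mlflow.pyfunc

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        "model",
        python_model=CustomModel(),
        # Pin exactly what predict() needs; this version is illustrative.
        pip_requirements=["pandas==2.1.4"],
    )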
Let's kick things off by creating a throwaway model and logging it:
import mlflow.pyfunc

class CustomModel(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # raise ValueError("some bug getting artifacts or importing stuff")
        return

    def predict(self, context, model_input, params=None):
        return model_input * 2

with mlflow.start_run() as run:
    model = CustomModel()
    mlflow.pyfunc.log_model(
        "model",
        python_model=model,
    )

run_uri = f"runs:/{run.info.run_id}/model"
run_uri
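With the run URI in hand, a quick local sanity check confirms the model round-trips before we go anywhere near a serving endpoint. A minimal sketch, assuming a pandas DataFrame as input:

import pandas as pd
import mlflow.pyfunc

# Load the logged model back through the pyfunc wrapper and call predict().
loaded = mlflow.pyfunc.load_model(run_uri)
print(loaded.predict(pd.DataFrame({"x": [1, 2, 3]})))  # doubles each value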