Lab 3 - Deploy your model

In this lab we are going to deploy the model, wrapped in an API, to an Azure Container Instance and send data to it with Postman.

Deploy to an Azure Container Instance

Download the scoring script

import os
import urllib.request

inference_folder = "./inference"
inference_script_url = ""
inference_script_download_path = os.path.join(inference_folder, "")

# Create the target folder if it does not exist, then download the scoring script
if not os.path.exists(inference_folder):
    os.makedirs(inference_folder)
urllib.request.urlretrieve(inference_script_url, filename=inference_script_download_path)

Create an environment file

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

myenv = CondaDependencies.create(pip_packages=['azureml-defaults', 'torch', 'torchvision', 'pillow==5.4.1'])
with open("inference/myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())
myenv = Environment.from_conda_specification(name="myenv", file_path="inference/myenv.yml")

Create an Inference config

from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(entry_script="inference/", environment=myenv)

Create a deployment config

from azureml.core.webservice import AciWebservice

deploy_config = AciWebservice.deploy_configuration(
    cpu_cores=model.resource_configuration.cpu,
    memory_gb=model.resource_configuration.memory_in_gb,
    description='Simpson Lego Classifier')

Deploy the model to an ACI

from azureml.core.model import Model

aci_service = Model.deploy(ws,
                           name="simpson-lego-classifier",  # pick any unique service name
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=deploy_config,
                           overwrite=True)
aci_service.wait_for_deployment(show_output=True)
print("Scoring endpoint:", aci_service.scoring_uri)

This step can take up to 10 minutes.
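While you wait, you can poll the deployment state (Azure ML exposes it as `aci_service.state`). The helper below is a generic sketch; the `get_state` callable and the timing values are assumptions, not part of the Azure ML SDK:

import time

def wait_until_healthy(get_state, timeout_s=600, interval_s=15):
    """Poll get_state() until it returns 'Healthy', 'Failed', or we time out.

    For an ACI deployment, get_state would typically be: lambda: aci_service.state
    """
    waited = 0
    while True:
        state = get_state()
        if state == "Healthy":
            return True
        if state == "Failed" or waited >= timeout_s:
            return False
        time.sleep(interval_s)
        waited += interval_s

If the service ends up in a failed state, `aci_service.get_logs()` is the first place to look.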

Once the deployment has finished, you can find it back in the portal, listed under your model:

Scoring URL

Test the model in the API

Post an image to the endpoint

The easiest way to test your scoring endpoint is with the code below.

import json

image_uri = ""
result = aci_service.run(json.dumps({"url": image_uri}))
print(result)

Use Postman

  • Get the scoring URI

    print("Scoring endpoint:",aci_service.scoring_uri)
  • Create a new request in Postman

  • Send a raw body with the JSON below

    { "url": ""}
Scoring URL
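The same request Postman sends can also be made from Python with the standard library. The endpoint URL below is a placeholder, not a real address; replace it with the scoring URI printed above:

import json
import urllib.request

scoring_uri = "http://localhost/score"  # placeholder: use aci_service.scoring_uri
body = json.dumps({"url": ""}).encode("utf-8")
request = urllib.request.Request(
    scoring_uri,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Against a live endpoint:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode("utf-8"))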

Try other images
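One way to try several images is to loop over a list of URLs and send each one to the service. The image URLs below are hypothetical examples; substitute links to images of your own:

import json

# Hypothetical example URLs; replace with real image links
test_images = [
    "https://example.com/images/homer.png",
    "https://example.com/images/bart.png",
]

payloads = [json.dumps({"url": uri}) for uri in test_images]
for payload in payloads:
    # result = aci_service.run(payload)  # uncomment against your deployed service
    print(payload)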