# Lab 3 - Deploy your model

In this lab we will deploy the model, wrapped in an API, to an Azure Container Instance and send data to it with Postman.

## 1. Deploy to an Azure Container Instance

### Download the scoring script

```python
import os
import urllib.request

inference_folder = "./inference"

inference_script_url = "https://raw.githubusercontent.com/hnky/DevelopersGuideToAI/master/amls/resources/score.py"
inference_script_download_path = os.path.join(inference_folder, "score.py")

# Create the inference folder if it does not exist yet
os.makedirs(inference_folder, exist_ok=True)

# Download the scoring script into the inference folder
urllib.request.urlretrieve(inference_script_url, filename=inference_script_download_path)
```

### Create an inference environment

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

inference_env = Environment(name="simpsons-inference")

# Add the pip packages the scoring script needs
conda_dep = CondaDependencies()
conda_dep.add_pip_package("azureml-defaults")
conda_dep.add_pip_package("torch")
conda_dep.add_pip_package("torchvision")
conda_dep.add_pip_package("pillow==5.4.1")

inference_env.python.conda_dependencies = conda_dep
```

### Create an inference config

```python
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(
    entry_script="inference/score.py",
    environment=inference_env
)
```

### Create an Azure Container Instance deployment config

```python
from azureml.core.webservice import AciWebservice

# Reuse the resource requirements registered with the model
deploy_config = AciWebservice.deploy_configuration(
    cpu_cores=model.resource_configuration.cpu,
    memory_gb=model.resource_configuration.memory_in_gb,
    description="Simpson Lego Classifier"
)
```

### Deploy the model to an ACI

```python
from azureml.core.model import Model

aci_service = Model.deploy(ws,
                name="simpsons-pt-aci",
                models=[model],
                inference_config=inference_config,
                deployment_config=deploy_config,
                overwrite=True)

aci_service.wait_for_deployment(show_output=True)
print("Scoring endpoint:", aci_service.scoring_uri)
```

> *This step can take up to 10 minutes.*
>
> You can find the deployed endpoint listed under your model in Azure Machine Learning studio: [https://ml.azure.com](https://ml.azure.com/model/list)
>
> *Note: if you don't see it immediately, refresh the tab.*

![Scoring URL](/files/-MF1Ae9rxXCz53zU--BJ)

## 2. Test the model through the API

### Post an image to the endpoint

The easiest way to test your scoring endpoint is with the code below.

```python
import json

image_uri = "https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Bart.jpg"
result = aci_service.run(input_data=json.dumps({"url": image_uri}))
print(result)
```

### Use Postman

* Download [Postman](https://www.postman.com/) to your local machine
* Get the scoring uri

  ```python
  print("Scoring endpoint:", aci_service.scoring_uri)
  ```
* Create a new request in Postman
  * Set the method to POST
  * Paste the scoring URL into the request URL field
* Send a raw body with the JSON below. *Make sure **JSON** is selected in the orange dropdown next to the **raw** radio button.*

  ```json
  { "url": "https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Bart.jpg" }
  ```

![Scoring URL](/files/-MF1Ae9tI0By-KI-k0bf)
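If you prefer to script the request instead of clicking through Postman, the same POST can be sent from Python's standard library. This is a sketch under the assumption that the service accepts a raw JSON body with a `url` field, as shown above; the endpoint placeholder must be replaced with your own scoring URI.

```python
import json
import urllib.request

def build_request(scoring_uri: str, image_url: str) -> urllib.request.Request:
    """Build the same POST request that Postman sends: a raw JSON body."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Replace the placeholder with the URI printed by aci_service.scoring_uri:
# request = build_request("http://<your-aci-endpoint>/score", image_url)
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode("utf-8"))
```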

### Try other images

```
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Krusty.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Bart.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Flanders.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Homer.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Lisa.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/marge.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Milhouse.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/MrBurns.jpg
https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/Wiggum.jpg
```
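To score all of these test images in one go, you can build the JSON payloads in a loop and reuse the `aci_service.run` call from earlier. A minimal sketch (the scoring call itself is commented out because it needs the `aci_service` object from the deployment step):

```python
import json

# The test images listed above
base = "https://raw.githubusercontent.com/hnky/dataset-lego-figures/master/_test/"
test_images = ["Krusty.jpg", "Bart.jpg", "Flanders.jpg", "Homer.jpg", "Lisa.jpg",
               "marge.jpg", "Milhouse.jpg", "MrBurns.jpg", "Wiggum.jpg"]

# One JSON payload per image, in the shape the scoring endpoint expects
payloads = [json.dumps({"url": base + name}) for name in test_images]

# Score each image against the deployed service:
# for payload in payloads:
#     print(aci_service.run(input_data=payload))
```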

**End**

