Historically, developers haven’t taken much responsibility for an application’s data. I mean, they used to at some point in the past, but certainly not over the past five or ten years. The evolution of database technologies, and the efficiency of ORMs (Object Relational Mappers) on top of them, ended up creating a whole generation of developers who didn’t need to understand what happens under the covers.

Personally, I see advantages and disadvantages in developers taking that path, the main advantage being how focused and specialized developers can be on writing good code for the app, avoiding “distractions” with data. I know, I know, there are multiple possible push-backs here, but for the sake of this article’s context, let’s keep that discussion aside for a moment, shall we?

The fact is: interestingly enough, there is a movement underway in the market nowadays that is pushing developers back to data-centric scenarios: Machine Learning algorithms.

As more and more modern applications evolve toward understanding end users’ preferences, with the goal of predicting the next purchase, the next content to be consumed, the next disease to develop, and so forth, Machine Learning algorithms have been deeply infused into applications of all kinds. Of course, those algorithms are always iterating on top of data, which is the main reason developers have been required to get back to it.

So, that is what the series of articles I’m about to write (this is just the first one) is all about: an architect specialized in app modernization (with a career built around application development), with good experience on the data side but very little experience in the Machine Learning space, who was challenged by his customer to help them create the AI-powered solution described below.

The scenario

Through its Health department, Americas University (https://web.americasuniversity.net) wanted to create an online, highly scalable, AI-based, multichannel engine for diabetes prediction.

After a couple of years surveying diabetic people and collecting data, AU’s Health department considers this the right time to put the dataset they were able to build to good use by providing an open diabetes predictor to the community. Obviously, it is not intended to replace people’s doctors. Rather, it should serve as an advisor for triage purposes.

Based on certain information provided by the patient (age, gender, body mass index, blood pressure, serum measurements 1–6, and the current diabetes level, Y), the engine should be able to predict what the diabetes rate will look like 12 months from the date of the inquiry.

Additionally, the engine should be able to provide further recommendations (whatever they might be: medicine, exercises, and so on) to patients projected to run over the acceptable threshold that separates diabetic people from non-diabetic people.

The engine should be published on highly scalable infrastructure and exposed through APIs, so that any external channel can take advantage of it. A bot is the preferred way to consume the engine, and it is where AU’s health team would like to begin.

Proposed architecture and components

Americas University is an Azure customer, so we are going to rely on Microsoft’s cloud technologies to get the work done.

As you might imagine, a solution like this can look like a Lego set, right? I mean, based on the scenario introduced earlier, there are different services that we’ll need to bring together to deliver the required integrated solution. That’s why I’m going to break the whole implementation into three different articles.

Characteristics we can notice upfront:

  • Diabetes prediction has to be built from scratch, as Microsoft Cognitive Services doesn’t offer a specialized API for that purpose.
  • There is a recommendation piece in there, which could be a good fit for the Recommendations API in Azure (we need to validate that).
  • There is, obviously, a dataset involved, which means we’re going to need some storage capability for that purpose.
  • There is also a need for compute, right? Meaning we need a place to run the models we’re going to build.
  • Once we’re satisfied with the results our model returns, we need to publish it somehow as a RESTful API, so that different channels can consume it.
  • AU’s Health department would like a bot to serve as the interface between patients and the predictor in Azure.

Figure 1 introduces the architecture we’re proposing to meet the solution’s requirements.

Figure 1. Proposed architecture

A brief explanation of the architecture above:

  • In conjunction with Americas University’s Health team, we decided that the bot we’re going to build to support the solution will be available both internally (AU’s staff) and to the external world. Internal folks will interact with the application through Microsoft Teams. The community will get predictions and recommendations via AU’s website, which will ultimately host the bot.
  • Americas University subscribes to Power Platform, which gives them access to the Power Virtual Agents service. That’s undoubtedly the easiest and quickest way to build bots on the Azure platform, so we’re going to take advantage of it.
  • To build out the recommendations layer, we’re going to rely on the Recommendations API and its built-in deployment in Azure. You can see more about it here.
  • To host the Diabetes Predictor API in a publicly accessible and highly scalable way, we’re going to leverage Azure Kubernetes Service (AKS).
  • Last but not least, we have Azure Machine Learning Studio (or simply ML Studio). That’s the managed service we are going to use to author our model. We’re also going to hook up both storage and compute resources (including the production AKS cluster) to this environment to support the solution. This is what we’re going to build throughout the rest of this article.

Creating a new Azure ML Studio instance

Let’s start by building the core of the system: the Diabetes Predictor model itself. As I have already stated, we’re going to build on top of Azure Machine Learning, a Platform as a Service offering for AI scenarios in Azure that is ideal for cases where models need to be built from scratch (which is definitely our scenario here).

As a first step, let’s create a new instance of the service through the Azure Portal (https://portal.azure.com). Once there, follow the steps listed below to get your environment up and running.

  1. Sign in to the Azure portal by using the credentials for your Azure subscription.
  2. In the upper-left corner of the Azure portal, select the three bars, then “+ Create a resource”.
  3. Use the search bar to find Machine Learning. Once you find it, select “Machine Learning”.
  4. In the Machine Learning pane, select “Create” to begin.
  5. Provide the following information to configure your new workspace:
    1. Workspace name: Enter a unique name that identifies your workspace.
    2. Subscription: Select the Azure subscription that you want to use.
    3. Resource group: Use an existing resource group in your subscription, or enter a name to create a new resource group. A resource group holds related resources for an Azure solution.
    4. Location: Select the location closest to your users and the data resources to create your workspace.
  6. After you’re finished configuring the workspace, select “Review + Create”.
  7. Select “Create” to create the workspace.
  8. To view the new workspace, select “Go to resource”.

At the end of that process, you should see the blade showing the information related to the Azure ML instance, as you can see in Figure 2. Once there, click “Launch Studio” to get into ML Studio.

Figure 2. Azure Machine Learning Studio instance successfully created

Setting up the environment and pipeline

At this point, you should have successfully gotten into ML Studio and should be seeing a screen pretty similar to the one depicted in Figure 3.

Figure 3. Azure Machine Learning Studio welcome screen

There are different concepts here that are worth exploring. For the sake of conciseness, I will defer to this set of docs for the explanation of all the concepts related to Azure ML Studio and will only focus on the creation of the assets we need to get our Diabetes Predictor up and running.

The very first thing we need operational is our dataset; otherwise, we won’t be able to do anything. Azure ML requires us to register any dataset we’re going to work with in the context of the workspace. So, let’s tackle it.

Head over to the “Datasets” tab, under the “Assets” section in the left-hand menu. Once there, click “+ Create dataset”. A context menu opens up; select the option “From Open Datasets”. This fires off a screen listing dozens of publicly available datasets. “Sample: Diabetes” should be one of the featured ones. Just pick it and click “Next”.

Figure 4. Setting up our new dataset

Next, we have to give our dataset a name. In my case, I’m calling it “au-ds-diabetes”. Then, all you have to do is click the “Create” button. It should take only a couple of seconds for ML Studio to get the dataset properly registered. At the end, you should see your newly registered dataset listed, as Figure 5 showcases.

Figure 5. Diabetes dataset properly registered

If you click on the dataset’s name, you will see details related to it, including the data’s distribution, suggestions on how to consume the dataset via Jupyter notebooks, and so on. I always recommend taking a look at those to familiarize yourself with the data you’re working with. Figure 6 depicts one of those possibilities.

Figure 6. Exploring the dataset

Next, we need to spin up a compute resource. That’s the one we’re going to use to run our pipeline and train our new model later on. You can choose among creating a single virtual machine, a cluster of virtual machines, a Kubernetes cluster, or even bringing your own existing compute resources. To begin with, as I said, I will create a new virtual machine directly from the ML Studio interface.

Again, head back to the left-hand menu and select the “Compute” option. Once there, select the “Compute instances” tab in the top menu, and then click “+ New”. That brings up the new instance creation screen. Name it properly, choose between regular CPU or GPU-enabled, pick the size that best matches your needs, and then click “Next: Advanced Settings”. Figure 7 displays my configuration.

Figure 7. Basic configuration for the compute instance

Next, we can hook up some additional configuration, like defining a schedule for turning the instance on and off, enabling SSH access, connecting it to an existing virtual network, and more. I won’t do anything on that screen and will just hit “Create”. Please allow a couple of minutes for the machine to deploy and start. If everything went well, you should see the newly created instance listed under “Compute” > “Compute instances”.

Figure 8. Virtual machine up and running

Now we’re ready to start authoring our pipeline and ML model. Let’s head over to the Designer feature in Azure ML (the “Designer” option in the left-hand menu) to get it done. We could also leverage Jupyter notebooks for this. Instead, I will take the UI-based experience to perform the job moving forward.

This (Figure 9) is what you should be seeing at this point if you headed to the right place in the platform.

Figure 9. Pipeline’s design section

As you can see, what the Designer feature does is basically give you the ability to build pipelines in a visual way. At this level, you can understand pipelines as a series of steps that are put together to produce, as a result, something bigger. In our case, this “something bigger” will be the diabetes level prediction.
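To make that idea concrete, here is a minimal sketch in plain Python of what a pipeline boils down to: each step takes the previous step’s output as its input. The step names and sample values are hypothetical, not Azure ML code.

```python
# A pipeline is an ordered list of steps; each step's output feeds
# the next step's input (step names here are hypothetical).
def select_columns(rows):
    # keep only the fields we care about
    return [{k: r[k] for k in ("AGE", "BMI", "Y")} for r in rows]

def round_bmi(rows):
    # a trivial data-manipulation step
    return [{**r, "BMI": round(r["BMI"], 1)} for r in rows]

def run_pipeline(rows, steps):
    for step in steps:
        rows = step(rows)
    return rows

data = [{"AGE": 59, "SEX": 2, "BMI": 32.07, "Y": 151}]
result = run_pipeline(data, [select_columns, round_bmi])
print(result)  # [{'AGE': 59, 'BMI': 32.1, 'Y': 151}]
```

The Designer gives you exactly this, minus the code: boxes are the steps and the connecting lines define the order.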

Because diabetes prediction is something unique, we’re going to need to build it from scratch. So, I invite you to click “+ Easy-to-use prebuilt modules”. That brings us to an empty pipeline space, as you can see in Figure 10. That is where we are going to build the sequence of steps that will end with a diabetes prediction.

Figure 10. Empty pipeline

These are the four main areas highlighted by Figure 10:

  • Pipeline title: self-explanatory. This is where we give the pipeline a name. To edit it, just click the pre-filled text.
  • Pre-built actions: a menu through which Microsoft makes available dozens of pre-made actions for you to apply to your pipeline. Data manipulation, ML algorithms, loops: all that is already there, ready for you to use. To use one, just drag and drop it onto the canvas.
  • Canvas: the place where we literally connect the dots and create our pipeline’s sequence of steps.
  • Pipeline settings: contextual settings. Depending on what you’re manipulating at the time, that box adjusts to show the settings of the active component. If nothing is selected on the canvas, the pipeline’s overall settings are shown.

Now that we’re familiar with the environment itself, it is time to go over some basic but critical configuration.

  1. First, we’re going to set up a proper name. In my case, I’m calling it “au-diabetes-predictor-pipeline”.
  2. Then, I’m going to select the compute instance we created earlier so that my pipeline has a place to run. To do so, just select it in the dropdown under the label “Select Azure ML compute instance”.
  3. Next, I will set a proper “Draft name” under settings, which makes it easy to identify our pipeline later on. I’m giving it the same name as the pipeline, “au-diabetes-predictor-pipeline”.

At the end of this process, my pipeline looks like the one presented in Figure 11.

Figure 11. Pipeline properly configured

Authoring Diabetes pipeline

Thinking about the logical set of steps that need to be in place for us to predict diabetes rates for a given person, the very first step is understanding the dataset. When we do so, a couple of aspects come to mind:

  • Looking into the dataset’s structure, we notice that the current diabetes rate sits in the “Y” column. So we know for a fact that this is the value we want to predict.
  • Analyzing the dataset also tells us which information we need to collect to make the prediction: Age, Gender, Body Mass Index (BMI), Blood Pressure (BP), Serum 1–6, and the current measure of diabetes (Y).
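For reference, a single record in the dataset can be pictured as the structure below. The column names follow the list above; the values are purely illustrative, not taken from any real patient.

```python
# One figurative patient record from the diabetes dataset.
# Column names mirror the dataset; values are illustrative only.
record = {
    "AGE": 59,       # years
    "SEX": 2,        # encoded gender
    "BMI": 32.1,     # body mass index
    "BP": 101.0,     # average blood pressure
    "S1": 157, "S2": 93.2, "S3": 38.0,  # serum measurements
    "S4": 4.0, "S5": 4.86, "S6": 87,    # serum measurements
    "Y": 151,        # current diabetes measure (the value to predict)
}
print(len(record))  # 11
```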

With that in mind, we can start building our prediction pipeline.

The first thing we need to do, obviously, is get our dataset in. That is the starting point for everything, so that’s what I’m going to do first. For this, open the actions menu on the left side of the canvas and search for “au-ds-diabetes”. Because we registered that dataset beforehand, it shows up as a selectable option. Pick the box and drag and drop it onto the canvas. You should be seeing something similar to what Figure 12 shows.

Figure 12. Dragging and dropping “au-ds-diabetes” dataset into pipeline’s canvas

Data is now available to us. Next, we need to select the columns we want to work with down the road. That’s why we need to add a new action from the menu on the left called “Select Columns in Dataset”. The same way we did for the dataset, search for that element. Once it pops up, drag and drop it onto the canvas, under the dataset box. Then, drag the small circle underneath the dataset box all the way down to the small circle on top of the “Select Columns” box. This creates a line connecting the two actions.

Once you’ve done that, click the “Select Columns” box. Notice that the settings box on the right adjusts, enabling you to select which columns you want to work with. Click the “Edit column” link. A new modal pops up. In the dropdown shown next to the “Include” setting, select the option “All columns”, as that is what applies in our case. Then, click “Save”.

Your configuration should look like what’s presented in Figure 13.

Figure 13. Selecting columns

At this point, our pipeline already allows us to look into the data, right? Think about it: we have our diabetes dataset set up and have specified the columns that will be available to any step coming next. Now, what happens if some of the information in the dataset is missing? Obviously, it will influence the training we’re going to do later, damaging the prediction process itself.

That’s why we need a step that cleans up missing data in our dataset. Based on some parameters we configure upfront, this action goes row by row and column by column, removing every entry that lacks expected data.
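In plain Python, the row-removal behavior described above can be sketched roughly like this (the actual Azure ML action is configurable and supports other strategies besides removal; this sketch shows removal only):

```python
def clean_missing_rows(rows):
    """Drop every row that has at least one missing (None) value."""
    return [row for row in rows if all(v is not None for v in row.values())]

rows = [
    {"BMI": 32.1, "BP": 101.0, "Y": 151},
    {"BMI": None, "BP": 87.0,  "Y": 75},   # incomplete row: removed
]
cleaned = clean_missing_rows(rows)
print(len(cleaned))  # 1
```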

To add the action that does that, look up “Clean Missing Data” in the actions menu on the left. As with the previous steps, just drag and drop it onto the canvas, aligned with the previous action. Once you do, connect the “Select Columns in Dataset” box to the “Clean Missing Data” one. Then, click the “Clean Missing Data” box. Notice that the settings window on the right updates its content. Your configuration should look like the one presented in Figure 14.

Figure 14. Cleaning up missing data

I won’t get into the specifics of each configuration option provided by “Clean Missing Data”, to keep it simple. If you’re interested in knowing more, you should give this document a read.

Now that we have stable and trustworthy data, we can start building and training our predictor model. To that end, we’re going to bring two new actions onto the canvas, side by side: “Linear Regression” and “Split Data”. After searching for those and dragging, dropping, and arranging them side by side on the canvas, you should be seeing something pretty similar to what Figure 15 presents.

Figure 15. Adding “Linear Regression” and “Split Data” actions

“Linear Regression” is the ML algorithm we’re going to train to make the prediction. Linear regression attempts to establish a linear relationship between one or more independent variables and a numeric outcome, or dependent variable, which in our case is the diabetes rate represented by the “Y” column in our dataset.
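For intuition, here is what fitting a linear regression means in the simplest one-variable case, sketched in plain Python with ordinary least squares. The Designer action handles the multi-variable version for us; the data below is figurative.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Figurative data: BMI vs. diabetes measure (Y)
bmi = [20.0, 25.0, 30.0, 35.0]
y = [90.0, 140.0, 190.0, 240.0]
slope, intercept = fit_line(bmi, y)
print(slope, intercept)  # 10.0 -110.0
```

Training, in essence, is finding the slope and intercept (one coefficient per column, in the real model) that best fit the 70% chunk of data we hand to “Train Model”.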

“Split Data” is going to be used to divide our diabetes dataset into two distinct chunks. We do this for the sake of comparison, meaning that we will use one of the new sets for training and the second one to evaluate the results generated by the training process.

Selecting the “Split Data” action brings up the configuration window on the right. In that window, for the option “Fraction of rows in the first output dataset”, we’re going to set the value “0.7”, indicating that we’re going to use 70% of the rows for training purposes and the remaining 30% for results comparison and model evaluation. The “Split Data” configuration box should look like Figure 16.

Figure 16. Configuring Split Data action
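The 70/30 split itself is conceptually simple, something like the sketch below (the Azure ML action can also shuffle rows before splitting, which this sketch ignores):

```python
def split_data(rows, fraction=0.7):
    """Split rows into (training, evaluation) chunks by fraction."""
    cut = int(len(rows) * fraction)
    return rows[:cut], rows[cut:]

rows = list(range(10))          # stand-in for dataset rows
train, test = split_data(rows)  # 7 rows for training, 3 for evaluation
print(len(train), len(test))    # 7 3
```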

Next, connect the cleaned dataset output of the “Clean Missing Data” action to “Split Data”. Your connection should look like the one presented in Figure 17.

Figure 17. Connecting “Clean Missing Data” to “Split Data”

Now that we have both the data and the algorithm for training, we need to actually perform the training. That’s why we need to drag and drop the “Train Model” action onto the canvas, centered right below “Linear Regression” and “Split Data”. Once you do, connect the “Linear Regression” output to the first input on “Train Model”. Then, connect the first output under “Split Data” to the second input on “Train Model”. At the end, you should be seeing something pretty similar to what is depicted in Figure 18.

Figure 18. Connecting “Linear Regression” and “Split Data” to “Train Model”

Last but not least, we need to tell the trainer which column is going to be used as the label for the training (“Y” in our case). To do so, select the “Train Model” action. In the configuration window on the right, click the “Edit column” link. It fires off an additional window that allows us to select the column in the dataset to be used as the label. Select “Y” and then “Save”. Figure 19 shows that selection.

Figure 19. Selecting column for the training

Once the training is complete, we need a way to score the results so that we can compare them later on and see if the prediction results are acceptable for our purposes. That’s why we’re going to drag and drop a new action onto the canvas: “Score Model”. After it shows up on the canvas, just connect the output of “Train Model” to the first input of “Score Model”. Then, connect the second output of “Split Data” to the second input of “Score Model”. At the end, your canvas should look like what’s presented in Figure 20.

“Score Model” will then run the trained model against the remaining 30% of the original dataset and attach a score (the predicted value) to each row. Those scores can be used later on for the model’s evaluation.

Figure 20. Adding “Score Model” and connecting both “Split Data” and “Train Model” to it

Finally, we need a way to measure how efficient our model is. Considering that we have scores in place for our training, we then need to add an “Evaluate Model” action, which is capable of taking the existing scores and generating an evaluation of the results. After dragging and dropping the action onto the canvas, connect the output of “Score Model” to the input of “Evaluate Model”. Your final canvas should look like the one presented in Figure 21.

Figure 21. Complete pipeline

Running the pipeline

We are finally in good shape to run our pipeline and see what comes out of it. To do so, we first need to submit the pipeline for execution by clicking the “Submit” button, placed at the top-right corner of the screen. This brings up a new modal window allowing you to select either a new experiment or an existing one. Select “Create new” and give it a name. In my case, I named it “au-diabetes-predictor-experiment”. Figure 22 depicts that process.

Figure 22. Selecting the experiment

That action queues up our pipeline to be executed in the first available spot on the compute resource previously defined (in our case, a virtual machine called “au-vm-ml-ds3v2”). Once it gets precedence and starts running, you’ll see the canvas reflect it. Figure 23 shows our pipeline in execution.

Figure 23. Pipeline running

The conclusion of the run opens up lots of new possibilities for us to go through. First, we can verify the results generated by “Evaluate Model”. To do so, just right-click the output data point of that action and select “Preview data”. It loads up the results in a new modal window, as depicted in Figure 24.

The “Coefficient_of_Determination” is the really important piece of information here. It tells us how trustworthy the predictions are. Usually, a value over 0.5 (50%) is considered reliable, so we’re good to go with ~0.57 (~57%).

Figure 24. Evaluation results
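For reference, the coefficient of determination (R²) compares the model’s squared prediction errors to the variance of the actual values: R² = 1 − SS_res / SS_tot. A rough sketch with figurative numbers:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)          # total variance
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # errors
    return 1 - ss_res / ss_tot

actual = [100.0, 150.0, 200.0, 250.0]
predicted = [110.0, 140.0, 210.0, 240.0]
print(r_squared(actual, predicted))  # ~0.968
```

A value of 1.0 would mean perfect predictions; 0 would mean the model does no better than always predicting the mean.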

Next, as you might have noticed, we can see the details of the pipeline’s run by clicking the “View run overview” link, placed at the top right of the screen. By doing so, you will see a new screen with all sorts of details related to the pipeline run, including logs, which can become really important when it comes to troubleshooting problematic scenarios. Figure 25 shows that screen.

Figure 25. Verifying details of the run

Creating an inference environment for test purposes

The environment is now set up. We also have a run that went through smoothly, with results the AU health team finds acceptable. Now, everything points us toward some real tests, with actual data. For that, we need to create what Azure Machine Learning Studio calls an “inference pipeline”.

To do so, click a new button that appeared after our run finished successfully: “Create inference pipeline”. It shows up right beside the “Submit” button, at the top-right corner of the screen. Once you do, the tool asks you to choose between “Real-time inference pipeline” and “Batch inference pipeline”. Because we want real-time results for fresh data, we will pick the first one.

After a couple of seconds, you will notice that Azure ML Studio created a new sort of canvas. It leverages the model we just authored and adds two new actions to it: one at the beginning, side by side with the dataset, called “Web Service Input”, and another at the bottom, side by side with “Evaluate Model”, called “Web Service Output”. Figure 26 highlights that new canvas.

Figure 26. Real-time inference pipeline

The “Web Service Input” action automatically inserted by Azure ML into the inference environment adds a new capability to our model: the ingestion of external data inputs to be processed by the pipeline we just created.

On the other hand, the “Web Service Output” action refers to the ability to return data to external callers based on the inputs received. That is, we’re doing nothing but adding interaction capabilities to this pipeline.

The very next thing to do is click “Submit”. Once you do, Azure ML asks you to select either an existing experiment or create a new one. Because we already created one, we’ll just select the existing one. Then, click “Submit” again to fire off the run. After a couple of seconds, you will see the pipeline running again (Figure 27).

Figure 27. Running the inference pipeline

After the successful run (which means our inference pipeline is ready to receive and process external data), we are ready to deploy it and go through some real tests. That’s why the top-right “Deploy” button (Figure 27) is strategically positioned up there. This is what we’re going to do next.

Go ahead and hit “Deploy”. Once you do, Azure ML loads up a modal window asking for a couple of details, as you can see in Figure 28.

Figure 28. Deployment configuration for our inference test environment

First, you’re asked to choose between deploying a new real-time endpoint and replacing an existing one. Because we don’t have any pre-existing endpoint, we will opt for creating a new endpoint. You can understand an endpoint as the entry point for the API that is going to be deployed.

Then, we’re asked to give this endpoint a name. I’m suggestively naming it “diabetes-predictor-test-endpoint”.

Next, we need to select the compute type. Here we can choose between “Azure Kubernetes Service (AKS)” and “Azure Container Instance (ACI)”. That selection indicates where the inference API is going to run. Usually, AKS is the better choice for companies looking for highly scalable, orchestrated clusters of containers, great for production workloads. ACI, in turn, is usually the better choice for small dev/test environments (which is our first use case here), which is why we’re going to select it.

Under the “Advanced” section we will make no changes and just hit “Deploy”. Figure 29 showcases the configuration I’ve gone through.

Figure 29. Setting up the inference environment

Immediately after you hit the “Deploy” button, Azure ML’s UI notifies you (Figure 30) about the progress of the deployment. You can also follow along through the “Endpoints” tab, in the main menu on the left.

Figure 30. Notification of deployment in progress

Behind the scenes, what Azure ML Studio is doing on your behalf is:

  1. Generating a new container image for the model we authored.
  2. Publishing that image to an instance of Azure Container Registry previously tied to this ML workspace.
  3. Provisioning a new Azure Container Instance in Azure.
  4. Pulling the image from the container registry.
  5. Making it ready for public consumption (enabling ports, validating probes, and so on).

After a couple of minutes, you should be able to see your endpoint successfully created by heading to the “Endpoints” section. Figure 31 shows the endpoint’s details and states that it is ready to receive HTTP calls. It even gives you the REST endpoint beforehand.

Figure 31. The inference endpoint ready to go

Notice that at the very top of the section, we have four options to browse through: Details, Test, Consume, and Deployment Logs. They’re self-explanatory, so I won’t waste time describing them.

Now that we’re ready to do some testing, let’s click the “Test” option. Once you do, you should be seeing a screen pretty similar to what is presented in Figure 32.

Figure 32. The testing section

As you can see, Azure ML Studio created a pretty neat form for you to populate with real data and send over to our Diabetes Predictor. It even pre-populates it with some data coming from the existing dataset.

To run your test, just replace the existing values with your own and hit the “Test” button. The result of the prediction will be shown on the right side, under a new variable automatically created and populated by our engine, called “Scored Labels”, as you can see in Figure 33.

Also, notice the usage of the name “WebServiceInput0”. If you remember, one of the actions automatically added to our inference pipeline was exactly a “Web Service Input”. This confirms that the component was added there to provide communication with the external world. It basically turns the engine into a RESTful API.

Figure 33. Showing the results of the prediction

Publishing the production version of Diabetes Predictor

Everything looks good. AU’s health department researchers are satisfied with the results provided by the engine we just created, so now they want to make it publicly available, in production.

The good news is that the process of deploying a production version of the inference API is pretty much the same as the one we followed to publish the test version. Rather than selecting Azure Container Instance as the compute resource, we’re going to use AKS, as we will need to be highly scalable soon. Also, we’re going to create a new inference endpoint, as this new one is the one we’re going to publicly expose for consumption.

From now on, I’ll assume you already have an AKS cluster in place in your subscription. If that is not the case, follow the steps described in this tutorial to deploy yours.

Head back to the “Designer” section in Azure ML Studio. Once there, click the inference pipeline “au-diabetes-predictor-pipeline-real time inference”. Again, hit “Deploy”, select “Deploy a new real-time endpoint”, select “Azure Kubernetes Service” as the compute type, and then select the AKS cluster you previously created (in my case, it is called “au-aks-dev-a2mv2”). At the end, hit “Deploy”.

Figure 34. Configuring the production environment

You know what happens next. After a couple of minutes, you should be able to see your production endpoint available under the “Endpoints” section in the Studio.

Figure 35. The production endpoint created

Once you get into that environment, you can see the “Test” tab again. You can proceed the same way as before to try out the new environment from the inside.

To showcase another possibility, though, I’m going to consume the API via Postman this time. To do so, I will hit the “Consume” tab. Figure 36 shows the screen you should see when you do so. It brings all the information you need to consume the API, including some code examples.

Figure 36. How to consume the productive API

To consume the API, you basically need to comply with the following requirements:

  1. Hit the right endpoint.
  2. Provide the primary key in the “Authorization” header (“Bearer {Primary Key}”) to authenticate requests.
  3. Provide a payload containing the data you want to score in the body of the request.

The request body should look like the piece of code presented below. The values I’m using are figurative; you can replace them with yours for a real test.
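As an alternative to Postman, the three requirements above can also be sketched in Python using only the standard library. The endpoint URL and key below are placeholders, and the exact input wrapper name (“WebServiceInput0”) is an assumption based on what the Consume tab shows; check yours before running.

```python
import json
import urllib.request

def build_request(endpoint_url, primary_key, patient):
    """Assemble the scoring request: Bearer auth header + JSON body."""
    body = {"Inputs": {"WebServiceInput0": [patient]}}
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {primary_key}",
        },
        method="POST",
    )

# Figurative patient values; replace with real ones for an actual test.
patient = {"AGE": 59, "SEX": 2, "BMI": 32.1, "BP": 101.0,
           "S1": 157, "S2": 93.2, "S3": 38.0, "S4": 4.0,
           "S5": 4.86, "S6": 87, "Y": 151}
req = build_request("http://<your-endpoint>/score", "<primary-key>", patient)
print(req.get_header("Authorization"))
# To actually call the endpoint (with a real URL and key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```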

Figure 37 shows the final configuration I have in Postman to perform the call. The response is the same payload previously sent, plus the “Scored Labels” parameter filled in with the actual prediction.

Figure 37. Calling the API and received the prediction back

Done. Our model is ready to go and publicly available to the external world. In the next article, we’re going to create a bot that consumes that prediction and serves as a bridge between end users and our Diabetes Predictor. Stay tuned!

Hope you like it. Enjoy!

