3. BrainLab API
Want to showcase your project? We've got you covered. The BrainLab API is a project deployed in the BrainLab production environment.
Since each project/API will require a different environment and will be developed separately, it makes the most sense that each of the APIs will run on its own Docker container.
You can imagine that each of the containers serves a different set of APIs.
container_A
- get_student_name
- get_student_info
container_B
- get_building_name
- get_building_info
...
If we want to serve student information, we will spawn container_A.
And, if we want to serve building information, we will spawn container_B.
These containers do not depend on each other.
Therefore, if we want to stop serving the building information, it is just as easy as stopping container_B.
To access the service, each container must occupy one port.
container_A -> port 8000
- get_student_name
- get_student_info
container_B -> port 8001
- get_building_name
- get_building_info
...
It won't take long until we forget which port serves which service.
Besides, if one day we decide to stop serving building information, should we reuse port 8001 for another service?
Instead, we could use a proxy to help us fix this issue.
We choose Traefik for this. Other products can solve the same problem, namely Apache, Nginx, etc. We choose Traefik because its automatic discovery feature makes deploying a new service simple.
Traefik --> container_A
--> container_B
--> container_C
...
Traefik handles an incoming request and correctly relays the message to the designated container. But how? Our solution uses a domain name to make that decision. So it could be something like this.
Traefik --> container_A [service-A.host.com]
--> container_B [service-B.host.com]
--> container_C [service-C.host.com]
...
When you access service-A.host.com, Traefik sends you to container_A.
When you access service-B.host.com, Traefik sends you to container_B.
...
One benefit we get from this is that SSL/TLS can be handled in Traefik.
Thus, each API developer only needs to focus on developing the API and not configuring the server.
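To make the automatic discovery concrete, here is a sketch of how a container could announce its hostname to Traefik through Docker labels in its compose file. This follows Traefik v2 label conventions; the router name and hostname below are made up for illustration.

```yaml
# Sketch only: the router name and hostname are hypothetical.
services:
  container_a:
    build: .
    labels:
      - "traefik.enable=true"
      # Traefik watches Docker and routes requests for this Host to this container.
      - "traefik.http.routers.service-a.rule=Host(`service-A.host.com`)"
```

With labels like these, adding a new API is just starting a new container with its own Host rule; there is no central port table to maintain.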
An API is, in fact, a web server that only responds with JSON strings (aka a REST API). Nothing is stopping you from writing an entire HTML website and deploying it that way, as long as it is accessible via HTTP. In this tutorial, we stick with the traditional REST API. Thus, you will need to interact with a backend/server-side script and a frontend/client-side script.
The scenario here is: I have a trained image classification model, and I want to share/deploy this model so other people can be amazed at my achievement.
The model here is developed and trained in an MLflow environment; however, a model trained with vanilla PyTorch or even sklearn is no different, as long as you get your environment right.
Here is my model in MLflow.
With MLflow, I got three suggestions.
- The code for prediction would be
import mlflow
logged_model = 'runs:/f74f23c4618e44388b47ade3f6d3563f/model'
# Load model as a PyFuncModel.
loaded_model = mlflow.pyfunc.load_model(logged_model)
# Predict on a Pandas DataFrame.
import pandas as pd
loaded_model.predict(pd.DataFrame(data))
- The pip packages needed to be installed are
mlflow==2.4
cffi==1.15.1
cloudpickle==2.2.1
defusedxml==0.7.1
dill==0.3.6
numpy==1.24.3
requests==2.31.0
torch==2.0.1
torchvision==0.15.2
tqdm==4.65.0
and
- The version of Python and other packages are
python: 3.10.6
build_dependencies:
- pip==22.0.2
- setuptools==59.6.0
- wheel==0.37.1
With these suggestions, I create my Docker image as follows.
FROM python:3.10.6
EXPOSE 8080
WORKDIR /root/code
RUN pip3 install pip==22.0.2
RUN pip3 install setuptools==59.6.0
RUN pip3 install wheel==0.37.1
# Let's install the FastAPI first.
RUN pip3 install fastapi==0.100.0
RUN pip3 install "uvicorn[standard]==0.23.1"
RUN pip3 install python-multipart==0.0.6
# Now you build your environment here
RUN pip3 install cffi==1.15.1
RUN pip3 install cloudpickle==2.2.1
RUN pip3 install defusedxml==0.7.1
RUN pip3 install dill==0.3.6
RUN pip3 install numpy==1.24.3
RUN pip3 install requests==2.31.0
RUN pip3 install torch==2.0.1
RUN pip3 install torchvision==0.15.2
RUN pip3 install tqdm==4.65.0
RUN pip3 install mlflow==2.4
# For Dev purpose
RUN pip3 install ipykernel
CMD tail -f /dev/null
Here, I use FastAPI as a framework for developing my API.
You can use any other framework if you like it more.
Follow this with the docker-compose.yaml.
version: "3.7"
services:
api:
build:
context: .
dockerfile: .Dockerfile
volumes:
- ./code:/root/code # Here is the app
- ./logs:/root/logs # I keep logs here
- ./cache:/root/cache # I keep the cache here
- ./.vscode-server:/root/.vscode-server # For storing VScode extensions during development
ports:
- 9000:8080 # Map the port so you can access it
From here, you can use VScode to develop the API.
The flow would be similar to using VScode to write Python code.
After you attach the VScode to the container, find the path /root/code and write your API app here.
Now we write the basic FastAPI app
In /root/code, create main.py
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
async def root():
return {"message": "Hello World"}
Now, in the terminal, run
uvicorn main:app --host 0.0.0.0 --port 8080 --reload
Then, go to your browser and type localhost:9000.
You should get this result printing in the browser.
{"message": "Hello World"}
This means you have successfully created an API.
Now, I will add a new API for people to use my model.
From /root/code/main.py
from fastapi import FastAPI, UploadFile
app = FastAPI()
@app.get("/")
async def root():
return {"message": "Hello World"}
# Note the `.post`
@app.post("/predict/")
async def post_predict(file: UploadFile):
# Load model from `MLflow`
import mlflow
model_uri:str = 'runs:/f74f23c4618e44388b47ade3f6d3563f/model'
model = mlflow.pyfunc.load_model(model_uri=model_uri)
# Read the input image
from torchvision import transforms
from PIL import Image
import io
# Super quick and dirty way to read the file and cast it into a PIL Image
image = Image.open(io.BytesIO(file.file.read()))
preprocess = transforms.Compose([
transforms.Resize((256,256)),
transforms.CenterCrop((224,224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5], std=[0.5]),
])
image = preprocess(image).numpy().reshape(1,1,224,224)
predict = model.predict(image).argmax()
answer = dict({})
answer['status'] = 'ok'
answer['code'] = 200
answer['predict'] = int(predict)
return answer
You cannot try out the new predict API in the browser's address bar because it uses the POST method, and you also need to upload an image.
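To make the post-processing at the end of the handler concrete, here is a standalone sketch of just that step. The logits array is a made-up stand-in for the real output of model.predict(image); only numpy is assumed.

```python
import numpy as np

# Made-up stand-in for the model output: one row of class scores.
fake_logits = np.array([[0.1, 0.05, 0.7, 0.15]])

# Same post-processing as the endpoint: take the argmax class
# and wrap it into a JSON-serializable response dict.
predict = fake_logits.argmax()
answer = {"status": "ok", "code": 200, "predict": int(predict)}
print(answer)  # {'status': 'ok', 'code': 200, 'predict': 2}
```

Note the int(...) cast: numpy integers are not JSON-serializable on their own, which is why the endpoint does the same.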
For this, you will need some sort of API testing application like Postman.
However, one benefit of FastAPI is that it has a built-in Swagger UI, which gives you both documentation and an API tester.
To access swagger, go to your browser and access localhost:9000/docs.
From here you can test out the new predict API. Give it a try with this image.

We will not go in-depth with developing a good API, so we will stop here.
Now, you cannot ask people to use Postman or access Swagger if they want to use your model.
Here we will develop a JavaScript and jQuery script and put it on the BrainLab website.
I assume that most apps will likely be in some sort of form.
You could spend time writing a form in HTML and developing JavaScript to send data to your API.
Here I introduce formBuilder, a jQuery-based library that helps you write a form.
You can use this library online at https://formbuilder.online.
Its output is a JSON string that can be used to generate the form in the front end.
For my API, what I need is (1) uploading an image as a file, (2) a button to click submit, and (3) a place to put my prediction.
After dragging and dropping what I need, I click the [{...}] button, which gives me the JSON string for rendering.
Here is what my JSON looks like.
[
{
"type": "file",
"required": false,
"label": "File Upload",
"className": "form-control",
"name": "file",
"access": false,
"subtype": "file",
"multiple": false
},
{
"type": "button",
"label": "Submit",
"subtype": "button",
"className": "btn-default btn",
"name": "button",
"access": false,
"style": "default"
},
{
"type": "paragraph",
"subtype": "output",
"label": "The result is: ",
"access": false
}
]
Next, I will develop a JavaScript that both (1) generates a form for the user and (2) sends the form and updates the prediction result via jQuery.
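The rendered form is driven entirely by that JSON. As a quick sanity check (a sketch in Python, though any JSON parser would do), you can confirm it contains exactly the three elements we dragged in:

```python
import json

# The same JSON string produced by the online FormBuilder.
form_spec = json.loads("""
[
  {"type": "file", "required": false, "label": "File Upload",
   "className": "form-control", "name": "file", "access": false,
   "subtype": "file", "multiple": false},
  {"type": "button", "label": "Submit", "subtype": "button",
   "className": "btn-default btn", "name": "button", "access": false,
   "style": "default"},
  {"type": "paragraph", "subtype": "output", "label": "The result is: ",
   "access": false}
]
""")

# The three elements: the file input, the submit button, and the output paragraph.
print([field["type"] for field in form_spec])  # ['file', 'button', 'paragraph']
```

The name values ("file", "button") matter later: they become the ids we use to select the rendered elements from JavaScript.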
Please note that there are multiple ways you can achieve this; if you search for a solution on Google, you can end up lost if you don't know what you are doing.
But I encourage you to do it anyway.
I was once lost and am still lost today.
Or you could ask ChatGPT to help you develop the JavaScript and let it confuse you instead.
Anyhow, here is my HTML page with JavaScript.
<!DOCTYPE html>
<html>
<head></head>
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery.js"></script>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery-ui.min.js"></script>
<script src="https://formbuilder.online/assets/js/form-render.min.js"></script>
<script>
var container = document.getElementById('fb-render');
var original_output = null;
jQuery(function ($) {
// This is where the form is rendered.
var formData = `[
{
"type": "file",
"required": false,
"label": "File Upload",
"className": "form-control",
"name": "file",
"access": false,
"subtype": "file",
"multiple": false
},
{
"type": "button",
"label": "Submit",
"subtype": "button",
"className": "btn-default btn",
"name": "button",
"access": false,
"style": "default"
},
{
"type": "paragraph",
"subtype": "output",
"label": "The result is: ",
"access": false
}
]`
var formRenderOpts = {
container,
formData,
dataType: 'json'
};
var formInstance = $(container).formRender(formRenderOpts);
// This is where the jQuery is used to send the data.
const submit = document.getElementById("button");
submit.addEventListener(
"click",
async () => {
const thisForm = document.getElementById('fb-render');
var formData = new FormData(thisForm);
var input_file = document.querySelectorAll('input[type=file]')[0].files[0];
formData.append("file", input_file, input_file.name);
console.log(formData)
const response = await fetch('http://localhost:9000/predict/', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(result => {
console.log(result)
if (original_output == null) {
original_output = $("output").text()
}
$("output").text(original_output + result.predict)
})
},
false
);
});
</script>
</body>
</html>
To make it super easy, you can put this code in any file with .html as an extension.
Then any browser can open this file and you can test your JavaScript.
Let's spend a little time understanding what this HTML does.
HTML is a document format.
You can think of HTML as a nested object where each object is defined by a <tag></tag>.
This is how the document is nested.
<root_tag>
<tag_A>
<tag_A_1></tag_A_1>
<tag_A_2></tag_A_2>
</tag_A>
<tag_B>
<tag_B_1></tag_B_1>
<tag_B_2></tag_B_2>
</tag_B>
</root_tag>
The indentation does not matter. It is there to help you read the document.
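To see that nesting is really a tree, here is a small sketch using Python's standard-library html.parser. Note that the parser lowercases tag names, so the made-up tags from above are written in lowercase here.

```python
from html.parser import HTMLParser

class TagTree(HTMLParser):
    """Collect each opening tag, indented by its nesting depth."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.lines = []

    def handle_starttag(self, tag, attrs):
        self.lines.append("  " * self.depth + tag)
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

parser = TagTree()
parser.feed("<root_tag><tag_a><tag_a_1></tag_a_1><tag_a_2></tag_a_2></tag_a>"
            "<tag_b></tag_b></root_tag>")
print("\n".join(parser.lines))
```

Walking the document reproduces exactly the indentation shown above.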
At the core of our HTML, this is what we have.
<!DOCTYPE html>
<html>
<head></head>
<body></body>
</html>
The meaning of a tag is universal. You should look it up if you want to go deep into HTML development.
Here what we are focusing on is developing a form using FormBuilder.
In HTML, content usually lives inside the <body></body>.
...
<body>
<!-- This is content -->
</body>
...
The first content inside our body is <form></form>.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
</body>
...
This <form></form> has two attributes: (1) id and (2) enctype.
A <form></form> usually encloses HTML form elements such as text inputs.
However, at the core of HTML, the attributes have no specific meaning.
Instead of id, you could use identifier, index, etc.
Of course, only if you know what you are doing.
The id has a meaning when we introduce CSS and JavaScript.
There is a thing called selector where we can query tags using both tag and attribute.
Here in JavaScript, we can select the form using id like this.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script>
var container = document.getElementById('fb-render');
</script>
</body>
...
Or using jQuery like this.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script>
var container = $('#fb-render')[0];
</script>
</body>
...
Then we import the libraries we want to use.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery.js"></script>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery-ui.min.js"></script>
<script src="https://formbuilder.online/assets/js/form-render.min.js"></script>
<script>
var container = $('#fb-render')[0];
</script>
</body>
...
Now, we are going to write a function that has no name and executes immediately.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery.js"></script>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery-ui.min.js"></script>
<script src="https://formbuilder.online/assets/js/form-render.min.js"></script>
<script>
var container = $('#fb-render')[0];
// Below is a jQuery function that has no name and executes immediately
jQuery(function ($) {
});
</script>
</body>
...
If you hate this, you can convert it to a normal JavaScript function.
And execute this function once the document is loaded.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery.js"></script>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery-ui.min.js"></script>
<script src="https://formbuilder.online/assets/js/form-render.min.js"></script>
<script>
var container = $('#fb-render')[0];
function my_app(){
// render the form
}
// This will be executed once the document is done loading
$(document).ready(
my_app();
);
</script>
</body>
...
This is the art of writing front-end scripts, and I am nowhere near good at it. I will stick with the execute-immediately version.
Then, I will define what my form should look like using the JSON string I obtained from the Online FormBuilder.
And render the form into the <form></form> object.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery.js"></script>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery-ui.min.js"></script>
<script src="https://formbuilder.online/assets/js/form-render.min.js"></script>
<script>
var container = $('#fb-render')[0];
// Below is a jQuery function that has no name and executes immediately
jQuery(function ($) {
// This is where the form is rendered.
var formData = `[
{
"type": "file",
"required": false,
"label": "File Upload",
"className": "form-control",
"name": "file",
"access": false,
"subtype": "file",
"multiple": false
},
{
"type": "button",
"label": "Submit",
"subtype": "button",
"className": "btn-default btn",
"name": "button",
"access": false,
"style": "default"
},
{
"type": "paragraph",
"subtype": "output",
"label": "The result is: ",
"access": false
}
]`
var formRenderOpts = {
container,
formData,
dataType: 'json'
};
var formInstance = $(container).formRender(formRenderOpts);
});
</script>
</body>
...
At this point, if you open this HTML file, you should get a vanilla form. The form renders, but the button does not work yet.
Here we stop to discuss the two ways of sending the form.
Way 1: When you click the button, the form is sent to the backend. The backend processes the form and returns with an entire HTML page. What the user sees is a blink of a page refresh.
Way 2: When you click the button, the form is sent to the backend but this time specifically with fetch.
The backend (our API) processes the form and returns with a JSON string.
fetch receives the answer and put the result into the HTML page.
This way, the page won't refresh. The user can scroll and do other things on the page while waiting for the prediction result.
What we are doing now is the second way.
Thus, we need to write a fetch call that triggers once the button is pressed.
...
<body>
<form id="fb-render" enctype="multipart/form-data"></form>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery.js"></script>
<script src="https://ait-brainlab.github.io/vendor/jquery/jquery-ui.min.js"></script>
<script src="https://formbuilder.online/assets/js/form-render.min.js"></script>
<script>
var container = $('#fb-render')[0];
// Below is a jQuery function that has no name and executes immediately
jQuery(function ($) {
...
const submit = document.getElementById("button");
// The API call
submit.addEventListener(
"click",
async () => {
const thisForm = document.getElementById('fb-render');
var formData = new FormData(thisForm);
var input_file = document.querySelectorAll('input[type=file]')[0].files[0];
formData.append("file", input_file, input_file.name);
console.log(formData)
const response = await fetch('http://localhost:9000/predict/', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(result => {
console.log(result)
$("output").text(result.predict)
})
},
false
);
});
</script>
</body>
...
The mechanism we use here is an event trigger/listener.
Basically, we bind a function to an event, in this case, a click.
submit.addEventListener() takes three arguments (1) the event (which is click), (2) the function to execute, and (3) option capture which I will not discuss here.
Let's isolate the function.
async () => {
// Getting the form
const thisForm = document.getElementById('fb-render');
var formData = new FormData(thisForm);
// append the image into the form
var input_file = document.querySelectorAll('input[type=file]')[0].files[0];
formData.append("file", input_file, input_file.name);
// log for debug
console.log(formData)
// actual call to API with `fetch`
const response = await fetch('http://localhost:9000/predict/', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(result => {
console.log(result)
$("output").text(result.predict)
})
}
async here is used to tell JavaScript that it does not need to wait for this function to finish.
Inside this function, we build a form based on the HTML form and user input.
Then, we send the form to an API using fetch.
await means wait for this call to complete before moving on, which is the opposite of async.
.then is another way of handling the awaited result.
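The same async/await idea exists in Python, which our FastAPI handlers already use. A minimal sketch (the function names here are made up for illustration):

```python
import asyncio

async def fetch_prediction():
    # Made-up stand-in for a network call; sleep(0) just yields control.
    await asyncio.sleep(0)
    return {"predict": 7}

async def main():
    # `await` pauses here until fetch_prediction() finishes,
    # without blocking the rest of the event loop.
    result = await fetch_prediction()
    return result["predict"]

print(asyncio.run(main()))  # 7
```

As in JavaScript, await can only appear inside a function declared async.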
I can rephrase this fetch function like this.
...
response = await fetch('http://localhost:9000/predict/', {
method: 'POST',
body: formData
});
response = await response.json();
console.log(response);
$("output").text(response.predict);
...
Now, if you try this version of the code, you will see that the text "The result is: " is replaced by "7". What I want is for the result to be appended to the original "The result is: ". But if we only append the result, every time a user makes a new prediction, the new result will be appended to the current one. The solution is to remember the original "The result is: " and replace the entire text with "The result is: <new prediction>". For this small touch, I add the following code.
...
var container = document.getElementById('fb-render');
// ###### New change #######
var original_output = null;
// ###### New change #######
jQuery(function ($) {
// This is where the form is rendered.
...
// This is where the jQuery is used to send the data.
const submit = document.getElementById("button");
submit.addEventListener(
"click",
async () => {
const thisForm = document.getElementById('fb-render');
var formData = new FormData(thisForm);
var input_file = document.querySelectorAll('input[type=file]')[0].files[0];
formData.append("file", input_file, input_file.name);
console.log(formData)
const response = await fetch('http://localhost:9000/predict/', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(result => {
console.log(result)
// ###### New change #######
if (original_output == null) {
original_output = $("output").text()
}
$("output").text(original_output + result.predict)
// ###### New change #######
})
},
false
);
});
...
Wow, that is very long. If you are lost, the entire working code is given above.
Writing a front-end/client script requires you to imagine how the user interacts with the page/form, object by object.
Good luck!!!
At this point, you should have two things. (1) The API in Docker and (2) the JavaScript that calls to your API.
In this section, we will first discuss how to roll out the Docker image via DockerHub. Then, we discuss what to do to make sure your API runs in production, or how to simulate Traefik on your local machine.