As a Data Scientist, you’ve created a successful model after investing a lot of time and effort. You want to make it available to other teams in your company so that it can be integrated into their applications. However, this can be a challenging process, so you plan to use tools to simplify it.
You understand that simply creating an effective model is not enough to ensure its success. It’s important to integrate it into production quickly rather than leaving it on your computer where its potential is limited. This requires careful planning and tactics to ensure a prompt launch and rapid adoption of the model.
This blog guides you through the entire process of deploying a model into production using an end-to-end use case. We start with a broad overview of FastAPI, then explore its functionality in depth by building an API as a concrete example.
What exactly does an API provide?
An Application Programming Interface (API) acts as a facilitator between two independent applications, enabling them to communicate and interact with one another. For developers who want to make their program available and easily integrable for others, building an API as the entry point to their service is a highly effective approach. The API acts as a gatekeeper: other developers can interact with the program via HTTP requests without needing to read the underlying codebase or install any additional software. In essence, the API abstracts away the program’s inner workings, simplifying integration for other developers and end users alike.
Why choose FastAPI?
Perhaps you’re already familiar with Django, Flask, and other frameworks. FastAPI, however, stands out when it comes to the rapid creation of RESTful microservices. This is illustrated by TechEmpower’s benchmark analysis.
FastAPI is a preferred framework for creating scalable, reliable, and high-performance APIs in a production environment. It has recently grown substantially in popularity and adoption, especially among web developers, data scientists, and machine learning engineers. Its simple syntax makes it easy to use, and because that syntax is similar to Flask’s, the transition should be straightforward if you’re considering switching from Flask to FastAPI. Unlike Flask, FastAPI does not ship with an integrated web server; it is typically served by an ASGI server such as Uvicorn. FastAPI was created primarily for building APIs.
Putting Our Use Case into Practice
Here I will implement the latest YOLOv7 model with FastAPI and Docker. As we all know, the YOLO family is trained on the COCO dataset, which contains 80 classes. YOLOv7 is the fastest and most accurate real-time object detection model for computer vision tasks, providing significantly improved real-time detection accuracy without increasing inference costs.
Clone the official Git repository of YOLOv7 with the command
git clone https://github.com/WongKinYiu/yolov7.git
We aim to provide an endpoint in FastAPI where we can select a particular YOLOv7 class for detection and upload an image. The response will contain the count of detections for that class ID and the time taken. The FastAPI code looks like this:
Here the detect function performs the YOLOv7 detection, taking the class ID as an argument.
The command to run this is uvicorn main:app --reload. The result will look like this:
Line 1 shows that the Uvicorn server is running on localhost (http://127.0.0.1) on port 8000.
The URL http://127.0.0.1:8000/docs provides a complete dashboard for interacting with our API. Below is the result.
If you want to try it out, provide the class ID and the image, then click Execute. The response will look like this:
Here I gave the class ID as zero, which detects persons in an image, and the result shown in the response is 10 persons.
Looking at the API response body section, we can see the result as follows:
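The response-body screenshot did not carry over. For class ID 0 with 10 persons detected, the JSON body would look roughly like the snippet below; the field names here are illustrative, not the original screenshot's exact keys.

```python
# Illustrative JSON response body for class_id=0 with 10 detections
# (field names are assumptions, not the original screenshot's keys).
import json

response_body = {"class_id": 0, "count": 10, "time_taken_sec": 1.27}
print(json.dumps(response_body, indent=2))
```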
Deployment to a Docker Container
It’s time to deploy our API into a Docker container now that it is ready. The objective of containerization is to make our API more secure and portable so that it can operate uniformly and consistently on any platform (including the cloud).
Below is the content of the Dockerfile.dep for our app.
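The original Dockerfile.dep listing is missing, so here is a sketch of the typical shape of such a dependency image. It assumes a requirements.txt holding packages such as torch, opencv-python, fastapi, and uvicorn; adjust the base image and package list to match the yolov7 repository's actual requirements.

```
# Sketch of Dockerfile.dep: a base image with the heavy dependencies
# pre-installed so application rebuilds stay fast (package list assumed).
FROM python:3.9-slim

# System libraries commonly needed by OpenCV
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgl1 libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```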
Build the dependency image from Dockerfile.dep using the command
docker build -t yolo-dep:latest -f ./Dockerfile.dep .
The Dockerfile contains
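The application Dockerfile listing is also missing from the article; a sketch of what it would contain is shown below. It builds on the yolo-dep:latest image produced in the previous step; the file layout and start command are assumptions.

```
# Sketch of the application Dockerfile, built on top of the yolo-dep
# image from the previous step (file layout assumed).
FROM yolo-dep:latest

WORKDIR /app
COPY . /app

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Binding Uvicorn to 0.0.0.0 (rather than 127.0.0.1) is what makes the API reachable through the port mapping in the docker run command that follows.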
Use this command to build the docker image
docker build -t yolo-model:latest .
To run the docker container use the command
docker run -p 8000:8000 yolo-model:latest
With that, you have deployed your FastAPI application on Docker.
Exploring the various functionalities of FastAPI showed why it is such a useful and versatile framework for building web APIs. Armed with that knowledge, we used FastAPI to serve our machine learning model as an API, and then packaged the whole application into a Docker container, ready for deployment into production.
If you are looking to deploy your machine learning model with FastAPI and Docker, contact Neoito today. Our team can help you build advanced applications that leverage the power of these technologies. Whether you have a specific project in mind or are looking for guidance and support, Neoito can help.
Don’t hesitate to reach out and see how we can work together to bring your ideas to life.
What are the benefits of using FastAPI for deploying machine learning models?
FastAPI is a high-performance web framework that offers several benefits for deploying machine learning models. Some of the key benefits include its fast performance, automatic generation of API documentation, and easy integration with data science libraries such as PyTorch and TensorFlow.
How can Docker be used to deploy machine learning models?
Docker is a containerization platform that allows you to package your machine learning model along with its dependencies and deploy it in a self-contained environment. By using Docker, you can ensure that your model will run consistently across different environments, making it easier to deploy and scale.
What are some best practices for deploying machine learning models with FastAPI and Docker?
Some best practices for deploying machine learning models with FastAPI and Docker include using environment variables to manage configuration, using a separate Dockerfile for production, and implementing proper security measures such as rate limiting and authentication.
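As a tiny illustration of the environment-variable practice mentioned above, configuration can be read with the standard library and given sensible defaults; the variable names MODEL_PATH and CONF_THRESHOLD here are illustrative, not part of the original article.

```python
# Read configuration from environment variables with sensible defaults
# (names MODEL_PATH / CONF_THRESHOLD are illustrative).
import os

MODEL_PATH = os.environ.get("MODEL_PATH", "yolov7.pt")
CONF_THRESHOLD = float(os.environ.get("CONF_THRESHOLD", "0.25"))

print(MODEL_PATH, CONF_THRESHOLD)
```

These values can then be overridden at container start without rebuilding the image, e.g. docker run -e CONF_THRESHOLD=0.4 -p 8000:8000 yolo-model:latest.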
Can FastAPI and Docker be used to deploy models trained in different languages or frameworks?
Yes, FastAPI and Docker can be used to deploy models trained in different languages or frameworks as long as the model can be served through a REST API. However, some configuration changes may be necessary to ensure compatibility with FastAPI and Docker.