A lightweight Python adapter that allows your custom AI library to seamlessly integrate and run inside the AIOZ-AI-Node environment.
This package defines standard interfaces for input, output, and file objects, ensuring your AI task can communicate properly with the AIOZ-AI-Node system.
Since your AI library will run in the AIOZ-AI-Node environment, all of the libraries and corresponding versions available in that environment are listed below. On your local machine, you can create a new virtual environment and install them:
```shell
pip install -r requirements.txt
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
pip install xformers==0.0.23 --index-url https://download.pytorch.org/whl/cu118
```

Your AI library should contain a `run()` entrypoint, which the AIOZ-AI-Node system will call automatically.
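For example, the virtual environment itself can be created with Python's built-in `venv` module (a minimal sketch; the directory name `venv` is only a convention, and any virtual-environment tool works):

```shell
# Create an isolated environment in a directory named "venv"
python3 -m venv venv
# Activate it on Linux/macOS (use venv\Scripts\activate on Windows)
. venv/bin/activate
# ...then run the pip install commands listed above
```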
```python
from .run import run
```

This file exposes the `run()` function from `run.py` as an attribute of `my_ai_lib`, so it can be called as `my_ai_lib.run()`.
Define your input and output objects by inheriting from `InputObject` and `OutputObject`.
Example input / output objects:

```python
from pathlib import Path
from typing import Any, Union, Literal
from aioz_ainode_adapter.schemas import InputObject, OutputObject, FileObject

class MyInput(InputObject):
    input_image: str
    example_param: Any

class MyOutput(OutputObject):
    text: str
    output_image: FileObject
```

The `aioz_ainode_adapter` library defines three object types: `InputObject`, `OutputObject`, and `FileObject` (all based on `pydantic.BaseModel`).
- `InputObject`: Defines the format for the input that the AIOZ-AI-Node system sends to your AI library. This object has two default parameters: `device` and `model_storage_directory`.

| Attribute | Type | Description |
|---|---|---|
| `device` | Choice | The device for your AI model; it has two options: `"cuda"`, `"cpu"` |
| `model_storage_directory` | String | The directory storing your model weights |
NOTE: If your AI library needs a directory path where your AI model weights are stored, that path must be obtained from `model_storage_directory`, because the AIOZ-AI-Node system specifies it.
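For instance, a hypothetical loader could resolve its weight file relative to that directory (the file name `model.pt` is only an illustration, not part of the adapter API):

```python
from pathlib import Path

def resolve_weights(model_storage_directory: str) -> Path:
    # Always build weight paths from the directory the node provides;
    # "model.pt" is a hypothetical file name for illustration only.
    return Path(model_storage_directory) / "model.pt"

weights_path = resolve_weights("models")  # Path("models/model.pt")
```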
- `OutputObject`: Defines the format for the output that your AI library sends back to the AIOZ-AI-Node system.
- `FileObject`: Defines the format for a file, if your output contains one. This object has two fields: `data` and `name`.
| Attribute | Type | Description |
|---|---|---|
| `data` | Choice | The file data; this attribute accepts three formats: `io.BufferedReader`, `Path` (local file path), and URL |
| `name` | String | The file name |
Example of creating a `FileObject` from a local file:

```python
output_file = FileObject(data=open("file/path.txt", "rb"), name="output.txt")
```

NOTE:
- If your input params include a file, it must be a local file path or a URL.
- If your output includes a file, it must be a `FileObject`.
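A minimal sketch of those rules, using a plain-Python stand-in (not the real pydantic-based `FileObject`) just to show the three accepted `data` formats:

```python
import io
from dataclasses import dataclass
from pathlib import Path
from typing import Union

@dataclass
class FileObjectSketch:
    # Stand-in for aioz_ainode_adapter's FileObject: `data` may be an
    # open binary reader, a local Path, or a URL string.
    data: Union[io.BufferedReader, Path, str]
    name: str

    def __post_init__(self):
        # In this sketch, a plain string is only accepted when it is a URL.
        if isinstance(self.data, str) and not self.data.startswith(("http://", "https://")):
            raise ValueError("string data must be a URL")

# URL form
f_url = FileObjectSketch(data="https://example.com/aioz.png", name="aioz.png")
# Local-path form
f_path = FileObjectSketch(data=Path("wiki/aioz.png"), name="aioz.png")
```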
Example task implementation:

```python
def do_ai_task(
    input_image: Union[str, Path],
    example_param: Any,
    model_storage_directory: Union[str, Path],
    device: Literal["cpu", "cuda", "gpu"] = "cpu",
    *args, **kwargs,
) -> Any:
    """Define AI task: load model, pre-process, post-process, etc."""
    # Define the AI task workflow. Below is an example.
    text = "This is the AI task result"
    output_image = open("wiki/aioz.png", "rb")  # io.BufferedReader
    return text, output_image
```
```python
def run(input_obj: InputObject) -> OutputObject:
    my_input = MyInput.model_validate(input_obj.model_dump())
    print(f"Input: {my_input}")
    # Run the AI task
    text, output_image = do_ai_task(
        input_image=my_input.input_image,
        example_param=my_input.example_param,
        model_storage_directory=my_input.model_storage_directory,
        device=my_input.device,
    )
    # Make the output object:
    # create a FileObject because the output includes a file
    output_file = FileObject(data=output_image, name="output_image.png")
    output_obj = MyOutput(text=text, output_image=output_file)
    return output_obj
```

The `run()` function is mandatory and is the main entrypoint; you are not allowed to change its name.
`do_ai_task()` is your own function defining the AI-task workflow; you may rename it and implement it however you like.
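The `MyInput.model_validate(input_obj.model_dump())` step re-validates the generic payload the node sends against your task-specific schema. Sketched here with plain dicts (no pydantic; the field set mirrors the `MyInput` example above) just to show the pattern:

```python
def validate_my_input(payload: dict) -> dict:
    # Stand-in for MyInput.model_validate: require the node's default
    # fields plus this task's own fields, reject anything incomplete.
    required = {"device", "model_storage_directory", "input_image", "example_param"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload

payload = {
    "device": "cuda",
    "model_storage_directory": "models",
    "input_image": "wiki/aioz.png",
    "example_param": "example",
}
my_input = validate_my_input(payload)
```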
NOTE:
- The output of the `run()` function must be an `OutputObject`.
- If you need to import external code or utility modules from within your project, please use relative imports, for example:

```python
from .lib import lib_a
```

This ensures the code runs correctly inside the sandboxed environment.
Example (`demo.py`):

```python
import my_ai_lib
from aioz_ainode_adapter.schemas import InputObject

def main():
    input_obj = InputObject(input_image="wiki/aioz.png", example_param="example")
    output_obj = my_ai_lib.run(input_obj)
    print(f"Output: {output_obj}")

if __name__ == "__main__":
    main()
```

In this file, `my_ai_lib.run()` receives an `InputObject` and returns an `OutputObject`.
After running `demo.py`, you should see console output like this:

```
Input: type='InputObj' device='cuda' model_storage_directory='models' input_image='wiki/aioz.png' example_param='example'
Output: type='OutputObj' text='This is the AI task result' output_image=FileObject(type='FileObj', data=<_io.BufferedReader name='wiki/aioz.png'>, name='output_image.png')
```
This repo is shared under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license @ AIOZ Pte Ltd.