Closed 1 year ago
I am pleased to announce that the issue described in this ticket, customizing the containerization process, has been resolved.
Solution Implemented:
We've integrated a Pydantic configuration model at each module level, giving you the flexibility to set module-specific configurations to match the project's requirements. With this approach, each module can now be launched independently. To run the global pipeline, simply configure the central configuration file. Here's an overview of that file:
```yaml
---
python_version:
  - 3
  - 11
encoding: utf-8
code_analyzr:
  script: ""
  functions_to_analyze: ""
  ignore: ""
fast_apizr:
  module_name: main
  api_filename: app.py
dockerizr:
  docker_image: alpine
  docker_image_tag: alpine3.18
server:
  workers: 2
  timeout: 60
  host: 0.0.0.0
  port: 5001
```
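As a rough illustration of the per-module Pydantic approach, the central configuration file above could be mirrored by nested models like the sketch below. The field names and defaults come from the YAML overview; the class names (`CodeAnalyzrConfig`, `GlobalConfig`, etc.) are assumptions for illustration, not the project's actual identifiers.

```python
# Hypothetical sketch of per-module Pydantic configuration models.
# Field names/defaults mirror the YAML overview above; class names are assumed.
from pydantic import BaseModel


class CodeAnalyzrConfig(BaseModel):
    script: str = ""
    functions_to_analyze: str = ""
    ignore: str = ""


class FastApizrConfig(BaseModel):
    module_name: str = "main"
    api_filename: str = "app.py"


class DockerizrConfig(BaseModel):
    docker_image: str = "alpine"
    docker_image_tag: str = "alpine3.18"


class ServerConfig(BaseModel):
    workers: int = 2
    timeout: int = 60
    host: str = "0.0.0.0"
    port: int = 5001


class GlobalConfig(BaseModel):
    """Central configuration: each sub-model can also be used on its own."""
    python_version: list[int] = [3, 11]
    encoding: str = "utf-8"
    code_analyzr: CodeAnalyzrConfig = CodeAnalyzrConfig()
    fast_apizr: FastApizrConfig = FastApizrConfig()
    dockerizr: DockerizrConfig = DockerizrConfig()
    server: ServerConfig = ServerConfig()
```

Because each sub-model is an independent `BaseModel`, a module can be launched with just its own config (e.g. `DockerizrConfig(docker_image="python")`), while the global pipeline validates the whole file at once.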
Problem Statement
Containerization requirements vary with the underlying framework and system constraints, such as TensorFlow or GPU support. We need a solution that adapts the containerization process to these specific requirements without manual intervention.
Additional context
Different frameworks and tools have varying system requirements. For instance, TensorFlow projects might need GPU support for training deep learning models. A one-size-fits-all approach to containerization may not be efficient or even functional for all use cases. An adaptive containerization process is therefore essential.
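With the configuration model described above, such framework-specific needs can be expressed as a small override of the central file rather than a separate pipeline. For example (a hypothetical override; the image tag is the publicly available GPU-enabled TensorFlow image), a deep-learning project could swap the base image like this:

```yaml
dockerizr:
  docker_image: tensorflow/tensorflow
  docker_image_tag: latest-gpu
```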