PragatheeshAshok opened this issue 3 days ago
Thank you @PragatheeshAshok for bringing this up.
The issue is due to missing instructions in the README for initializing 6ml/6ml/server.py
to run predictions locally.
This is on me. Give me some time and I'll provide instructions to set that up locally and fix this issue.
As noted earlier, I found that the issue was because the ML route wasn't initialized, and even after initialization it wasn't hitting the correct route. I've fixed it and pushed the changes to the `localhost` branch. Just follow the updated README.md to set it up again.
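For anyone hitting the same error, the root cause is a route mismatch: the server must register a handler for the exact path the frontend fetches. Here is a minimal stdlib sketch of that idea — the `/predict` path, port, and response shape are hypothetical, not the actual routes in `server.py`:

```python
# Sketch only: illustrates why an unregistered/mismatched route makes the
# frontend report "failed to fetch predictions". Path and payload are
# hypothetical -- check server.py for the real route names.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PredictionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/predict":  # must match the path the frontend calls
            body = json.dumps({"prediction": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # Any other path -> 404, which the frontend surfaces as a
            # failed fetch even though the server itself is running.
            self.send_error(404)
```

If the frontend calls a path the server never registered, the request 404s and the results page shows the fetch error even though the server process is up.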
Here are the key changes to the installation instructions that were missing earlier:
1. Ensure Python 3.8+ is installed.
2. Install pip for installing Python dependencies.
3. Install the Python dependencies for the `./6ml/6ml` server:

   ```shell
   cd ./6ml/6ml
   pip install -r requirements.txt
   ```

4. Start the ML server:

   ```shell
   cd ./6ml/6ml
   python server.py
   ```
Let me know if you run into any other issues!
Description: When accessing the results page, the error message "failed to fetch predictions. please try again" consistently appears, preventing the predictions from being displayed correctly.
Details:
- Error Message: "failed to fetch predictions. please try again."
- Location: Results page
- Branch: Cloned branch (not on personal fork)
- Logs: No relevant logs observed in the frontend or backend consoles.

Environment:
- Operating System: Windows 11
- Node.js Version: 20.10.0

Additional Notes: Please let me know if there are specific files or components to review for debugging. I will also check for any related open issues, especially those concerning chatbot functionality, as they may provide relevant solutions or insights.
Thank you!