Closed: victoralfaro-dotcms closed this issue 2 months ago.
IQA: Failed. Updating the model at the dotAI App level for the System Host does not propagate the change to demo.dotcms.com, leaving the latter with an old config.
Last edit on dotAI App for SYSTEM_HOST
Internal models for demo.dotcms.com are not updated with the latest updates to SYSTEM_HOST.
Failed IQA. If you configure the App only for the System Host and set a different model there, the default model of demo.dotcms.com is still used, which means there is no way to set a system-wide model.
Important note: after restarting dotCMS, the value/model set for the System Host starts being used for the requests.
In the given scenario, the model gpt-3.5-turbo-16k is used instead of gpt-3.5-turbo.
QA Comment
List of test cases created for this card: https://github.com/dotCMS/core/issues/29567#issuecomment-2287084990
Note from QA:
These changes could affect part of the AI functionality, which is why we need to split the work for the QA regression.
Based on this, we estimate that a quick regression is needed in some key identified areas. @josemejias11 has already shared a CSV file with the tests that need to be covered by the team. Please coordinate the distribution, @dsilvam. Anyway, here is the direct link: https://shorturl.at/dov6V
Additionally, we will focus on the regression of the sub-actions to ensure everything continues working normally and that nothing was affected by the changes.
Once all the mentioned work is done, we'll be ready to release. Thanks for your help, team!
IQA Findings:
The model name, tokens per minute, API per minute, max number of tokens, and completion model flag are being taken from the config fields and sent in the request with the new values. Set com.dotcms.ai.debug.logging to true to see the requests in the server log.
https://github.com/user-attachments/assets/d4d0e073-e288-479f-a966-caaacc3964ae
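For anyone reproducing this, here is a minimal sketch of one way to enable that flag in a Docker-based install. It assumes the property is resolved through dotCMS's standard Config lookup, which allows overriding a property with an environment variable prefixed with DOT_ and with dots replaced by underscores; the service name is just an example. Setting the property directly in a properties override should work as well.

```yaml
# docker-compose.yml fragment (illustrative only).
# Assumption: com.dotcms.ai.debug.logging is read via dotCMS Config, so it can be
# overridden with a DOT_-prefixed environment variable.
services:
  dotcms:
    environment:
      DOT_COM_DOTCMS_AI_DEBUG_LOGGING: "true"
```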
Image Model Names.
Approved: Tested on trunk_bffd903, Docker, macOS 14.5, FF v126.0.1
Parent Issue
https://github.com/dotCMS/core/issues/28813
User Story
As a Java developer, I want to be able to relocate the hardcoded OpenAI models in our code to the dotAI application, specifically as parameters for each field that makes up a model: model names, tokens per minute, API per minute, max number of tokens, and a flag indicating whether it is a completion model. The changes will include the dotAI.yml Application descriptor file and the models endpoint. Consider that the Model Names field is actually a comma-delimited list of models. This will need to happen for both text and image models, and changes in the dotAI.yml file will result in UI changes in the Apps portlet that QA will see when testing.
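To make the shape of the change concrete, below is a minimal sketch of what one text-model entry in dotAI.yml could look like, following the standard dotCMS Apps descriptor format. The parameter keys, labels, types, and default values are assumptions for illustration only; only the set of fields (model names, tokens per minute, API per minute, max number of tokens, completion flag) comes from this issue.

```yaml
# Hypothetical dotAI.yml fragment. Parameter keys and defaults are illustrative
# assumptions, not the final names shipped with the app.
params:
  textModelNames:
    value: "gpt-3.5-turbo,gpt-3.5-turbo-16k"   # comma-delimited list of models
    hidden: false
    type: "STRING"
    label: "Model Names"
    hint: "Comma-delimited list of text models, in order of preference"
    required: true
  textModelTokensPerMinute:
    value: "180000"
    hidden: false
    type: "STRING"
    label: "Tokens per Minute"
    hint: "Rate limit for the model, in tokens per minute"
    required: false
  textModelApiPerMinute:
    value: "3500"
    hidden: false
    type: "STRING"
    label: "API per Minute"
    hint: "Rate limit for the model, in API calls per minute"
    required: false
  textModelMaxTokens:
    value: "16384"
    hidden: false
    type: "STRING"
    label: "Max Number of Tokens"
    hint: "Maximum number of tokens the model accepts"
    required: false
  textModelCompletion:
    value: "true"
    hidden: false
    type: "BOOL"
    label: "Completion Model"
    hint: "Whether the model is a completion model"
    required: false
```

Per the user story, an equivalent set of parameters would also be needed for the image models.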
Acceptance Criteria
- The dotAI.yml Application descriptor file includes the fields: model names, tokens per minute, API per minute, max number of tokens, and completion model flag.
- The model values are taken from dotAI.yml instead of the code.
- ... dotAI.yml.
dotCMS Version
master
Proposed Objective
Core Features
Proposed Priority
Priority 2 - Important
External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
To define.
Assumptions & Initiation Needs
The dotAI.yml format and structure are predefined and agreed upon.
Quality Assurance Notes & Workarounds
- Verify that the dotAI.yml configurations are correctly read and applied in various scenarios.
- Set the text, image, and embeddings models to gpt-3.5-turbo-16k, dall-e-3, and text-embedding-ada-002 respectively. Add the following at the Auto Index Content Config field: ...
- On a Blog content, go to the SEO tab and verify that AI content is generated by hitting the button for the SEO metadata field.
- Go to Dev Tools -> dotAI; at the Manage Embeddings/Indexes tab, at Index Name specify aiBlog, at the Content to Index by Query: field specify +contentType:blog, and build an index based on that. Verify that the index is created/updated.