jekalmin / extended_openai_conversation

A Home Assistant custom conversation agent component that uses OpenAI to control your devices.

History function exceeds context length #157

Open · mgc8 opened this issue 4 months ago

mgc8 commented 4 months ago

First of all, thanks for this awesome integration, it turns HA into an actual Assistant!

I am using a few custom functions to add more functionality, among them the get_history one defined here.

The function works well, but in some instances (e.g. if I ask about a sensor with a very long history), it returns so many results that it overflows the context length and I get an error from the API (which is fortunate, otherwise the costs from that many tokens could quickly add up!). It would be good to have a way to limit the size of the ns.result returned to a specified maximum (e.g. 2048/4096/etc.), truncating the results to the most recent historical values up to that point. That would solve the issue nicely, while avoiding overflows or extreme costs from the API.
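Something along these lines in the function's value_template is roughly what I have in mind; note that max_items is just an invented name, this is a count-based cap rather than an exact token cap, and I'm assuming the history_result structure of the linked get_history function (a list of state lists, one per entity):

```jinja
{#- Hypothetical sketch: keep only the most recent max_items states
    per entity before handing the result to the model. -#}
{%- set max_items = 50 %}
{%- set ns = namespace(result = []) %}
{%- for item_list in history_result %}
    {#- history states are ordered oldest to newest, so keep the tail -#}
    {%- set ns.result = ns.result + [item_list[-max_items:]] %}
{%- endfor %}
{{ ns.result }}
```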

jekalmin commented 4 months ago

Thanks for reporting an issue.

Maybe we can apply paging, but I'm not quite sure whether gpt can tell the user that there is more data. I added a last property to signal to gpt that more data exists, but the result doesn't seem satisfactory.

```yaml
- spec:
    name: get_history
    description: Retrieve historical data of specified entities.
    parameters:
      type: object
      properties:
        entity_ids:
          type: array
          items:
            type: string
            description: The entity id to filter.
        start_time:
          type: string
          description: Start of the history period in "%Y-%m-%dT%H:%M:%S%z".
        end_time:
          type: string
          description: End of the history period in "%Y-%m-%dT%H:%M:%S%z".
        page:
          type: integer
          description: The page number (starting from 1)
        page_size:
          type: integer
          description: The page size; defaults to 10
      required:
      - entity_ids
      - page
      - page_size
  function:
    type: composite
    sequence:
      - type: native
        name: get_history
        response_variable: history_result
      - type: template
        value_template: >-
          {%- set ns = namespace(result = [], list = [], last = True) %}
          {%- for item_list in history_result %}
              {%- set ns.list = [] %}
              {%- for item in item_list %}
                  {% if (page-1) * page_size < loop.index and loop.index <= page * page_size %}
                    {%- set last_changed = item.last_changed | as_timestamp | timestamp_local if item.last_changed else None %}
                    {%- set new_item = dict(item, last_changed=last_changed) %}
                    {%- set ns.list = ns.list + [new_item] %}
                  {% endif %}
              {%- endfor %}
              {%- set ns.result = ns.result + [ns.list] %}
              {%- if ns.last == True and ns.list | length == page_size %}
                {%- set ns.last = False %}
              {%- endif %}
          {%- endfor %}
          {{ dict(result=ns.result, last=ns.last) }}
```
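For example, with page: 1 and page_size: 1 on an entity that has more than one recorded state, the template renders roughly the following (values invented for illustration):

```
{'result': [[{'state': '21.5', 'last_changed': '2024-01-15 10:00:00'}]], 'last': False}
```

Here last: False signals that another page may exist, while last: True would mean the final page was reached.
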
mgc8 commented 4 months ago

Hi @jekalmin , thank you for the quick reply!

I applied the changes you suggested, and can confirm that they solve the immediate problem -- i.e. the Assistant now provides an answer from those sensors, despite there being a lot of records, and there appear to be no more API errors. When asked, it responds that it only has data for "today", starting from 00:00. I find that quite sufficient for the needs of this tool; of course it would be amazing if it could be smart enough to realise there's more data and construct its own queries over different time periods, but that may be a bit too ambitious. It's better to limit the responses than to send 150,000+ tokens to GPT-4 each time ;)

I am not very familiar with the templating system in use here or the variables/functions available, so I'm afraid I can't help with extending the functionality, but so far it's working pretty well as it is. Thanks again!

jleinenbach commented 3 months ago

I sought a solution from ChatGPT because my GPT Assist was overlooking the most recent entries. Following its advice, I made a simple yet effective modification: adding | reverse to the two loop lines in the template:

```jinja
{%- for item_list in history_result | reverse %}
{%- for item in item_list | reverse %}
```

With these changes, the output now accurately reflects the latest activity, for example when querying the last state change of a sensor.
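For reference, this is the full value_template from above with both reverse filters applied; the paging logic is unchanged, so page 1 now holds the most recent page_size states of each entity:

```jinja
{%- set ns = namespace(result = [], list = [], last = True) %}
{%- for item_list in history_result | reverse %}
    {%- set ns.list = [] %}
    {%- for item in item_list | reverse %}
        {% if (page-1) * page_size < loop.index and loop.index <= page * page_size %}
          {%- set last_changed = item.last_changed | as_timestamp | timestamp_local if item.last_changed else None %}
          {%- set new_item = dict(item, last_changed=last_changed) %}
          {%- set ns.list = ns.list + [new_item] %}
        {% endif %}
    {%- endfor %}
    {%- set ns.result = ns.result + [ns.list] %}
    {%- if ns.last == True and ns.list | length == page_size %}
      {%- set ns.last = False %}
    {%- endif %}
{%- endfor %}
{{ dict(result=ns.result, last=ns.last) }}
```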

jleinenbach commented 3 months ago

To mitigate token length issues, transitioning to GPT-4-turbo-preview is an option. Nonetheless, extensive datasets can still lead to reaching the token limit. This was my experience when calculating the house's energy consumption (don't do that), after which I had to restart Home Assistant due to persistent error messages.

mgc8 commented 3 months ago

> To mitigate token length issues, transitioning to GPT-4-turbo-preview is an option. Nonetheless, extensive datasets can still lead to reaching the token limit. This was my experience when calculating the house's energy consumption (don't do that), after which I had to restart Home Assistant due to persistent error messages.

I'm already using the latest model, which has a 128k-token context, but the data sent by Home Assistant was regularly exceeding even that... The above changes do a good job of limiting the query size, though with the disadvantage that older data points are out of reach. A "smarter" prompt might be able to solve that (see the sketch below), but so far it's working quite nicely.
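For instance, a hint along these lines in the system prompt might nudge it to page through older data when needed (the wording is just an illustration, not something I've tested):

```
The get_history function returns paged data. If its response contains
"last": False, more data may exist; call get_history again with the next
page number when the question requires older values.
```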