SmartManoj / Kevin

⚡ Kevin: Code Quick, Create Fast
MIT License

Set num_ctx config #27

Closed SmartManoj closed 3 months ago

SmartManoj commented 3 months ago

https://github.com/OpenDevin/OpenDevin/pull/2927

fixes https://github.com/OpenDevin/OpenDevin/issues/2924

fixes https://github.com/OpenDevin/OpenDevin/issues/3287

SmartManoj commented 3 months ago

@CodiumAI-Agent /review

CodiumAI-Agent commented 3 months ago

PR Reviewer Guide 🔍

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Key issues to review

Hardcoded Dependency
The code assumes the presence of specific attributes (`max_input_tokens` and `max_output_tokens`) in the `config` object only when the model starts with 'ollama'. This could lead to issues if the config does not include these attributes or if the model naming convention changes.

Missing Validation
There is no validation to check if `max_input_tokens` or `max_output_tokens` are integers or within a valid range before using them in the calculation. This could lead to runtime errors or incorrect configurations.
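The exact code under review is not quoted in the thread, but from the findings above it presumably resembles the following minimal sketch (the `_Config` stub and attribute values are assumptions for illustration):

```python
# Hypothetical stand-in for the PR's config object; the real one lives in OpenDevin.
class _Config:
    model = 'ollama/llama3'
    max_input_tokens = 4096
    max_output_tokens = 1024

config = _Config()

# The pattern the reviewer flags: the attributes are read unconditionally
# once the model-name check passes, and nothing validates their types.
# A missing attribute would raise AttributeError here.
if config.model.startswith('ollama'):
    num_ctx = config.max_input_tokens + config.max_output_tokens  # 5120
```

If `max_input_tokens` were absent or, say, a string, the failure would surface only at this line, at runtime.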
SmartManoj commented 3 months ago

@CodiumAI-Agent /ask

> This could lead to issues if the config does not include these attributes or if the model naming convention changes.

The config ensures its values.

> There is no validation to check if max_input_tokens or max_output_tokens are integers

mypy ensures the types; a range check is not needed here.
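The point about mypy can be illustrated with a typed config. This is a sketch, not OpenDevin's actual config class, but it shows why runtime `isinstance` checks add little when the fields are declared as `int` and the code is type-checked:

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    # With typed fields, mypy flags any call site that passes a non-int
    # statically, so the attributes are guaranteed to exist and be ints
    # by the time this object is used.
    model: str = 'ollama/llama3'
    max_input_tokens: int = 4096
    max_output_tokens: int = 1024

cfg = LLMConfig()
num_ctx = cfg.max_input_tokens + cfg.max_output_tokens
assert num_ctx == 5120
```

The trade-off is that this guarantee holds only for code paths mypy actually checks; values loaded from untyped sources (environment variables, TOML) still need parsing into the typed config first.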

CodiumAI-Agent commented 3 months ago

Ask

> This could lead to issues if the config does not include these attributes or if the model naming convention changes.

The config ensures its values.

> There is no validation to check if max_input_tokens or max_output_tokens are integers

mypy ensures the types; a range check is not needed here.

Answer:

The concerns raised in the PR questions are valid. Let's address them one by one:

  1. Missing Attributes in Config:

    • Issue: If max_input_tokens or max_output_tokens are not present in the configuration, the code will raise an AttributeError.
    • Solution: Add a check to ensure these attributes exist in the configuration before using them.
  2. Model Naming Convention:

    • Issue: If the model naming convention changes, the condition self.config.model.startswith('ollama') might not work as intended.
    • Solution: Ensure that the model naming convention is well-documented and consistent. Alternatively, consider using a more robust method to identify the model type.
  3. Validation of max_input_tokens and max_output_tokens:

    • Issue: There is no validation to ensure that max_input_tokens and max_output_tokens are integers.
    • Solution: Add validation to check if these values are integers before performing arithmetic operations.

Here is an improved version of the code with these considerations:

if self.config.model.startswith('ollama'):
    max_input_tokens = getattr(self.config, 'max_input_tokens', None)
    max_output_tokens = getattr(self.config, 'max_output_tokens', None)

    if max_input_tokens is not None and max_output_tokens is not None:
        if isinstance(max_input_tokens, int) and isinstance(max_output_tokens, int):
            litellm.OllamaConfig.num_ctx = max_input_tokens + max_output_tokens
        else:
            raise ValueError("max_input_tokens and max_output_tokens must be integers")
    else:
        raise AttributeError("max_input_tokens and max_output_tokens must be defined in the config")

Explanation:

  1. Attribute Check: getattr(self.config, 'max_input_tokens', None) and getattr(self.config, 'max_output_tokens', None) are used to safely retrieve the attributes. If they do not exist, None is returned.
  2. Type Validation: isinstance(max_input_tokens, int) and isinstance(max_output_tokens, int) ensures that both values are integers.
  3. Error Handling: Raises appropriate errors if the attributes are missing or if they are not integers.

This approach ensures that the code is more robust and less prone to runtime errors.
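To see the proposed logic in action without touching `litellm`, the snippet can be exercised standalone by wrapping it in a function that returns `num_ctx` instead of assigning `litellm.OllamaConfig.num_ctx` (the `SimpleNamespace` stub stands in for the real config):

```python
from types import SimpleNamespace

def compute_num_ctx(config):
    """Adaptation of the snippet above: returns num_ctx rather than
    mutating litellm.OllamaConfig, so it can be tested in isolation."""
    if config.model.startswith('ollama'):
        max_input_tokens = getattr(config, 'max_input_tokens', None)
        max_output_tokens = getattr(config, 'max_output_tokens', None)
        if max_input_tokens is not None and max_output_tokens is not None:
            if isinstance(max_input_tokens, int) and isinstance(max_output_tokens, int):
                return max_input_tokens + max_output_tokens
            raise ValueError("max_input_tokens and max_output_tokens must be integers")
        raise AttributeError("max_input_tokens and max_output_tokens must be defined in the config")
    return None  # non-ollama models are untouched

# Happy path: 4096 + 1024 = 5120
ok = compute_num_ctx(SimpleNamespace(model='ollama/llama3',
                                     max_input_tokens=4096,
                                     max_output_tokens=1024))
assert ok == 5120

# Non-integer value is rejected instead of silently concatenating
try:
    compute_num_ctx(SimpleNamespace(model='ollama/llama3',
                                    max_input_tokens='4096',
                                    max_output_tokens=1024))
except ValueError:
    pass
```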