Closed: mrheinen closed this 2 days ago
Here are some key observations to aid the review process:
⏱️ **Estimated effort to review:** 4 🔵🔵🔵🔵⚪

🧪 **PR contains tests**

🔒 **Security concerns**

Command Injection: The changes to the command injection prompt in 'pkg/backend/responder/llm_prompts.go' might make it less restrictive. This could potentially allow for command injection if not properly sanitized or validated. Careful review of the prompt and its usage is necessary to ensure it doesn't introduce security vulnerabilities.

⚡ **Recommended focus areas for review**

- **Potential Race Condition:** The concurrent execution of LLM requests might lead to race conditions when writing to the shared 'res' variable. Consider using a thread-safe approach for combining results.
- **Error Handling:** The CompleteMultiple function returns early on the first error encountered. Consider collecting all errors and returning them along with successful results (a combined sketch addressing this and the race condition follows this list).
- **Prompt Security:** The updated command injection prompt might be less restrictive. Ensure it doesn't allow for potential security vulnerabilities through command execution.
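A minimal sketch of one way to address both points together, assuming the `LLMManager`, `Complete`, and `multiplePoolSize` names from this PR and the `github.com/sourcegraph/conc/pool` dependency it adds; the mutex-guarded map, the `errors.Join` aggregation, and the method name `completeMultipleCollectingErrors` are illustrative rather than the code that was merged:

```go
import (
	"errors"
	"fmt"
	"sync"

	"github.com/sourcegraph/conc/pool"
)

// completeMultipleCollectingErrors is a hypothetical variant of CompleteMultiple:
// it guards the shared result map with a mutex and aggregates every per-prompt
// error instead of failing on the first one.
func (l *LLMManager) completeMultipleCollectingErrors(prompts []string) (map[string]string, error) {
	var (
		mu     sync.Mutex
		errs   []error
		result = make(map[string]string, len(prompts))
	)

	p := pool.New().WithMaxGoroutines(l.multiplePoolSize)
	for _, prompt := range prompts {
		prompt := prompt // capture the loop variable (required before Go 1.22)
		p.Go(func() {
			ret, err := l.Complete(prompt)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				errs = append(errs, fmt.Errorf("prompt %q: %w", prompt, err))
				return
			}
			result[prompt] = ret
		})
	}
	p.Wait()

	// errors.Join returns nil when errs is empty, so callers still receive the
	// successful completions alongside any aggregated failures.
	return result, errors.Join(errs...)
}
```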
Explore these optional code suggestions:
**Category: Enhancement**
**Add a timeout to the parallel completion process**

Consider adding a timeout or context with deadline to the CompleteMultiple call to ensure it doesn't run indefinitely if one of the completions takes too long.

[pkg/backend/responder/llm_responder.go [49-70]](https://github.com/mrheinen/lophiid/pull/70/files#diff-c3e93e5adb1e4d4cdb55ce0b82481329eb36fe0db2b23559380252ff22c5969dR49-R70)

```diff
 func (l *LLMManager) CompleteMultiple(prompts []string) (map[string]string, error) {
+	ctx, cancel := context.WithTimeout(context.Background(), l.completionTimeout)
+	defer cancel()
+
 	result := map[string]string{}
-	p := pool.New().WithErrors().WithMaxGoroutines(l.multiplePoolSize)
+	p := pool.New().WithErrors().WithMaxGoroutines(l.multiplePoolSize).WithContext(ctx)
 	for _, prompt := range prompts {
-		p.Go(func() error {
+		p.Go(func(ctx context.Context) error {
 			localPrompt := prompt
 			ret, err := l.Complete(localPrompt)
 			if err != nil {
 				return err
 			}
 			result[localPrompt] = ret
 			return nil
 		})
 	}
 	err := p.Wait()
 	return result, err
 }
```

- [ ] **Apply this suggestion**

Suggestion importance[1-10]: 8

Why: Adding a timeout to the CompleteMultiple function is crucial for preventing potential deadlocks or long-running operations. This significantly improves the reliability and responsiveness of the system.
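The diff above assumes a `completionTimeout` field on LLMManager that is not shown elsewhere in this PR. A caller-side alternative that needs no new fields is to bound the wait with a context; this is a rough sketch that assumes only the `CompleteMultiple` signature added in this PR (the `llmutil` package name, the `completer` interface, and the budget parameter are illustrative):

```go
package llmutil // hypothetical helper package, shown only for illustration

import (
	"context"
	"time"
)

// completer matches the CompleteMultiple method this PR adds to LLMManager.
type completer interface {
	CompleteMultiple(prompts []string) (map[string]string, error)
}

// completeWithDeadline waits for CompleteMultiple for at most the given budget.
// Only the wait is bounded here: the underlying completions keep running in the
// background if the deadline fires.
func completeWithDeadline(c completer, prompts []string, budget time.Duration) (map[string]string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), budget)
	defer cancel()

	type outcome struct {
		res map[string]string
		err error
	}
	ch := make(chan outcome, 1) // buffered so the goroutine can always finish

	go func() {
		res, err := c.CompleteMultiple(prompts)
		ch <- outcome{res, err}
	}()

	select {
	case o := <-ch:
		return o.res, o.err
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
```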
**Improve error handling for cases with no commands**

Consider handling the case where no commands are found more gracefully. Instead of returning early with a fallback string, you could log a warning and continue processing with an empty result.

[pkg/backend/responder/llm_responder.go [43-47]](https://github.com/mrheinen/lophiid/pull/70/files#diff-c3e93e5adb1e4d4cdb55ce0b82481329eb36fe0db2b23559380252ff22c5969dR43-R47)

```diff
 commands := util.SplitCommandsOnSemi(promptInput)
 if len(commands) == 0 {
-	slog.Debug("no commands found", slog.String("input", promptInput))
-	return strings.Replace(template, LLMReplacementTag, LLMReplacementFallbackString, 1), nil
+	slog.Warn("no commands found, continuing with empty result", slog.String("input", promptInput))
+	commands = []string{""}
 }
```

- [ ] **Apply this suggestion**

Suggestion importance[1-10]: 6

Why: The suggestion improves error handling by continuing execution with an empty command instead of returning early. This enhances robustness and provides more flexibility in handling edge cases.
**Category: Possible issue**
✅ **Use a thread-safe map for concurrent result storage**

Consider using a sync.Map instead of a regular map for the result in CompleteMultiple to avoid potential race conditions when writing results concurrently.

[pkg/llm/llm_manager.go [49-70]](https://github.com/mrheinen/lophiid/pull/70/files#diff-079422eef64450f8a3d97a65b202d2a89a3850d6dd9282c9a8f930bd3731c26aR49-R70)

```diff
 func (l *LLMManager) CompleteMultiple(prompts []string) (map[string]string, error) {
-	result := map[string]string{}
+	var result sync.Map
 	p := pool.New().WithErrors().WithMaxGoroutines(l.multiplePoolSize)
 	for _, prompt := range prompts {
 		p.Go(func() error {
 			localPrompt := prompt
 			ret, err := l.Complete(localPrompt)
 			if err != nil {
 				return err
 			}
-			result[localPrompt] = ret
+			result.Store(localPrompt, ret)
 			return nil
 		})
 	}
 	err := p.Wait()
-	return result, err
+	finalResult := make(map[string]string)
+	result.Range(func(key, value interface{}) bool {
+		finalResult[key.(string)] = value.(string)
+		return true
+	})
+
+	return finalResult, err
 }
```

`[Suggestion has been applied]`

Suggestion importance[1-10]: 7

Why: Using a sync.Map instead of a regular map eliminates potential race conditions in concurrent operations. This enhances the thread safety and reliability of the code, especially in a multi-threaded environment.
**Improve error handling for empty or missing LLM responses**

Consider adding error handling for the case where resMap[prompt] is empty or invalid. This could help prevent potential issues if the LLM returns unexpected results.

[pkg/backend/responder/llm_responder.go [60-65]](https://github.com/mrheinen/lophiid/pull/70/files#diff-c3e93e5adb1e4d4cdb55ce0b82481329eb36fe0db2b23559380252ff22c5969dR60-R65)

```diff
 for _, prompt := range promptInputs {
 	val, ok := resMap[prompt]
-	if ok {
+	if ok && val != "" {
 		res += val
+	} else {
+		slog.Warn("Empty or missing response for prompt", slog.String("prompt", prompt))
 	}
 }
```

- [ ] **Apply this suggestion**

Suggestion importance[1-10]: 5

Why: The suggestion adds a check for empty responses and logs a warning, which improves error handling and debugging capabilities. While useful, it's a minor enhancement compared to the other suggestions.
💡 Need additional feedback? Start a PR chat
**Action:** build

**Failed stage:** [Build](https://github.com/mrheinen/lophiid/actions/runs/11581028370/job/32240847919) [❌]
**Failure summary:**
The action failed due to a compilation error in the Go code: in pkg/llm/llm_manager.go at line 50, column 16, the sync package is used but not imported. This broke the build of the pkg/llm/llm_test.internal.a package.
Relevant error logs:

```yaml
1: ##[group]Operating System
2: Ubuntu
...
574: [145 / 338] GoCompilePkg external/com_github_jackc_pgx_v5/pgtype/pgtype.a; 3s linux-sandbox ... (4 actions, 3 running)
575: [157 / 338] GoCompilePkg external/com_github_jackc_pgx_v5/pgtype/pgtype.a; 4s linux-sandbox ... (4 actions, 3 running)
576: [186 / 338] [Prepa] GoCompilePkg pkg/database/database_test~testmain.a ... (4 actions, 3 running)
577: [212 / 338] GoCompilePkg pkg/database/database.a; 0s linux-sandbox ... (4 actions, 3 running)
578: [226 / 338] GoCompilePkg external/org_golang_google_protobuf/internal/impl/impl.a; 0s linux-sandbox ... (4 actions running)
579: [234 / 338] GoCompilePkg external/org_golang_x_text/collate/collate.a; 1s linux-sandbox ... (4 actions running)
580: [257 / 338] GoCompilePkg external/org_golang_x_text/collate/collate.a; 2s linux-sandbox ... (4 actions running)
581: [277 / 338] GoCompilePkg external/org_golang_google_grpc/internal/transport/transport.a; 0s linux-sandbox ... (4 actions, 3 running)
582: ERROR: /home/runner/work/lophiid/lophiid/pkg/llm/BUILD.bazel:21:8: GoCompilePkg pkg/llm/llm_test.internal.a failed: (Exit 1): builder failed: error executing GoCompilePkg command (from target //pkg/llm:llm_test) bazel-out/k8-opt-exec-ST-d57f47055a04/bin/external/go_sdk/builder_reset/builder compilepkg -sdk external/go_sdk -installsuffix linux_amd64 -src pkg/llm/llm.go -src pkg/llm/llm_manager.go -src ... (remaining 37 arguments skipped)
583: Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
584: ##[error]pkg/llm/llm_manager.go:50:16: undefined: sync
585: compilepkg: error running subcommand external/go_sdk/pkg/tool/linux_amd64/compile: exit status 2
586: Use --verbose_failures to see the command lines of failed build steps.
587: INFO: Elapsed time: 65.047s, Critical Path: 45.59s
588: INFO: 285 processes: 59 internal, 226 linux-sandbox.
589: ERROR: Build did NOT complete successfully
590: ##[error]Process completed with exit code 1.
```
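The `undefined: sync` error on log line 584 follows from the applied sync.Map suggestion above: the new code references the sync package without importing it. Below is a minimal sketch of the implied fix in llm_manager.go; the other entry shown in the import block is an assumption based on the PR's new dependency, not the file's exact import list.

```go
// llm_manager.go import block: "sync" must be listed once sync.Map is used.
import (
	"sync"

	"github.com/sourcegraph/conc/pool"
)
```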
**PR Type**

Enhancement

**Description**

- Added a new config option `llm_concurrent_requests` to control the number of concurrent requests

**Changes walkthrough**
**5 files**

- **backend_main.go**: Add concurrent requests parameter to LLMManager
  - cmd/backend/backend_main.go
  - `NewLLMManager` call updated to include a new parameter, `cfg.Responder.LLMConcurrentRequests`
- **main.go**: Add concurrent requests to LLMManager in main
  - cmd/llm/main.go
  - `NewLLMManager` call updated to include a new parameter for concurrent requests (set to 5)
- **llm_prompts.go**: Refine command injection prompt
  - pkg/backend/responder/llm_prompts.go
  - Prompt updated regarding command-line commands, the `$?` substring and echo behavior
- **llm_responder.go**: Implement concurrent LLM lookups for command injection
  - pkg/backend/responder/llm_responder.go
  - Uses the `CompleteMultiple` method from LLMManager for parallel processing (see the sketch after this walkthrough)
- **llm_manager.go**: Implement concurrent LLM completions in LLMManager
  - pkg/llm/llm_manager.go
  - Added `multiplePoolSize` field to LLMManager struct
  - Added `CompleteMultiple` method for parallel prompt completions
  - Updated `NewLLMManager` to include pool size parameter

**4 files**

- **config.go**: Add LLMConcurrentRequests to Responder config
  - pkg/backend/config.go
  - Added `LLMConcurrentRequests` to the Responder struct
  - Defaulted `LLMConcurrentRequests` to 5
- **backend-config.yaml**: Add concurrent requests config option
  - config/backend-config.yaml
  - Added `llm_concurrent_requests` with default value 5
- **BUILD.bazel**: Update Bazel build for responder tests
  - pkg/backend/responder/BUILD.bazel
  - Added new dependencies for responder tests
- **BUILD.bazel**: Update Bazel build for LLM package
  - pkg/llm/BUILD.bazel
  - Added new dependency `@com_github_sourcegraph_conc//pool`

**2 files**

- **llm_responder_test.go**: Add tests for command injection responder
  - pkg/backend/responder/llm_responder_test.go
- **llm_manager_test.go**: Add tests for concurrent LLM completions
  - pkg/llm/llm_manager_test.go
  - Added tests for `CompleteMultiple` method

**3 files**

- **deps.bzl**: Add concurrent library dependency
  - deps.bzl
  - Added `com_github_sourcegraph_conc`
  - Added `org_uber_go_multierr`
- **go.mod**: Add concurrent library to go.mod
  - go.mod
  - Added new dependency `github.com/sourcegraph/conc v0.3.0`
- **go.sum**: Update go.sum with new dependency
  - go.sum
  - Added `github.com/sourcegraph/conc v0.3.0`
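As a closing note, here is a rough sketch of the concurrent responder flow referenced in the llm_responder.go entry above: split the injected input into commands, fan per-command prompts out through `CompleteMultiple`, and stitch the answers back into the response template. The `util.SplitCommandsOnSemi`, `LLMReplacementTag`, `LLMReplacementFallbackString`, and `CompleteMultiple` names come from the PR's diffs; the `LLMResponder` receiver, the `llmManager` field, the `commandInjectionPrompt` constant, the import path, and the prompt layout are assumptions for illustration only.

```go
import (
	"fmt"
	"strings"

	"lophiid/pkg/util" // assumed import path for the util package used in the PR's diffs
)

// respondCommandInjection is a hypothetical reconstruction of the concurrent
// lookup flow: one prompt per shell command, completed in parallel, with the
// results concatenated in the original command order.
func (l *LLMResponder) respondCommandInjection(promptInput, template string) (string, error) {
	commands := util.SplitCommandsOnSemi(promptInput)
	if len(commands) == 0 {
		// No commands found: fall back to the canned replacement string.
		return strings.Replace(template, LLMReplacementTag, LLMReplacementFallbackString, 1), nil
	}

	// Build one prompt per command so each can be completed independently.
	prompts := make([]string, 0, len(commands))
	for _, cmd := range commands {
		prompts = append(prompts, fmt.Sprintf("%s\nCommand: %s", commandInjectionPrompt, cmd))
	}

	resMap, err := l.llmManager.CompleteMultiple(prompts)
	if err != nil {
		return "", err
	}

	// Iterate over prompts (not the map) to keep the output in command order.
	var res string
	for _, prompt := range prompts {
		if val, ok := resMap[prompt]; ok {
			res += val
		}
	}
	return strings.Replace(template, LLMReplacementTag, res, 1), nil
}
```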