Open sweep-ai[bot] opened 12 months ago
src/utils/tiktoken-server.ts ✓
Checked src/utils/tiktoken-server.ts with contents.
Ran GitHub Actions for 09d72442ebe25ea72693afd406fe601d703d1b27:
• Vercel Preview Comments: ✓
src/pages/api/generate.ts ✓
Checked src/pages/api/generate.ts with contents.
Ran GitHub Actions for 30a5ea4d0bdc06e092563c96327c3e11eeb3cff2:
• Vercel Preview Comments: ✓
The latest updates on your projects. Learn more about Vercel for Git ↗︎
Name | Status | Preview | Comments | Updated (UTC)
---|---|---|---|---
44444444444 | ❌ Failed (Inspect) | | | Dec 23, 2023 2:11pm
emphasize | ❌ Failed (Inspect) | | | Dec 23, 2023 2:11pm
endless-chat | ❌ Failed (Inspect) | | | Dec 23, 2023 2:11pm
free-chat | ❌ Failed (Inspect) | | | Dec 23, 2023 2:11pm
free-chat-personal | ❌ Failed (Inspect) | | | Dec 23, 2023 2:11pm
Hi there! :wave: Thanks for opening a PR. :tada: To get the most out of Senior Dev, please sign up in our Web App, connect your GitHub account, and add/join your organization CNSeniorious000. After that, you will receive code reviews beginning on your next opened PR. :rocket:
Name | Link |
---|---|
Latest commit | 44b8b8a284c1db023694b31530e985272d082859 |
Latest deploy log | https://app.netlify.com/sites/endless-chat/deploys/6580a67726c50f00089cd7fb |
Hey there! I've created a report for the pull request based on the commit history. Let's dive in!
src/pages/api/generate.ts: imports `countTokensServer` from `@/utils/tiktoken-server`, uses the `minMessages` and `maxTokens` constants, and calls `initOptions` with the trimmed `messages`.

In src/pages/api/generate.ts, consider extracting the token counting and trimming logic into a separate function for better readability and maintainability. Here's a suggested refactoring:

```ts
// Add this function at the top of the file
const trimMessages = (messages: ChatMessage[], maxTokens: number, minMessages: number) => {
  let trimmedMessages = [...messages];
  let tokenCount = countTokensServer(null, trimmedMessages).total;
  while (tokenCount > maxTokens && trimmedMessages.length > minMessages) {
    trimmedMessages.shift();
    tokenCount = countTokensServer(null, trimmedMessages).total;
  }
  return trimmedMessages;
};

// Update the post function
export const post: APIRoute = async ({ request }) => {
  // ...
  const trimmedMessages = trimMessages(messages, maxTokens, minMessages);
  // ...
};
```
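To make the suggested trimming loop concrete, here is a self-contained sketch of its behavior. The `countTokensServer` stub below is a hypothetical stand-in that counts whitespace-separated words instead of real Tiktoken tokens, purely so the example runs on its own; the `ChatMessage` shape is likewise assumed, not taken from the PR.

```typescript
// Minimal message shape assumed for this sketch.
interface ChatMessage {
  role: string;
  content: string;
}

// Hypothetical stand-in for countTokensServer: counts words, not real tokens.
const countTokensServer = (_model: null, messages: ChatMessage[]) => ({
  total: messages.reduce((sum, m) => sum + m.content.split(/\s+/).length, 0),
});

// Same trimming logic as the suggested refactor: drop the oldest messages
// until the budget fits or only minMessages remain.
const trimMessages = (messages: ChatMessage[], maxTokens: number, minMessages: number) => {
  let trimmedMessages = [...messages];
  let tokenCount = countTokensServer(null, trimmedMessages).total;
  while (tokenCount > maxTokens && trimmedMessages.length > minMessages) {
    trimmedMessages.shift();
    tokenCount = countTokensServer(null, trimmedMessages).total;
  }
  return trimmedMessages;
};

const history: ChatMessage[] = [
  { role: "user", content: "one two three four" }, // 4 "tokens"
  { role: "assistant", content: "five six" },      // 2 "tokens"
  { role: "user", content: "seven eight nine" },   // 3 "tokens"
];

// Budget of 6 forces the oldest message out: 4 + 2 + 3 = 9 > 6, then 2 + 3 = 5 fits.
const trimmed = trimMessages(history, 6, 1);
console.log(trimmed.length); // 2
```

Note that trimming from the front preserves the most recent context, which is usually what a chat endpoint wants; a real implementation may also want to always keep a leading system message.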
I would rate the code a 7 out of 10. The code is generally readable and performs well. However, there are a few areas that could be refactored for better readability, such as extracting the token counting and trimming logic into a separate function. Overall, the code seems to be secure and error handling is in place.
That's it for the pull request report! Let me know if you need any further assistance. Have a great day! 😄
Name | Link |
---|---|
Latest commit | 44b8b8a284c1db023694b31530e985272d082859 |
Latest deploy log | https://app.netlify.com/sites/chat-for-free/deploys/6580a677ca656500085be314 |
PR Feedback: 👎
Description
This pull request implements token counting on the server side.
Summary
- src/pages/api/generate.ts: updated to include token counting logic.
- src/utils/tiktoken-server.ts: added to handle token counting and initialization of Tiktoken.

Fixes #17.
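The summary describes src/utils/tiktoken-server.ts only at a high level. As a hedged sketch of what a server-side token counter might look like, the following uses a rough character-based heuristic (about 4 characters per token) standing in for a real Tiktoken encoding; the return shape and the `approxEncode` helper are illustrative assumptions, not code from the PR.

```typescript
interface ChatMessage {
  role: string;
  content: string;
}

// Stand-in encoder: ~4 characters per token is a common rough heuristic.
// A real module would initialize a Tiktoken encoding here instead.
const approxEncode = (text: string): number => Math.ceil(text.length / 4);

// Hypothetical shape of a server-side counter: per-message counts plus a
// total, mirroring how countTokensServer is used in the suggested refactor.
const countTokensServer = (_model: string | null, messages: ChatMessage[]) => {
  const counts = messages.map(m => approxEncode(m.content));
  return { counts, total: counts.reduce((a, b) => a + b, 0) };
};

const { total } = countTokensServer(null, [{ role: "user", content: "hello world" }]);
console.log(total); // 3 ("hello world" is 11 chars, ceil(11 / 4) = 3)
```

Counting on the server rather than the client avoids shipping the encoder to the browser and keeps the budget check authoritative, at the cost of a little extra work per request.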
🎉 Latest improvements to Sweep:

- Sweep can now use the `rope` library to refactor Python! Check out Large Language Models are Bad at Refactoring Code. To have Sweep refactor your code, try `sweep: Refactor <your_file>.py`!

💡 To get Sweep to edit this pull request, you can: