Closed: random-yang closed this issue 5 months ago.
Yeah I think so.
On Thu, Jun 13, 2024 at 2:59 AM, RandomYang wrote:
Is this all expected?
- Even when using gpt3.5 in combination with gpt4, the speed is still extremely slow.
- The output still shows the instability that is characteristic of large models.
- Quality does improve, but occasionally the result is actually worse than the initial translation.
- And the token cost is several times that of a single-pass translation.
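For context on the cost point: the agentic flow used here makes several model calls per text (roughly a translate, a reflect, and an improve step), and each later prompt carries the earlier outputs, so tokens compound. A rough, purely illustrative estimate under those assumptions (all token counts below are made up, not measurements):

```python
# Back-of-the-envelope comparison of a single-pass translation vs. the
# translate -> reflect -> improve flow. All numbers are illustrative assumptions.

SOURCE = 1_000       # tokens in the source text
DRAFT = 1_000        # initial translation, assumed to be roughly the same size
REFLECTION = 300     # critique/suggestions produced by the reflection step
FINAL = 1_000        # improved translation

# Single pass: send the source once, get one translation back.
single_pass = SOURCE + DRAFT

# Agentic flow: each later step re-sends the earlier text in its prompt.
agentic = (
    (SOURCE + DRAFT)                          # step 1: initial translation
    + (SOURCE + DRAFT + REFLECTION)           # step 2: reflect on the draft
    + (SOURCE + DRAFT + REFLECTION + FINAL)   # step 3: improved translation
)

print(f"single pass:  {single_pass} tokens")
print(f"agentic flow: {agentic} tokens (~{agentic / single_pass:.1f}x)")
```

With these assumed sizes the agentic flow lands at roughly 3–4x the tokens of a single pass, which is consistent with the "several times" observation above.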
It's too expensive
Yes, so I packaged it into a Coze bot, and it is still free, so other than being a little slow, it's pretty good! If you are interested, here is the link: https://www.coze.com/s/Zs8MEgGcR/
Hey @random-yang, thanks for your issue. Yes, I think all of the points you mentioned are expected. For clarity, however, I figured I could elaborate on them somewhat:
- We were only focused on optimizing for output quality in this process, not latency, although I recognize that latency could be important to some people and applications.
- The non-determinism of a model isn't something we can control, but we can see to what extent it affects the application by evaluating the outputs and tweaking the process accordingly.
- Good point! It could be worth checking to what degree translations are made "worse" by reflection, which we have found tends to be a somewhat subjective term in MT.
- That is true. As mentioned, we were trying to optimize for translation quality. We hope this process can be applied with local open-source (free) models as well (see the sketch below).
Closing this issue for now, due to no resolution, but if you have suggestions about how to address these points, we gladly welcome a proposal!
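On that last point, one way this could work in practice, assuming the completion calls go through the OpenAI Python client, is to point that client at a local OpenAI-compatible server (for example, Ollama's `/v1` endpoint). This is only a sketch of the general idea, not code from this repo; the endpoint, model name, and helper function are assumptions:

```python
# Minimal sketch: reuse the same chat-completions interface against a local,
# OpenAI-compatible server instead of the OpenAI API. The endpoint and model
# name are assumptions (here, an Ollama server running llama3 on its default port).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; local servers usually ignore it
)

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """One translation step; the reflect/improve steps would reuse the same client."""
    response = client.chat.completions.create(
        model="llama3",
        messages=[
            {"role": "system", "content": f"You translate {source_lang} text to {target_lang}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(translate("你好，世界", "Chinese", "English"))
```

The only change relative to a hosted model would be the `base_url` and `model` values; the prompts and the overall translate/reflect/improve loop stay the same.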
Wow