ddkwork closed this issue 3 months ago.
In addition, we need to add a copy button to the code blocks in the AI's Markdown-format responses so that the code can be copied with one click.
Almost all ready-made AI chat GUIs support syntax highlighting for code blocks; how should this be done?
If you want to use markdown, just call coredom.ReadMD. You can delete the existing children of the element every time new data is added, like this:
answer := gi.NewFrame(parent)
go func() {
	for {
		updt := answer.UpdateStartAsync()
		answer.DeleteChildren(true)
		// render into answer, the frame we just created
		grr.Log(coredom.ReadMDString(coredom.NewContext(), answer, currentAIResponseMarkdown))
		answer.UpdateEndAsyncLayout(updt)
	}
}()
If you need more help with this, feel free to let me know. This ollama GUI seems like an interesting project and a good use of Cogent Core.
Thanks. If this is done well, it could bring in a whole group of people to come over and test. But if you want it to look as good as the new Bing, it would be better for you to do this part.
I will consider working on the ollama GUI, and I will also work on implementing markdown code block highlighting soon.
ok
package main

import (
	"time"

	"cogentcore.org/core/coredom"
	"cogentcore.org/core/gi"
	"cogentcore.org/core/grr"
	"github.com/ddkwork/golibrary/stream"
)

func main() {
	b := gi.NewBody("ollama gui")
	answer := gi.NewFrame(b)
	go func() {
		// replay a saved markdown log line by line to simulate streaming
		file := stream.NewReadFile("D:\\workspace\\workspace\\branch\\gpt4\\ollama\\ollama\\log.log.md")
		lines, ok := file.ToLines()
		if !ok {
			return
		}
		for _, line := range lines {
			println(line)
			updt := answer.UpdateStartAsync()
			answer.DeleteChildren(true)
			grr.Log(coredom.ReadMDString(coredom.NewContext(), answer, line))
			answer.UpdateEndAsyncLayout(updt)
			time.Sleep(time.Second)
		}
	}()
	b.RunMainWindow()
}
Try calling answer.Update right before answer.UpdateEndAsyncLayout.
You should keep working on the ollama GUI if you want to, and I can help you if you get stuck. I may work on it more in the future, but I will focus on developing the framework for now.
No problem. My machine can run Google models, although not particularly smoothly: 9-16 tokens per second, barely usable, and it's okay for asking programming questions, compared to search engines full of ads. In addition, since China cannot access the GoLand AI assistant, I want to integrate the local model into our own IDE without having to pay for it. (Although I don't have access to the GoLand AI assistant, I never pay anyway, hehe 🙂.)

Because my memory is too small, I can only use a very small model, so the results may not be as good as GPT-3. Once I get a machine with 64-128 GB of memory or more, reaching GPT-3/4-level results is not a dream. At that point it could be integrated into the Cogent Core code package: we right-click on code to call the local model for code interpretation; batch translation of C, C++, Java, and even various scripting languages into Go; interpreting code and translating comments; where there are no comments, letting it write comments; letting the model do batch code refactoring... For the batch tasks, we just need to walk a directory in the click event. I'll design the expected layout first, and finally hand over to you the question of streaming the MD that the AI replies with 🖐️
Still not working
However, in order to turn these dreams into reality, we need to improve the performance of the code editor, as well as support for folding, syntax parsing, etc.; otherwise, not many people will use the editing function if the editing experience is not good. I really want more people to participate in testing this framework; with the two people from the other day, plus my non-stop testing, it has become much more stable.

I remember first learning about your project a year or two ago, when I couldn't compile it successfully and could only look at your demo pictures. This year, searching for a GUI framework, I accidentally saw your project again, and after many attempts it still wouldn't run. Then suddenly one day while taking a shower I thought of a trick: a Go workspace. I set up a workspace, deleted the contents of all the modules and the sum files as well, and finally ran go mod sync; in the end only 5-6 errors were reported, and after a mess of changes it was running. But the first time it ran, the zoom was not what it is now; it was scaled to almost 600%, and I could only force close with Alt+F4 many times, until I finally found that the global menu zoom could fix it. Do you remember the first time you asked me remotely whether I had to tune every app like this, or how I got it working? That's where that experience came from.

So testing and user experience are very important. As for panics fixed through testing, there have been a lot of them; not only should there be no panics, I think there should be no calls to panic in the code at all. Whether to panic and exit should be left to the user, otherwise data is likely to be lost. Also, I hope my suggestion that you merge all the repositories didn't make you uncomfortable, and if I'm not doing anything well, please let me know. Oh yes, I still haven't figured out the gtigen panic; if I could, I'd really like to test it on your computer.
I agree that the editing experience in Cogent Code needs to be improved, and I will work on that soon. Also, I will work more on getting rid of all of the panics.

I also want more people to use the framework, and I am planning to promote it and get more people using it after I do the v1 release. I am also planning to do alpha and beta releases before the v1 release to get more feedback before the first stable version.

I appreciate all of the feedback that you have given, and I would also appreciate it if you are willing to work on writing more unit tests for Cogent Core, since that is a way that you can contribute and speed up the progress of Cogent Core and the v1 release. If you are interested in doing that, you can look at the places where we are missing test coverage (look at https://raw.githack.com/wiki/cogentcore/core/coverage.html) and then write tests for those functions using the assert package. If you try writing a few tests, I can give you feedback and see whether that is a good option going forward. Of course, you do not have to write tests if you would rather work on projects using Cogent Core instead; it is just a possibility.
I can only say that I understand about 30% of your project, and valuable unit tests would require mocking and various frameworks, so for now I prefer to use real projects to test its functional integrity and find bugs.
Ok. There are many lower-level packages with individual functions that can be tested without mocking, but that is fine.
Also, I am making major changes to the way that updating works now, so that may fix your issue.
Layout not working as expected: https://github.com/ddkwork/ollamaGui
Do not put the run module button in the splits; it should always be the same size, so it should just be at the end of a column layout. Same with the "module choose" text; it should not be in a splits, just a column layout. You can increase the width of the text field by setting s.Min.X to something larger in a Style call. I will debug the markdown sync after I finish my updating changes and you apply the changes I stated above.
I debugged 👌🏻 for a long time this afternoon and found a problem: many of the reflection errors I saw before came from a nil pointer argument in the style callback function. However, after tuning for so long, I couldn't find where the structview instantiates the style.
I made the changes as you instructed, and the layout on the right still doesn't render as expected. I'm going to mock a list of tokens for you to use for sync MD now; I'll update it later, and then you can iterate over the tokens returned by the AI to debug sync MD. Also, the multi-line behavior of the multi-line text editing widget seems to be out of control: when I increase the width, only one line is displayed anyway.
I will try running it on my computer and seeing whether I can figure out the issues.
The multiline text field is multiline in that it grows to multiple lines if necessary, but it will not do so if there is not content to fill multiple lines. Your s.Min.X.Set call is not doing what you think; you are specifying a unit value of 200, which is invalid. You should do something like s.Min.X.Em(10). However, I actually think that a better idea for this situation would be to just let the text field take up the whole available space, which you can accomplish with s.Max.Zero().

Also, you do not need to and should not call s.SetTextWrap(true); that happens automatically now.
ok
Now it wraps when I type a little more, but I need it to default to a tall height; how should this work?
The tokens have been mocked.
working
I do not understand at all why you want to make the text input high by default; if I understand the use case correctly, typically these AI GUIs have a single line input that can wrap to multiple lines, like this for Google Gemini (screenshot omitted).
Also, again, I strongly recommend doing s.Max.X.Zero() instead of setting a fixed width so that it adapts better on all platforms.
Well, maybe I was misled by the new Bing; let's go with a single line then.
Bing also has a single line input at the start, with some buttons below it, but only one line of actual input (screenshot omitted).
There is absolutely no reason that the user would need multiple lines of input from the start.
Okay, I'll change it later. By the way, I've mocked the tokens; can you make the MD stream with them? In addition, I think I need to parse the ollama official website to populate the model list into the left view. I'll sleep for a while and try again. I remember there was a package called goquery for retrieving the DOM nodes of a web page, but I'm not familiar with how to add new root and child nodes to the treeview display widget.
Once you connect the app to the ollama API, we can figure out what streaming approach makes the most sense.
I've looked at the OpenAI package provided by ollama, and it only provides MD-format returns. In addition, I simulated the behavior of the client and correctly streamed the MD content returned by the model in the console. As long as the left view is completed and the MD is working properly, I think I can connect the simulated client and server communication.
That is good. My question is how the ollama package returns the data; is it an io.Reader? If it is an io.Reader, then I can give you advice on what to do.
https://github.com/ollama/ollama/blob/main/openai%2Fopenai.go#L255-L262
All my questions to the model are sent by the code here, and after a long wait the client shows me the MD that the model returns.
In addition, the body of the model's reply is the JSON content field, which stores the MD; the rest of the fields only record things like the access time and the number of tokens, which are not relevant to the answer hook. I feel that the body format is determined by the model rather than the server: for example, if you ask the model what time it is, it will definitely not reply in MD format, but if you ask about code, it will be MD every time. In any case, MD support is imperative.
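Given that description, a hedged sketch of extracting the MD from such a stream might decode each JSON line and concatenate the content fields; the field name "content" is taken from the description above, and the exact wire format of the real ollama API may differ.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// chunk models one streamed JSON line from the server; only the content
// field matters here, the metadata fields (timing, token counts) are ignored.
type chunk struct {
	Content string `json:"content"`
}

// collectContent scans the response body line by line, decodes each JSON
// chunk, and concatenates the content fields into one markdown string.
func collectContent(body string) (string, error) {
	var md strings.Builder
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue // skip blank lines between chunks
		}
		var c chunk
		if err := json.Unmarshal([]byte(line), &c); err != nil {
			return "", err
		}
		md.WriteString(c.Content)
	}
	return md.String(), sc.Err()
}

func main() {
	body := "{\"content\":\"# Hello\"}\n{\"content\":\"\\n\\nworld\"}"
	md, err := collectContent(body)
	fmt.Println(md, err)
}
```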
Again, I do not know how the ollama API works, but once you can get the MD results from ollama in the program, I can help you with the GUI part of things.
The model dictates the response format, and what I've determined so far is that the client only handles line breaks and whitespace. It's better to wait until your MD supports more features before connecting this; let's wait a while.
I do not understand. We support plenty of MD, and well more than enough for basic model functionality. Our MD support is not in any way a blocker for doing this. The only thing that needs to be resolved is how to get the response from the ollama API, which I am asking you how to do. Once you figure that out, I can tell you how to do all of the markdown syncing operations.
I'll plug directly into the client code, and you'll understand what I mean when you run it. Have you downloaded the model on your computer? Or come to my computer and watch. In the end, there is only one conclusion: the model responds with a data format appropriate to the question. If you ask about programming, it replies with MD; beyond that, a label is sufficient. But 90% of what I ask about is programming-related.
I think you should ask how the client handles the response rather than how the server responds; the server can't decide that. Here's how the client works: after initiating an HTTP request, it instantiates a bufio.Scanner to scan response.Body, which handles line breaks and blank lines, etc., so that the body of the reply reads properly, and finally it renders the MD.
I understand that it decides what to respond with, and I understand that it frequently responds with MD. What I do not understand is what the problem with that is. We are perfectly capable of rendering and updating MD, so I do not understand at all why that is a problem. Again, once you can get the program to get the response data, I can write all of the logic that handles the MD rendering for you.
So we just need to pass the MD render function into the place where we traverse the body, but at the moment I can't get it to stream while preserving the previously rendered content.
I mocked the tokens just to make it easier for you to debug the sync MD; otherwise you'd have to run the model every time you debug, which is too slow. You just have to iterate through the mocked tokens and display them, and that's it.
Have you pulled my latest commit?
answer := gi.NewFrame(frame)
go func() {
	for _, token := range tokens {
		println(token)
		answer.AsyncLock()
		// todo: we need to check the token's newlines to deal with them,
		// and secondly, we want to keep the previous tokens instead of
		// deleting them:
		//answer.DeleteChildren(false)
		grr.Log(coredom.ReadMDString(coredom.NewContext(), answer, token))
		answer.Update()
		answer.AsyncUnlock()
		time.Sleep(100 * time.Millisecond)
	}
}()
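On the todo above (keeping previous tokens instead of deleting them), one sketch of just the accumulation logic, independent of the GUI, is to append each streamed token to a single growing markdown string and re-render the whole thing each time, rather than rendering each token in isolation; the `accumulate` and `render` names here are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// accumulate appends each streamed token to one growing markdown string
// and calls render with the full text every time, so earlier content is
// never lost and multi-token constructs such as code fences stay intact.
func accumulate(tokens []string, render func(md string)) string {
	var md strings.Builder
	for _, tok := range tokens {
		md.WriteString(tok)
		render(md.String())
	}
	return md.String()
}

func main() {
	// tokens split mid-word and mid-inline-code, as a model stream would be
	tokens := []string{"# Ti", "tle\n", "\n`co", "de`\n"}
	final := accumulate(tokens, func(md string) {
		// in the GUI this is where DeleteChildren + ReadMDString + Update go
	})
	fmt.Println(final)
}
```

The trade-off is that the whole document is re-parsed on every token, which is simple and correct; if that ever becomes too slow, only then is incremental parsing worth considering.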
I am testing it now.
I got the markdown syncing fully working and cleaned up some other things with the app; can you give me write access to the repository?
Sure, but where do I set that?
Describe the feature
After some testing, Label doesn't seem suitable for AI chat scenarios, while Markdown is: almost all AI models return MD format. coredom doesn't seem to have a way to insert text from other goroutines; can you add this feature?
Relevant code