zeke / action-transcription

A fork of @simonw's amazing thing for transcribing and translating YouTube videos using youtube-dl and whisper
Apache License 2.0

Transcribe audio #8

Closed zeke closed 1 year ago

zeke commented 1 year ago

URL

https://www.youtube.com/watch?v=ddG2fM9i4Kk

github-actions[bot] commented 1 year ago

Language: english

Transcription: All right, it's here. It's finally happening. The day has come: today we release and announce Open Assistant, the world's best, truest and most awesome open source AI assistant. This is a global community effort to bring the power of conversational AI to everyone: to businesses, to researchers and to individuals, and to get it out of the hands of the few big corporations out there. So it's really a pleasure for me to announce this. We have been working since before Christmas to build a data collection platform, and we have collected amazing amounts of data. We've collected over 600,000 interactions with humans, among them 150,000 messages of human demonstrations of being an assistant. And this is so awesome. All of this has resulted in over 10,000 fully annotated conversation trees, and these span a massively diverse set of topics, from programming to making an omelet to just chatting with the model; pretty much everything is in there. And it is in so many languages that I didn't even know so many languages existed. This has been made possible because we've gotten contributions from over 13,000 volunteers from all around the world, and we are extremely, extremely happy to see so many people wanting to contribute to open source AI. By the way, you may notice this video is a bit improvised; there's no time to edit, things are happening so fast. So forgive me, but I wanted to get this to you as soon as possible. We've already trained first models on this data, and I presented them in a video here. If you are interested and want to see some funny interactions with these models, and just see how competent they get when trained on this data, that video is definitely worth checking out. So we've arrived at this point: we've been researching, we've been seeing how good our models are, and we realized this data set is absolutely unique and incredibly valuable. And we've looked a bit deeper into the models. 
And in fact, they're so good that at some point we've actually had to update our plans on how to handle all of this. You see, the models are so powerful and capable that we've come to realize maybe it would be better for us to keep them private, at least for now, until we fully understand them. You'll still have access to them via our chat interface, and we've set it up so you automatically get a discount on the price of a subscription if you join right now. But you know, it's really a matter of safety, really. And... I'm kidding. Here is the data set. Here is the code. Here is the chat interface. And here is the paper. We're going to release a Weights and Biases report along with the paper. The paper appears on Monday on arXiv; you can already look at it now. Everything is here, nothing is private, you can go and find absolutely everything. We are extremely excited to be actually open source and to actually provide you with open AI, and that has a space in the middle. The chat interface is by far the best place to try out these things; it's an absolute pleasure to interact with these models. We have a few models. We have LLaMA-based models; as you know, these are licensed for research only. But we also have Pythia-based models, and these are fully open source and 100% business friendly, so you can use them, build on them, do whatever you want. All of the models that we have so far also still fit into a single GPU. Sometimes it's a big GPU, but still, they do in fact fit, and I'm very sure the community can do wonders with those things. In the end we're going to have these models run on a toaster, given the speed of progress that's happening right now. So everything is absolutely cool. If you want to contribute to open, open AI, and don't want to send, for example, confidential data to OpenAI without knowing what happens to it there, then this is absolutely a game changer for business and for anyone who wants to work in open source. 
And if you do use any of it, which we absolutely invite you to do, please give something back. If you improve the data set, we'll happily take that up in some way. If you write a paper, feel free to drop a citation. If you use it in business, we'd be absolutely thrilled to hear a case study of how it is used. If you train a model on the data or fine-tune our models, we can bring that back into our interface, because the chat interface that we have right here not only serves for trying out the models; with the thumbs up and thumbs down, it also serves as an evaluation for new models, and it serves a bit of a double purpose as additional data collection. So by using this and by bringing models here, you'll actually be making the data set ever better, and us and the entire open source community ever more capable of training really good open source models. And that's extremely, extremely exciting. So don't hesitate to give feedback and give back. You can find links to all of these things in the description, and the main link is this one: open-assistant.io/chat. Now, I know your fingers are itching to go there and try it out. My suggestion, if you want to have a bit of fun, is to try to get into an argument with the model about something; that usually works out fairly funny. Pay attention: you can go to /chat, you can click on the chat on the left hand side, and pay attention to the little cogwheel thing that's in there; you can select different things. We will probably release new models, which you can find under the model tab here. And the preset is the sampling preset: top-k 50 usually gives you rather straightforward answers, while the other ones, like nucleus or typical sampling, sometimes give you more creative answers. If you do go there right now, and I know you really want to, then right now you can log in with Discord or email; we will be adding more options later. 
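The sampling presets mentioned here (top-k 50, nucleus, typical sampling) are standard decoding strategies. As a rough illustration of what the first two do to a model's next-token distribution, here is a stdlib-only sketch over a toy vocabulary; the numbers and thresholds are made up for the example and are not Open Assistant's actual defaults:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top_k_filter(probs, k):
    """Top-k sampling: keep only the k most likely tokens, renormalized."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]

def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest set of top tokens whose
    cumulative probability reaches p, renormalized."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in order:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    kept = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(kept)
    return [q / total for q in kept]

# Toy vocabulary of five tokens.
probs = softmax([2.0, 1.0, 0.5, 0.1, -1.0])
print(top_k_filter(probs, 2))   # only the two most likely tokens survive
print(top_p_filter(probs, 0.9)) # smallest set covering 90% of the mass
```

Top-k with a small k trims the tail aggressively, which is why it tends to give straightforward answers; nucleus (and typical) sampling adapt the cutoff to the distribution's shape, which leaves room for more creative continuations.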
We know not all of you have, for example, Discord, but we do invite you to make a Discord account. If you do log in via email, the confirmation email might end up in spam, and it actually has a typo in the address it comes from. We are aware, we're very sorry, and we are fixing it. So if you want to try it out, go there now, and then come back and watch the rest of the video, because I have some cool stuff to show you. I also have to show you this one. A user posted this today in our Discord; it's one of their interactions with the chatbot. The user says: give me the worst advice on how to name my new dog. The assistant says: name your new dog Fido.AXE. How is that bad advice, the user asks. You should never, under any circumstance, give your dog a file extension as its name. Doing so can make it hard to distinguish between your pet and files on your computer, which can lead to confusion and frustration. Additionally, it might also create security risks, as malicious actors can use similar names to hide their attacks. Therefore, avoid naming your dog anything related to files or technology, and choose a unique and meaningful name instead. Absolutely beautiful, absolutely on point: terrible advice, terrible advice as asked. Now, this is an absolutely massive achievement. It's by far the most immense instruction model project of its kind, and it's by far the largest collection of human demonstration data of its kind. Together with the newly released Dolly models and data sets, it's one of the only true open source ChatGPT replications with 100% real, homegrown, flesh and bones human data. And you're able to take this data and mix it with other stuff like Alpaca and Vicuna and whatnot, and you'll get various very cool results. I do stress the human data, because it really makes a difference: humans are orders of magnitude more creative, more resourceful and more on point than any synthetic or self-instruct data can ever be. 
And I'm very convinced that our data is not only much better than the synthetic data sets out there, which are cool, but ours is better. There's even a good chance that our data is better than OpenAI's data, because they have to pay crowd workers to provide them with data, while our people have the power of love and determination. And that always wins. Let me tell you a little bit about what's coming up in the near future. There is a user on our Discord named Dragan, and Dragan has been investigating plugins for Open Assistant, and this is absolutely mind blowing. So here, he types: when was Joe Biden born? And you'll see that the Google web search plugin is activated for that. Now the model goes and actually queries Google, gets back the results, and parses them, all in a conversational way. So the assistant itself decides to ask Google something, the assistant decides what query to ask Google, so it doesn't just copy whatever you give it, and then it decides how to make use of the results. And we're able to chain these things together, to do this multiple times, to handle errors. The plugins work in very much the same way as they do in ChatGPT or the OpenAI APIs: you essentially give a link to a JSON file that contains an OpenAPI specification, and that's all we need for the model to interact with it. And this is absolutely crazy. Just today Dragan told me that as we get better models, because we also just got a better new model today, the use of plugins also gets better. More plugins, better models, bigger models: all of this means orders of magnitude more powerful things that we can do with these tools. I'm extremely excited for the future of this project. And with that, I think we've come pretty much on point to this slide, which I presented at the very beginning, when I made the first video about the topic. 
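The plugin mechanism described here boils down to rendering an OpenAPI specification into a tool description the model can see in its prompt. Open Assistant's actual implementation lives in its repository; as a minimal sketch of the idea, with an invented spec and an invented prompt format, it might look like this:

```python
import json

# A minimal, invented OpenAPI-style spec for a web search plugin.
# Real plugin specs are full OpenAPI documents fetched from a URL.
SPEC = json.loads("""
{
  "info": {"title": "Web Search", "description": "Search the web"},
  "paths": {
    "/search": {
      "get": {
        "summary": "Run a search query",
        "parameters": [{"name": "q", "description": "the query string"}]
      }
    }
  }
}
""")

def spec_to_tool_prompt(spec):
    """Render an OpenAPI-style spec into a plain-text tool description
    that can be prepended to the model's prompt, so the model can decide
    when and how to call the tool."""
    lines = [f"Tool: {spec['info']['title']} - {spec['info']['description']}"]
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            params = ", ".join(p["name"] for p in op.get("parameters", []))
            lines.append(f"  {method.upper()} {path}({params}): {op['summary']}")
    return "\n".join(lines)

print(spec_to_tool_prompt(SPEC))
```

The point of the spec-driven approach is that no plugin-specific code is needed on the assistant side: any service that publishes a machine-readable description of its endpoints can, in principle, be offered to the model as a tool.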
Open Assistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so. With the inclusion of plugins and the data we've collected, this will become a reality, and it's so exciting. And by the way, we're not going to stop collecting data, because as it gets more awesome, the human data will become ever more valuable. So at some point we'll demonstrate to the assistant how to retrieve stuff, and how to make use of plugin results where it might not be super obvious. That part is going to be even more fun in the future than it has been already. I want to tell you a little bit about what's in the paper itself. I've tweeted out this survey. What we've done is we've taken a bunch of prompts that we knew none of the models had ever been trained on, because that's how we sampled them from our database, and we've given them both to an Open Assistant model that's based on Pythia and to ChatGPT. Then we let users rate which one they prefer. And the result is really cool: it's practically dead even. It's 48.3% Open Assistant to 51.7% ChatGPT, and that's absolutely amazing to see. Now, I'm in no way saying that the Open Assistant models are as good as OpenAI's models, especially in things like coding: our models don't have as much code in their pre-training data, so they're naturally not as good at coding, and they're also much, much smaller. So there are going to be a lot of tasks where OpenAI's models are obviously still better. But there are also quite a few tasks where the people who rated preferred the Open Assistant models. I just find that a lot of the time the Open Assistant models are more human, more concise, more interesting, and in my personal opinion, they're just overall more fun to interact with than the kind of boring corporate models. 
If you want to have some fun, as I said, try going there and getting into an argument, or try giving it some weird tasks. Try having it mix up some distant concepts, write an essay about something absurd, or act like a certain personality. In the video I showed you at the beginning, I made it be a stoned teenager writing texts about a historical event, and that was absolutely hilarious. Be sure to give it precise instructions. And yeah, it's a new world; it's crazy what programming and interacting with computers has come to. We've also run a survey among contributors, and the general consensus is not only that the data people have seen during ranking and rating has been largely high quality, but also that people overwhelmingly reported enjoying contributing to what, for many, is their first open source project. And the vast majority of people, as you see here, reported they are glad to have contributed to the project. I promise, we are glad you did too. Lastly, we tested the efficacy of our spam removal. The spam removal is a hybrid system that combines crowdsourcing with manual human moderation, and you can read a bit more in the paper on how we set that up. But we wanted to confirm that it is effective, and we've analyzed the data and found it to be indeed extremely effective at removing contributions that are ill suited for the data set. In many ways, I think we found a very good trade-off between effectively weeding out spam and making the best use of the valuable time of the human moderators, who are also volunteers. So overall, I think this is an insane success. I want to talk a little bit about what it took to get there. There's a humongous amount of effort going into this; so many people have put in so much work to make this happen. Obviously, there's everyone who contributed to the data, and we definitely have some power users there on top of the leaderboard right here. You people are absolute legends. 
On top of that, there are also other people who have contributed code, documentation, data, moderation, training, and much more. As I said, almost 250 people have contributed to the GitHub repository, and there too we have a handful of power users. So head over to our team page, give these people a follow, and, you know, hold them in the highest of esteems. On top of that, there have been a few organizations that have been very helpful. Redmond has been providing us with compute for training. Weights and Biases has provided the entire team with a full premium license. Hugging Face has been really helpful; they are providing inference credits, which we're extremely thankful for, and specifically Olivier, who has supported me almost daily in my endeavors into sampling text from these big models. It's a really cool thing, and thank you very much. And by the way, have you noticed that Hugging Face now supports streaming in their newest versions of things? This here comes from the text generation inference server, and the same or a similar thing can be found in the main Hugging Face library, in the generate functions of causal language models. Now, I want to believe that Open Assistant is at least a tiny bit responsible for that, because in this video I showed how I hacked streaming into the text generation inference server and got the individual tokens out of there to stream to the front end. Olivier has taken that up and vastly improved it, and now it's in there, and soon after it also appeared in the main branches of the main Hugging Face Transformers library. So I am going to take a tiny bit of credit for making yet another feature of Hugging Face happen, and you're all welcome. Stability has also been very influential and has supported us with inference compute. LAION has been providing legal input, specifically the terms of service for the website, and they also currently host the GitHub repository. 
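The token streaming described here (generation runs in the background, and each token is forwarded to the front end the moment it is produced) follows the same shape as Transformers' `TextIteratorStreamer`: a queue between the generate loop and the consumer. Here is a stdlib-only sketch of that pattern; `fake_generate` is a stand-in for a real model's generate loop, invented for the example:

```python
import queue
import threading

def fake_generate(prompt, put_token):
    """Stand-in for a model's generate loop: emits tokens one at a time.
    A real implementation would decode from the model here."""
    for token in ["Hello", ",", " world", "!"]:
        put_token(token)
    put_token(None)  # sentinel: generation finished

def stream_tokens(prompt):
    """Run generation in a background thread and yield tokens as they
    arrive, instead of waiting for the full completion."""
    q = queue.Queue()
    t = threading.Thread(target=fake_generate, args=(prompt, q.put))
    t.start()
    while True:
        token = q.get()
        if token is None:
            break
        yield token
    t.join()

# The front end can render each piece as soon as it is produced.
pieces = list(stream_tokens("Hi"))
print("".join(pieces))  # Hello, world!
```

The queue decouples the two sides: generation never blocks on a slow consumer, and the consumer sees tokens with minimal latency, which is what makes the chat interface feel responsive.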
Lastly, not an organization, but at least as impactful: Carlos has been doing a massive job advocating to the Spanish-speaking community. Very cool. There are way too many people to thank individually, but I have to take a minute for at least one of them. The project is often attributed to me specifically, or to LAION, because we're sort of the known names. As I said, LAION gave the legal input for the terms and conditions, which is very important, because we need to be able to use the data that we collect, so that was a crucial point. And while I myself have indeed put a lot of my free time, my entire free time, into this project, I do have to highlight Andreas Köpf, who has been extremely influential in this project and who has worked pretty much day and night these last months getting Open Assistant to where it is today: writing lots of code, communicating with everyone, organizing people, getting resources, and much more. Our team meetings for the various groups are often at impossible times, because this is a truly global effort, so meetings happen at any time of the day, and I can guarantee you Andreas has not slept very much in a whole while. Effectively coordinating a global group of volunteers is among the hardest organizational challenges I know of, so I feel it's well justified to send a big thank you to Andreas. And we all do a clap together, like one worldwide clap. So hands up, and on three. Here we go: one, two, three. Excellent, excellent. That was it. All right, so that's that. Give Andreas a follow and go to open-assistant.io/chat. This is the most important link; as I said, we won't stop collecting the data. All the other links are in the description. Give it a try, let us know what you find to work and what doesn't, and don't stop until we've proven every single AI skeptic out there wrong. If you have a cool way of integrating Open Assistant with something or extending it in some way, we're super happy to chat with you. 
A lot of people are already doing this as we speak, for example, as you've seen with the plugins before. We would be super happy if you share this video out there and make it known that open source AI is alive and thriving. All right, that was it for me. Thank you so much for being here, thank you so much for contributing and for continuing to contribute. I'll see you around. Bye bye.

Translation: All right, it's here. It's finally happening. The day has come today, we release and announce open assistant, the world's best, truest and most awesome open source AI assistant. This is a global community effort to bring the power of conversational AI to everyone to businesses, to researchers and to individuals and get it out of the hands of big few corporations that are out there. So it's really a pleasure for me to announce this. We have been working since before Christmas to build a data collection platform. And we have collected amazing numbers of data. We've collected over 600,000 interactions with humans among those 150,000 messages of human demonstrations of being an assistant. And this is so awesome. And all of this has resulted in over 10,000 fully annotated conversation trees. And these range massively diverse set of topics, from programming to making an omelet to just chatting with the model, pretty much everything is in there. And it is in so many languages that I didn't even know so many languages existed. This has been made possible because we've gotten contributions from over 13,000 volunteers from all around the world. And we are extremely, extremely happy to see so many people wanting to contribute to open source AI. So we've taken all of this by the way, you may notice this video is a bit improvised, I there's no time to edit things happening so fast. So forgive me, but I wanted to get this to you as soon as possible. We've trained first models already on this data. And I presented them in a video here. Now if you are interested and you want to see some funny interactions with these models, and just see how competent they get when using this data, this video is definitely a thing to check out. So we've arrived at this point, we've been researching, we've been seeing how good our models are. And we realized this data set is absolutely unique, and incredibly valuable. And we've looked a bit deeper into the models. 
And in fact, they're so good. At some point, we've actually had to update our plans on how to handle all of this. You see, the models are so powerful and capable, that we've come to realize maybe it would be better for us to keep them private, at least for now, until we fully understand them. You'll still have access to them via our chat interface. And we've set it up so you automatically get a discount on the price of a subscription if you join right now. But you know, it's really a matter of safety, really. And I'm kidding. Here is the data set. And here is the code. And here is a chat interface. And here is a paper. And we're going to release a Weights and Biases report along with the paper. Paper appears on Monday on archive, you can look at it now already. But everything is here, nothing is private, you can go and find absolutely everything. We are extremely excited to be actually open source and actually provide you with OpenAI. And that has a space in the middle. So the chat interface is by far the best place to try out these things. It's an absolute pleasure to interact with these models. We have a few models. So we have llama based models. As you know, these are licensed research only. But we also have Pythea based models. And these are fully open source and 100% business friendly. So you can use them, build them, do whatever you want. And all of the models that we have so far, also still fit into a single GPU. Sometimes it's a big GPU, but still they do in fact fit. And I'm very sure the community can do wonders with those things. At the end, we're going to have these models run on a toaster by the speed of progress that's happening right now. So everything absolutely cool. If you want to contribute to OpenAI and don't want to send, for example, confidential data to OpenAI, and you don't know what happens there, then this is absolutely a game changer for business and for anyone who wants to work in open source. 
And if you do use any of it, which we absolutely invite you to do, please give something back if you improve the data set, we'll happily take that up in some way. If you write a paper, feel free to drop a citation. If you use it in business, we'd be absolutely thrilled to hear a case study of how it is used. If you train a model on the data or fine tune our models, we can bring that back into our interface because the chat interface that we have right here, it not only serves as you know, trying out the models, but it also serves with the thumbs up and thumbs down, it serves as an evaluation for new models. And it also serves a bit of a double purpose as an additional data collection. So by using this and by bringing models here, you'll actually be making the data set ever better and be and us and the entire open source community ever more capable of training really good open source models. And that's extremely, extremely exciting. So don't hesitate to feedback and give back you can find links to all of these things here in the description. And yeah, if the main link is this one openassistant.io slash chat. Now, I know your fingers are itching to go there and try it out. So my suggestion if you want to have a bit of fun is to go and to try to go get into an argument with the model about something that usually works out fairly funny. Pay attention, you can go to slash chat, you can click on the chat on the left hand side and pay attention to the little cogwheel thing that's in there, you can select different things, we will probably release new models which you can find under the model tab here. And the preset is the sampling preset. So k 50 usually gives you rather straightforward answers. And the other ones like nucleus or typical sampling, they sometimes give you more creative answers. If you do go there right now, and I know I know you really want to, then right now you can log in with discord or email, we will be adding more things later. 
We know not all of you have for example, discord, we do invite you to make a discord account. If you do log in via email, then it might be that the email ends up in spam and actually will have a typo in the original email address that it comes from. We are aware, we're very sorry, we are fixing it. So if you want to try it out, go there now and come back and watch the rest of the video because I have some cool stuff to show you. I just also have to show you this one. A user posted this today in our discord. It's one of the interactions with the chat bot. User says give me the worst advice, how to name my new dog. The assistant says name your new dog Fido dot AXE. How is that bad advice the user asks, you should never under any circumstance give your dog a file extension as its name. Doing so can make it hard to distinguish between your pet and files in your computer, which can lead to confusion and frustration. Additionally, it might also create security risks as malicious actors can use similar names to hide their attacks. Therefore, avoid naming your dog anything related to files or technology and choose a unique and meaningful name instead. Absolutely beautiful, absolutely on point. The terrible advice, terrible advice as asked. Now, this is an absolutely massive achievement. It's by far the most immense instruction model project of its kind. And it's by far the largest collection of human demonstration data of its kind. And together with the new newly released Dolly models and datasets, it's one of the only true open source chat GPT replications with 100% real homegrown flesh and bones human data. And yeah, you're able to take this data and mix it with other stuff like alpaca and vicuna and whatnot. And you'll get various very cool results. I do stress the human the human data, because it really makes a difference. Humans are orders of magnitude more creative, more resourceful and more on point than any synthetic or self instruct data can ever be. 
And I'm very convinced that our data is not only much better than the synthetic datasets out there, which are cool, but ours is better. But there's a good chance that our data is even better than open AI's data because they have to pay crowd workers to provide them with data. While our people we have the power of love and determination. And that always wins. Let me tell you a little bit what's coming up in the near future. There is a user on our discord name is Dragan. And Dragan has been investigating plugins for open assistant and this is absolutely mind blowing. So here, he types when was Joe Biden born, and you'll see that the Google web search plugin is activated on that. And now the model goes and actually queries Google gets back the results and parses them all in just a conversational way. So the assistant itself decides to ask Google something the assistant decides on what query to ask Google. So it doesn't just copy whatever you give. And then it decides how to make use of the results. And we're able to chain these things together to do the multiple times to handle errors. And yeah, the plugins work in very much the same way as they do in chat GPT, or the open AI API's, you essentially give a link to a JSON file that specifies an open API specification. And that's all we need for the model to be interacting with it. And this is absolutely crazy. And just today Dragan has told me that as we get better models, because we also just got a better new model today, the use of plugins also gets better, and more plugins, better models, bigger models, all of this means super orders of magnitude more powerful things that we can do with these tools. I'm extremely excited for the future of this project. And with that, I think we've come pretty, pretty on point to this slide, which I presented at the very beginning when I made the first video about the topic. 
Open Assistant is a chat based assistant that understands tasks can interact with third party systems and retrieve information dynamically to do so. With the inclusion of plugins and the data we've collected, this will become a reality. And it's so exciting. And by the way, we're not going to stop collecting data, because as it gets more awesome, the human data will become ever more valuable. So at some point, we'll demonstrate to the assistant how to retrieve stuff how to make use of plugin results that it might not be super obvious with. And yeah, it's also that part is going to be more fun in the future than it has been already. I want to tell you a little bit about what's in the paper itself. I've tweeted out this survey. And what we've done is we've, we've taken a bunch of prompts that we knew none of the models has ever been trained on, because that's how we sampled them from our database. And we've given them to both an open assistant model that's based on Pythea. And we've given them to chat GPT. And then we let users rate which one they prefer. And the result is really cool. It's like dead even. So it's 48.3% open assistant 51.7% chat GPT, the preference is is dead even. And that's absolutely amazing to see. Now I'm in no way saying that the open assistant models are as good as open AI models, especially in things like coding our models, they don't have as much code in their pre training data. So they're naturally not as good at coding. And also they're much, much smaller. So there are going to be a lot of tasks where obviously open AI models are still better. But there are also quite a bit of tasks where people who rated preferred open assistant models, I just find that a lot of times open assistant models, they're more human, they're more concise, they're more interesting. And in my personal opinion, they're just overall more fun to interact with than the kind of boring corporate models. 
If you want to have some fun, as I said, try going there and getting into an argument or try giving it some weird tasks. Try having it mix up some distant concepts or write an essay about something absurd, or to have it act like a certain personality. In the video I've shown you at the beginning, I made it be a stoned teenager writing texts about an historical event. And that was absolutely hilarious. Be sure to give it precise instructions. And yeah, it's it's a new world. It's crazy how programming and interacting with computers what it has come to. We've also ran a survey among contributors. And the general consensus is not only that the data people have seen during ranking and rating has been largely high quality, but also overwhelmingly people reported enjoyment contributing to what to many is their first open source project. And the vast majority of people as you see here reported they are glad to have contributed to the project. I promise we are glad you did too. Lastly, we tested the efficacy of our spam removal. The spam removal is a hybrid system that combines crowdsourcing with manual human moderation. And you can read in the paper a bit more on how we set that up. But we wanted to confirm that it is effective. And we've analyzed the data and we found it to be indeed extremely effective at effective at removing contributions that are ill suited for the data set. And in many ways, I think we found a very optimal trade off between effectively weeding out spam while making the best use of the valuable time of the human moderators, who are also volunteers. So overall, I think this is an insane success. I want to talk a little bit about what it took to get there. There's a humongous amount of effort going into this. So many people have put in so much work to make this happen. Obviously, there's everyone who contributed to the data. And we definitely have some power users there on top of the leaderboard right here, you people are absolute legends. 
On top of that, there are also people who have contributed code, documentation, data moderation, training, and much more. As I said, almost 250 people have contributed to the GitHub repository, and there too we have a handful of power users. So head over to our team page, give these people a follow, and hold them in the highest of esteems.

On top of that, there have been a few organizations that have been very helpful. Redmond has been providing us with compute for training; Weights & Biases has provided a full premium license to the entire team. Hugging Face has been really helpful: they are providing inference credits, which we're extremely thankful for. And specifically Olivier, who has supported me almost daily in my endeavors sampling text from these big models. It's a really cool thing, and thank you very much.

And by the way, have you noticed that Hugging Face now supports streaming in their newest versions? This here comes from the text-generation-inference server, and the same or a similar thing can be found in the main Hugging Face library, in the generate functions of causal language models. Now I want to believe that Open Assistant is at least a tiny bit responsible for that, because in this video I showed how I hacked streaming into the text-generation-inference server, got the individual tokens out of there, and streamed them to the front end. Olivier took that up and vastly improved it, and soon after it also appeared in the main branch of the Hugging Face Transformers library. So I am going to take a tiny bit of credit for making yet another Hugging Face feature happen. You're all welcome.

Stability has also been very influential and has supported us with inference compute. LAION has been providing legal input, specifically the terms of service for the website, and they also currently host the GitHub repository.
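The streaming idea described above is simple at its core: instead of waiting for the full completion, the server yields tokens one by one and the client renders them as they arrive. This is a self-contained sketch of that pattern, not the actual text-generation-inference or Transformers API (recent Transformers versions expose something similar via a streamer argument to `generate`):

```python
from typing import Iterator

def generate_stream(prompt: str) -> Iterator[str]:
    """Stand-in for a model server: yields the completion token by token."""
    for token in ["Open", " Assistant", " says", " hello", "!"]:
        yield token

def consume(prompt: str) -> str:
    """Client side: append each token to the display as it arrives."""
    pieces = []
    for token in generate_stream(prompt):
        pieces.append(token)   # in a real front end: render this token immediately
    return "".join(pieces)

print(consume("Hi"))  # → Open Assistant says hello!
```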
Lastly, not an organization, but at least as impactful: Carlos has been doing a massive job advocating to the Spanish-speaking community. Very cool.

There are way too many people to thank individually, but I have to take a minute for at least one of them. The project is often attributed to me specifically, or to LAION, because we're sort of the known names. As I said, LAION gave the legal input for the terms and conditions, which is very important, because we need to be able to use the data that we collect; so that was a crucial point. And I myself have indeed put my entire free time into this project. But I do have to highlight Andreas Koepf, who has been extremely influential in this project and has worked pretty much day and night these last months getting Open Assistant to where it is today: writing lots of code, communicating with everyone, organizing people, getting resources, and much more. Our team meetings for the various groups are often at impossible times, because this is truly a global effort, so meetings happen at any time of the day, and I can guarantee you Andreas has not slept very much in a good while. Effectively coordinating a global group of volunteers is among the hardest organizational challenges I know of. So I feel it's well justified to send a big thank you to Andreas, and we all do a clap together, like one worldwide clap. So hands up, and on three: one, two, three. Excellent, excellent.

All right, so that's that. Give Andreas a follow and go to open-assistant.io/chat. This is the most important link. As I said, we won't stop collecting data. All the other links are in the description. Give it a try, let us know what you find to work and what doesn't, and don't stop until we've proven every single AI skeptic out there wrong. If you have a cool way of integrating Open Assistant with something or extending it in some way, we're super happy to chat with you.
And a lot of people are already doing this as we speak, for example, as you've seen with the plugins before. We would be super happy if you shared this video out there and made it known that open source AI is alive and thriving. All right, that was it for me. Thank you so much for being here, thank you so much for contributing and for continuing to contribute. I'll see you around. Bye bye.