from me:
Let's propose an improvement for the llama.cpp project. We want to create updaters for all the projects that use it, so you can select different branches and patches to the core llama and easily switch them out. Essentially, a way to swap out the llama kernel with a patched one.
Let's design an abstract C++ metaprogramming plugin that will let us compile, download, and share compiled versions of the code from different branches behind the same interface.
What about using Protobuf or gRPC?
From bing:
Title: C++ Metaprogramming Plugin for Llama.cpp
Summary: This project aims to create a plugin system that allows C++ developers to easily compile, download, and share different versions of llama.cpp, a port of Meta's LLaMA model in C/C++. This will enable them to experiment with various patches and branches of the llama kernel, and switch between them seamlessly.
Requirements: The plugin system should:
Provide an abstract interface for the plugin code to implement
Support dynamic loading and unloading of plugins at runtime
Use C++ templates and metaprogramming techniques to manipulate the code at compile time
Use Protobuf and gRPC to communicate with remote services and repositories
Have a user-friendly GUI to select and manage the plugins
Precompile a list of branches and deliver them in a dropdown
Assumptions: We assume that:
The llama.cpp project is stable and well-documented
The plugin code follows the coding standards and best practices of C++
The network connection is fast and secure
The users have basic knowledge of C++ and llama.cpp
Exclusions: We exclude the following from our project:
Testing and debugging of the plugin code
Integration with other C++ projects or libraries
Support for other platforms or languages
Maintenance and updates of the plugin system
notes from bing:
Hello, this is Bing. I'm glad you're interested in the llama.cpp project, which is a port of Meta's LLaMA model in C/C++¹. It sounds like you want to implement an auto-updater for your C++ projects that use llama.cpp as a dependency. There are different approaches to achieve this, depending on your specific needs and preferences. Here are some possible options:
You can create a small program that checks for updates, downloads and applies them, and then runs the main application². This way, you can avoid locking issues and ensure that the application always runs with the latest version of llama.cpp.
You can rename the currently running program to a temporary file, write the downloaded program to the original file name, run the new program, and delete the old one³. This way, you can have auto-restart after update and minimize the number of files involved.
You can use a library like DDay.Update⁴ or Google Omaha⁵ to handle the update process for you. These libraries provide features like checking for updates, downloading and installing them, and restarting the application. They also support different deployment scenarios, such as web deployment or network deployment.
I hope this helps you with your project. If you need more information, you can check out the links I provided or search the web for more resources. Good luck and have fun with llama.cpp! 😊
Protobuf and gRPC are two related but distinct technologies that can be used for different purposes. Protobuf is a serialization format that allows you to define the structure and types of your data in a .proto file, and then generate code in various languages to encode and decode the data. gRPC is a framework that uses Protobuf as its default IDL and message format, and provides a way to define and call remote procedures over a network using HTTP/2. You can use Protobuf without gRPC, and you can use gRPC with other serialization formats, but they work well together and offer some advantages¹².
Some of the benefits of using Protobuf and gRPC are:
They are cross-platform and cross-language, meaning you can use them to communicate between different systems and applications written in different languages.
They are efficient and fast, as Protobuf uses a binary format that is compact and easy to parse, and gRPC uses HTTP/2 which supports multiplexing, streaming, and compression.
They are expressive and extensible, as Protobuf allows you to define complex data structures and types, and gRPC supports various features like authentication, encryption, load balancing, and error handling.
If you want to use Protobuf and gRPC for your project, you need to do the following steps:
Define your data structures and service methods in a .proto file, using the Protobuf syntax and the gRPC rpc keyword.
Use the protoc compiler to generate code for your chosen language(s) from the .proto file. This will create classes for your data types and stubs for your service methods.
Implement the server-side logic for your service methods, using the generated code and the gRPC library for your language.
Implement the client-side logic for calling the service methods, using the generated code and the gRPC library for your language.
You can find more information and examples on how to use Protobuf and gRPC in the official documentation³⁴ or in some online tutorials.
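The four steps above can be sketched with a minimal .proto file. Everything here is hypothetical — the package, service, and message names are illustrative, not an existing llama.cpp or plugin-system API:

```proto
syntax = "proto3";

package llamaplugin;

// Request a prebuilt llama.cpp binary for a given branch and platform.
message BuildRequest {
  string branch = 1;    // e.g. "master" or a patched branch name
  string platform = 2;  // e.g. "linux-x86_64"
}

// A piece of the compiled library, streamed back in chunks.
message BuildChunk {
  bytes data = 1;
}

message ListRequest {}

message BranchList {
  repeated string branches = 1;
}

// The gRPC `rpc` keyword defines the remote procedures; a server
// stream lets large binaries be delivered chunk by chunk.
service PluginRepository {
  rpc ListBranches (ListRequest) returns (BranchList);
  rpc DownloadBuild (BuildRequest) returns (stream BuildChunk);
}
```

Running protoc with the gRPC C++ plugin (for example, `protoc --cpp_out=. --grpc_out=. --plugin=protoc-gen-grpc=$(which grpc_cpp_plugin) llamaplugin.proto`) would then generate the message classes and service stubs mentioned in step two.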
Alright, if you want to just specify the plugin system, you can use a pseudocode or a diagram to describe the main components and interactions of your system. For example, you can use something like this:
#include <memory>
#include <string>

// Define the abstract interface for the plugin system
class PluginSystem {
public:
    virtual ~PluginSystem() = default;
    // Pure virtual functions that must be implemented by the plugin code.
    // Note: "switch" is a reserved keyword in C++, so the swap operation
    // is named switchVersion instead.
    virtual void compile() = 0;
    virtual void download() = 0;
    virtual void share() = 0;
    virtual void switchVersion() = 0;
};

// Define the plugin code that inherits from the abstract interface
class LlamaPlugin : public PluginSystem {
public:
    // Implement the virtual functions using C++ templates and metaprogramming
    void compile() override {
        // Use Boost.Metaparse to parse the .proto file and generate C++ code
    }
    void download() override {
        // Use gRPC to call the remote service and download the latest version of llama.cpp
    }
    void share() override {
        // Use Protobuf to serialize the code and upload it to a shared repository
    }
    void switchVersion() override {
        // Use dynamic loading to unload the current version and load the new version of llama.cpp
    }
};

// Provided by the host, e.g. via GModule or Qt's plugin system (not implemented here)
std::unique_ptr<PluginSystem> load_plugin(const std::string& plugin_name);

// Define the main application that uses the plugin system
class LlamaApp {
private:
    // A smart pointer to the plugin system object
    std::unique_ptr<PluginSystem> plugin;
public:
    // A constructor that takes a plugin name as an argument and loads the
    // corresponding plugin library
    explicit LlamaApp(const std::string& plugin_name) {
        // Use GModule or Qt's plugin system to load the plugin library by name
        plugin = load_plugin(plugin_name);
    }
    // A method that calls the plugin system methods
    void run() {
        // Call the plugin system methods as needed
        plugin->compile();
        plugin->download();
        plugin->share();
        plugin->switchVersion();
    }
};
This is just a rough sketch of how you can specify the plugin system. You can refine it further and add more details as you wish. I hope this helps you with your project. 😊
Sure, I can help you with that. A high-level ticket is a document that summarizes the main features and goals of your project, without going into too much detail or technical jargon. It should include the following sections:
Title: A concise and descriptive name for your project, such as "C++ Metaprogramming Plugin for Llama.cpp".
Summary: A brief overview of what your project is about, why it is important, and what benefits it will bring to the users. For example, "This project aims to create a plugin system that allows C++ developers to easily compile, download, and share different versions of llama.cpp, a port of Meta's LLaMA model in C/C++. This will enable them to experiment with various patches and branches of the llama kernel, and switch between them seamlessly."
Requirements: A list of high-level requirements that define the scope and functionality of your project. These are the "what" of your project, not the "how". You can use bullet points or numbered items to organize them. For example, "The plugin system should:
Provide an abstract interface for the plugin code to implement
Support dynamic loading and unloading of plugins at runtime
Use C++ templates and metaprogramming techniques to manipulate the code at compile time
Use Protobuf and gRPC to communicate with remote services and repositories
Have a user-friendly GUI to select and manage the plugins"
Assumptions: A list of assumptions that you make about your project, such as the availability of resources, the compatibility of technologies, the reliability of data, etc. These are the things that you take for granted, but may not be true in reality. For example, "We assume that:
The llama.cpp project is stable and well-documented
The plugin code follows the coding standards and best practices of C++
The network connection is fast and secure
The users have basic knowledge of C++ and llama.cpp"
Exclusions: A list of things you explicitly exclude from your project, such as features that are out of scope, limitations that are imposed, or risks that are accepted. Stating these avoids confusion and misunderstanding. For example, "We exclude the following from our project:
Testing and debugging of the plugin code
Integration with other C++ projects or libraries
Support for other platforms or languages
Maintenance and updates of the plugin system"
I hope this helps you create a high-level ticket for your project. If you need more guidance, you can check out some of the web search results that I found for you¹²³⁴⁵. Good luck and have fun with your project! 😊
¹: ggerganov/llama.cpp: Port of Facebook's LLaMA model in C/C++ - GitHub
²: C++ automatically update to new version - Stack Overflow
³: How should I implement an auto-updater? - Stack Overflow
⁴: Adding automatic updates to your program - Part 1 - CodeProject
⁵: Google Omaha - Wikipedia
Source: Conversation with Bing, 12/15/2023
(1) Llama.cpp Tutorial: A Complete Guide to Efficient LLM ... - DataCamp. https://www.datacamp.com/tutorial/llama-cpp-tutorial.
(2) ggerganov/llama.cpp: Port of Facebook's LLaMA model in C/C++ - GitHub. https://github.com/ggerganov/llama.cpp.
(3) You can now run a GPT-3-level AI model on your laptop, phone, and .... https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/.
(4) How To Create Your Own AI Chatbot Server With Raspberry Pi 4. https://www.tomshardware.com/how-to/create-ai-chatbot-server-on-raspberry-pi.
(5) How should I implement an auto-updater? - Stack Overflow. https://stackoverflow.com/questions/232347/how-should-i-implement-an-auto-updater.
(6) C++ automatically update to new version - Stack Overflow. https://stackoverflow.com/questions/42998532/c-automatically-update-to-new-version.
(7) Adding automatic updates to your program - Part 1 - CodeProject. https://www.codeproject.com/articles/19623/adding-automatic-updates-to-your-program-part-1.
(8) An efficient way for automatic updating - CodeProject. https://www.codeproject.com/Articles/1205548/An-efficient-way-for-automatic-updating.
(9) Walkthrough: Deploying Your Program (C++) | Microsoft Learn. https://learn.microsoft.com/en-us/cpp/ide/walkthrough-deploying-your-program-cpp?view=msvc-170.
¹: networking - protobuf vs gRPC - Stack Overflow
²: Introduction to gRPC | gRPC
³: Protocol Buffers | Google Developers
⁴: gRPC
Lab 9: Protobuf and gRPC - HackMD - Brown University
gRPC contract testing: how to test gRPC/Protobuf with Pact + PactFlow
Source: Conversation with Bing, 12/15/2023
(1) networking - protobuf vs gRPC - Stack Overflow. https://stackoverflow.com/questions/48330261/protobuf-vs-grpc.
(2) Introduction to gRPC | gRPC. https://grpc.io/docs/what-is-grpc/introduction/.
(3) Lab 9: Protobuf and gRPC - HackMD - Brown University. https://cs.brown.edu/courses/csci0300/2022/assign/labs/lab9.html.
(4) gRPC contract testing: how to test gRPC/Protobuf with Pact + PactFlow. https://pactflow.io/blog/contract-testing-for-grpc-and-protobufs/.
Source: Conversation with Bing, 12/15/2023
(1) How to manage your business requirements in Jira tickets. https://soldevelo.com/blog/how-to-manage-your-business-requirements-in-jira-tickets/.
(2) Requirements Gathering Simplified: Techniques, Tips, & Template. https://thedigitalprojectmanager.com/projects/scope-management/requirements-gathering-guide/.
(3) Best way to write Requirements in JIRA or Confluence? - Atlassian Community. https://community.atlassian.com/t5/Jira-questions/Best-way-to-write-Requirements-in-JIRA-or-Confluence/qaq-p/459282.
(4) What Are High-Level Requirements in Project Management? - Wrike. https://www.wrike.com/project-management-guide/faq/what-are-high-level-requirements-in-project-management/.
(5) 10 Tips for a Successful Ticket Escalation Process - ProProfs Help Desk .... https://www.proprofsdesk.com/blog/ticket-escalation-process/.