Normalnoise opened 12 months ago
## How to Upgrade

### Scenario 1: Initial Deployment

If you are a newcomer just getting started, you can follow the guidelines below to install and deploy the Computing Provider:

- Build and install the Computing Provider
- Step-by-step installation of the Computing Provider
### Scenario 2: Upgrading to v0.3.0

If you are already running version 0.2.0, please follow these steps to upgrade.

**Step 1: Modify the configuration file `config.toml`**

(Optional) Remove `[MCS].AccessToken` from the configuration file and generate a new `[MCS].ApiKey` as follows:

```toml
[MCS]
ApiKey = ""  # Acquired from "https://www.multichain.storage" -> Setting -> Create API Key
```
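The optional edit in Step 1 can also be scripted. Below is a minimal sketch that removes the deprecated `AccessToken` line and confirms the `ApiKey` entry remains; the sample file is illustrative only, so run the same `sed`/`grep` against your real `config.toml` (and back it up first).

```shell
# Illustrative sample config; on a real node edit $CP_PATH/config.toml instead.
cat > /tmp/config.sample.toml <<'EOF'
[MCS]
AccessToken = "old-token"
ApiKey = ""
EOF

# Delete the deprecated AccessToken entry in place (GNU sed syntax).
sed -i '/^AccessToken[[:space:]]*=/d' /tmp/config.sample.toml

# AccessToken should be gone; the ApiKey line should survive.
grep -c 'AccessToken' /tmp/config.sample.toml || true   # prints 0
grep 'ApiKey' /tmp/config.sample.toml
```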
**Step 2: Pull the latest version and compile the Computing Provider**

```bash
git clone https://github.com/lagrangedao/go-computing-provider.git
cd go-computing-provider
git checkout v0.3.0
make clean && make
make install
```

Note: You can also use the precompiled binary file directly. You can check the current version by running `computing-provider -v`.
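As a sanity check after Step 2, the version string can be compared against the expected tag. This is a hedged sketch: the exact output format of `computing-provider -v` is an assumption, so adjust the `grep` pattern if your binary prints something different.

```shell
# Extract a vX.Y.Z tag from version output and compare it to the expected tag.
check_version() {
  # $1: raw output of `computing-provider -v` (assumed format); $2: expected tag
  got=$(printf '%s' "$1" | grep -o 'v[0-9][0-9.]*' | head -n1)
  if [ "$got" = "$2" ]; then
    echo "upgrade OK: $got"
  else
    echo "version mismatch: got '$got'"
  fi
}

# On a real node: check_version "$(computing-provider -v)" v0.3.0
check_version "computing-provider version v0.3.0" v0.3.0   # prints: upgrade OK: v0.3.0
```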
**Step 3: Install the AI Inference Dependency**

This is necessary for the Computing Provider to deploy the AI inference endpoint:

```bash
mkdir -p ~/.swan/computing
export CP_PATH=~/.swan/computing
./install.sh
```
**Step 4: Migrate the Computing Provider's `config.toml` to `CP_PATH`**

```bash
cp config.toml ~/.swan/computing/
cp .swan_node/private_key ~/.swan/computing/
```

Note: In the previous version, `config.toml` is located in the same directory as the `computing-provider` binary.

Then restart the Computing Provider:

```bash
export CP_PATH=~/.swan/computing
nohup computing-provider run >> cp.log 2>&1 &
```
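Before restarting, it is worth confirming that both migrated files actually landed in `CP_PATH`. The file names below come from the steps above; the helper function and the throwaway demo directory are illustrative only.

```shell
# Verify that the migrated files exist under a given CP_PATH directory.
check_migration() {
  dir=$1; ok=0
  for f in config.toml private_key; do
    [ -f "$dir/$f" ] || { echo "missing: $dir/$f"; ok=1; }
  done
  [ "$ok" -eq 0 ] && echo "migration complete"
  return "$ok"
}

# Demo against a throwaway directory; on a real node use ~/.swan/computing.
demo=$(mktemp -d)
touch "$demo/config.toml" "$demo/private_key"
check_migration "$demo"   # prints: migration complete
```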
**Need Help?**

If you encounter any problems, you can either leave a comment within the document or open an issue.