Open nicknamebounty opened 1 month ago
Well, not officially (for lots and lots of tech reasons mostly around performance, cost, and scale), but it is technically possible.
When we were looking to move to bigtable, I built a crappy version that wrote to Postgres mostly as a PoC to make sure that I abstracted things correctly. In essence, if you were to create a new struct that implemented DbClient, that should allow Autopush to work with whatever data store you wanted. (I'm not going to post up the Postgres stuff because it's horrible, old, and hasn't merged to main in over a year.)
That said, I'm a bit curious about the "cbt requires a Google account" part. IIRC, you can install the gcloud CLI and then, once it's installed, run the Bigtable emulator by calling gcloud beta emulators bigtable start.
Heck, you should even be able to grab the google/cloud-sdk docker image and run that locally. (We do that for our CI testing.) If you don't mind the data being a bit ephemeral (the emulator only keeps in-memory storage), it's a far easier way to get rolling.
Hi @jrconlin,
Thank you very much for your reply.
In fact, the error occurs when I run the setup_bt.sh script:
-creds flag unset, will use gcloud credential
A Google account is mandatory to use cbt, I guess?
When using one, running ./setup_bt.sh gives:
2024/10/06 16:56:07 -creds flag unset, will use gcloud credential
2024/10/06 16:56:08 Creating table: rpc error: code = AlreadyExists desc = table "projects/test/instances/test/tables/autopush" already exists
I'm a bit lost in the documentation. After this step, which ones allow launching autoconnect and autoendpoint?
If I simply run ./autoconnect (in /autopush-rs/target/debug/):
Error: ApcError { kind: ConfigError(Unknown Database Error: Could not parse DdbSettings: Error("EOF while parsing a value", line: 1, column: 0)), backtrace: 0: <autopush_common::errors::ApcError as core::convert::From<T>>::from
at /root/autopush-rs/autopush-common/src/errors.rs:57:33
If you can guide me to get autopush up and running, thank you in advance.
I think the first message, "will use gcloud credential", is informational, and probably due to some dependency printing that before whatever stub process gets called.
The second one tells me that you already have the Bigtable emulator running in the background. (The script is not smart: it just tries to start an emulator and then run a bunch of setup. It never checks to see if the emulator is already running or if those steps have already been run.)
The final error is the most concerning. You shouldn't be seeing DdbSettings unless you're running older code. If you are running older code, you should probably run:
cargo build --no-default-features --features=emulator --features=bigtable
This will compile the code to make sure that it only uses Bigtable (with the emulator). If you're running the latest master or tag, however, it should already be using Bigtable (although you may still wish to include --features=emulator if you want to run local tests).
As for the setting strings, there are two configuration strings that are database specific:
db_dsn = "grpc://localhost:8086"
db_settings = "{\"message_family\":\"message\",\"router_family\":\"router\", \"table_name\":\"projects/test/instances/test/tables/autopush\"}"
db_settings is a JSON-formatted set of values that point to where things are in Bigtable. These should match whatever you specified in setup_bt.sh.
Hi @jrconlin,
Thanks for all the information and details to help me move forward step by step :)
Here's what I did on a new Ubuntu install.
Installation packages:
sudo apt-get update
sudo apt-get install build-essential libffi-dev libssl-dev pypy-dev python3-virtualenv git --assume-yes
sudo apt-get install apt-transport-https ca-certificates gnupg curl
Google Cloud CLI + bigtable + cbt:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get install google-cloud-cli
sudo apt-get install google-cloud-cli-bigtable-emulator google-cloud-cli-cbt
Rust:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. "$HOME/.cargo/env"
autopush-rs:
git clone https://github.com/mozilla-services/autopush-rs.git
cd autopush-rs
cargo build --features=emulator
export db_dsn="grpc://localhost:8086"
export db_settings="{\"message_family\":\"message\",\"router_family\":\"router\", \"table_name\":\"projects/test123/instances/test123/tables/autopush\"}"
export BIGTABLE_EMULATOR_HOST=localhost:8086
gcloud beta emulators bigtable start
I modified setup_bt.sh with test123 as the project and test123 as the instance (I used test123 in the db_settings env variable). Just for info, cbt is not happy with just test as a project name because it wants at least 6 chars (I don't know if that's new).
scripts/setup_bt.sh
I checked that it's OK on the Bigtable instance:
cbt -instance test123 -project test123 ls
2024/10/08 19:11:54 -creds flag unset, will use gcloud credential
autopush <===== OK, the autopush table is there
cd target/debug
./autoconnect
I get this error again:
Error: ApcError { kind: ConfigError(Unknown Database Error: Could not parse DdbSettings: Error("EOF while parsing a value", line: 1, column: 0)), backtrace: 0: <autopush_common::errors::ApcError as core::convert::From<T>>::from
and at the end:
SENTRY_DSN not set. Logging disabled.
The installed version is the latest one: 1.71.7.
I must have done something wrong in my approach :)
Hi the Team,
I would like to be able to install autopush locally without requiring an internet connection. According to the documentation, it seems that Bigtable is needed, and the setup_bt.sh script uses cbt, which will require a Google account.
Is this possible in the current state? For example, using a local SQLite database.
Thank you in advance for your help.
Nick