siradam / DataMining_Project


Webserver as backend for the mobile app #38

Closed lorenzznerol closed 2 years ago

lorenzznerol commented 3 years ago

An API is needed to feed the mobile app with instant data from a backend. This is on top of the normal data mining task, which is supposed to prepare the clustering and ML as far as possible in the backend. The data to be sent is just the exact filtered zoom level and area that is needed for the app visualization.

lorenzznerol commented 3 years ago

Installed PostgreSQL using the official Docker image documentation: https://hub.docker.com/_/postgres/. I got it to work with Adminer 4.8 in the browser.

TODO:

docker pull postgres
docker run -it -e POSTGRES_PASSWORD=postgresql postgres postgres

docker run --name postgres1 -e POSTGRES_PASSWORD=postgres -d postgres

## https://github.com/docker-library/docs/blob/master/postgres/README.md
## Example stack.yml for postgres (uses postgres/example user/password credentials):

version: '3.1'

services:

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080

## Then open in a Browser  
http://localhost:8080/

Run docker stack deploy -c stack.yml postgres (or docker-compose -f stack.yml up), wait for it to initialize completely, and visit http://swarm-ip:8080, http://localhost:8080 [this is what I took], or http://host-ip:8080 (as appropriate).

You will then see Adminer providing access to the database running in the container.

[screenshots]

We see here that plpython3 is not included in the standard image of postgres:

[screenshots]
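One possible way around this (a sketch, not necessarily how it was done here) would be a small custom image that installs the plpython3 package on top of the official one; the package name and the PostgreSQL major version (13) are assumptions to adjust to the image tag actually used:

docker build -t postgres-plpython - <<'EOF'
FROM postgres:13
RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql-plpython3-13 \
 && rm -rf /var/lib/apt/lists/*
EOF

# afterwards, inside the running database:
# CREATE EXTENSION plpython3u;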

There is no option for skipping the header line. [screenshot]


The main problems are solved. The plpython3u extension does not yet work on the webserver, only in the Docker container, while other problems of the Docker container would be solved by the webserver. It can be expected that the webserver will eventually support plpython3u as well, and then all of this is solved.

lorenzznerol commented 3 years ago

Had a look at the free-of-charge servers linked from How to set up a website with a PostgreSQL database?:

[screenshot]

[screenshot] But Kafka is only needed for real-time access, which is not part of this issue; we do not need it for now. It is only needed in #19, which is pure vision and not realistic.

Heroku PostgreSQL seems to be quite a good solution for us if we really want to have a webserver.

lorenzznerol commented 3 years ago

Tried to install PostgreSQL and phpPgAdmin (an older alternative to Adminer) on Linux (without Docker).


## following https://www.howtoforge.com/tutorial/ubuntu-postgresql-installation/

sudo apt update
sudo apt upgrade
sudo apt -y install postgresql postgresql-contrib phppgadmin
sudo service postgresql start
service postgresql status
## Step 2 - Create New User PostgreSQL
## There should be a user already, called postgres, check this with `\du`. If not, create it
sudo -i -u postgres psql
CREATE ROLE postgres WITH SUPERUSER CREATEDB CREATEROLE LOGIN ENCRYPTED PASSWORD 'postgres';
\du
exit

cd /etc/apache2/conf-available/
sudo vim phppgadmin.conf

### Replace the default URL path 'phppgadmin' with your own path as below.
## Alias /pgsqladminlogin /usr/share/phppgadmin
## !! in this project's case, change it to:
# Alias /marine /usr/share/phppgadmin
#
### Comment out the line 'Require local' by adding a # in front of it, and add the following line below it so that you can access phpPgAdmin from your browser:
# Require all granted

## Close vim using :x + Enter (or nano using 2 x Escape + x + y)

sudo apachectl configtest
sudo service apache2 restart

## Step 4 - Setup UFW Firewall
## Avoid "ERROR: You need to be root to run this script" by running `sudo -i`
sudo -i
for svc in ssh http https
do
ufw allow $svc
done

## stop root mode
exit

## does not work at this stage:
#ufw enable 
### root@DESKTOP-11CK91R:~# ufw enable
### ERROR: problem running ufw-init
### iptables-restore v1.8.4 (legacy): Couldn't load match `limit':No such file or directory
### 
### Error occurred at line: 63
### Try `iptables-restore -h' or 'iptables-restore --help' for more information.
### iptables-restore v1.8.4 (legacy): Couldn't load match `limit':No such file or directory
### 
### Error occurred at line: 30
### Try `iptables-restore -h' or 'iptables-restore --help' for more information.
### ip6tables-restore v1.8.4 (legacy): Couldn't load match `limit':No such file or directory
### 
### Error occurred at line: 30
### Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
### 
### Problem running '/etc/ufw/before.rules'
### Problem running '/etc/ufw/user.rules'
### Problem running '/etc/ufw/user6.rules'
#
### Strangely, these errors do not seem to stop the firewall from being started, since:
#
#ufw status numbered

## gives exactly what is expected:
## Status: active
## 
##      To                         Action      From
##      --                         ------      ----
## [ 1] 22/tcp                     ALLOW IN    Anywhere
## [ 2] 80/tcp                     ALLOW IN    Anywhere
## [ 3] 443/tcp                    ALLOW IN    Anywhere
## [ 4] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
## [ 5] 80/tcp (v6)                ALLOW IN    Anywhere (v6)
## [ 6] 443/tcp (v6)               ALLOW IN    Anywhere (v6)

## Still, this behaviour is reported on https://superuser.com/questions/590600/ufw-is-active-but-not-enabled-why
## It can be solved using `ENABLED=yes` in `/etc/ufw/ufw.conf`, see the following commands.
sudo sed -ie 's/ENABLED=no/ENABLED=yes/' /etc/ufw/ufw.conf
sudo ufw enable
## outputs "Firewall is active and enabled on system startup".

sudo ufw status numbered
#Status: active
#
#     To                         Action      From
#     --                         ------      ----
#[ 1] 22/tcp                     ALLOW IN    Anywhere
#[ 2] 80/tcp                     ALLOW IN    Anywhere
#[ 3] 443/tcp                    ALLOW IN    Anywhere
#[ 4] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
#[ 5] 80/tcp (v6)                ALLOW IN    Anywhere (v6)
#[ 6] 443/tcp (v6)               ALLOW IN    Anywhere (v6)

## You should see the PostgreSQL port '5432' and the Apache port '80' in the LISTEN state, i.e. both services are up and running
ss -plnt
#State                       Recv-Q                      Send-Q                                            Local Address:Port                                             Peer Address:Port                      Process
#LISTEN                      0                           128                                                   127.0.0.1:5432                                                  0.0.0.0:*
#LISTEN                      0                           128                                                     0.0.0.0:8080                                                  0.0.0.0:*
#LISTEN                      0                           128                                                           *:80                                                          *:*
#LISTEN                      0                           128                                                        [::]:8080                                                     [::]:*

## check apache server on localhost
http://localhost/
## same as
http://localhost:80

## go to http://localhost/marine

## in tab "Servers", you should see: PostgreSQL     localhost   5432        

## In the login field, you should see user and password

[screenshot]

Final error in the browser frontend (phpPgAdmin on localhost, aliased to marine, thus at http://localhost/marine):

Login disallowed for security reasons in postgresql

I found no working way out of this. Probably phpPgAdmin is just outdated and the former solution does not work anymore. https://stackoverflow.com/questions/19204816/login-disallowed-for-security-reasons-postgresql-centos-server

I also added a nonsense user "hakase" instead of "postgres" to check whether the problem is just about restricted user names; it is not.

[screenshot]


Tried to access the page not via localhost but via the public IP or the network IP (from the same computer, but from Windows instead of the Linux system running in parallel). This did not work.


Tried to get Adminer (https://www.adminer.org/en/) to work, but I found only unclear guides on how to use it outside of Docker. It is just a PHP file that you have to put into the right folder somewhere, and I did not get the idea to work here.

lorenzznerol commented 3 years ago

Possible solution for a webserver: PostgreSQL for time series, called TimescaleDB.

[screenshot]

If you want to create a free webserver, the registration does not ask for everything (in contrast to Heroku):

[screenshot]

registration:

[screenshots]

Example at a higher scale of 2 CPUs and 8 GB RAM:

[screenshot]

login:

[screenshot]

Our current webserver:

[screenshot]

How to connect:

Install PostgreSQL locally, then put the login details into psql.

[screenshot]

or, if psql is already installed:

[screenshot]

How to really connect (with PostgreSQL + psql installed)

But it seems as if psql does not accept the command line above. Instead, it needs the following login:

Server [localhost]: cpnegve54e.kyxkfpxz40.tsdb.forge.timescale.com
Database [postgres]: tsdb
Port [5432]: 30129
Username [postgres]: tsdbadmin
Password for user tsdbadmin: pdq7lkycbdzzotbw
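For reference, the same login can also be given as flags on one line (equivalent to the prompt answers above; psql then asks for the password):

psql -h cpnegve54e.kyxkfpxz40.tsdb.forge.timescale.com -p 30129 -U tsdbadmin -d tsdb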

[screenshot]

Create the two tables stokes / nostokes, see https://github.com/siradam/DataMining_Project/issues/13#issuecomment-851621012.

tsdb=> CREATE TABLE trajectories_nostokes_subset_10000
tsdb-> (obs int,
tsdb(>  traj real,
tsdb(>  MPA real,
tsdb(>  distance real,
tsdb(>  land real,
tsdb(>  lat real,
tsdb(>  lon real,
tsdb(>  temp real,
tsdb(>  time timestamp,
tsdb(>  z real
tsdb(> );
CREATE TABLE
tsdb=> show tables;
ERROR:  unrecognized configuration parameter "tables"
tsdb=> CREATE TABLE trajectories_stokes_subset_10000
tsdb-> (obs int,
tsdb(>  traj real,
tsdb(>  MPA real,
tsdb(>  distance real,
tsdb(>  land real,
tsdb(>  lat real,
tsdb(>  lon real,
tsdb(>  temp real,
tsdb(>  time timestamp,
tsdb(>  z real
tsdb(> );
CREATE TABLE
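(Side note: show tables is MySQL syntax, which PostgreSQL rejects with the error above; inside psql the table list is shown with the \dt meta-command.)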

Then you can import the CSV. Since we are not a superuser and are on the client side, we cannot use COPY; instead, we need \COPY. With COPY (no backslash), you would first upload all the data to the server by other means (sftp, scp) and then do the import on the server. See https://stackoverflow.com/questions/2987433/how-to-import-csv-file-data-into-a-postgresql-table

COPY does not work:

tsdb=> COPY nostokes(obs,
tsdb(>  traj,
tsdb(>  MPA,
tsdb(>  distance,
tsdb(>  land,
tsdb(>  lat,
tsdb(>  lon,
tsdb(>  temp,
tsdb(>  time,
tsdb(>  z)
tsdb-> FROM 'C:\Users\toeft\Documents\Masterproject GEOMAR\Data\trajectories_nostokes_subset_10000.csv\trajectories_nostokes_subset_10000.csv'
tsdb-> DELIMITER ','
tsdb-> CSV HEADER;
ERROR:  must be superuser or a member of the pg_read_server_files role to COPY from a file
TIP:  Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.

But \COPY does (mind that the command must be written on one line). Each import takes about 10 minutes, since the upload goes through stdin/stdout row by row, which takes more time than COPY would.
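For reference, a minimal sketch of such a client-side import run from the shell (table, columns, and options mirror the COPY attempt above; the CSV path and connection string have to be adjusted, and the \copy command itself stays on one line inside the -c argument):

psql "postgres://tsdbadmin@cpnegve54e.kyxkfpxz40.tsdb.forge.timescale.com:30129/tsdb" \
  -c "\copy nostokes(obs,traj,MPA,distance,land,lat,lon,temp,time,z) FROM 'trajectories_nostokes_subset_10000.csv' DELIMITER ',' CSV HEADER"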

All tables:

[screenshot]

And a chosen table:

[screenshot]

This server is now available at any time.

In order to create an extension, I need to be a superuser, but timescale does not make the default user "tsdbadmin" a superuser. I have asked the support for help.

tsdb=> ALTER USER tsdbadmin WITH SUPERUSER;
ERROR:  must be superuser to alter superusers

Which users are available?

                                       List of roles
  Role name    |                            Attributes                           | Member of
--------------+-----------------------------------------------------------------+--------------
 postgres     | Superuser, Create role, Create DB, Replication, Bypass RLS      | {}
 standby      | Replication                                                     | {}
 tsdbadmin    | Create role, Create DB                                          | {tsdbowner}
 tsdbexplorer |                                                                 | {}
 tsdbowner    | Cannot login                                                    | {}
 tsdbquerier  |                                                                 | {tsdbadmin}

I do not have a superuser pw for the superuser called "postgres".

tsdb-> \c tsdb postgres
Password for user postgres:
FATAL:  password authentication failed for user "postgres"
lorenzznerol commented 3 years ago

An API to access postgreSQL could be used to run queries on the web server and get the result shown on a mobile phone (not sure about this, though). I got the hint from a comment on https://stackoverflow.com/questions/25492358/how-do-i-access-the-database-from-my-browser.

https://github.com/PostgREST/postgrest

[screenshot]

Latest documentation is at https://postgrest.org/en/stable/.

https://postgrest.org/en/stable/tutorials/tut0.html:

[screenshot]
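Not set up in this project, but to illustrate the idea: once a PostgREST instance runs in front of the database (by default on port 3000), every table becomes an HTTP endpoint and filters go into the query string, e.g. for a stokes table like the one used later:

curl "http://localhost:3000/stokes?obs=eq.0&limit=5"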

lorenzznerol commented 3 years ago

For Android, there is Termux to use postgreSQL on the mobile phone. Perhaps that can be used to get data from a postgreSQL web server as well?

https://wiki.termux.com/wiki/Postgresql

Seen at: https://stackoverflow.com/questions/33474468/can-i-use-postgresql-in-android-phone
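A minimal sketch of that idea (untested here): the Termux postgresql package also brings the psql client, which could then talk to the webserver from the sections above:

pkg install postgresql
psql -h cpnegve54e.kyxkfpxz40.tsdb.forge.timescale.com -p 30129 -U tsdbadmin -d tsdb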

lorenzznerol commented 3 years ago

Created a new webserver, and the menu on Timescale has changed, therefore some new screenshots:

[screenshot]

Details:

• Service name: tsdb-8edbcb8
• Cloud: timescale-aws-eu-central-1
• Plan: timescale-dev-only

[screenshots]

Connection information

Service URI: postgres://tsdbadmin:rdbt9yr168bi80jw@tsdb-8edbcb8-marine-ad06.a.timescaledb.io:25145/defaultdb?sslmode=require

Database Name: defaultdb
Host: tsdb-8edbcb8-marine-ad06.a.timescaledb.io
Port: 25145
User: tsdbadmin
Password: rdbt9yr168bi80jw
SSLmode: require
CA Certificate: Download (see below)
Connection Limit: 100

CA Certificate, if needed: filename: ca.pem -----BEGIN CERTIFICATE----- MIIEQTCCAqmgAwIBAgIUPVry8oOWmxx4eHNWRAlKvdPPMSkwDQYJKoZIhvcNAQEM BQAwOjE4MDYGA1UEAwwvZDk2ZDM4MzQtNmFmMS00NmFlLWEyMjUtZGMwMDE1MDU0 YjJlIFByb2plY3QgQ0EwHhcNMjEwOTA2MDQyMDA3WhcNMzEwOTA0MDQyMDA3WjA6 MTgwNgYDVQQDDC9kOTZkMzgzNC02YWYxLTQ2YWUtYTIyNS1kYzAwMTUwNTRiMmUg UHJvamVjdCBDQTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALUjayZk aZ5yjj3vjbtyPe8okGHniMwplI/gyTuhuQcqu2rxcZPDRlkgAjgYYVwmx+pf8DVY A2xNB8Lxol+Kt2kBHW24/dBzD3UOqmDcwSgufwYtSB1Ql9vaWWtCPgod1uf31mSv BW+XyoUYY/KRrK9+GslQTn70xDL5vxFumqLe9MkAS28NdPk2Gl7aNHKrEHAMCsl9 MOEgbe6tuKFbfYjMUrkGNaYXSpEok92gvKRcYhoK+ocgS8aQEuSS6ah2bqnF8Cf0 rYyXIC3CUdkVL1/mDTTVgaJb+2k2HisIZ86/oauuYepE5zhMaVp8zrfTz2LcB9Wv 878EYO0zwiggxjI7AHMiqooZaxPlhi7vvTWaqGJkLhK0NY0sthlOqEX14MFQJKO4 9m4WH1zcntGIxOTPV3vzsOIq86SWSctkd3HipM2CXgbbSaGbRQO5SHTqomhJu7p+ TYIsL1bW17ddG1BDLXxgPbOYpX+XuxM8Z6CPUzLxv9+aSQoVGTdfFFNGEwIDAQAB oz8wPTAdBgNVHQ4EFgQUqOut8Uppgng2BLIekQ8WQOtFPc8wDwYDVR0TBAgwBgEB /wIBADALBgNVHQ8EBAMCAQYwDQYJKoZIhvcNAQEMBQADggGBAJrnDwKGvzV3FKd7 +OylOI+M/PI4MnYKDtyECTlOVmIIJzJ6IrbqhZ04gH/uGs4xGpAxmAtSwx6s9AdR U8wIkhmcb9ytWFC/hyoK7v1EDEJWidxhTRtooE01y/mdZYiTFKOL9L0f/8H0tlGf V937TBZjmp4jyINMN/urR2LSiNYYuJo2ZgiAZv8Lw8TfKbolgzqstRCUn2G1MYFg HKTehiiBMoOaixSZ3K1Kq3o7JBwAT4r1eo2rz4FDNMV37vP52K2BpA/h3259bBTE ig7lTNeb2bxUBR2gmDj6uy0UTmErfv9HBb0mtqEq3H4y1hG0vnfXH6jPf7U5LeEq MAgLOX3gFVvBjM3eTc6LZugniyNK99114CSU8vKW08A6NNRHFLIFK7+cRjv76wSo KLXEfYabqoagaw/2EhrC2UfpBfdpMYAf/FQCaRFB+lPRrm/ds6gHbh94g03UpUW1 5oFysnUa2wmcqyqv86bVcPfQfupJ383rXxtRwOPcVOEdP4eSdg== -----END CERTIFICATE-----

Since SSL is required, the above certificate needs to be used to connect to the database. I am still trying to figure that out on Windows.

[screenshot]

It seems to be properly supported only on Linux.

The connection URL and other parameters, as well as the certificate, are now available for everyone, see above. The database is still empty, though; I first need to find out how to connect. (TODO) DONE


Connection with root certificate for SSL encryption is now DONE

On Windows, with pgAdmin 4 v5:

[screenshots]

Choose the "ca.pem" certificate:

[screenshot]

Result: [screenshot]

On Windows, with psql

Server [localhost]:  tsdb-8edbcb8-marine-ad06.a.timescaledb.io
Database [postgres]: defaultdb
Port [5432]: 25145
Username [postgres]: tsdbadmin
Password for user tsdbadmin: rdbt9yr168bi80jw

This only worked after setting the pgAdmin SSL properties and loading the database. The certificate seems to be saved in the system, perhaps encrypted, since I could not find it anywhere.

On Linux, with psql

Put the ca.pem certificate in ~/geomar/certificates. Then in bash, go to ~/geomar/certificates and enter:

PGSSLMODE=require PGSSLROOTCERT=ca.pem psql -h tsdb-8edbcb8-marine-ad06.a.timescaledb.io -p 25145 -U tsdbadmin -d marinedb

Once the certificate has been loaded in the environment variables like this, it seems to hold for roughly half an hour before you have to load it again into the PGSSLROOTCERT, but that could also have come because I might have overwritten the var with a wrong value in the meantime.

PGSSLMODE=require psql -h tsdb-8edbcb8-marine-ad06.a.timescaledb.io -p 25145 -U tsdbadmin -d defaultdb
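Side note: with the VAR=value command form, the variables only apply to that single psql invocation and are not kept in the shell afterwards, which probably explains the impression above. To keep them for the whole session, export them first:

export PGSSLMODE=require
export PGSSLROOTCERT=~/geomar/certificates/ca.pem
psql -h tsdb-8edbcb8-marine-ad06.a.timescaledb.io -p 25145 -U tsdbadmin -d defaultdb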
lorenzznerol commented 3 years ago

API for the mobile frontend to the Postgresql Webserver backend

This is mainly the API, with some very small SQL parametrisation in the insert and delete commands. A parametrisation prototype is also given for the select command, but I did not yet get it to work (ReferenceError: request is not defined). It should still run later in the app created with npx create-react-app react-postgres, and perhaps even the Promises that were used in the original code need to be there.

The main aim was to get the API to connect to the server at all and to get the needed endpoint (JSON) format as a result in the browser.

The certificate that you need to save at /etc/certificates/ca.pem (including the ----xyz---- from top and bottom):

-----BEGIN CERTIFICATE----- MIIEQTCCAqmgAwIBAgIUPVry8oOWmxx4eHNWRAlKvdPPMSkwDQYJKoZIhvcNAQEM BQAwOjE4MDYGA1UEAwwvZDk2ZDM4MzQtNmFmMS00NmFlLWEyMjUtZGMwMDE1MDU0 YjJlIFByb2plY3QgQ0EwHhcNMjEwOTA2MDQyMDA3WhcNMzEwOTA0MDQyMDA3WjA6 MTgwNgYDVQQDDC9kOTZkMzgzNC02YWYxLTQ2YWUtYTIyNS1kYzAwMTUwNTRiMmUg UHJvamVjdCBDQTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALUjayZk aZ5yjj3vjbtyPe8okGHniMwplI/gyTuhuQcqu2rxcZPDRlkgAjgYYVwmx+pf8DVY A2xNB8Lxol+Kt2kBHW24/dBzD3UOqmDcwSgufwYtSB1Ql9vaWWtCPgod1uf31mSv BW+XyoUYY/KRrK9+GslQTn70xDL5vxFumqLe9MkAS28NdPk2Gl7aNHKrEHAMCsl9 MOEgbe6tuKFbfYjMUrkGNaYXSpEok92gvKRcYhoK+ocgS8aQEuSS6ah2bqnF8Cf0 rYyXIC3CUdkVL1/mDTTVgaJb+2k2HisIZ86/oauuYepE5zhMaVp8zrfTz2LcB9Wv 878EYO0zwiggxjI7AHMiqooZaxPlhi7vvTWaqGJkLhK0NY0sthlOqEX14MFQJKO4 9m4WH1zcntGIxOTPV3vzsOIq86SWSctkd3HipM2CXgbbSaGbRQO5SHTqomhJu7p+ TYIsL1bW17ddG1BDLXxgPbOYpX+XuxM8Z6CPUzLxv9+aSQoVGTdfFFNGEwIDAQAB oz8wPTAdBgNVHQ4EFgQUqOut8Uppgng2BLIekQ8WQOtFPc8wDwYDVR0TBAgwBgEB /wIBADALBgNVHQ8EBAMCAQYwDQYJKoZIhvcNAQEMBQADggGBAJrnDwKGvzV3FKd7 +OylOI+M/PI4MnYKDtyECTlOVmIIJzJ6IrbqhZ04gH/uGs4xGpAxmAtSwx6s9AdR U8wIkhmcb9ytWFC/hyoK7v1EDEJWidxhTRtooE01y/mdZYiTFKOL9L0f/8H0tlGf V937TBZjmp4jyINMN/urR2LSiNYYuJo2ZgiAZv8Lw8TfKbolgzqstRCUn2G1MYFg HKTehiiBMoOaixSZ3K1Kq3o7JBwAT4r1eo2rz4FDNMV37vP52K2BpA/h3259bBTE ig7lTNeb2bxUBR2gmDj6uy0UTmErfv9HBb0mtqEq3H4y1hG0vnfXH6jPf7U5LeEq MAgLOX3gFVvBjM3eTc6LZugniyNK99114CSU8vKW08A6NNRHFLIFK7+cRjv76wSo KLXEfYabqoagaw/2EhrC2UfpBfdpMYAf/FQCaRFB+lPRrm/ds6gHbh94g03UpUW1 5oFysnUa2wmcqyqv86bVcPfQfupJ383rXxtRwOPcVOEdP4eSdg== -----END CERTIFICATE-----

Of course, you can also save it elsewhere, but then you also need to change the path in the Pool class of the code below.


I am following the guide at Getting started with Postgres in your React app: an end to end example, where you also find more about the code that was used here. For digging deeper, How to quickly build an API using Node.js & PostgreSQL might help as well.

The webserver must be running to test this. You first need to follow the guide and install Node and Express.

Then create two files, marine_model.js (adapted from the guide's merchant_model.js, and required as './marine_model' in index.js below) and index.js.

marine_model.js:

const Pool = require('pg').Pool
const fs = require('fs');
const pool = new Pool({
  user: 'tsdbadmin',
  host: 'tsdb-8edbcb8-marine-ad06.a.timescaledb.io',
  database: 'marinedb',
  password: 'rdbt9yr168bi80jw',
  port: 25145,
  ssl: {
    ca: fs
      .readFileSync("/etc/certificates/ca.pem")
      .toString()
  }
});

// Parametrised select (prototype): not working yet, since `request` is not defined in
// this scope (ReferenceError: request is not defined), so it is kept commented out here.
// const getStokes = (request) => {
//     const obs = parseInt(request.params.obs)
//     return pool.query('SELECT * FROM stokes WHERE obs = $1 ORDER BY obs ASC', [obs])
// }

// Only this query, which does not use a request, can be shown in the test browser at
// localhost:3001; therefore it is the one left active for testing.
const getStokes = () => pool.query('SELECT * FROM stokes ORDER BY obs ASC')

const createStoke = (body) => {
    const { obs, traj } = body
    // return the query promise so that index.js can chain .then() on it
    return pool.query('INSERT INTO stokes (obs, traj) VALUES ($1, $2) RETURNING *', [obs, traj])
}

const deleteStoke = (obs) => {
    // obs is passed in from index.js as req.params.id
    return pool.query('DELETE FROM stokes WHERE obs = $1', [parseInt(obs)])
}

module.exports = {
  getStokes,
  createStoke,
  deleteStoke,
}

Side note: in contrast to the guide, I use an SSL connection, so I need to require fs at the start; see How to establish a secure connection (SSL) from a Node.js API to an AWS RDS.

Once both files exist, being in the project directory, you can run it with node index.js.

index.js is a full copy of the guide's version; only getStokes seems to get run:

const express = require('express')
const app = express()
const port = 3001

const Stoke_model = require('./marine_model')

app.use(express.json())
app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', 'http://localhost:3001');
  res.setHeader('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Access-Control-Allow-Headers');
  next();
});

app.get('/', (req, res) => {
  Stoke_model.getStokes()
  .then(response => {
    res.status(200).send(response);
  })
  .catch(error => {
    res.status(500).send(error);
  })
})

app.post('/Stokes', (req, res) => {
  Stoke_model.createStoke(req.body)
  .then(response => {
    res.status(200).send(response);
  })
  .catch(error => {
    res.status(500).send(error);
  })
})

app.delete('/Stokes/:id', (req, res) => {
  Stoke_model.deleteStoke(req.params.id)
  .then(response => {
    res.status(200).send(response);
  })
  .catch(error => {
    res.status(500).send(error);
  })
})
app.listen(port, () => {
  console.log(`App running on port ${port}.`)
})

If that runs, you should see

App running on port 3001.

in the command prompt, with the command still running. Then you open a browser and enter localhost:3001 to see

[screenshot]

if you do not use a where condition on the select (the test table has just 9 rows).
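Besides the browser, the write endpoints defined in index.js can also be exercised from the command line; a minimal sketch with curl (the obs/traj values are made up):

# insert a row via the POST route
curl -X POST http://localhost:3001/Stokes \
  -H 'Content-Type: application/json' \
  -d '{"obs": 0, "traj": 99}'

# delete rows via the DELETE route (the :id parameter is used as obs)
curl -X DELETE http://localhost:3001/Stokes/0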

Here is the endpoint output in the browser for first testing with the app:

{"command":"SELECT","rowCount":9,"oid":null,"rows":[{"obs":0,"traj":0,"mpa":1,"distance":0,"land":0,"lat":43.288517,"lon":5.171321,"temp":13.421764,"time":"2017-02-28T23:00:00.000Z","z":1.0182366},{"obs":0,"traj":1,"mpa":1,"distance":0,"land":0,"lat":43.296562,"lon":4.9828978,"temp":12.984367,"time":"2017-03-04T23:00:00.000Z","z":1.0182366},{"obs":0,"traj":2,"mpa":1,"distance":0,"land":0,"lat":43.29465,"lon":4.962841,"temp":13.468207,"time":"2017-03-09T23:00:00.000Z","z":1.0182366},{"obs":0,"traj":3,"mpa":1,"distance":0,"land":0,"lat":43.280422,"lon":5.1009564,"temp":13.317587,"time":"2017-03-13T23:00:00.000Z","z":1.0182366},{"obs":0,"traj":4,"mpa":1,"distance":0,"land":0,"lat":43.294853,"lon":5.1343126,"temp":13.7182865,"time":"2017-03-18T23:00:00.000Z","z":1.0182366},{"obs":0,"traj":5,"mpa":1,"distance":0,"land":0,"lat":43.274662,"lon":5.2798667,"temp":13.834332,"time":"2017-03-22T23:00:00.000Z","z":1.0182366},{"obs":0,"traj":6,"mpa":1,"distance":0,"land":0,"lat":43.302692,"lon":5.0953283,"temp":14.2348795,"time":"2017-03-27T22:00:00.000Z","z":1.0182366},{"obs":0,"traj":7,"mpa":1,"distance":0,"land":0,"lat":43.291172,"lon":5.3015227,"temp":14.556713,"time":"2017-03-31T22:00:00.000Z","z":1.0182366},{"obs":0,"traj":8,"mpa":1,"distance":0,"land":0,"lat":43.300125,"lon":5.205144,"temp":14.499222,"time":"2017-04-05T22:00:00.000Z","z":1.0182366}],"fields":[{"name":"obs","tableID":17570,"columnID":1,"dataTypeID":23,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"traj","tableID":17570,"columnID":2,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"mpa","tableID":17570,"columnID":3,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"distance","tableID":17570,"columnID":4,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"land","tableID":17570,"columnID":5,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"lat","tableID":17570,"columnID":6,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"lon","tableID":17570,"columnID":7,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"temp","tableID":17570,"columnID":8,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"},{"name":"time","tableID":17570,"columnID":9,"dataTypeID":1114,"dataTypeSize":8,"dataTypeModifier":-1,"format":"text"},{"name":"z","tableID":17570,"columnID":10,"dataTypeID":700,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"}],"_parsers":[null,null,null,null,null,null,null,null,null,null],"_types":{"_types":{"arrayParser":{},"builtins":{"BOOL":16,"BYTEA":17,"CHAR":18,"INT8":20,"INT2":21,"INT4":23,"REGPROC":24,"TEXT":25,"OID":26,"TID":27,"XID":28,"CID":29,"JSON":114,"XML":142,"PG_NODE_TREE":194,"SMGR":210,"PATH":602,"POLYGON":604,"CIDR":650,"FLOAT4":700,"FLOAT8":701,"ABSTIME":702,"RELTIME":703,"TINTERVAL":704,"CIRCLE":718,"MACADDR8":774,"MONEY":790,"MACADDR":829,"INET":869,"ACLITEM":1033,"BPCHAR":1042,"VARCHAR":1043,"DATE":1082,"TIME":1083,"TIMESTAMP":1114,"TIMESTAMPTZ":1184,"INTERVAL":1186,"TIMETZ":1266,"BIT":1560,"VARBIT":1562,"NUMERIC":1700,"REFCURSOR":1790,"REGPROCEDURE":2202,"REGOPER":2203,"REGOPERATOR":2204,"REGCLASS":2205,"REGTYPE":2206,"UUID":2950,"TXID_SNAPSHOT":2970,"PG_LSN":3220,"PG_NDISTINCT":3361,"PG_DEPENDENCIES":3402,"TSVECTOR":3614,"TSQUERY":3615,"GTSVECTOR":3642,"REGCONFIG":3734,"REGDICTIONARY":3769,"JSONB":3802,"REGNAMESPACE":4089,"REGROLE":4096}},"text":{},"binary":{}},"RowCtor":null,"rowAsArray":fals
e}

App

The React app on top of this is yet to come. Perhaps it will even need the Promises that were used in the original merchant_model.js.

It will be a GUI where you can enter the obs value, so that you can control what you want to insert or delete (and, once the select is parametrised as well, what to select).


DONE

After following the installation guide (remove the content of react-postgres/src dir and put App.js + a new index.js in the react-postgres/src dir), I can send the parametrised requests.

In the project dir:

npx create-react-app react-postgres

App.js to be put in the subfolder /src:

import React, {useState, useEffect} from 'react';
function App() {
  const [stokes, setStokes] = useState(false);
  useEffect(() => {
    getStoke();
  }, []);
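  // Note on the three getStoke declarations below: they all share the same name, so in
  // JavaScript only the last one (the POST variant) actually takes effect. This matches
  // the observation further down that prompting for obs does not return any data.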

  function getStoke() {
    fetch('http://localhost:3001')
      .then(response => {
        return response.text();
      })
      .then(data => {
        setStokes(data);
      });
  }
  function getStoke() {
    let obs = prompt('Enter stoke obs');  
    fetch('http://localhost:3001')
      .then(response => {
        return response.text();
      })
      .then(data => {
        setStokes(data);
      });
  }
  function getStoke() {
    let obs = prompt('Enter stoke obs');
    let traj = prompt('Enter stoke traj');    
    fetch('http://localhost:3001', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({obs, traj}),
    })  
      .then(response => {
        return response.text();
      })
      .then(data => {
        setStokes(data);
      });  
  }  

  function createStoke() {
    let obs = prompt('Enter stoke obs');
    let traj = prompt('Enter stoke traj');
    fetch('http://localhost:3001/stokes', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({obs, traj}),
    })
      .then(response => {
        return response.text();
      })
      .then(data => {
        alert(data);
        getStoke();
      });
  }
  function deleteStoke() {
    let obs = prompt('Enter stoke obs');
    fetch(`http://localhost:3001/stokes/${obs}`, {
      method: 'DELETE',
    })
      .then(response => {
        return response.text();
      })
      .then(data => {
        alert(data);
        getStoke();
      });
  }
  return (
    <div>
      {stokes ? stokes : 'There is no stoke data available'}
      <br />
      <button onClick={createStoke}>Add stoke</button>
      <br />
      <button onClick={deleteStoke}>Delete stoke</button>
    </div>
  );
}
export default App;

index.js to be put in the subfolder /src:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));

Then in the /react-postgres dir:

npm start

and you get:

[screenshot]

and a browser opens:

[screenshot]

asking you to:

[screenshot]

Entering 0 here to select all obs with value 0 does not show any data, but it should normally show the full table of 9 rows, since all obs are 0. (TODO) DONE

This is almost finished now. The create does not work, since all of the table's columns would need to be inserted, not just obs and traj (I guess so).

It should be easy to find out about this since there is a Merchant table as the example that would probably work. And then it is a small step to add other columns to the parameters.


I used the full set of JS files from the repository instead of those from the guide's website. I also added passwords to the postgres user and the my_user user.

With this, I got a working select list shown in the frontend. I have not yet checked how to add parameters to the select query, but that should be quickly done. (TODO)

You need to follow Getting error: Peer authentication failed for user "postgres", when trying to get pgsql working with rails in order to give postgres a password, and if you want to use another user like my_user, give it a password as well.

Only then, you get the connection done.

I tried the full guide with the original "merchant" example to have a prototype that definitely works, and it worked, as shown in the following. From this, we can easily create another prototype on the basis of the webserver.

postgres-node

When you have added the two files from the repository:

[screenshot]

to the "node-postgres" directory, then, being in that project directory, you can run it with node index.js.

If that runs, you should see

App running on port 3001.

in the command prompt, with the command still running. Then you open a browser and enter localhost:3001 to see the backend:

[screenshot]

postgres-react

When you then follow the guide and add the files from the repository:

[screenshot]

to the "react-postgres\src" directory, you can run:

npm start

to see the frontend:

[screenshot]

Try the Add button:

[screenshots]

I could now also add the UPDATE statement as a new button with a working parametrisation. We do not need updates, but it is good for learning how the parametrisation works:

In the "/node-postgres" directory, I changed:

app.put('/merchants', (req, res) => {
  merchant_model
    .updateMerchant(req.body)
    .then((response) => {
      res.status(200).send(response);
    })
    .catch((error) => {
      res.status(500).send(error);
    });
});

The original createMerchant function:

const createMerchant = (body) => {
  return new Promise(function (resolve, reject) {
    const { name, email } = body;

    pool.query(
      'INSERT INTO merchants (name, email) VALUES ($1, $2) RETURNING *',
      [name, email],
      (error, results) => {
        if (error) {
          reject(error);
        }
        resolve(
          `A new merchant has been added: ${JSON.stringify(
            results.rows[0]
          )}`
        );
      }
    );
  });
};

Now the new updateMerchant function:

const updateMerchant = (body) => {
  return new Promise(function (resolve, reject) {
    const { id, name } = body;
    pool.autoCommit = true; // note: has no effect in node-postgres; single queries autocommit by default

    pool.query(
      'UPDATE merchants SET name = $2 WHERE id = $1',
      [id, name],
      (error, results) => {
        if (error) {
          reject(error);
        }
        resolve(
          `Merchant updated with ID: ${id}, changed name to ${name}.`
        );
      }
    );
  });
};

module.exports = {
  getMerchants,
  createMerchant,
  deleteMerchant,
  updateMerchant,
};

In the /node-postgres/react-postgres directory, I changed the following (using the PUT method for UPDATE, following Need help about @Get @Post @Put @Delete in Restful webservice and Explain and example about 'get', 'delete', 'post', 'put', 'options', 'patch', 'head' method?):

function updateMerchant() {
  let id = prompt('Enter merchant ID');
  let name = prompt('Enter merchant name');

  fetch('http://localhost:3001/merchants', {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ id, name }),
  })
    .then((response) => {
      return response.text();
    })
    .then((data) => {
      alert(data);
      getMerchant();
    });
}

return (
  <div>
    {merchants ? merchants : 'There is no merchant data available'}
    <br />
    <button onClick={createMerchant}>Add</button>
    <br />
    <button onClick={deleteMerchant}>Delete</button>
    <br />
    <button onClick={updateMerchant}>Update</button>
  </div>
);

Example screenshots, updating ID 2:

[screenshots]

Press F5:

[screenshot]

lorenzznerol commented 2 years ago

Getting the result of the Webserver as endpoints for the mobile phone

Using Node.JS directly on Android? Answer: No

From the mobile app team I got the information that only the endpoints are needed; the mobile phone itself cannot run Node.js. But how else would you query the webserver with the mobile app, if not by running Node.js on the phone, querying the webserver from there, and publishing the result on a localhost website on the phone?

This link seems to hint at a node.js app, though only at first glance:

https://code.janeasystems.com/nodejs-mobile/getting-started-react-native

This link shows that using node.js directly on the mobile phone is impossible:

https://os-system.com/blog/node-js-android-app-right-choice-for-backend/

Is it possible to run Node.js on Android?

You cannot run Node.js on Android, as these are unrelated things. Node.js runs on the server, while Android is the operating system. Of course, it is assumed that in the future emulators will be developed on which it will be possible to run Node.js, but at the moment this is technically impossible.

[screenshot]

Node.js works independently, it cannot be installed on Android OS as it is a server service that simply responds to requests. These are the requests that Android sends. By the way, it doesn’t have to be Android, it can be any operating systems.

Therefore, we need an interface to the PostgreSQL webserver that the Android frontend can connect to. The currently used Node.js + ReactJS solution is a local server that fetches from the PostgreSQL webserver, with the consequence that this local server would need to be available as a webserver as well, otherwise the mobile phone cannot connect.

Publishing the results of the Webserver as endpoints on a separate website to be fetched by the mobile phone

Having two webservers running plus the published website might be too much; I hope that the PostgreSQL webserver itself has some APIs and publishing tools. In any case, publishing the endpoints on a website seems to be needed, since we cannot use Node.js / React and a localhost website directly on the mobile phone.

There are hints that other projects do indeed publish the endpoints of their webserver results to a website; see for example Publishing multiple Endpoints with built-in Java JAX-WS web server, though that answer is 7 years old.


Asking timescaleDB

It seems as if the current Node.js layer is already the right way to go. The frontend only needs to be given an API that lives in the Node.js layer, which is for example what the React app uses, and that API just needs to be published on a website that is not only on localhost.

Question

Hi timescale team,

I try to publish the result of an SQL query as endpoints so that a mobile app can fetch them with javascript.

It is impossible to install Node.js on Android, therefore my previous solution of installing Node.js + ReactJS only showed how I might query TimescaleDB with JavaScript, but it would not help me query from the mobile phone and publish the endpoints to a website that the mobile phone can fetch.

Is there a built-in API and tool to help me out with this?

Answer for timescaleDB Webserver:

Timescaledb doesn't offer, itself, an API for this. Because we're an extension on top of Postgres, nearly anything that talks to postgres can talk to us, but we don't ourselves offer a restful API or a webserver available for requests from clients. However, we've got many folks who power a number of different kinds of applications, including mobile ones with timescaledb. In those cases, most folks introduce an API layer. The application itself communicates with the API, which pushes and pulls data from the database, and returns it in a consumable format by the application. In fact, this would generally be a best practice - in most cases, you don't want the mobile application (with an unknown number of users - hopefully a lot of them!) connecting directly to the database, since you wouldn't be able to control the number of open connections, which can result in significant performance or stability issues.

This API could live in your node.js that runs on your infrastructure, and works between the mobile applications and the database. Another common access method is to front-end your data with a graphQL layer. There are many choices, but several folks I've worked with have chosen to use Hasura for this. But this all really depends on your particular needs.

Research on Hasura API

https://hasura.io/

[screenshots]

lorenzznerol commented 2 years ago

Setup of Hasura API

You can register with just your e-mail; no credit card or mobile number is needed.

[screenshots]

Hasura is secured with secrets, no private and public key needed:

[screenshot]

Connect

Everyone should now be able to connect to the Hasura API by using:

[screenshot]

https://wondrous-quail-63.hasura.app/v1/graphql

together with the Admin secret:

BukR0j7DT3QbEg2a2cPR4BIFlHLtczvNnNCO19EYEMPV0zxSiIiMoKDOezxzSRah

From Hasura to the backend

And you can connect to the TimescaleDB Webserver of course. They also advertise Heroku, though I do not like Heroku so much because it asks you for your mobile phone number and company, or at least it asks more than just the e-mail.

Yet, it seems as if Heroku would already be available with the Hasura e-mail account here, as an integrated part of Hasura:

[screenshot]

But it turns out that you have to log in with your Heroku registration, so it is not really integrated, and our TimescaleDB webserver still makes sense given the smaller amount of personal data it asks for. Therefore, let us connect to TimescaleDB:

[screenshot]

SSL not easy to use (and optional anyway)

I invested quite some time into connecting to the timescaleDB Webserver from Hasura, thinking that this was needed since the Webserver used SSL encryption and psql needed that SSL certificate to connect.

It turned out in the end that SSL is optional and that just the URI is enough! This whole paragraph is therefore just for documentation.

I tried to connect like I do in psql with SSL, but that does not seem to work so easily; you need additional node certificates and keys, which TimescaleDB does not provide:

[screenshot]

Key + certificate tricks

If I run

openssl req -out server.csr -new -newkey rsa:2048 -nodes \
-keyout server.key

and fill it with some random values or “.”, and change the config with sudo nano /usr/lib/ssl/openssl.cnf so that the private key “server.key” and the root certificate “ca.pem” are found, and if I then try to create the server.crt with:

openssl ca -extensions v3_intermediate_ca -days 3650 -notext \
-md sha256 -in server.csr -out server.crt

I get:

Using configuration from /usr/lib/ssl/openssl.cnf
CA certificate and CA private key do not match
140718372717888:error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch:../crypto/x509/x509_cmp.c:299:

And only https://stackoverflow.com/questions/7064087/how-to-convert-csr-to-cer-or-whatever-usable-on-windows#7064418 helps me out to get the server.crt with:

openssl x509 -req -in server.csr -signkey server.key -out server.crt

Filling the needed ssl items:

[screenshot]

  1. SSL Root Certificate
  2. SSL Certificate
  3. SSL Key
  4. SSL Password

does not solve it. When trying to connect to my database at timescale using Hasura, I get a red text box at the upper right:

[screenshot]

Saying:

Adding data source failed Inconsistent object: Missing Env Var: -----BEGIN CERTIFICATE----- MIICoTCCAYkCFAHx6x6kVHJ023/ui0aLoWLqA8RyMA0GCSqGSIb3DQEBCwUAMA0x CzAJBgNVBAYTAkRFMB4XDTIxMDkyMTE4MDQzNVoXDTIxMTAyMTE4MDQzNVowDTEL MAkGA1UEBhMCREUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDWGxsz … g98irlkrtDn17jIzmZMjlJEQ75P9L78lJMT2eYCFRqVoCm+7x5IUS18UlnGUtwR4 BuIZOg4= -----END CERTIFICATE-----

TimescaleDB service help:

Then I asked TimescaleDB again. Answer:

While we provide the CA root (because the SSL certs for each service are internally signed, so this allows you to create trust), we do not provide the private keys or SSL passwords for them.

The documentation you linked to is for creating certificate auth between nodes in an on-premise multi-node environment. Timescale cloud does not allow for certificate auth, so they wouldn't be relevant here, and you wouldn't be able to combine certificates you create with the CA you downloaded from the cloud portal. However, I do believe these are optional. Using both the hasura docker-compose and hasura cloud, simply providing the entire connection string (as copied from the Service URI field in the Timescale Cloud Portal) was sufficient to connect:

[screenshots]

postgres://tsdbadmin:ZvHXpmgjPAmAxNGU@tf-promscale-tests-internal-90d0.a.timescaledb.io:26479/defaultdb?sslmode=require

URI is the way to connect

From the answer of the Timescale service, it is clear that the URI is enough. I had tried that before as a first step, but I had already used the SSL root certificate in the SSL settings. I should have left that out and it would have run directly, but I did not expect it to be just optional.

Here is the URI, taken from timescaleDB website:

[screenshot]

postgres://tsdbadmin:rdbt9yr168bi80jw@tsdb-8edbcb8-marine-ad06.a.timescaledb.io:25145/defaultdb?sslmode=require

Use this URI to connect with Hasura; it is that easy:

[screenshot]

Then you see the untracked tables that were created in advance using psql on the Webserver, press "Track All":

[screenshots]

And here is a test query with a where condition. You do not have to write the code, you can just click to get:

query MyQuery {
  stokes(where: {mpa: {_eq: 1}}) {
    z
    traj
    time
    temp
    obs
    mpa
    lon
    lat
    land
    distance
  }
}

This gets fired over the API, with the results as endpoint at:

https://wondrous-quail-63.hasura.app/v1/graphql

{
  "data": {
    "stokes": [
      {
        "z": 1.0182366,
        "traj": 0,
        "time": "2017-03-01T00:00:00",
        "temp": 13.421764,
        "obs": 0,
        "mpa": 1,
        "lon": 5.171321,
        "lat": 43.288517,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 1,
        "time": "2017-03-05T00:00:00",
        "temp": 12.984367,
        "obs": 0,
        "mpa": 1,
        "lon": 4.9828978,
        "lat": 43.296562,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 2,
        "time": "2017-03-10T00:00:00",
        "temp": 13.468207,
        "obs": 0,
        "mpa": 1,
        "lon": 4.962841,
        "lat": 43.29465,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 3,
        "time": "2017-03-14T00:00:00",
        "temp": 13.317587,
        "obs": 0,
        "mpa": 1,
        "lon": 5.1009564,
        "lat": 43.280422,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 4,
        "time": "2017-03-19T00:00:00",
        "temp": 13.7182865,
        "obs": 0,
        "mpa": 1,
        "lon": 5.1343126,
        "lat": 43.294853,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 5,
        "time": "2017-03-23T00:00:00",
        "temp": 13.834332,
        "obs": 0,
        "mpa": 1,
        "lon": 5.2798667,
        "lat": 43.274662,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 6,
        "time": "2017-03-28T00:00:00",
        "temp": 14.2348795,
        "obs": 0,
        "mpa": 1,
        "lon": 5.0953283,
        "lat": 43.302692,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 7,
        "time": "2017-04-01T00:00:00",
        "temp": 14.556713,
        "obs": 0,
        "mpa": 1,
        "lon": 5.3015227,
        "lat": 43.291172,
        "land": 0,
        "distance": 0
      },
      {
        "z": 1.0182366,
        "traj": 8,
        "time": "2017-04-06T00:00:00",
        "temp": 14.499222,
        "obs": 0,
        "mpa": 1,
        "lon": 5.205144,
        "lat": 43.300125,
        "land": 0,
        "distance": 0
      }
    ]
  }
}

[screenshot]

I could not find out how to have a look at the data at that endpoint directly; at least just opening it in the browser does not work:

[screenshot]

It seems you can use it only together with the admin secret, like a login at the Hasura website; see "Connect" above.
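The endpoint answers POST requests carrying a GraphQL query in the JSON body rather than plain browser GETs, which presumably is why opening it directly shows nothing. A minimal sketch with curl, using the URL and admin secret from the Connect section above and the test query from before:

curl -X POST https://wondrous-quail-63.hasura.app/v1/graphql \
  -H 'Content-Type: application/json' \
  -H 'x-hasura-admin-secret: BukR0j7DT3QbEg2a2cPR4BIFlHLtczvNnNCO19EYEMPV0zxSiIiMoKDOezxzSRah' \
  -d '{"query": "query MyQuery { stokes(where: {mpa: {_eq: 1}}) { obs traj lat lon time } }"}'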

But it seems to be exactly what is needed for the mobile app. Without testing this any further, I think this issue can be closed with this prototype.