This repo is inspired by postgresqlCO.NF. The service searches blog posts for PostgreSQL server parameters; when it finds a parameter in a post, it writes the parameter, the blog title, and the blog link to the database. When queried with a parameter, it returns the matching blog posts as JSON documents.
The service consists of two modules: a crawler and a REST API.

The crawler module is a cron-scheduled Spring Batch job. When it runs, it pulls all blog sites from the pool, iterates over them, and recursively follows links, searching each page for PostgreSQL parameters. When the search is done, it writes the URL of the blog post, the PostgreSQL parameter, and the title of the post to the database. Crawling and persisting run as batch jobs, not through the REST API. Due to hardware capacity, the crawl is depth-limited.
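As a quick sanity check after a crawl, you can query the results directly. This is only a sketch with hypothetical table and column names; check database/db-dump.sql for the actual schema:

# table and column names below are illustrative, not the real schema
psql <db_name> -c 'SELECT parameter, title, url FROM blog_post LIMIT 5;'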
The cron configuration is kept in the database. On startup, the application reads the cron configuration from the database, and after each run it goes back and refreshes it, so you can change the schedule between iterations. For example, to run the job every minute:
UPDATE cron_conf SET cron_exp = '0 */1 * * * *' WHERE id = 1;
The endpoint is JWT-authenticated, so each user needs a unique JWT in the jwt_token table. It takes a PostgreSQL parameter and returns the related blog posts. The application ships with a Swagger configuration, so the API can also be tried out through the Swagger UI.
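For illustration, a request could look like the one below; the path and query parameter name are hypothetical, so check the Swagger UI for the real ones:

# <jwt_token> is a token from the jwt_token table; the /api/posts path is an assumption
curl -H 'Authorization: Bearer <jwt_token>' 'http://localhost:8080/api/posts?parameter=shared_buffers'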
The repository ships with data from previous crawl runs. If you want to use this data, load it into your database with the psql command shown below.
Build the jar (skipping tests) and run it:

mvn clean package spring-boot:repackage -DskipTests
java -jar target/blogcrawling-0.0.1-SNAPSHOT.jar

Alternatively, run the application directly with Maven:

mvn clean spring-boot:run
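If your PostgreSQL instance does not match the datasource configured in the application, the standard Spring Boot datasource properties (the same ones passed to Docker below) can be overridden on the command line:

java -jar target/blogcrawling-0.0.1-SNAPSHOT.jar \
  --spring.datasource.url=jdbc:postgresql://<db_host>:<port>/<db_name> \
  --spring.datasource.username=<dbusername> \
  --spring.datasource.password=<dbpassword>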
The Dockerfile is minimal:

# Runtime image for the packaged application
FROM openjdk:17-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
# JAVA_OPTS makes it possible to pass JVM flags and Spring properties at run time
ENTRYPOINT ["sh","-c","java $JAVA_OPTS -jar /app.jar"]
docker build -t <some-name>:<some-tag> .
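If the target database does not exist yet, create it first; createdb is part of the standard PostgreSQL client tools:

createdb <db_name>

Then load the bundled dump: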
psql <db_name> < database/db-dump.sql
docker run -e JAVA_OPTS='-Dspring.datasource.url=jdbc:postgresql://<db_host>:<port>/<db_name> -Dspring.datasource.username=<dbusername> -Dspring.datasource.password=<dbpassword>' <image_name>
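Note that the API and Swagger UI are only reachable from the host if the container's HTTP port is published. Assuming the application listens on Spring Boot's default port 8080:

docker run -p 8080:8080 -e JAVA_OPTS='<same options as above>' <image_name>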