hapifhir / hapi-fhir-jpaserver-starter

Apache License 2.0

Can't build Postgres changes or connect to container #599

Open csaun opened 11 months ago

csaun commented 11 months ago

I'm unable to connect to the Postgres container using an external program like DBVisualizer. I changed the application.yaml file so it uses Postgres instead of H2, then ran the project.

I'm new to Docker, so I didn't realise the code changes wouldn't be reflected in the Postgres container, as it's an instance of the original image. So I'm assuming that even though the Postgres container is running it's empty, and any data posted to the server is actually being stored in the H2 db? So even if I could connect to the container there wouldn't be anything there?

I've now copied the application.yaml file and placed it in an external folder. It's set to use postgres and port 5432 instead of H2 and uses localhost.

```yaml
datasource:
  url: 'jdbc:postgresql://localhost:5432/hapi'
  username: admin
  password: admin
  driverClassName: org.postgresql.Driver
```

Connection Error: When I run the project that now uses the external version of the application.yaml file, I get a 'Connection to localhost:5432 refused' error. I'm on Windows and have turned the firewall and antivirus off.

Run cmd ->

```
docker run -p 8080:8080 -v C:\pathtofile\my-configs:/configs -e "SPRING_CONFIG_LOCATION=file:///configs/application.yaml" hapiproject/hapi:latest
```

Connection Error ->

```
2023-10-17 13:15:36.690 [main] INFO com.zaxxer.hikari.HikariDataSource [HikariDataSource.java:110] HikariPool-1 - Starting...
2023-10-17 13:15:37.696 [main] ERROR com.zaxxer.hikari.pool.HikariPool [HikariPool.java:594] HikariPool-1 - Exception during pool initialization.
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:319)
	at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
	at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247)
	at org.postgresql.Driver.makeConnection(Driver.java:434)
	at org.postgresql.Driver.connect(Driver.java:291)
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359)
	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201)
	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470)
	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561)
	at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:100)
	at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
```

Is there a step I've missed? Should I also be running `mvn install -DskipTests` somewhere so it doesn't run the original H2 integration tests?

KevinDougan commented 11 months ago

Did you create a database called hapi in your Postgres instance and grant all privileges to the Postgres admin user to that newly-created database?

XcrigX commented 11 months ago

I'm not super familiar with how the images are published, but does the image published as hapiproject/hapi:latest contain Postgres? There is a docker-compose.yml in this project which does, but is that what is published?

KevinDougan commented 11 months ago

No, it does not provide another DB other than the embedded H2. When you changed the config to point to an external Postgres DB, that implies you have already set one up running on your localhost on port 5432 and made it accessible to the Docker container running HAPI FHIR:

url: 'jdbc:postgresql://localhost:5432/hapi'
XcrigX commented 11 months ago

I don't think that would work then. The localhost referenced in the application.yaml would be localhost to the Docker container running HAPI, not the Windows host.

XcrigX commented 11 months ago

@csaun, if you are running a completely separate Postgres container and mapping port 5432 back to your Windows localhost, you'd need to figure out how to reference the Windows host from inside the container. This might point in the right direction: https://stackoverflow.com/questions/40746453/how-to-connect-to-docker-host-from-container-on-windows-10-docker-for-windows. I've found it can be tricky to allow a Docker container on Windows to access anything on the host machine. Disabling the Windows firewall might be enough, but that comes with some risk. See: https://www.github.com/microsoft/WSL/issues (search for issue 4139).
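On Docker Desktop for Windows, the special hostname `host.docker.internal` usually resolves to the host machine from inside a container. A sketch of what the datasource change might look like, assuming a Postgres instance is actually listening on the Windows host at 5432:

```yaml
datasource:
  # host.docker.internal resolves to the host from inside the container
  # (Docker Desktop feature; on plain Linux you would need the bridge
  # gateway IP or --add-host instead)
  url: 'jdbc:postgresql://host.docker.internal:5432/hapi'
  username: admin
  password: admin
  driverClassName: org.postgresql.Driver
```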

If this is just for testing purposes, you might try using the docker-compose.yml file in this project to stand up a container with both HAPI and Postgres. You could then expose 5432 as well as 8080 from that to connect to the Postgres DB. NOTE this DB would not be permanent, not suitable for production, etc.
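For example, the Postgres service in the compose file could publish its port alongside HAPI's 8080. This is a sketch based on the service definition quoted later in this thread; values are from that snippet, not an authoritative copy of the project's docker-compose.yml:

```yaml
hapi-fhir-postgres:
  image: postgres:13-alpine
  container_name: hapi-fhir-postgres
  environment:
    POSTGRES_DB: "hapi"
    POSTGRES_USER: "admin"
    POSTGRES_PASSWORD: "admin"
  ports:
    # publish 5432 so tools on the Windows host (DBVisualizer, psql) can connect
    - "5432:5432"
```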

csaun commented 11 months ago

Thanks for all the suggestions; unfortunately I'm still stuck :( The reason I want to change to Postgres is so I can browse the database tables and see how the data is stored.

1) hapi-fhir-postgres Container

I modified the application.yaml so that it points to the postgres docker container.

```yaml
datasource:
  #url: 'jdbc:postgresql://localhost:5432/hapi'
  url: 'jdbc:postgresql://hapi-fhir-postgres:5432/hapi'
  username: admin
  password: admin
  driverClassName: org.postgresql.Driver
```

Deleted the existing container then ran these cmds:

```
mvn install -DskipTests
docker-compose up --build
```

This creates the 2 containers for the server (hapi-fhir-jpastarter-start) and postgres (hapi-fhir-postgres)

This caused no errors, but I don't know if the data is being saved in the original H2 db or somewhere in the Postgres container. I posted a patient JSON to the server and can retrieve it. When I deleted the containers and rebuilt them the data was still there. Is that because it's stored in this volume, or is it more likely in the H2 db?

Is there a way to access and view the table data stored in the container or volume, to check if it's empty or not? I looked through the list of tables in the volume but can't find any data.

Connecting to the Postgres Container: @XcrigX - thanks, I tried just using docker-compose and the 2 existing services instead of the docker run cmd. However, when I attempt to connect to the Postgres (hapi-fhir-postgres) DB container from DBVisualizer it gives an UnknownHostException:

```
Long Message: The connection attempt failed.
Details:
  Type: org.postgresql.util.PSQLException
  SQL State: 08001
Stack Trace:
java.net.UnknownHostException: hapi-fhir-postgres
	at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:567)
	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
```

This makes me think the data is being saved to the embedded H2 db and not the volume listed in the docker-compose ->

```yaml
hapi-fhir-postgres:
  image: postgres:13-alpine
  container_name: hapi-fhir-postgres
  restart: always
  environment:
    POSTGRES_DB: "hapi"
    POSTGRES_USER: "admin"
    POSTGRES_PASSWORD: "admin"
  ports:
```

2) Localhost hapi

When the project was cloned, a Postgres image was also pulled, and the db service uses that image. I thought this was enough and that I didn't have to create another hapi db locally. As I can't connect and view the database tables in the Postgres container (if they're even there!), I tried to connect to a locally created db instead:

```
hapi-fhir-jpaserver-start | Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'restfulServer' defined in class path resource [ca/uhn/fhir/jpa/starter/common/StarterJpaConfig.class]: Unsatisfied dependency expressed through method 'restfulServer' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mySystemDaoR4': Injection of persistence dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [ca/uhn/fhir/jpa/starter/common/StarterJpaConfig.class]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: HAPI_PU] Unable to build Hibernate SessionFactory; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to open JDBC Connection for DDL execution
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:800)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:541)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1391)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:887)
hapi-fhir-jpaserver-start | 	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:791)
hapi-fhir-jpaserver-start | 	... 61 common frames omitted
hapi-fhir-jpaserver-start | Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mySystemDaoR4': Injection of persistence dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [ca/uhn/fhir/jpa/starter/common/StarterJpaConfig.class]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: HAPI_PU] Unable to build Hibernate SessionFactory; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to open JDBC Connection for DDL execution
```

XcrigX commented 11 months ago

Your DB visualizer app running on Windows is not going to know how to resolve the hostname "hapi-fhir-postgres". That's not a hostname your Windows environment knows about - it only resolves on the Docker network the containers share.

You can probably port-forward 5432 from the Docker container to Windows - then you can connect your DB visualizer app to localhost:5432 from Windows. (Same as you are port-forwarding port 8080 so you can issue HTTP requests to the HAPI server on localhost:8080.)

If you just want to verify whether the data is getting saved into Postgres, another option for troubleshooting is to connect to the Docker container's console. I don't know the precise syntax, but something like 'docker exec -it thecontainername sh' might work. Then, once inside the container, you can run command-line 'psql' commands to connect to the DB, issue queries, etc.
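A sketch of that approach, assuming the hapi-fhir-postgres container name and the admin/hapi credentials from the compose file quoted above (untested against this exact setup):

```shell
# open an interactive psql session directly inside the Postgres container
docker exec -it hapi-fhir-postgres psql -U admin -d hapi

# then, at the psql prompt:
#   \l                                  -- list databases (does 'hapi' exist?)
#   \dt                                 -- list tables (HAPI creates hfj_* tables)
#   SELECT COUNT(*) FROM hfj_resource;  -- how many FHIR resources are stored
```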

csaun commented 11 months ago

Ah ok, thanks, I didn't realise that it couldn't see the container's hostname. From DBVisualizer, if I use localhost it connects to the databases I've created in PgAdmin: postgres and hapi. I then deleted the local hapi db I'd created, as I think DBVisualizer was connecting to it and not the Docker container's hapi db.

In DBVisualizer I changed the Database Server to localhost, but it says the database hapi does not exist. Baffling! Port: 5432, Database: hapi, UserID: admin, pw: admin

An error occurred while establishing the connection:

```
Long Message: FATAL: database "hapi" does not exist
Details:
  Type: org.postgresql.util.PSQLException
  SQL State: 3D000
Stack Trace:
org.postgresql.util.PSQLException: FATAL: database "hapi" does not exist
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2713)
	at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2825)
	at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:175)
	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:313)
	at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:54)
	at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:263)
	at org.postgresql.Driver.makeConnection(Driver.java:443)
	at org.postgresql.Driver.connect(Driver.java:297)
	at jdk.internal.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
	at
```

This is the docker-compose.yml:

```yaml
version: "3"
services:
  hapi-fhir-jpaserver-start:
    build: .
    container_name: hapi-fhir-jpaserver-start
    restart: on-failure
    ports:
```

application.yaml file ->

#Adds the option to go to eg. http://localhost:8080/actuator/health for seeing the running configuration
#see https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints
management:
  endpoints:
    web:
      exposure:
        include: "health,prometheus"
spring:
  main:
    allow-circular-references: true
    allow-bean-definition-overriding: true
  flyway:
    enabled: false
    check-location: false
    baselineOnMigrate: true
  datasource:

    #url: 'jdbc:h2:file:./target/database/h2'
    #url: 'jdbc:postgresql://localhost:5432/hapi'
    url: 'jdbc:postgresql://hapi-fhir-postgres:5432/hapi'
    #url: 'jdbc:postgresql://172.19.0.2:5432/hapi'
    username: admin
    password: admin
    driverClassName: org.postgresql.Driver

    # this was already commented out
    #url: jdbc:h2:mem:test_mem
    # H2 settings
    #username: sa
    #password: null
    #driverClassName: org.h2.Driver
    max-active: 15

    # database connection pool size
    hikari:
      maximum-pool-size: 10

  jpa:
    properties:
      hibernate.format_sql: false
      hibernate.show_sql: false

  #Hibernate dialect is automatically detected except Postgres and H2.
  #If using H2, then supply the value of ca.uhn.fhir.jpa.model.dialect.HapiFhirH2Dialect
  #If using postgres, then supply the value of ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres94Dialect
  #H2
  #hibernate.dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirH2Dialect
  #Postgres
  hibernate.dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres94Dialect
 # hibernate.dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres95Dialect

  hibernate.hbm2ddl.auto: update
  hibernate.jdbc.batch_size: 20
  hibernate.cache.use_query_cache: false
  hibernate.cache.use_second_level_cache: false
  hibernate.cache.use_structured_entries: false
  hibernate.cache.use_minimal_puts: false

  #These settings will enable fulltext search with lucene or elastic

  #H2
  #hibernate.search.enabled: true
  #Postgres
  hibernate.search.enabled: false

  # lucene parameters
  hibernate.search.backend.type: lucene
  hibernate.search.backend.analysis.configurer: ca.uhn.fhir.jpa.search.HapiHSearchAnalysisConfigurers$HapiLuceneAnalysisConfigurer
  hibernate.search.backend.directory.type: local-filesystem
  hibernate.search.backend.directory.root: target/lucenefiles
  hibernate.search.backend.lucene_version: lucene_current
  # elastic parameters ===> see also elasticsearch section below <===
  #hibernate.search.backend.type: elasticsearch
  #hibernate.search.backend.analysis.configurer: ca.uhn.fhir.jpa.search.HapiHSearchAnalysisConfigurers$HapiElasticAnalysisConfigurer

hapi:
  fhir:

### This enables the swagger-ui at /fhir/swagger-ui/index.html as well as the /fhir/api-docs (see https://hapifhir.io/hapi-fhir/docs/server_plain/openapi.html)
openapi_enabled: true
### This is the FHIR version. Choose between, DSTU2, DSTU3, R4 or R5
fhir_version: R4
### This flag when enabled to true, will avail evaluate measure operations from CR Module.
### Flag is false by default, can be passed as command line argument to override.
cr_enabled: "${CR_ENABLED: false}"
### enable to use the ApacheProxyAddressStrategy which uses X-Forwarded-* headers
### to determine the FHIR server address
#   use_apache_address_strategy: false
### forces the use of the https:// protocol for the returned server address.
### alternatively, it may be set using the X-Forwarded-Proto header.
#   use_apache_address_strategy_https: false
### enables the server to host content like HTML, css, etc. under the url pattern of eg. /static/**
# staticLocationPrefix: /static
### the deepest folder level will be used. E.g. - if you put file:/foo/bar/bazz as value then the files are resolved under /static/bazz/**
#staticLocation: file:/foo/bar/bazz
### enable to set the Server URL
#    server_address: http://hapi.fhir.org/baseR4
#    defer_indexing_for_codesystems_of_size: 101
#    install_transitive_ig_dependencies: true
### tells the server whether to attempt to load IG resources that are already present
#    reload_existing_implementationGuides : false
#implementationguides:
###    example from registry (packages.fhir.org)
#  swiss:
#    name: swiss.mednet.fhir
#    version: 0.8.0
#    reloadExisting : false
#      example not from registry
#      ips_1_0_0:
#        packageUrl: https://build.fhir.org/ig/HL7/fhir-ips/package.tgz
#        name: hl7.fhir.uv.ips
#        version: 1.0.0
#    supported_resource_types:
#      - Patient
#      - Observation
##################################################
# Allowed Bundle Types for persistence (defaults are: COLLECTION,DOCUMENT,MESSAGE)
##################################################
#    allowed_bundle_types: COLLECTION,DOCUMENT,MESSAGE,TRANSACTION,TRANSACTIONRESPONSE,BATCH,BATCHRESPONSE,HISTORY,SEARCHSET
#    allow_cascading_deletes: true
#    allow_contains_searches: true
#    allow_external_references: true
#    allow_multiple_delete: true
#    allow_override_default_search_params: true
#    auto_create_placeholder_reference_targets: false
### tells the server to automatically append the current version of the target resource to references at these paths
#    auto_version_reference_at_paths: Device.patient, Device.location, Device.parent, DeviceMetric.parent, DeviceMetric.source, Observation.device, Observation.subject
#    cr_enabled: true
#    ips_enabled: false
#    default_encoding: JSON
#    default_pretty_print: true
#    default_page_size: 20
#    delete_expunge_enabled: true
#    enable_repository_validating_interceptor: true
#    enable_index_missing_fields: false
#    enable_index_of_type: true
#    enable_index_contained_resource: false
###  !!Extended Lucene/Elasticsearch Indexing is still a experimental feature, expect some features (e.g. _total=accurate) to not work as expected!!
###  more information here: https://hapifhir.io/hapi-fhir/docs/server_jpa/elastic.html
advanced_lucene_indexing: false
bulk_export_enabled: false
bulk_import_enabled: false
#    enforce_referential_integrity_on_delete: false
# This is an experimental feature, and does not fully support _total and other FHIR features.
#    enforce_referential_integrity_on_delete: false
#    enforce_referential_integrity_on_write: false
#    etag_support_enabled: true
#    expunge_enabled: true
#    client_id_strategy: ALPHANUMERIC
#    fhirpath_interceptor_enabled: false
#    filter_search_enabled: true
#    graphql_enabled: true
narrative_enabled: false
#    mdm_enabled: true
#    local_base_urls:
#      - https://hapi.fhir.org/baseR4
mdm_enabled: false
#    partitioning:
#      allow_references_across_partitions: false
#      partitioning_include_in_search_hashes: false
cors:
  allow_Credentials: true
  # These are allowed_origin patterns, see: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/cors/CorsConfiguration.html#setAllowedOriginPatterns-java.util.List-
  allowed_origin:
    - '*'

# Search coordinator thread pool sizes
search-coord-core-pool-size: 20
search-coord-max-pool-size: 100
search-coord-queue-capacity: 200

# comma-separated package names, will be @ComponentScan'ed by Spring to allow for creating custom Spring beans
#custom-bean-packages:

# comma-separated list of fully qualified interceptor classes. 
# classes listed here will be fetched from the Spring context when combined with 'custom-bean-packages', 
# or will be instantiated via reflection using an no-arg contructor; then registered with the server  
#custom-interceptor-classes:  

# Threadpool size for BATCH'ed GETs in a bundle.
#    bundle_batch_pool_size: 10
#    bundle_batch_pool_max_size: 50

#    logger:
#      error_format: 'ERROR - ${requestVerb} ${requestUrl}'
#      format: >-
#        Path[${servletPath}] Source[${requestHeader.x-forwarded-for}]
#        Operation[${operationType} ${operationName} ${idOrResourceName}]
#        UA[${requestHeader.user-agent}] Params[${requestParameters}]
#        ResponseEncoding[${responseEncodingNoDefault}]
#      log_exceptions: true
#      name: fhirtest.access
#    max_binary_size: 104857600
#    max_page_size: 200
#    retain_cached_searches_mins: 60
#    reuse_cached_search_results_millis: 60000
tester:
  home:
    name: Local Tester
    server_address: 'http://localhost:8080/fhir'
    refuse_to_fetch_third_party_urls: false
    fhir_version: R4
  global:
    name: Global Tester
    server_address: "http://hapi.fhir.org/baseR4"
    refuse_to_fetch_third_party_urls: false
    fhir_version: R4
#    validation:
#      requests_enabled: true
#      responses_enabled: true
#    binary_storage_enabled: true
inline_resource_storage_below_size: 4000

bulk_export_enabled: true

subscription:
  resthook_enabled: true
  websocket_enabled: false
  email:
    from: some@test.com
    host: google.com
    port:
    username:
    password:
    auth:
    startTlsEnable:
    startTlsRequired:
    quitWait:

lastn_enabled: true
store_resource_in_lucene_index_enabled: true

### This is configuration for normalized quantity search level, default is 0
###   0: NORMALIZED_QUANTITY_SEARCH_NOT_SUPPORTED - default
###   1: NORMALIZED_QUANTITY_STORAGE_SUPPORTED
###   2: NORMALIZED_QUANTITY_SEARCH_SUPPORTED
normalized_quantity_search_level: 2

elasticsearch:
  debug:
    pretty_print_json_log: false
    refresh_after_write: false
  enabled: false
  password: SomePassword
  required_index_status: YELLOW
  rest_url: 'localhost:9200'
  protocol: 'http'
  schema_management_strategy: CREATE
  username: SomeUsername

XcrigX commented 11 months ago

Seems like progress. What about connecting to the default 'postgres' database?

csaun commented 11 months ago

From DBVisualizer I can connect to a db with the name postgres but if I use hapi as the db name it says it does not exist.

Database Server: localhost, Port: 5432, Database: postgres, UserID: admin, Password: admin

When I connect to the postgres db there are only default pg tables, no FHIR data or tables. I'd expect there to be an HFJ_RESOURCE table with the data I've posted to the server.

I removed the Postgres volume from Docker, as I thought this might be stopping it from creating the hapi db. I also deleted the containers and images, then rebuilt using docker-compose up, but it still doesn't seem to be creating the db. I don't think it's creating the hapi db, but then it's strange that the admin user and password work to connect to the postgres db.

Surely docker-compose up should run the changes I made to the docker-compose and application.yaml files?
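One way to check which server DBVisualizer is actually reaching is to list the databases from inside the container itself and compare. A sketch, assuming the hapi-fhir-postgres container name and admin user from the compose file above:

```shell
# list all databases as seen from inside the container;
# if 'hapi' shows up here but not in DBVisualizer, the tool is likely
# connecting to a different Postgres on localhost:5432 (e.g. a native
# Windows install) rather than the container
docker exec -it hapi-fhir-postgres psql -U admin -d postgres -c '\l'
```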

Build Log ✔ Network hapi-fhir-jpaserver-starter_default Created 0.1s ✔ Volume "hapi-fhir-jpaserver-starter_hapi-fhir-postgres" Created 0.0s ✔ Container hapi-fhir-jpaserver-start Created 0.2s ✔ Container hapi-fhir-postgres Created 0.2s Attaching to hapi-fhir-jpaserver-start, hapi-fhir-postgres hapi-fhir-postgres | The files belonging to this database system will be owned by user "postgres". hapi-fhir-postgres | This user must also own the server process. hapi-fhir-postgres | hapi-fhir-postgres | The database cluster will be initialized with locale "en_US.utf8". hapi-fhir-postgres | The default database encoding has accordingly been set to "UTF8". hapi-fhir-postgres | The default text search configuration will be set to "english". hapi-fhir-postgres | hapi-fhir-postgres | Data page checksums are disabled. hapi-fhir-postgres | hapi-fhir-postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok hapi-fhir-postgres | creating subdirectories ... ok hapi-fhir-postgres | selecting dynamic shared memory implementation ... posix hapi-fhir-postgres | selecting default max_connections ... 100 hapi-fhir-postgres | selecting default shared_buffers ... 128MB hapi-fhir-postgres | selecting default time zone ... UTC hapi-fhir-postgres | creating configuration files ... ok hapi-fhir-postgres | running bootstrap script ... ok hapi-fhir-postgres | sh: locale: not found hapi-fhir-postgres | 2023-10-19 12:10:56.865 UTC [30] WARNING: no usable system locales were found hapi-fhir-postgres | performing post-bootstrap initialization ... ok hapi-fhir-postgres | syncing data to disk ... ok hapi-fhir-postgres | hapi-fhir-postgres | initdb: warning: enabling "trust" authentication for local connections hapi-fhir-postgres | You can change this by editing pg_hba.conf or using the option -A, or hapi-fhir-postgres | --auth-local and --auth-host, the next time you run initdb. hapi-fhir-postgres | hapi-fhir-postgres | Success. 
You can now start the database server using: hapi-fhir-postgres | hapi-fhir-postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start hapi-fhir-postgres | hapi-fhir-postgres | waiting for server to start....2023-10-19 12:10:59.284 UTC [36] LOG: starting PostgreSQL 13.12 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit hapi-fhir-postgres | 2023-10-19 12:10:59.290 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" hapi-fhir-postgres | 2023-10-19 12:10:59.312 UTC [37] LOG: database system was shut down at 2023-10-19 12:10:58 UTC hapi-fhir-postgres | 2023-10-19 12:10:59.325 UTC [36] LOG: database system is ready to accept connections hapi-fhir-postgres | done hapi-fhir-postgres | server started hapi-fhir-postgres | CREATE DATABASE hapi-fhir-postgres | hapi-fhir-postgres | hapi-fhir-postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* hapi-fhir-postgres | hapi-fhir-postgres | waiting for server to shut down...2023-10-19 12:10:59.926 UTC [36] LOG: received fast shutdown request hapi-fhir-postgres | .2023-10-19 12:10:59.931 UTC [36] LOG: aborting any active transactions hapi-fhir-postgres | 2023-10-19 12:10:59.935 UTC [36] LOG: background worker "logical replication launcher" (PID 43) exited with exit code 1 hapi-fhir-postgres | 2023-10-19 12:10:59.940 UTC [38] LOG: shutting down hapi-fhir-postgres | 2023-10-19 12:11:00.119 UTC [36] LOG: database system is shut down hapi-fhir-postgres | done hapi-fhir-postgres | server stopped hapi-fhir-postgres | hapi-fhir-postgres | PostgreSQL init process complete; ready for start up. 
hapi-fhir-postgres | hapi-fhir-postgres | 2023-10-19 12:11:00.283 UTC [1] LOG: starting PostgreSQL 13.12 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1git20220924-r10) 12.2.1 20220924, 64-bit hapi-fhir-postgres | 2023-10-19 12:11:00.285 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 hapi-fhir-postgres | 2023-10-19 12:11:00.285 UTC [1] LOG: listening on IPv6 address "::", port 5432 hapi-fhir-postgres | 2023-10-19 12:11:00.302 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" hapi-fhir-postgres | 2023-10-19 12:11:00.325 UTC [51] LOG: database system was shut down at 2023-10-19 12:10:59 UTC hapi-fhir-postgres | 2023-10-19 12:11:00.354 UTC [1] LOG: database system is ready to accept connections hapi-fhir-jpaserver-start | hapi-fhir-jpaserver-start | . ____ hapi-fhir-jpaserver-start | /\ / __' () _ \ \ \ \ hapi-fhir-jpaserver-start | ( ( )__ | ' | '| | ' \/ _` | \ \ \ \ hapi-fhir-jpaserver-start | \/ _)| |)| | | | | || (| | ) ) ) ) hapi-fhir-jpaserver-start | ' |__| .|| ||| |\, | / / / / hapi-fhir-jpaserver-start | =========|_|==============|__/=//// hapi-fhir-jpaserver-start | :: Spring Boot :: (v2.7.12) hapi-fhir-jpaserver-start | hapi-fhir-jpaserver-start | 2023-10-19 12:11:03.567 [background-preinit] INFO o.h.validator.internal.util.Version [Version.java:21] HV000001: Hibernate Validator 6.1.5.Final hapi-fhir-jpaserver-start | 2023-10-19 12:11:03.596 [main] INFO ca.uhn.fhir.jpa.starter.Application [StartupInfoLogger.java:55] Starting Application using Java 17.0.7 on 74ec0933db63 with PID 1 (/app/main.war started by nonroot in /app) hapi-fhir-jpaserver-start | 2023-10-19 12:11:03.598 [main] INFO ca.uhn.fhir.jpa.starter.Application [SpringApplication.java:631] No active profile set, falling back to 1 default profile: "default" hapi-fhir-jpaserver-start | 2023-10-19 12:11:11.587 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate [RepositoryConfigurationDelegate.java:132] Bootstrapping Spring Data JPA repositories in 
```
DEFAULT mode.
hapi-fhir-jpaserver-start | 2023-10-19 12:11:13.236 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate [RepositoryConfigurationDelegate.java:201] Finished Spring Data repository scanning in 1594 ms. Found 51 JPA repository interfaces.
hapi-fhir-jpaserver-start | 2023-10-19 12:11:22.028 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker [PostProcessorRegistrationDelegate.java:376] Bean 'ca.uhn.fhir.jpa.config.BeanPostProcessorConfig' of type [ca.uhn.fhir.jpa.config.BeanPostProcessorConfig$$EnhancerBySpringCGLIB$$b5edf7ba] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
hapi-fhir-jpaserver-start | 2023-10-19 12:11:25.142 [main] INFO o.s.b.w.e.tomcat.TomcatWebServer [TomcatWebServer.java:108] Tomcat initialized with port(s): 8080 (http)
hapi-fhir-jpaserver-start | 2023-10-19 12:11:25.223 [main] INFO o.a.coyote.http11.Http11NioProtocol [DirectJDKLog.java:173] Initializing ProtocolHandler ["http-nio-8080"]
hapi-fhir-jpaserver-start | 2023-10-19 12:11:25.235 [main] INFO o.a.catalina.core.StandardService [DirectJDKLog.java:173] Starting service [Tomcat]
hapi-fhir-jpaserver-start | 2023-10-19 12:11:25.236 [main] INFO o.a.catalina.core.StandardEngine [DirectJDKLog.java:173] Starting Servlet engine: [Apache Tomcat/9.0.75]
hapi-fhir-jpaserver-start | 2023-10-19 12:11:44.776 [main] INFO o.a.c.c.C.[Tomcat].[localhost].[/] [DirectJDKLog.java:173] Initializing Spring embedded WebApplicationContext
hapi-fhir-jpaserver-start | 2023-10-19 12:11:44.777 [main] INFO o.s.b.w.s.c.ServletWebServerApplicationContext [ServletWebServerApplicationContext.java:292] Root WebApplicationContext: initialization completed in 40874 ms
hapi-fhir-jpaserver-start | 2023-10-19 12:11:45.125 [main] INFO ca.uhn.fhir.util.VersionUtil [VersionUtil.java:84] HAPI FHIR version 6.8.0 - Rev f33627087f
hapi-fhir-jpaserver-start | 2023-10-19 12:11:45.145 [main] INFO ca.uhn.fhir.context.FhirContext [FhirContext.java:226] Creating new FHIR context for FHIR version [R4]
hapi-fhir-jpaserver-start | 2023-10-19 12:11:45.617 [main] INFO o.h.jpa.internal.util.LogHelper [LogHelper.java:31] HHH000204: Processing PersistenceUnitInfo [name: HAPI_PU]
hapi-fhir-jpaserver-start | 2023-10-19 12:11:46.673 [main] INFO org.hibernate.Version [Version.java:44] HHH000412: Hibernate ORM core version 5.6.15.Final
hapi-fhir-jpaserver-start | 2023-10-19 12:11:49.315 [main] INFO o.h.annotations.common.Version [JavaReflectionManager.java:56] HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
hapi-fhir-jpaserver-start | 2023-10-19 12:11:50.643 [main] INFO com.zaxxer.hikari.HikariDataSource [HikariDataSource.java:110] HikariPool-1 - Starting...
hapi-fhir-jpaserver-start | 2023-10-19 12:11:52.168 [main] INFO com.zaxxer.hikari.pool.HikariPool [HikariPool.java:565] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@4c1bdcc2
hapi-fhir-jpaserver-start | 2023-10-19 12:11:52.188 [main] INFO com.zaxxer.hikari.HikariDataSource [HikariDataSource.java:123] HikariPool-1 - Start completed.
hapi-fhir-jpaserver-start | 2023-10-19 12:11:52.527 [main] INFO org.hibernate.dialect.Dialect [Dialect.java:175] HHH000400: Using dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres94Dialect
hapi-fhir-jpaserver-start | 2023-10-19 12:11:52.995 [main] INFO o.h.e.b.internal.EnversServiceImpl [EnversServiceImpl.java:88] Envers integration enabled? : true
hapi-fhir-jpaserver-start | 2023-10-19 12:12:03.435 [main] INFO o.h.s.m.o.b.i.HibernateSearchPreIntegrationService [HibernateSearchPreIntegrationService.java:89] HSEARCH000034: Hibernate Search version 6.1.6.Final
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.332 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:40] Server configured to allow contains searches
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.335 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:41] Server configured to deny multiple deletes
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.346 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:42] Server configured to deny external references
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.347 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:43] Server configured to enable DAO scheduling
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.349 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:44] Server configured to disable delete expunges
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.350 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:45] Server configured to enable expunges
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.351 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:46] Server configured to allow overriding default search params
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.353 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:47] Server configured to disable auto-creating placeholder references
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.354 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:48] Server configured to auto-version references at paths []
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.669 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:100] Server configured to have a maximum fetch size of 'unlimited'
hapi-fhir-jpaserver-start | 2023-10-19 12:12:04.670 [main] INFO c.u.f.j.s.c.FhirServerConfigCommon [FhirServerConfigCommon.java:104] Server configured to cache search results for 60000 milliseconds
```
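For what it's worth, when both the HAPI server and Postgres run as containers, `localhost` inside the HAPI container refers to that container itself, not to the host or to the Postgres container, which is why a `jdbc:postgresql://localhost:5432/hapi` URL gets connection refused. A docker-compose sketch along these lines lets the JDBC URL reference Postgres by service name (the service names, image tags, and credentials here are illustrative assumptions, not taken from this thread):

```yaml
# Hypothetical compose file - adjust names, images, and credentials to your setup
version: "3"
services:
  hapi-fhir-jpaserver-start:
    image: hapiproject/hapi:latest
    ports:
      - "8080:8080"
    environment:
      SPRING_CONFIG_LOCATION: "file:///configs/application.yaml"
    volumes:
      - ./my-configs:/configs
    depends_on:
      - hapi-fhir-postgres
  hapi-fhir-postgres:
    image: postgres:13-alpine
    ports:
      - "5432:5432"   # publish to the host so DBVisualizer can connect from outside
    environment:
      POSTGRES_DB: hapi
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
```

With a layout like this, the datasource URL in application.yaml would be `jdbc:postgresql://hapi-fhir-postgres:5432/hapi`; `localhost:5432` then works only from the host (e.g. from DBVisualizer), via the published port.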

There's no mention of a hapi db in the log file within Docker Desktop.

Docker Postgres Container Log:

```
2023-10-19 13:10:56 sh: locale: not found
2023-10-19 13:10:56 2023-10-19 12:10:56.865 UTC [30] WARNING: no usable system locales were found
2023-10-19 13:10:59 initdb: warning: enabling "trust" authentication for local connections
2023-10-19 13:10:59 You can change this by editing pg_hba.conf or using the option -A, or
2023-10-19 13:10:59 --auth-local and --auth-host, the next time you run initdb.
2023-10-19 13:11:00 2023-10-19 12:11:00.283 UTC [1] LOG: starting PostgreSQL 13.12 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
2023-10-19 13:11:00 2023-10-19 12:11:00.285 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-10-19 13:11:00 2023-10-19 12:11:00.285 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-10-19 13:11:00 2023-10-19 12:11:00.302 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-10-19 13:11:00 2023-10-19 12:11:00.325 UTC [51] LOG: database system was shut down at 2023-10-19 12:10:59 UTC
2023-10-19 13:11:00 2023-10-19 12:11:00.354 UTC [1] LOG: database system is ready to accept connections
2023-10-19 13:10:53 The files belonging to this database system will be owned by user "postgres".
2023-10-19 13:10:53 This user must also own the server process.
2023-10-19 13:10:53
2023-10-19 13:10:53 The database cluster will be initialized with locale "en_US.utf8".
2023-10-19 13:10:53 The default database encoding has accordingly been set to "UTF8".
2023-10-19 13:10:53 The default text search configuration will be set to "english".
2023-10-19 13:10:53
2023-10-19 13:10:53 Data page checksums are disabled.
2023-10-19 13:10:53
2023-10-19 13:10:53 fixing permissions on existing directory /var/lib/postgresql/data ... ok
2023-10-19 13:10:53 creating subdirectories ... ok
2023-10-19 13:10:53 selecting dynamic shared memory implementation ... posix
2023-10-19 13:10:54 selecting default max_connections ... 100
2023-10-19 13:10:54 selecting default shared_buffers ... 128MB
2023-10-19 13:10:54 selecting default time zone ... UTC
2023-10-19 13:10:54 creating configuration files ... ok
2023-10-19 13:10:55 running bootstrap script ... ok
2023-10-19 13:10:58 performing post-bootstrap initialization ... ok
2023-10-19 13:10:59 syncing data to disk ... ok
2023-10-19 13:10:59
2023-10-19 13:10:59
2023-10-19 13:10:59 Success. You can now start the database server using:
2023-10-19 13:10:59
2023-10-19 13:10:59     pg_ctl -D /var/lib/postgresql/data -l logfile start
2023-10-19 13:10:59
2023-10-19 13:10:59 waiting for server to start....2023-10-19 12:10:59.284 UTC [36] LOG: starting PostgreSQL 13.12 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
2023-10-19 13:10:59 2023-10-19 12:10:59.290 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-10-19 13:10:59 2023-10-19 12:10:59.312 UTC [37] LOG: database system was shut down at 2023-10-19 12:10:58 UTC
2023-10-19 13:10:59 2023-10-19 12:10:59.325 UTC [36] LOG: database system is ready to accept connections
2023-10-19 13:10:59 done
2023-10-19 13:10:59 server started
2023-10-19 13:10:59 CREATE DATABASE
2023-10-19 13:10:59
2023-10-19 13:10:59
2023-10-19 13:10:59 /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2023-10-19 13:10:59
2023-10-19 13:10:59 waiting for server to shut down...2023-10-19 12:10:59.926 UTC [36] LOG: received fast shutdown request
2023-10-19 13:10:59 .2023-10-19 12:10:59.931 UTC [36] LOG: aborting any active transactions
2023-10-19 13:10:59 2023-10-19 12:10:59.935 UTC [36] LOG: background worker "logical replication launcher" (PID 43) exited with exit code 1
2023-10-19 13:10:59 2023-10-19 12:10:59.940 UTC [38] LOG: shutting down
2023-10-19 13:11:00 2023-10-19 12:11:00.119 UTC [36] LOG: database system is shut down
2023-10-19 13:11:00 done
2023-10-19 13:11:00 server stopped
2023-10-19 13:11:00
2023-10-19 13:11:00 PostgreSQL init process complete; ready for start up.
2023-10-19 13:11:00
```
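One way to check whether a `hapi` database was actually created is to open a psql session inside the running Postgres container. This is a sketch, not a command from this thread; the container name and credentials are assumptions (check yours with `docker ps`, and use whatever `POSTGRES_USER` you configured):

```
# From the host: open psql inside the running Postgres container
docker exec -it <postgres-container-name> psql -U admin -d postgres

-- Then, inside the psql prompt:
\l    -- list all databases ('hapi' should appear if it was created)
\dt   -- list tables in the current database
```

If the `hapi` database exists but has no tables, the HAPI server has never successfully connected to it, and the data is presumably still going to the default H2 store.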

XcrigX commented 11 months ago

It is strange. While connected to the postgres database, you could run a query to see which other databases exist; that might point to a clue. This should work: `SELECT * FROM pg_database;`

Or, you can connect to the command line of the postgres docker container and then run psql commands there to interrogate it. There may be some help with the psql commands here: https://dba.stackexchange.com/questions/1285/how-do-i-list-all-databases-and-tables-using-psql
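Building on that suggestion, a filtered variant of the same query (using the standard `datname` and `datistemplate` columns of `pg_database`) hides the template databases and makes the output easier to scan:

```sql
-- List non-template databases; 'hapi' should appear here if it was created
SELECT datname FROM pg_database WHERE datistemplate = false;
```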