hapifhir / hapi-fhir-jpaserver-starter

Failed to validate resources due to "java.lang.OutOfMemoryError: Java heap space" #711

Open 2017Yasu opened 3 months ago

2017Yasu commented 3 months ago

I'm not sure if this is the appropriate place to report this, but I'd like to raise a validation issue.

Describe the bug

The JPA server threw an error while validating a JP Core MedicationRequest resource.

After adding JP Core and JP FHIR Terminology to the implementation guides and enabling validation of posted resources, I launched a JPA server in my local environment with docker-compose. A JP Core Patient resource was validated and saved successfully, but when I posted a JP Core MedicationRequest resource, the server threw an OutOfMemoryError with the following logs:

2024-07-22 01:47:12.372 [http-nio-8080-exec-2] INFO  o.h.f.c.h.v.v.VersionSpecificWorkerContextWrapper [VersionSpecificWorkerContextWrapper.java:105] Generating snapshot for StructureDefinition: http://jpfhir.jp/fhir/core/StructureDefinition/JP_MedicationRequest
2024-07-22 01:47:12.801 [http-nio-8080-exec-2] INFO  o.h.f.c.h.v.v.VersionSpecificWorkerContextWrapper [VersionSpecificWorkerContextWrapper.java:105] Generating snapshot for StructureDefinition: http://jpfhir.jp/fhir/core/StructureDefinition/JP_Encounter
...
2024-07-22 01:48:32.553 [http-nio-8080-exec-2] INFO  o.h.f.c.h.v.v.VersionSpecificWorkerContextWrapper [VersionSpecificWorkerContextWrapper.java:105] Generating snapshot for StructureDefinition: http://jpfhir.jp/fhir/core/StructureDefinition/JP_Encounter
2024-07-22 01:48:32.555 [hapi-fhir-jpa-scheduler-clustered-4] INFO  c.u.f.j.s.c.DatabaseSearchCacheSvcImpl [DatabaseSearchCacheSvcImpl.java:256] Deleted 0 expired searches
2024-07-22 01:48:32.557 [http-nio-8080-exec-2] WARN  o.h.f.c.h.v.s.SnapshotGeneratingValidationSupport [SnapshotGeneratingValidationSupport.java:70] Detected circular dependency, already generating snapshot for: http://jpfhir.jp/fhir/core/StructureDefinition/JP_Encounter
2024-07-22 01:48:43.275 [http-nio-8080-exec-2] INFO  o.h.f.c.h.v.v.VersionSpecificWorkerContextWrapper [VersionSpecificWorkerContextWrapper.java:105] Generating snapshot for StructureDefinition: http://jpfhir.jp/fhir/core/StructureDefinition/JP_Condition
2024-07-22 01:49:19.396 [http-nio-8080-exec-2] ERROR c.u.f.r.s.i.ExceptionHandlingInterceptor [ExceptionHandlingInterceptor.java:198] Failure during REST processing
ca.uhn.fhir.rest.server.exceptions.InternalErrorException: HAPI-1910: Failure invoking interceptor for pointcut(s) SERVER_INCOMING_REQUEST_POST_PROCESSED
    at ca.uhn.fhir.interceptor.executor.BaseInterceptorService$HookInvoker.invoke(BaseInterceptorService.java:564)
    at ca.uhn.fhir.interceptor.executor.BaseInterceptorService.doCallHooks(BaseInterceptorService.java:289)
    at ca.uhn.fhir.interceptor.executor.BaseInterceptorService.callHooks(BaseInterceptorService.java:277)
    at ca.uhn.fhir.interceptor.executor.BaseInterceptorService.callHooks(BaseInterceptorService.java:65)
    at ca.uhn.fhir.rest.server.RestfulServer.handleRequest(RestfulServer.java:1184)
    at ca.uhn.fhir.rest.server.RestfulServer.doPost(RestfulServer.java:436)
    at ca.uhn.fhir.rest.server.RestfulServer.service(RestfulServer.java:1944)
    at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:205)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
    at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:109)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:482)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:340)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:391)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:896)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1744)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
    at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
    at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.base/jdk.internal.misc.Unsafe.allocateUninitializedArray(Unsafe.java:1375)
    at java.base/java.lang.StringConcatHelper.newArray(StringConcatHelper.java:497)
    at java.base/java.lang.String.join(String.java:3269)
    at java.base/java.lang.String.join(String.java:3335)
    at org.hl7.fhir.convertors.context.VersionConvertorContext.getPath(VersionConvertorContext.java:121)
    at org.hl7.fhir.convertors.context.ConversionContext40_50.path(ConversionContext40_50.java:47)
    at org.hl7.fhir.convertors.conv40_50.VersionConvertor_40_50.copyElement(VersionConvertor_40_50.java:128)
    at org.hl7.fhir.convertors.conv40_50.datatypes40_50.primitive40_50.String40_50.convertString(String40_50.java:9)
    at org.hl7.fhir.convertors.conv40_50.datatypes40_50.special40_50.ElementDefinition40_50.convertElementDefinition(ElementDefinition40_50.java:25)
    at org.hl7.fhir.convertors.conv40_50.resources40_50.StructureDefinition40_50.convertStructureDefinitionDifferentialComponent(StructureDefinition40_50.java:386)
    at org.hl7.fhir.convertors.conv40_50.resources40_50.StructureDefinition40_50.convertStructureDefinition(StructureDefinition40_50.java:107)
    at org.hl7.fhir.convertors.conv40_50.resources40_50.Resource40_50.convertResource(Resource40_50.java:251)
    at org.hl7.fhir.convertors.conv40_50.VersionConvertor_40_50.convertResource(VersionConvertor_40_50.java:77)
    at org.hl7.fhir.convertors.factory.VersionConvertorFactory_40_50.convertResource(VersionConvertorFactory_40_50.java:15)
    at ca.uhn.hapi.converters.canonical.VersionCanonicalizer$R4Strategy.structureDefinitionToCanonical(VersionCanonicalizer.java:797)
    at ca.uhn.hapi.converters.canonical.VersionCanonicalizer.structureDefinitionToCanonical(VersionCanonicalizer.java:249)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper.allStructures(VersionSpecificWorkerContextWrapper.java:229)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper.fetchResourcesByType(VersionSpecificWorkerContextWrapper.java:689)
    at org.hl7.fhir.r5.utils.FHIRPathEngine.<init>(FHIRPathEngine.java:218)
    at org.hl7.fhir.r5.conformance.profile.ProfileUtilities.<init>(ProfileUtilities.java:396)
    at org.hl7.fhir.common.hapi.validation.support.SnapshotGeneratingValidationSupport.generateSnapshot(SnapshotGeneratingValidationSupport.java:102)
    at org.hl7.fhir.common.hapi.validation.support.ValidationSupportChain.generateSnapshot(ValidationSupportChain.java:120)
    at org.hl7.fhir.common.hapi.validation.support.BaseValidationSupportWrapper.generateSnapshot(BaseValidationSupportWrapper.java:141)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper.lambda$new$0(VersionSpecificWorkerContextWrapper.java:108)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper$$Lambda$1694/0x00007f7c20e8d530.load(Unknown Source)
    at ca.uhn.fhir.sl.cache.caffeine.CacheProvider$$Lambda$1695/0x00007f7c20e8dc60.load(Unknown Source)
    at com.github.benmanes.caffeine.cache.LocalLoadingCache.lambda$newMappingFunction$3(LocalLoadingCache.java:183)
    at com.github.benmanes.caffeine.cache.LocalLoadingCache$$Lambda$1696/0x00007f7c20e8e560.apply(Unknown Source)
    at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2688)
    at com.github.benmanes.caffeine.cache.BoundedLocalCache$$Lambda$1700/0x00007f7c20e9f990.apply(Unknown Source)
    at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1916)
    at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2686)
2024-07-22 01:49:19.409 [http-nio-8080-exec-2] INFO  fhirtest.access [LoggingInterceptor.java:164] ERROR - POST http://localhost:8080/fhir/MedicationRequest
2024-07-22 01:49:20.110 [Catalina-utility-1] ERROR o.a.catalina.core.StandardServer [DirectJDKLog.java:175] Error sending periodic event
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at org.apache.catalina.core.StandardServer.startPeriodicLifecycleEvent(StandardServer.java:933)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:112)
    at org.apache.catalina.core.StandardServer.lambda$startPeriodicLifecycleEvent$0(StandardServer.java:939)
    at org.apache.catalina.core.StandardServer$$Lambda$2120/0x00007f7c2119ef00.run(Unknown Source)
    ... 7 common frames omitted
2024-07-22 01:49:23.466 [hapi-fhir-jpa-scheduler-clustered-2] INFO  c.u.f.j.s.c.DatabaseSearchCacheSvcImpl [DatabaseSearchCacheSvcImpl.java:256] Deleted 0 expired searches
2024-07-22 01:50:23.465 [hapi-fhir-jpa-scheduler-clustered-4] INFO  c.u.f.j.s.c.DatabaseSearchCacheSvcImpl [DatabaseSearchCacheSvcImpl.java:256] Deleted 0 expired searches

The full version of the logs is available in my repository:

https://github.com/2017Yasu/hapi-fhir-jpaserver-docker-test/blob/issues/validation-failed/sample_logs/validation_error.log

Steps to reproduce

  1. Clone my working repository with the following command:
git clone -b issues/validation-failed \
    --depth 1 \
    https://github.com/2017Yasu/hapi-fhir-jpaserver-docker-test.git 
  2. Run the following command to bring up the fhir-jpa-server Docker containers.
docker compose -f compose/compose.ig-jp-core-1.1.2.yml up -d
  3. POST a sample Patient resource with the following command:
curl -X POST \
    -d @resources/04-jp-core-patient-example.json \
    -H "Content-Type: application/fhir+json" \
    http://localhost:8080/fhir/Patient
  4. Update the subject property of resources/06-medication-request-example.json so that it refers to the existing Patient resource. For example:
   "subject" : {
-    "reference" : "Patient/407"
+    "reference" : "Patient/your-id"
   },
  5. POST the updated sample MedicationRequest resource with the following command:
curl -X POST \
    -d @resources/06-medication-request-example.json \
    -H "Content-Type: application/fhir+json" \
    http://localhost:8080/fhir/MedicationRequest
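
(As an optional sanity check before step 3, the server's capability statement can be fetched to confirm that the server is up and responding:)
curl http://localhost:8080/fhir/metadata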

Environment

$ sw_vers                
ProductName:        macOS
ProductVersion:     14.5
BuildVersion:       23F79

$ docker version
Client:
 Version:           27.0.3
 API version:       1.46
 Go version:        go1.21.11
 Git commit:        7d4bcd8
 Built:             Fri Jun 28 23:59:41 2024
 OS/Arch:           darwin/amd64
 Context:           desktop-linux

Server: Docker Desktop 4.32.0 (157355)
 Engine:
  Version:          27.0.3
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.11
  Git commit:       662f78c
  Built:            Sat Jun 29 00:02:50 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.18
  GitCommit:        ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

$ docker images
REPOSITORY         TAG       IMAGE ID       CREATED        SIZE
hapiproject/hapi   latest    3ab4ef02ba87   2 months ago   568MB

Comments

I tried to override the container command with -Xmx5g to increase the Java runtime heap size, but it didn't work.
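
One alternative I am considering is setting the heap via the JAVA_TOOL_OPTIONS environment variable, which any JVM reads at startup, instead of overriding the command. A rough compose override along these lines (the service name is only an example):

services:
  hapi-fhir-jpaserver:     # example service name; use the actual name from the compose file
    image: hapiproject/hapi:latest
    environment:
      # Read automatically by the JVM at startup; raises the maximum heap to 5 GB.
      JAVA_TOOL_OPTIONS: "-Xmx5g"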

Watching the containers' stats with the command docker stats, I found that the CPU usage is extremely high (over 1000%) while the memory usage is quite low (around 30%).

I would appreciate it if we could work through this issue together.

XcrigX commented 3 months ago

Hard to tell what's happening. One of the OOM stack traces shows that HAPI is trying to convert resource versions. Why is that being invoked? My hunch is that the errors are due to a bare-bones application.yaml file that leaves off something important, like the target FHIR version. I would suggest copying the application.yaml from the starter project and changing just the parts you need to change.

Regardless - this looks to be a problem with your custom configuration and not with the base project.

2017Yasu commented 3 months ago

Thank you for your quick response, and I'm happy to hear that there is still something that I can do.

Following your advice, I copied the application.yaml from the following address and edited it to install the JP Core implementation guides.

https://github.com/hapifhir/hapi-fhir-jpaserver-starter/blob/image/v7.2.0/src/main/resources/application.yaml

I launched the v7.2.0 image with the edited config file and took the same steps as above, but it resulted in the same error.

I believe I've overlooked something, but I have no idea what it is, so could you please point out any problems in my configuration file? My latest configuration file is accessible at the following URL:

https://github.com/2017Yasu/hapi-fhir-jpaserver-docker-test/blob/issues/validation-failed/configs/ig-jp-core-1.1.2.application.yaml


Here is the diff between the original config file and my new config file (ignoring trailing whitespace):

     #    install_transitive_ig_dependencies: true
-    #implementationguides:
+    implementationguides:
     ###    example from registry (packages.fhir.org)
     #  swiss:
     #    name: swiss.mednet.fhir
     #    version: 0.8.0
     #    reloadExisting: false
     #    installMode: STORE_AND_INSTALL
     #      example not from registry
-    #      ips_1_0_0:
-    #        packageUrl: https://build.fhir.org/ig/HL7/fhir-ips/package.tgz
-    #        name: hl7.fhir.uv.ips
-    #        version: 1.0.0
+      jp_core_r4_1_1_2:
+        packageUrl: https://jpfhir.jp/fhir/core/1.1.2/jp-core.r4-1.1.2.tgz
+        name: jp-core.r4
+        version: 1.1.2
+      jpfhir_terminology_r4_1_2_0:
+        packageUrl: https://jpfhir.jp/fhir/core/terminology/jpfhir-terminology.r4-1.2.0.tgz
+        name: jpfhir-terminology
+        version: 1.2.0
     #    supported_resource_types:
     #      - Patient
     #      - Observation
         refuse_to_fetch_third_party_urls: false
         fhir_version: R4
-    #    validation:
-    #      requests_enabled: true
+    validation:
+      requests_enabled: true
     #      responses_enabled: true
     #    binary_storage_enabled: true

XcrigX commented 3 months ago

This part of the stack trace looks suspicious:

Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.base/jdk.internal.misc.Unsafe.allocateUninitializedArray(Unsafe.java:1375)
    at java.base/java.lang.StringConcatHelper.newArray(StringConcatHelper.java:497)
    at java.base/java.lang.String.join(String.java:3269)
    at java.base/java.lang.String.join(String.java:3335)
    at org.hl7.fhir.convertors.context.VersionConvertorContext.getPath(VersionConvertorContext.java:121)
    at org.hl7.fhir.convertors.context.ConversionContext40_50.path(ConversionContext40_50.java:47)
    at org.hl7.fhir.convertors.conv40_50.VersionConvertor_40_50.copyElement(VersionConvertor_40_50.java:128)
    at org.hl7.fhir.convertors.conv40_50.datatypes40_50.primitive40_50.String40_50.convertString(String40_50.java:9)
    at org.hl7.fhir.convertors.conv40_50.datatypes40_50.special40_50.ElementDefinition40_50.convertElementDefinition(ElementDefinition40_50.java:25)
    at org.hl7.fhir.convertors.conv40_50.resources40_50.StructureDefinition40_50.convertStructureDefinitionDifferentialComponent(StructureDefinition40_50.java:386)
    at org.hl7.fhir.convertors.conv40_50.resources40_50.StructureDefinition40_50.convertStructureDefinition(StructureDefinition40_50.java:107)
    at org.hl7.fhir.convertors.conv40_50.resources40_50.Resource40_50.convertResource(Resource40_50.java:251)
    at org.hl7.fhir.convertors.conv40_50.VersionConvertor_40_50.convertResource(VersionConvertor_40_50.java:77)
    at org.hl7.fhir.convertors.factory.VersionConvertorFactory_40_50.convertResource(VersionConvertorFactory_40_50.java:15)
    at ca.uhn.hapi.converters.canonical.VersionCanonicalizer$R4Strategy.structureDefinitionToCanonical(VersionCanonicalizer.java:797)
    at ca.uhn.hapi.converters.canonical.VersionCanonicalizer.structureDefinitionToCanonical(VersionCanonicalizer.java:249)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper.allStructures(VersionSpecificWorkerContextWrapper.java:229)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper.fetchResourcesByType(VersionSpecificWorkerContextWrapper.java:689)
    at org.hl7.fhir.r5.utils.FHIRPathEngine.<init>(FHIRPathEngine.java:218)
    at org.hl7.fhir.r5.conformance.profile.ProfileUtilities.<init>(ProfileUtilities.java:396)
    at org.hl7.fhir.common.hapi.validation.support.SnapshotGeneratingValidationSupport.generateSnapshot(SnapshotGeneratingValidationSupport.java:102)
    at org.hl7.fhir.common.hapi.validation.support.ValidationSupportChain.generateSnapshot(ValidationSupportChain.java:120)
    at org.hl7.fhir.common.hapi.validation.support.BaseValidationSupportWrapper.generateSnapshot(BaseValidationSupportWrapper.java:141)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper.lambda$new$0(VersionSpecificWorkerContextWrapper.java:108)
    at org.hl7.fhir.common.hapi.validation.validator.VersionSpecificWorkerContextWrapper$$Lambda$1694/0x00007f7c20e8d530.load(Unknown Source)

I could be wrong, but it looks like, as it's trying to auto-validate your resources, it's trying to up-convert them from FHIR R4 to R5. Your FHIR server is configured for R4; are your JP Core profiles versioned for R5?
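
One quick way to check is to read the fhirVersions field from the package manifest, for example (assuming the package has been downloaded under its published file name):

tar -xzOf jp-core.r4-1.1.2.tgz package/package.json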

2017Yasu commented 3 months ago

That's interesting. I hadn't looked at that part in detail.

The JP Core profiles are accessible at the following URL, and they appear to be derived from FHIR 4.0.1.

https://jpfhir.jp/fhir/core/

I also downloaded and extracted the latest version of the NPM package for the JP Core profiles and looked at package.json and StructureDefinition-jp-medicationrequest.json. The fhirVersion[s] properties of both JSON files are 4.0.1.

At the same time, I noticed that the baseDefinition property of the StructureDefinition resource is http://hl7.org/fhir/StructureDefinition/MedicationRequest, which currently resolves to the R5 definition. Other profiles such as fhir-ips use baseDefinition properties with the same base URL (i.e., http://hl7.org/fhir), so I don't believe this is the cause, but do you think it could have triggered the conversion from R4 to R5?


package.json:

{
  "name": "jp-core.r4",
  "version": "1.1.2",
  "description": "JP-CORE V1.1.2 differential package for release",
  "author": "JAMI NeXEHRS FHIR IGWG",
  "fhirVersions": [
    "4.0.1"
  ],
  "dependencies": {
    "hl7.fhir.r4.core": "4.0.1"
  },
  "url": "http://jpfhir.jp/fhir/core",
  "canonical": "http://jpfhir.jp/fhir/core"
}

StructureDefinition-jp-medicationrequest.json:

{
  "resourceType": "StructureDefinition",
  "id": "jp-medicationrequest",
  "url": "http://jpfhir.jp/fhir/core/StructureDefinition/JP_MedicationRequest",
  "name": "JP_MedicationRequest",
  "title": "JP Core MedicationRequest Profile",
  "status": "active",
  "date": "2023-10-31",
  "description": "このプロファイルはMedicationRequestリソースに対して、内服・外用薬剤処方のデータを送受信するための基礎となる制約と拡張を定めたものである。",
  "fhirVersion": "4.0.1",
  "kind": "resource",
  "abstract": false,
  "type": "MedicationRequest",
  "baseDefinition": "http://hl7.org/fhir/StructureDefinition/MedicationRequest",
  "derivation": "constraint",
  "differential": {...
  }
}

XcrigX commented 3 months ago

Sorry, I don't know enough to be more helpful here. I'm not familiar enough with the validation code to know whether the version conversion is normal or not; it might always take that path and that could be expected. I really don't know, it just stood out to me.

I guess I'd want to: 1) verify the problem is not in the Docker container itself (e.g., a memory constraint there). Can you reproduce the problem running outside of Docker, such as on your PC or in a unit test?

2) Can you reproduce the problem at a lower level, using the base HAPI validation libraries directly or the command-line validator? If so, you could perhaps post an issue to those projects.
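
For example, something along these lines against the HAPI validation libraries might be a starting point. This is only an untested sketch; the package file names, classpath location, and resource path are guesses based on this thread:

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.context.support.DefaultProfileValidationSupport;
import ca.uhn.fhir.validation.FhirValidator;
import ca.uhn.fhir.validation.ValidationResult;
import org.hl7.fhir.common.hapi.validation.support.CommonCodeSystemsTerminologyService;
import org.hl7.fhir.common.hapi.validation.support.InMemoryTerminologyServerValidationSupport;
import org.hl7.fhir.common.hapi.validation.support.NpmPackageValidationSupport;
import org.hl7.fhir.common.hapi.validation.support.SnapshotGeneratingValidationSupport;
import org.hl7.fhir.common.hapi.validation.support.ValidationSupportChain;
import org.hl7.fhir.common.hapi.validation.validator.FhirInstanceValidator;

import java.nio.file.Files;
import java.nio.file.Path;

public class JpCoreValidationRepro {
    public static void main(String[] args) throws Exception {
        FhirContext ctx = FhirContext.forR4();

        // Load the JP Core and terminology packages; the .tgz files are assumed to have been
        // downloaded and placed on the classpath under "package/".
        NpmPackageValidationSupport npm = new NpmPackageValidationSupport(ctx);
        npm.loadPackageFromClasspath("classpath:package/jp-core.r4-1.1.2.tgz");
        npm.loadPackageFromClasspath("classpath:package/jpfhir-terminology.r4-1.2.0.tgz");

        ValidationSupportChain chain = new ValidationSupportChain(
            npm,
            new DefaultProfileValidationSupport(ctx),
            new CommonCodeSystemsTerminologyService(ctx),
            new InMemoryTerminologyServerValidationSupport(ctx),
            new SnapshotGeneratingValidationSupport(ctx));

        FhirValidator validator = ctx.newValidator();
        validator.registerValidatorModule(new FhirInstanceValidator(chain));

        // Validate the same MedicationRequest example that triggers the OOM in the server.
        String resource = Files.readString(Path.of("resources/06-medication-request-example.json"));
        ValidationResult result = validator.validateWithResult(resource);
        result.getMessages().forEach(m ->
            System.out.println(m.getSeverity() + " - " + m.getLocationString() + " - " + m.getMessage()));
    }
}

If the same OutOfMemoryError shows up here, that would point at the validation/conversion code rather than the starter project.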

2017Yasu commented 3 months ago

Regarding the first point, I don't believe the Docker container is the cause of this problem. I ran this hapi-fhir-jpaserver-starter repository in my local environment with the following command and reproduced the same problem with the same error logs.

mvn spring-boot:run

I'll try examining the second point, though I'm not really sure how to go about it. It may take a few days, but I'll share the result as soon as I have it.

Thanks for your advice anyway.