Arnei closed this pull request 4 days ago.
This pull request is deployed at test.admin-interface.opencast.org/743/2024-06-26_10-40-16/. It might take a few minutes for it to become available.
Use docker or podman to test this pull request locally.
Run test server using develop.opencast.org as backend:
podman run --rm -it -p 127.0.0.1:3000:3000 ghcr.io/opencast/opencast-admin-interface:pr-743
Specify a different backend like stable.opencast.org:
podman run --rm -it -p 127.0.0.1:3000:3000 -e PROXY_TARGET=https://stable.opencast.org ghcr.io/opencast/opencast-admin-interface:pr-743
It may take a few seconds for the interface to spin up.
It will then be available at http://127.0.0.1:3000.
For more options you can pass on to the proxy, take a look at the README.md.
Not having source maps is really annoying. I'm not a fan of disabling them.
Where did you see the errors? It may just be that my machine has more RAM, but this worked fine for me:
❯ git diff
diff --git a/vite.config.ts b/vite.config.ts
index cfb30b8b8b..7238421677 100644
--- a/vite.config.ts
+++ b/vite.config.ts
@@ -9,7 +9,7 @@ export default defineConfig({
plugins: [react(), svgr(), viteTsconfigPaths(), preserveDirectives()],
build: {
outDir: "build",
- sourcemap: false,
+ sourcemap: true,
},
server: {
open: true,
opencast-admin-interface (c81ceb4) [!?] via v20.12.2
❯ npm run build
What happened when building was:
vite v5.2.12 building for production...
✓ 2344 modules transformed.
rendering chunks (1)...
<--- Last few GCs --->
[14902:0x6a73ab0] 22156 ms: Mark-Compact (reduce) 2026.4 (2082.0) -> 2026.2 (2083.0) MB, 994.04 / 0.00 ms (average mu = 0.176, current mu = 0.010) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0xb82c28 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
2: 0xeed540 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
3: 0xeed827 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
4: 0x10ff3c5 [node]
5: 0x1117248 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
6: 0x10ed361 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
7: 0x10ee4f5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
8: 0x10cbb46 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
9: 0x1527976 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
10: 0x7f9b8f699ef6
Aborted (core dumped)
~But now I can't reproduce it anymore. Maybe a hiccup in my system?~ Never mind, it happened again.
Maybe this is due to #709? 🤔
If we can mitigate this with env exports, do we want to merge it like this and add the env exports to the package.json and/or the CI scripts if necessary?
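For reference, such an export could live directly in the build script (a sketch only; the 4096 MiB heap size is a guess that would need tuning, the exact build command may differ in this project, and a helper like `cross-env` would be needed for Windows shells):

```json
{
  "scripts": {
    "build": "NODE_OPTIONS=--max-old-space-size=4096 vite build"
  }
}
```

`NODE_OPTIONS` is read by the `node` binary itself, so this would also cover the Node process that Vite spawns, without any CI-specific configuration.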
Looking at the CI workflows, we don't seem to run into memory issues there: https://github.com/opencast/opencast-admin-interface/actions/workflows/test.yml
ESLint is back, yay. But while it was gone, other PRs introduced checkstyle issues, oh no. This fixes those issues, so that builds may stop failing.
Secondly, this also disables sourcemaps. I was unable to build the project due to out-of-memory errors, and disabling sourcemaps turned out to be an effective workaround. We can probably enable them again once we get rid of all the old dependencies that come with our current ESLint config.
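If we do want sourcemaps back before the dependency cleanup, a possible middle ground (a sketch, not part of this PR; the `SOURCEMAP` variable name is made up) would be to make them opt-in instead of hard-coding the flag:

```typescript
// vite.config.ts sketch: sourcemaps stay off by default so builds fit in
// the default heap, but SOURCEMAP=true can re-enable them for local debugging.
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    outDir: "build",
    // hypothetical opt-in switch; evaluates to false unless SOURCEMAP=true
    sourcemap: process.env.SOURCEMAP === "true",
  },
});
```

Running `SOURCEMAP=true npm run build` would then produce sourcemaps only when someone explicitly asks for them.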