halo-dev / halo

A powerful and easy-to-use open-source website building tool.
https://www.halo.run
GNU General Public License v3.0
33.3k stars 9.61k forks

After creating a custom console page plugin, accessing /uc returns 500 Internal Server Error #5515

Closed Kimser closed 6 months ago

Kimser commented 6 months ago

System information

image

How is the project being run?

Source Code

What happened?

image

Relevant log output

No response

Additional information

No response

JohnNiang commented 6 months ago

Hi @Kimser, thank you for reaching out here!

Please provide enough information, such as minimal reproduction steps and the plugin source code. Also, please fill out the issue strictly according to the issue template.

/triage needs-information

Kimser commented 6 months ago

System information

- External URL: http://localhost:8090/
- Start time: 2024-03-18
- Version:
- Build time:
- Git Commit:
- Java: OpenJDK Runtime Environment / 17.0.7+7-LTS
- OS: Mac OS X / 14.1
- Activated theme: theme-butterfly
- Enabled plugins: comment widget, custom console menu plugin

How is the project being run? Source Code

1. The menu plugin path is configured in the Halo project's application.yaml, with runtime-mode: development and fixedPluginPath: (a sample of such a configuration is sketched below).
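For reference, a minimal sketch of roughly what such a fixed plugin path configuration looks like, based on the Halo plugin development docs; the path below is a placeholder, not the reporter's actual path:

```yaml
# application.yaml: illustrative sketch only, not the configuration used in this issue
halo:
  plugin:
    runtime-mode: development
    fixed-plugin-path:
      # placeholder; point this at the plugin project directory
      - /path/to/your/plugin
```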

2. The plugin definition:

```ts
export default definePlugin({
  name: "pluginDemo",
  components: {},
  routes: [
    {
      parentName: "Root",
      route: {
        // path: "/example",
        path: "/todos",
        // name: "Example",
        name: "ToDoList",
        component: HomeView,
        meta: {
          title: "Todo List",
          searchable: true,
          menu: {
            name: "Todo List",
            group: "工具",
            icon: markRaw(IconPlug),
            priority: 0,
          },
        },
      },
    },
  ],
  ucRoutes: [
    // UC (user center) route definitions; the /uc route returns 500 whether or not this is configured
    {
      parentName: "Root",
      route: {
        path: "/uc-foo",
        name: "FooUC",
        component: HomeView,
        meta: {
          permissions: [""],
          menu: {
            name: "FooUC",
            group: "content",
            icon: markRaw(IconPlug),
            priority: 40,
          },
        },
      },
    },
  ],
  extensionPoints: {},
});
```

3. Problem video:

https://github.com/halo-dev/halo/assets/54876002/0e71eea9-cc87-4c1c-8d2e-cfedda210288

@JohnNiang The Halo version is the recent 2.13. The route added by the console menu plugin can be accessed normally, but the uc route returns 500. Before the menu plugin was introduced, the uc routes were all accessible. Do I still need to configure anything else?

Kimser commented 6 months ago

image

@JohnNiang

ruibaby commented 6 months ago

This 500 page is a backend exception; please provide the logs.

Kimser commented 6 months ago

Sorry, I haven't figured out how to view the logs yet, but here is the error output from the terminal; I hope it helps.

2024-03-18T13:06:35.442+08:00 ERROR 96791 --- [actor-tcp-nio-3] a.w.r.e.AbstractErrorWebExceptionHandler : [38bd48f5-29910]  500 Server Error for HTTP GET "/uc"
org.springframework.web.reactive.function.client.WebClientRequestException: Connection refused: localhost/127.0.0.1:4000
    at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:136) ~[spring-webflux-6.1.4.jar:6.1.4]
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 
Error has been observed at the following site(s):
    *__checkpoint ⇢ Request to GET http://localhost:4000/uc [DefaultWebClient]
    *__checkpoint ⇢ run.halo.app.console.ProxyFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ run.halo.app.console.ProxyFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ run.halo.app.security.InitializeRedirectionWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ AuthorizationWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ ExceptionTranslationWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ LogoutWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ ServerRequestCacheWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ SecurityContextServerWebExchangeWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ HaloAnonymousAuthenticationWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ ReactorContextWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ CsrfWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ HttpHeaderWriterWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain]
    *__checkpoint ⇢ run.halo.app.webfilter.AdditionalWebFilterChainProxy [DefaultWebFilterChain]
    *__checkpoint ⇢ HTTP GET "/uc" [ExceptionHandlingWebHandler]
Original Stack Trace:
        at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:136) ~[spring-webflux-6.1.4.jar:6.1.4]
        at reactor.core.publisher.MonoErrorSupplied.subscribe(MonoErrorSupplied.java:55) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.Mono.subscribe(Mono.java:4563) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onError(MonoFlatMapMany.java:205) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.whenError(FluxRetryWhen.java:229) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxRetryWhen$RetryWhenOtherSubscriber.onError(FluxRetryWhen.java:279) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onError(FluxContextWrite.java:121) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.maybeOnError(FluxConcatMapNoPrefetch.java:327) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.onNext(FluxConcatMapNoPrefetch.java:212) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.SinkManyEmitterProcessor.drain(SinkManyEmitterProcessor.java:476) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.SinkManyEmitterProcessor$EmitterInner.drainParent(SinkManyEmitterProcessor.java:620) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxPublish$PubSubInner.request(FluxPublish.java:874) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.request(FluxContextWrite.java:136) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.request(FluxConcatMapNoPrefetch.java:337) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.request(FluxContextWrite.java:136) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.Operators$DeferredSubscription.request(Operators.java:1743) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onError(FluxRetryWhen.java:196) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:205) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.netty.http.client.HttpClientConnect$MonoHttpConnect$ClientTransportSubscriber.onError(HttpClientConnect.java:311) ~[reactor-netty-http-1.1.16.jar:1.1.16]
        at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:205) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.netty.resources.DefaultPooledConnectionProvider$DisposableAcquire.onError(DefaultPooledConnectionProvider.java:172) ~[reactor-netty-core-1.1.16.jar:1.1.16]
        at reactor.netty.internal.shaded.reactor.pool.AbstractPool$Borrower.fail(AbstractPool.java:488) ~[reactor-netty-core-1.1.16.jar:1.1.16]
        at reactor.netty.internal.shaded.reactor.pool.SimpleDequePool.lambda$drainLoop$9(SimpleDequePool.java:436) ~[reactor-netty-core-1.1.16.jar:1.1.16]
        at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onError(FluxDoOnEach.java:186) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:205) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.netty.resources.DefaultPooledConnectionProvider$PooledConnectionAllocator$PooledConnectionInitializer.onError(DefaultPooledConnectionProvider.java:583) ~[reactor-netty-core-1.1.16.jar:1.1.16]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.secondError(MonoFlatMap.java:241) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.MonoFlatMap$FlatMapInner.onError(MonoFlatMap.java:315) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:106) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.Operators.error(Operators.java:198) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.MonoError.subscribe(MonoError.java:53) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.Mono.subscribe(Mono.java:4563) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103) ~[reactor-core-3.6.3.jar:3.6.3]
        at reactor.netty.transport.TransportConnector$MonoChannelPromise.tryFailure(TransportConnector.java:576) ~[reactor-netty-core-1.1.16.jar:1.1.16]
        at reactor.netty.transport.TransportConnector$MonoChannelPromise.setFailure(TransportConnector.java:522) ~[reactor-netty-core-1.1.16.jar:1.1.16]
        at reactor.netty.transport.TransportConnector.lambda$doConnect$7(TransportConnector.java:261) ~[reactor-netty-core-1.1.16.jar:1.1.16]
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:322) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:338) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
        at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:4000
Caused by: java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[na:na]
    at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) ~[na:na]
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) ~[na:na]
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:335) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.107.Final.jar:4.1.107.Final]
    at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
Kimser commented 6 months ago

When I access http://localhost:4000/uc directly, the page keeps requesting itself in an endless loop: https://github.com/halo-dev/halo/assets/54876002/bf549af8-a591-466a-a5f3-0869a4a98ba8

https://github.com/halo-dev/halo/assets/54876002/f272586c-d9f1-4221-a90a-af1bb711abf0

@ruibaby @JohnNiang

guqing commented 6 months ago

> When I access http://localhost:4000/uc directly

I suggest referring to the development docs: https://docs.halo.run/developer-guide/core/run

image

If you want to do plugin development, you can follow the docs and use the Devtools we provide for plugin development to run and develop plugins; that way you don't have to run the Halo source code in development mode yourself: https://docs.halo.run/developer-guide/plugin/basics/devtools

/kind support

Kimser commented 6 months ago

As mentioned above, accessing http://localhost:9090/uc directly returns 500, and the terminal logs an error about http://localhost:4000.

image

Devtools depends on Docker; I'll look into it when I have time. @guqing @ruibaby @JohnNiang

guqing commented 6 months ago

> As mentioned above, accessing http://localhost:9090/uc directly returns 500, and the terminal logs an error about http://localhost:4000.

How are you running the frontend project? Are you executing ./gradlew :ui:dev?

Kimser commented 6 months ago

By running this:

image

@guqing

guqing commented 6 months ago

> By running this

Then check whether your terminal shows output like the following:

VITE v4.2.3  ready in 638 ms

# Console service
➜  Local:   http://localhost:3000/console/

# UC (user center) service
➜  Local:   http://localhost:4000/uc/

Confirm that the port printed for uc in the logs is 4000. If that port is already occupied, Vite automatically switches to another port, which makes it incorrect, so first make sure the output here really shows 4000.
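As a side note (not part of this thread's resolution), a minimal vite.config.ts sketch showing how Vite's strictPort option can make a dev server fail immediately instead of silently moving to another port when 4000 is taken; the file name and port here are assumptions for illustration, not Halo's actual UI build configuration:

```ts
// vite.config.ts: illustrative sketch only; Halo's real UI dev setup may differ.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    port: 4000,       // expected UC dev server port
    strictPort: true, // error out if 4000 is occupied instead of picking another port
  },
});
```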

Kimser commented 6 months ago

Yes, but occasionally multiple ports get opened, and accessing any of them fails.

image

https://github.com/halo-dev/halo/assets/54876002/d878b5e3-1fa5-4707-9b18-949d3d8f14bb @guqing

guqing commented 6 months ago

> Yes, but occasionally multiple ports get opened, and accessing any of them fails.

First stop all of your java and node processes and close IDEA, then run ./gradlew :ui:dev in a terminal to start the frontend. Don't run it from IDEA, to avoid problems after you restart Halo. The expected behavior is the log output mentioned above: there should not be multiple port addresses; the expected ports are 3000 for the console and 4000 for uc.

Kimser commented 6 months ago

It works after restarting, thank you.

guqing commented 6 months ago

/remove-triage needs-information
/close

f2c-ci-robot[bot] commented 6 months ago

@guqing: Closing this issue.

In response to [this](https://github.com/halo-dev/halo/issues/5515#issuecomment-2003179186):

> /remove-triage needs-information
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.