jdhoffa opened 1 month ago
Replicated.
It may be useful to know that the "Uploading" loading spinner seems to succeed.
It is the "Validating" spinner where things get stuck:
Looking in the logs, I'm seeing the following error:
{
  "level": "error",
  "ts": 1723674290.6975484,
  "caller": "azevents/azevents.go:445",
  "msg": "failed to save analysis response to database",
  "error": "failed to perform operation: creating blob: creating blob row: ERROR: duplicate key value violates unique constraint \"blob_blob_uri_key\" (SQLSTATE 23505)",
  "stacktrace": "<pasted below>"
}
github.com/RMI/pacta/azure/azevents.(*Server).handleCompletedAnalysis
azure/azevents/azevents.go:445
github.com/RMI/pacta/azure/azevents.(*Server).handleCreatedReport
azure/azevents/azevents.go:395
github.com/RMI/pacta/azure/azevents.(*Server).handleEventGrid
azure/azevents/azevents.go:278
net/http.HandlerFunc.ServeHTTP
GOROOT/src/net/http/server.go:2136
github.com/RMI/pacta/azure/azevents.(*Server).verifyWebhook-fm.(*Server).verifyWebhook.func1
azure/azevents/azevents.go:147
net/http.HandlerFunc.ServeHTTP
GOROOT/src/net/http/server.go:2136
github.com/go-chi/chi/v5/middleware.Recoverer.func1
external/com_github_go_chi_chi_v5/middleware/recoverer.go:45
net/http.HandlerFunc.ServeHTTP
GOROOT/src/net/http/server.go:2136
github.com/go-chi/chi/v5.(*ChainHandler).ServeHTTP
external/com_github_go_chi_chi_v5/chain.go:31
github.com/go-chi/chi/v5.(*Mux).routeHTTP
external/com_github_go_chi_chi_v5/mux.go:444
net/http.HandlerFunc.ServeHTTP
GOROOT/src/net/http/server.go:2136
github.com/go-chi/chi/v5.(*Mux).ServeHTTP
external/com_github_go_chi_chi_v5/mux.go:90
main.run.(*Cors).Handler.func5
external/com_github_rs_cors/cors.go:236
net/http.HandlerFunc.ServeHTTP
GOROOT/src/net/http/server.go:2136
net/http.serverHandler.ServeHTTP
GOROOT/src/net/http/server.go:2938
net/http.(*conn).serve
GOROOT/src/net/http/server.go:2009
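The duplicate-key error above suggests the completed-analysis handler may not be idempotent: if the event fires more than once for the same report, the second INSERT trips the blob_blob_uri_key unique constraint. A minimal sketch of one way to handle that, using an in-memory stand-in for the blob table (all names here are hypothetical, not the actual pacta schema):

```go
package main

import "fmt"

// blobStore is a stand-in for the database table that carries the
// blob_blob_uri_key unique constraint on blob_uri.
type blobStore struct {
	byURI map[string]bool
}

// saveBlob treats "blob already exists" as success instead of surfacing
// the unique-constraint violation. In Postgres the same effect comes from
// INSERT ... ON CONFLICT (blob_uri) DO NOTHING.
func (s *blobStore) saveBlob(uri string) (created bool) {
	if s.byURI[uri] {
		return false // duplicate delivery: a no-op, not an error
	}
	s.byURI[uri] = true
	return true
}

func main() {
	s := &blobStore{byURI: map[string]bool{}}
	fmt.Println(s.saveBlob("az://reports/analysis-1.json")) // true: first insert
	fmt.Println(s.saveBlob("az://reports/analysis-1.json")) // false: duplicate swallowed
}
```

Whether swallowing or updating on conflict is correct depends on whether a re-delivered event can carry different blob contents.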
Investigating! Seems to be some issue with the files created and uploaded while parsing a portfolio. I'll keep digging in tomorrow.
If there are issues with the workflow.portfolio.parsing Docker image, I can jump on those.
Lol I think I figured out the issue, and it's an exceptionally dumb one. Has anyone actually looked at sample-1.csv? When I download it, the contents are:
version https://git-lfs.github.com/spec/v1
oid sha256:abdb709396bff1337709b6eb7bc26cde547615d88a619f5c9846fde1fb585335
size 131685
Which is 100% not a CSV file. It's a git-lfs pointer file. Since git is bad at storing binary files, we use LFS to store binary blobs (which happens to include this CSV for whatever reason). When you check out the code and run git lfs pull, all the correct files are there, but it looks like our deploy process wasn't pulling LFS files correctly.
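Independent of the deploy fix, the upload path could fail fast on this case: every git-lfs pointer file starts with the same version line (visible in the pasted contents above), so it's cheap to detect. A hypothetical guard, not something currently in the codebase:

```go
package main

import (
	"bytes"
	"fmt"
)

// lfsPointerPrefix is the first line of every git-lfs pointer file,
// matching the spec URL embedded in the pointer itself.
var lfsPointerPrefix = []byte("version https://git-lfs.github.com/spec/v1")

// isLFSPointer flags uploads that are git-lfs pointers rather than real
// content, so we can reject them with a useful error instead of trying to
// parse ~130 bytes of pointer metadata as a CSV.
func isLFSPointer(contents []byte) bool {
	return bytes.HasPrefix(contents, lfsPointerPrefix)
}

func main() {
	pointer := []byte("version https://git-lfs.github.com/spec/v1\noid sha256:abdb7093...\nsize 131685\n")
	csv := []byte("isin,market_value,currency\nUS0378331005,1000,USD\n")
	fmt.Println(isLFSPointer(pointer)) // true
	fmt.Println(isLFSPointer(csv))     // false
}
```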
When I upload the actual sample-1.csv, everything works as expected:
I'll take a look at fixing the deploy process (should be a one-liner), and we can improve the error messages here as well.
Ahahah oh man, indeed that is silly. I uploaded an actual portfolio, and confirmed from my end that it does successfully upload:
Hey all, looking at workflow.portfolio.parsing, and running it against the incorrect (not-really-a-CSV) file shows that part is working as intended, since it spits out the error almost immediately.
I've attached the processed_portfolios.json file resulting from running that image against the file in question. What I'm seeing is that, as expected, it has an errors key (which right now has the single generic error that we emit in any case) and no portfolios key (which is only present when at least one portfolio has been re-exported successfully).
This smells like the app is waiting on something to tell it which files it needs to point the user to, and since the portfolios key doesn't exist (and even if it did, it would be empty), it's just stuck hanging. Does that sound accurate @bcspragu?
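The check described above could be done right when the parser output comes back. A sketch under assumed types (the field names beyond errors and portfolios are guesses at the shape of processed_portfolios.json, not the real structs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseResult is a guessed shape for processed_portfolios.json, based on
// the two keys discussed above.
type parseResult struct {
	Errors     []string          `json:"errors,omitempty"`
	Portfolios []json.RawMessage `json:"portfolios,omitempty"`
}

// checkParseResult fails fast if the parser reported errors or produced no
// portfolios, instead of leaving the app waiting on portfolio blobs that
// will never arrive.
func checkParseResult(raw []byte) error {
	var res parseResult
	if err := json.Unmarshal(raw, &res); err != nil {
		return fmt.Errorf("malformed parser output: %w", err)
	}
	if len(res.Errors) > 0 {
		return fmt.Errorf("parser reported %d error(s): %v", len(res.Errors), res.Errors)
	}
	if len(res.Portfolios) == 0 {
		return fmt.Errorf("parser produced no portfolios")
	}
	return nil
}

func main() {
	out := []byte(`{"errors": ["failed to parse portfolio"]}`)
	fmt.Println(checkParseResult(out))
}
```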
Not sure which assumption isn't being met here, but if you let me know what needs to change on my side, I can take care of it.
Maybe it would be better if I updated the errors messages to include the raw filenames, and then we could surface those errors to the user upon upload? Would checking for that key (as well as portfolios) help escape the hanging state?
Thanks for the write-up @AlexAxthelm, I fully agree. It's definitely a question of improving our non-happy-path error handling; I don't even think we're checking the errors key yet.