Altinn / altinn-studio

Next generation open source Altinn platform and applications.
https://docs.altinn.studio

App deploy to multiple environments or several builds at once fails #4321

Closed: lorang92 closed this issue 2 months ago

lorang92 commented 4 years ago

Describe the bug

See title. The deploy pipelines fail with the error message: Cloning into 'ca9945f6024fe5907f02f107c1c13c9362436c9b'... fatal: could not read Password for 'https://***@dev.altinn.studio': terminal prompts disabled. Are there any restrictions in Gitea that prevent us from pulling the same repo multiple times simultaneously with the same token? Seems weird.

Note that this is only an issue for users who have multiple environments to deploy to, and is probably not an issue that affects our pilots. Scratch that, it hit one of our orgs. See Slack thread (internal link).

Workaround

Trigger deploys with a 10-second interval if you need to deploy to multiple environments.

To Reproduce

Steps to reproduce the behavior:

  1. Go to an app in Studio
  2. Queue a deploy to several environments at once (for instance at22, at23 and at24).
  3. See that the pull repo task fails in the deploy pipeline

Expected behavior

Deploying to several environments at the same time should be ok, or this option should be restricted.

Additional info

The following deploys were triggered at the same time:

https://dev.azure.com/brreg/altinn-studio/_build/results?buildId=35065&view=results
https://dev.azure.com/brreg/altinn-studio/_build/results?buildId=35064&view=results
https://dev.azure.com/brreg/altinn-studio/_build/results?buildId=35063&view=results
https://dev.azure.com/brreg/altinn-studio/_build/results?buildId=35081&view=results

nkylstad commented 7 months ago

@mirkoSekulic @framitdavid I remember we discussed a similar issue a little while back. Did we fix that issue? Something about caching the studio token?

mirkoSekulic commented 7 months ago

I remember discussing it, but I don't remember implementing it. It's again the question of using a distributed cache if we want to scale our API.

framitdavid commented 7 months ago

Perhaps I misunderstand @mirkoSekulic, but in this case, shouldn't it be unnecessary to scale the API or distribute the cache for the locking key? It's the same organization/application across these pipelines, and we already have locking on the pull endpoint. We aren't scaling up the API right now, so local locking should work fine for this scenario, right? 🤔 (See the locking sketch after this comment.)

I remember us discussing this issue, but I don't think we have implemented a solution yet. :/ Do you think it would be a good issue to address during the Easter sprint, @nkylstad?
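For illustration, here is a minimal sketch of the kind of local, per-repo locking referred to above, assuming one in-process SemaphoreSlim per org/app key. The class and method names are hypothetical and not taken from the Studio codebase, and this only serializes requests within a single API replica, which is exactly the limitation discussed below.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch: one SemaphoreSlim per org/app so that concurrent deploy
// triggers for the same repository serialize on the pull endpoint.
// Only effective while a single API replica is running.
public static class RepoLocks
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks = new();

    public static async Task<T> WithRepoLockAsync<T>(string org, string app, Func<Task<T>> action)
    {
        SemaphoreSlim gate = Locks.GetOrAdd($"{org}/{app}", _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            return await action();
        }
        finally
        {
            gate.Release();
        }
    }
}
```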

nkylstad commented 7 months ago

I believe the issue here was that trying to trigger a build on multiple apps for the same org at the same time caused issues, because we were invalidating the token that was being used, or something like that? Not sure if that has anything to do with scaling/locking, although there might be multiple problems here 🤔

framitdavid commented 7 months ago

> I believe the issue here was that trying to trigger a build on multiple apps for the same org at the same time caused issues, because we were invalidating the token that was being used, or something like that? Not sure if that has anything to do with scaling/locking, although there might be multiple problems here 🤔

I agree, that was my understanding too, hence I asked @mirkoSekulic if I misunderstood something or not. I share the same thought as you @nkylstad about what the issue might be here. But I'm quite sure we haven't implemented a solution for this yet.

mirkoSekulic commented 7 months ago

Sorry for the incomplete explanation @nkylstad @framitdavid

Every build invalidates the previous token, so we need to cache the token instead of regenerating a new one each time. Yes, we can solve the issue by caching the token in memory. That will work as long as we have only one API replica. If the API is scaled and a build is triggered on two instances of the API simultaneously, they need to share the same cache somehow, which is not possible with an in-memory approach.
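As a rough sketch of that in-memory approach (assuming ASP.NET Core's IMemoryCache; the class, method and key names here are hypothetical), the token could be reused per organization instead of being regenerated, and thereby invalidated, on every build:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical sketch: reuse a cached token per org instead of regenerating
// (and thereby invalidating) it on every build. Works only while a single
// API replica owns the cache.
public class OrgTokenCache
{
    private readonly IMemoryCache _cache;

    public OrgTokenCache(IMemoryCache cache) => _cache = cache;

    public async Task<string> GetTokenAsync(string org, Func<Task<string>> generateTokenAsync)
    {
        string cacheKey = $"gitea-token:{org}";
        if (_cache.TryGetValue(cacheKey, out string? token) && token is not null)
        {
            return token;
        }

        // In practice this would sit behind the existing locking on the pull endpoint,
        // so two simultaneous cache misses don't both regenerate (and invalidate) tokens.
        token = await generateTokenAsync();

        // Keep the token long enough for parallel deploy pipelines to finish cloning.
        _cache.Set(cacheKey, token, TimeSpan.FromMinutes(10));
        return token;
    }
}
```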

nkylstad commented 7 months ago

Got it, thanks @mirkoSekulic ❤ Do we have a specific timeline for scaling the Studio API? Could we use an in-memory approach for now and make changes once we have a distributed cache available?

mirkoSekulic commented 7 months ago

@nkylstad sure, we can do it. I'm just expressing the concern that we need to think about the possibility of scaling our API. For that reason, I suggest using the IDistributedCache abstraction here. We can start with an in-memory implementation, and when we introduce a distributed cache, we won't need to change anything in our code, just the way we're registering the service.
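A minimal registration sketch of that idea (assuming standard ASP.NET Core service registration; the extension-method name is hypothetical): consumers depend on IDistributedCache, backed by the in-memory implementation today and swappable for e.g. Redis later without touching the consuming code.

```csharp
using Microsoft.Extensions.DependencyInjection;

public static class TokenCacheRegistration
{
    // Hypothetical registration helper: code against IDistributedCache from the start.
    public static IServiceCollection AddTokenCache(this IServiceCollection services)
    {
        // Today: a single replica, so the in-memory IDistributedCache implementation is enough.
        services.AddDistributedMemoryCache();

        // Later: swap only this registration, e.g. Redis via
        // Microsoft.Extensions.Caching.StackExchangeRedis; consumers still inject IDistributedCache.
        // services.AddStackExchangeRedisCache(options => options.Configuration = "redis:6379");

        return services;
    }
}
```

Consumers would then read and write the token with IDistributedCache's GetStringAsync / SetStringAsync, so the cache backend can change without any changes to the calling code.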

framitdavid commented 7 months ago

@mirkoSekulic perfect!! 🚀

mirkoSekulic commented 2 months ago

@nkylstad @framitdavid Closing the issue. With the changes implemented in the new login PR we're sending a short-lived access_token to the pipelines instead of a PAT, so it's now possible to deploy multiple apps at once.