Closed by ghost 4 years ago
This is affecting us very badly. Any update on this?
Description
I activated the garbage collector on the Portus background process with
keep_latest: 5
older_than: 100
but it deletes all images older than 100 days, ignoring the keep_latest flag. As a result, my old repositories are all wiped completely[...] Portus version: 2.4.3@5a616c0ef860567df5700708256f42505cdb9952
Thanks in advance, Roberto
Hi @robgiovanardi, I have a feeling but not much time, so I just wanted to give you an important tip: is the PORTUS_BACKGROUND=true environment variable set? If you are running portus as one standalone container: the documentation will confirm that at least a second container is required; they call it the background container. And that's why docker-compose at least, if not k8s, is natural for a production deployment: orchestration of several containers. By the way, garbage collection sounds very much like a background job, doesn't it? (Imagine a stop-the-world pause, as in Java, in Portus...) But Java has nothing to do with our issue.
Hope this will help.
Hi @Jean-Baptiste-Lasselle thanks for your answer. I confirm we have two containers:
```
CONTAINER ID   IMAGE                 COMMAND   CREATED        STATUS       PORTS                    NAMES
96e14d4670ad   opensuse/portus:2.4   "/init"   7 months ago   Up 2 weeks   3000/tcp                 portus_background
e23352be8846   opensuse/portus:2.4   "/init"   7 months ago   Up 4 weeks   0.0.0.0:3000->3000/tcp   portus
```
Only the portus_background container has PORTUS_BACKGROUND=true set, and it is the one container with the garbage_collector feature enabled:
```yaml
garbage_collector:
  enabled: true
  older_than: 100
  keep_latest: 5
  tag: ""
```
So, yes, we are running garbage collection on the dedicated Portus background container, and the problem is still here.
We also have the same problem and can confirm that the keep_latest setting doesn’t work.
Hi @robgiovanardi, thank you so much for your feedback; I haven't tested the feature myself yet, but it is clearly an important business case. But I can try and help you:
- Next step: how is communication established between your registry and portus_background containers? I ask because both your portus and portus_background containers use the same port number.

@robgiovanardi all in all, since some images (all of them) are deleted, the communication should be OK; I think you've got your hands on a real bug inside portus' code here. Still, your config is odd with those redundant port numbers; I think it's worth checking.
I have another idea, too: for your portus container (not portus_background), do you have any occurrence of the string garbage_collector in the config file /srv/Portus/config/config.yml?
> Next step: how is communication established between your registry and portus_background containers?

The portus foreground container is externally facing, so port 3000 is exposed; portus_background has no exposed port because it only needs to communicate with the registry, which is located on the Docker host. And maybe that's why we went standalone, so that the port conflict does not blow everything up. Plus, if accessed "from the outside", portus_background is unreachable; it's always the portus container that gets the requests on port 3000. Is that why? I supposed portus_background doesn't need to be externally accessible. Am I wrong?
I forgot to mention the environment variables used for the background and foreground containers; I'm adding them to the description.
> All in all, since some images (all of them) are deleted, the communication should be OK; I think you've got a real bug inside portus' code here. For your portus container (not portus_background), do you have any occurrence of the string garbage_collector in the config file /srv/Portus/config/config.yml?
Actually, yes, because we are using the very same config.yml for both foreground and background, and running them with different environment variables. I just updated the initial description to include those envs.
Let me try a different config.yml
Yes, do your thing with the config.yml: keep the garbage_collector config present only for portus_background, and remove it from the portus container's config.yml (and note the tag parameter). Having accurate results on that test will help me a lot. Still, I am sure we will have to iron out your network setup, so that it more explicitly tells the operator who is talking to whom and for what purpose. Your security guys will like that too.
Excellent news @robgiovanardi!!!! Indeed, my idea was that the communication between your registry and portus_background actually never happens, and here is what it involves:

- There is one huge (extremely important) difference between the portus container and the portus_background container: portus_background has the PORTUS_BACKGROUND=true environment variable. The portus container does not, and must not, have that environment variable set to true; I think it defaults to false.
- So okay: that's a massive difference. It makes what runs in those two containers as different as Windows and Linux.
- And that's why it is SO important that you make sure the registry communicates with portus_background: the docker-compose and deployment examples in the portus project are awfully misleading for that purpose, for example by using the same (network) identities for completely different services.
- So here is what I think: your no. 1 priority is to make sure communication happens between your registry and portus_background.

Let me better understand: communication would happen from portus_background to the registry, right? Not from the registry to portus_background?
If that is true, then I can boot up portus_background without an externally facing port but still on the network: portus_background can query mysqldb, find the registry network settings (attached screenshot, in this case, this one: ) and then portus_background can do anything it wants with the registry.

- I have a question: reading how you speak about it, I think it's possible the private docker registry which communicates with your portus was already there long before you tried portus. Am I right? If yes, can you confirm that the private docker registry you want to operate is not in the docker-compose.yml (which has portus inside)?

Yes, you're right: the registry is installed as a legacy application, with zypper, not deployed with docker nor docker-compose.

One final amusing remark: the examples in the portus distribution are awful, and it's funny, I have a feeling they were kind of torn off a legacy docker swarm cluster... I might be wrong. Or not. :) Anyway, I actually understand OpenSUSE's point of view, and I am here to support the community, because we are going to make portus work, and that is great(ly strategic in the container planet). And I thank the OpenSUSE guys for what they gave us. Plus I started years ago because of OpenStack. OpenSUSE guys will understand the message.
Hi @Jean-Baptiste-Lasselle, I can confirm that the problem is still present. I completely removed the garbage collector entries from config.yml for portus:
```yaml
#garbage_collector:
#  enabled: true
#  # Remove images not pulled and older than a specific value. This value is
#  # interpreted as the number of days.
#  #
#  # e.g.: If an image wasn't pulled in the latest 30 days and the image wasn't
#  # updated somehow in the latest 30 days, the image will be deleted.
#  older_than: 30
#  # Keep the latest X images regardless if it's older than the value set in
#  # `older_than` configuration.
#  keep_latest: 15
#  # Provide a string containing a regular expression. If you provide a
#  # valid regular expression, garbage collector will only be applied into tags
#  # matching a given name.
#  #
#  # Valid values might be:
#  # - "jenkins": if you anticipate that you will always have a tag with a
#  #   specific name, you can simply use that.
#  # - "build-\\d+": your tag follows a format like "build-1234" (note that
#  #   we need to specify "\\d" and not just "\d").
#  tag: ""
```
I ran portus_background with garbage collection enabled and with an external port:
```shell
docker run -d --restart=always -v /opt/ssl:/certificates:ro -v /srv/portus/config/config_background.yml:/srv/Portus/config/config.yml -p3001:3000 --name portus_background --env-file=/srv/portus/config/env_background opensuse/portus:2.4
```
The problem is still there
Things to notice: I enabled debug mode on portus_background. It drops this query to find whether there are images to delete:
```sql
(0.2ms)  SELECT COUNT(*) FROM `tags` WHERE `tags`.`marked` = 0 AND (updated_at < '2019-11-18 10:07:00.047409')
```
I used older_than: 30 in the garbage collector configuration, which is why the updated_at cutoff is 2019-11-18. But I can't see any other logs that reference keep_latest.
Some debug logs:
```
(0.4ms)  SELECT COUNT(*) FROM `tags` WHERE `tags`.`marked` = 0 AND (updated_at < '2019-11-18 10:03:08.231707')
User Load (0.5ms)  SELECT `users`.* FROM `users` WHERE `users`.`username` = 'portus' LIMIT 1
Tag Load (0.7ms)  SELECT `tags`.* FROM `tags` WHERE `tags`.`marked` = 0 AND (updated_at < '2019-11-18 10:03:08.231707')
Repository Load (0.3ms)  SELECT `repositories`.* FROM `repositories` WHERE `repositories`.`id` = 1 LIMIT 1
SQL (1.5ms)  UPDATE `tags` SET `tags`.`marked` = 1 WHERE `tags`.`digest` = 'sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a' AND `tags`.`repository_id` = 1
Registry Load (0.3ms)  SELECT `registries`.* FROM `registries` ORDER BY `registries`.`id` ASC LIMIT 1
Namespace Load (0.4ms)  SELECT `namespaces`.* FROM `namespaces` WHERE `namespaces`.`id` = 3 LIMIT 1
Tag Load (1.1ms)  SELECT `tags`.* FROM `tags` WHERE `tags`.`digest` = 'sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a' AND `tags`.`repository_id` = 1 ORDER BY `tags`.`id` ASC LIMIT 1000
[catalog] Removed the tag 'latest'.
Tag Load (0.2ms)  SELECT `tags`.* FROM `tags` WHERE `tags`.`id` = 1 LIMIT 1
(0.1ms)  BEGIN
ScanResult Load (0.2ms)  SELECT `scan_results`.* FROM `scan_results` WHERE `scan_results`.`tag_id` = 1
SQL (0.6ms)  DELETE FROM `tags` WHERE `tags`.`id` = 1
(4.6ms)  COMMIT
```
With the Tag Load query it selects tags with marked = 0 that were updated more than 30 days ago; then at the end it runs a DELETE FROM tags WHERE tags.id = 1. It seems no keep_latest check is done at all.
Hi @robgiovanardi:

> Let me better understand: communication would happen from portus_background to the registry, right? Not from the registry to portus_background?

Absolutely, accurately. Your portus webpage screenshot does not make me think that your portus_background can reach the registry (or even the database, for that matter). Do you have any reason to think otherwise (actually asking, maybe I'm forgetting something)? To me, it could just be the portus container being able to query the portus database; why else would we have such a configuration for portus, if not to reach the database:
```
- PORTUS_DB_HOST=db
- PORTUS_DB_DATABASE=portus_production
- PORTUS_DB_PASSWORD=${DATABASE_PASSWORD}
- PORTUS_DB_POOL=5
```
### About the background container's `docker run`
```shell
docker run -d --restart=always -v /opt/ssl:/certificates:ro -v /srv/portus/config/config_background.yml:/srv/Portus/config/config.yml -p3001:3000 --name portus_background --env-file=/srv/portus/config/env_background opensuse/portus:2.4
```
* You have created a directory with path `/srv/Portus/config/env_background`? I mean, the path just looks like a path inside, not outside, a container.
* Regardless of this path question, about `--env-file`, I would need to have a look at the content of `/srv/Portus/config/env_background`.
### About `portus_background` logs
Well, what we see in the logs are SQL queries made from `portus_background` to the database.
So, about the garbage collection process, and without diving into timestamp details:
* OK, it's deleting entries in the portus database, so they don't appear anymore in the Portus WebUI.
* But do you have anything else in the `portus_background` container logs that lets you think anything is actually removed inside the registry storage? Given what I read in portus' documentation, images are supposed to be deleted from registry storage as well, not just from the portus app database. And that's the whole point of garbage collection: making some space.
* All in all, these `portus_background` logs make us sure that communication between `portus_background` and the portus database is going on OK. They tell us nothing about communication between `portus_background` and the `registry` you have installed outside of any container, on your `images.culturebase.org` machine/VM. Here is what can give us more information: do the same test again, not changing any config, and let's see:
  * if in the logs of your `registry` we have anything that lets us think that anyone is trying to delete docker images in your `registry`, be it the `portus_background` container or any other (like `portus`);
  * if in the logs of `portus_background` we have anything that lets us think there is communication between `portus_background` and your `registry`.
* Another thing would be helpful to confirm my hypothesis: can we see your registry's `config.yml` file, especially the `storage` section? That will help me confirm that the `mariadb` database is indeed used by portus, but _not_ as storage for your docker images. For example, in the config I'm running on my servers, I have this `storage` section inside the `config.yml` for my private docker registry:
```Yaml
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
```
About the garbage collection process, and without diving into the timestamp details you have pointed out: I did not check it all, but it looks like indeed, yes, there is a bug here; the SQL queries do not spare the `keep_latest` tags.
I have not yet finished automating the reproduction of your business case, but I will eventually, by end of December worst case, probably before. Then I'll battle-test your remarks on `keep_latest` and give you feedback on my results.
Again, I will support your case until it's solved. If you don't want some specific details exposed in this conversation, you can reach me by email to quickly give me details: jean.baptiste.lasselleATgmail.com. I will sign any Non-Disclosure Agreement if necessary, without any financial counterparties or fees.
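As a concrete starting point for the registry-communication checks suggested above, here is a hypothetical probe (the hostname comes from this thread; the registry port, and curl being available in the image, are assumptions):

```shell
# Hypothetical reachability probe. A 200 or 401 from the /v2/ endpoint means
# the Docker registry API answered; anything else suggests a network problem.
probe_registry() {
  curl -sk -o /dev/null -w '%{http_code}' "$1/v2/"
}

# Run it from inside the background container, e.g. (port 5000 is an assumption):
#   docker exec portus_background sh -c "$(declare -f probe_registry); probe_registry https://images.culturebase.org:5000"
```

If the probe fails from inside portus_background but succeeds from the host, the problem is container networking rather than Portus itself.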
Hi @robgiovanardi, just to keep you informed: I am working on the matter, still haven't finished complete automation, and today I found something that sounds very promising in relation to garbage collection: https://github.com/SUSE/Portus/issues/2275#issuecomment-586648423
What those
```
# Sync config
- PORTUS_BACKGROUND_REGISTRY_ENABLED=true
- PORTUS_BACKGROUND_SYNC_ENABLED=true
- PORTUS_BACKGROUND_SYNC_STRATEGY=update
```
made me think... Maybe there exist things like PORTUS_BACKGROUND_GARBAGE_COLLECTION_ENABLED, or:
```
PORTUS_BACKGROUND_GC_ENABLED=true # GC for Garbage Collection
PORTUS_BACKGROUND_GC_OLDER_THAN=100
PORTUS_BACKGROUND_GC_KEEP_LATEST=5
```
Nevertheless, I wouldn't be surprised if PORTUS_BACKGROUND_REGISTRY_ENABLED=true is required to have garbage collection working fine.
@robgiovanardi I think I found your solution!!!! And oh my god, the idea I wrote 5 minutes ago, inspired by https://github.com/SUSE/Portus/issues/2275#issuecomment-577829167, paid off in full!!!!
Have a look out there: https://github.com/Ashtonian/server-setup/blob/bc9ac031a18f1c686da5a662d3cf969009a50c38/portus/docker-compose.yml
So yes! There exist PORTUS_DELETE_GARBAGE_COLLECTOR_XXXX variables to activate and configure garbage collection!!! :D :D :D Thank you so much @kylegoetz and Ashtonian.
And so, what you need to do is add the following env variables to both your background and your portus services in docker-compose.yml:
```
- PORTUS_DELETE_ENABLED=true
- PORTUS_DELETE_CONTRIBUTORS=false
- PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
- PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=30
- PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5
```
Honestly, I'll try as soon as possible to set that only for the background, just to check whether it works, because there's a potentially unnecessary copy-paste in this example. I have a lot of other work on Portus, so I can't do that this weekend; I am dying for you to try it and give me feedback even before I run it :smile:
I found it by searching GitHub for the string PORTUS_BACKGROUND_REGISTRY_ENABLED, and got only 4 results in code in the whole of github.com as of 15/02/2020!! Even funnier :laughing:: none of those 4 results are in the portus documentation!! I took the screenshot before there were more results on github.com! :laughing:
```yaml
version: "3.7"
services:
  portus:
    image: opensuse/portus:2.4.3
    # env_file:
    #   - ./portus.env
    environment:
      - PORTUS_MACHINE_FQDN_VALUE=portus.ashlab.dev
      - PORTUS_DB_HOST=db
      - PORTUS_DB_DATABASE=portus_production
      - PORTUS_DB_PASSWORD=${DATABASE_PASSWORD}
      - PORTUS_DB_POOL=5
      - PORTUS_SECRET_KEY_BASE=${SECRET_KEY_BASE}
      - PORTUS_KEY_PATH=/certificates/portus.ashlab.dev/privatekey.key
      - PORTUS_PASSWORD=${PORTUS_PASSWORD}
      - PORTUS_CHECK_SSL_USAGE_ENABLED=false
      - PORTUS_SIGNUP_ENABLED=false
      - RAILS_SERVE_STATIC_FILES=true
      - PORTUS_GRAVATAR_ENABLED=true
      - PORTUS_DELETE_ENABLED=true
      - PORTUS_DELETE_CONTRIBUTORS=false
      - PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
      - PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=30
      - PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5
      - PORTUS_ANONYMOUS_BROWSING_ENABLED=false
      - PORTUS_OAUTH_GITHUB_ENABLED=true
      - PORTUS_OAUTH_GITHUB_CLIENT_ID=${PORTUS_OAUTH_GITHUB_CLIENT_ID}
      - PORTUS_OAUTH_GITHUB_CLIENT_SECRET=${PORTUS_OAUTH_GITHUB_CLIENT_SECRET}
      - PORTUS_OAUTH_GITHUB_ORGANIZATION=karsto
      # # - PORTUS_OAUTH_GITHUB_TEAM=''
      # # - PORTUS_OAUTH_GITHUB_DOMAIN=''
      # - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
    # ports:
    #   - 3000:3000
    depends_on:
      - db
    links:
      - db
    volumes:
      - traefik_certs_raw:/certificates:ro
      # - secrets:/certificates:ro
    networks:
      - portus
      - public
    labels:
      - "traefik.enable=true"
      # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=portus.ashlab.dev"
      - "traefik.http.routers.portus.rule=Host(`portus.ashlab.dev`)"
      - "traefik.http.routers.portus.middlewares=https_redirect, sslHeaders"
      - "traefik.http.routers.portus.service=portus"
      - "traefik.http.routers.portus.tls=true"
      - "traefik.http.routers.portus.tls.certresolver=le"
      - "traefik.http.services.portus.loadbalancer.server.port=3000"
      - "traefik.http.services.portus.loadbalancer.server.scheme=http"
      - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
      - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
      # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
      # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
      # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
      # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
      # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"
    deploy:
      labels:
        - "traefik.enable=true"
        # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=portus.ashlab.dev"
        - "traefik.http.routers.portus.rule=Host(`portus.ashlab.dev`)"
        - "traefik.http.routers.portus.middlewares=https_redirect, sslHeaders"
        - "traefik.http.routers.portus.service=portus"
        - "traefik.http.routers.portus.tls=true"
        - "traefik.http.routers.portus.tls.certresolver=le"
        - "traefik.http.services.portus.loadbalancer.server.port=3000"
        - "traefik.http.services.portus.loadbalancer.server.scheme=http"
        # - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
        # - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
        # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
        # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
        # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
        # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
        # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"
  background:
    image: opensuse/portus:2.4.3
    depends_on:
      - portus
      - db
    environment:
      # Theoretically not needed, but cconfig's been buggy on this...
      - CCONFIG_PREFIX=PORTUS
      - PORTUS_MACHINE_FQDN_VALUE=portus.ashlab.dev
      - PORTUS_DB_HOST=db
      - PORTUS_DB_DATABASE=portus_production
      - PORTUS_DB_PASSWORD=${DATABASE_PASSWORD}
      - PORTUS_DB_POOL=5
      - PORTUS_SECRET_KEY_BASE=${SECRET_KEY_BASE}
      - PORTUS_KEY_PATH=/certificates/portus.ashlab.dev/privatekey.key
      - PORTUS_PASSWORD=${PORTUS_PASSWORD}
      # - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
      # - PORTUS_CHECK_SSL_USAGE_ENABLED=false
      - PORTUS_GRAVATAR_ENABLED=true
      - PORTUS_DELETE_ENABLED=true
      - PORTUS_DELETE_CONTRIBUTORS=false
      - PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
      - PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=30
      - PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5
      - PORTUS_OAUTH_GITHUB_ENABLED=true
      - PORTUS_OAUTH_GITHUB_CLIENT_ID=${PORTUS_OAUTH_GITHUB_CLIENT_ID}
      - PORTUS_OAUTH_GITHUB_CLIENT_SECRET=${PORTUS_OAUTH_GITHUB_CLIENT_SECRET}
      - PORTUS_OAUTH_GITHUB_ORGANIZATION=karsto
      # - PORTUS_OAUTH_GITHUB_TEAM=''
      # - PORTUS_OAUTH_GITHUB_DOMAIN=''
      - PORTUS_ANONYMOUS_BROWSING_ENABLED=false
      - PORTUS_BACKGROUND=true
      - PORTUS_BACKGROUND_REGISTRY_ENABLED=true
      - PORTUS_BACKGROUND_SYNC_ENABLED=true
      - PORTUS_BACKGROUND_SYNC_STRATEGY=update-delete
    links:
      - db
    # env_file:
    #   - ./portus.env
    volumes:
      - traefik_certs_raw:/certificates:ro
    networks:
      - portus
  db:
    image: library/mariadb:10.0.33
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
    # env_file:
    #   - ./portus.env
    environment:
      - MYSQL_DATABASE=portus_production
      - MYSQL_ROOT_PASSWORD=${DATABASE_PASSWORD}
    volumes:
      - mariadb:/var/lib/mysql
    networks:
      - portus
  # clair: TODO:
  #   image: quay.io/coreos/clair
  #   restart: unless-stopped
  #   depends_on:
  #     - postgres
  #   links:
  #     - postgres
  #     - portus
  #   ports:
  #     - "6060-6061:6060-6061"
  #   volumes:
  #     - /tmp:/tmp
  #     - ./clair/clair.yml:/clair.yml
  #   command: [-config, /clair.yml]
  registry:
    image: library/registry:2.7.1
    # env_file:
    #   - ./portus.env
    environment:
      # REGISTRY_HTTP_ADDR: registry.ashlab.dev
      # Authentication
      REGISTRY_AUTH_TOKEN_REALM: https://portus.ashlab.dev/v2/token
      REGISTRY_AUTH_TOKEN_SERVICE: registry.ashlab.dev
      REGISTRY_AUTH_TOKEN_ISSUER: portus.ashlab.dev
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certificates/portus.ashlab.dev/certificate.crt
      # Portus endpoint
      REGISTRY_NOTIFICATIONS_ENDPOINTS: >
        - name: portus
          url: https://portus.ashlab.dev/v2/webhooks/events
          timeout: 2000ms
          threshold: 5
          backoff: 1s
    volumes:
      - traefik_certs_raw:/certificates:ro
      - registry:/var/lib/registry
      - secrets:/secrets:ro
      - ./config.yml:/etc/docker/registry/config.yml:ro
    ports:
      # - 5000:5000
      - 5001:5001 # required to access debug service
    links:
      - portus:portus
    networks:
      - portus
      - public
    labels:
      - "traefik.enable=true"
      # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=registry.ashlab.dev"
      - "traefik.http.routers.registry.rule=Host(`registry.ashlab.dev`)"
      - "traefik.http.routers.registry.middlewares=https_redirect, sslHeaders"
      - "traefik.http.routers.registry.service=registry"
      - "traefik.http.routers.registry.tls=true"
      - "traefik.http.routers.registry.tls.certresolver=le"
      - "traefik.http.services.registry.loadbalancer.server.port=5000"
      - "traefik.http.services.registry.loadbalancer.server.scheme=http"
      # - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
      # - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
      # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
      # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
      # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
      # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
      # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
      # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"
    deploy:
      labels:
        - "traefik.enable=true"
        # - "traefik.http.middlewares.sslHeaders.headers.SSLHost=registry.ashlab.dev"
        - "traefik.http.routers.registry.rule=Host(`registry.ashlab.dev`)"
        - "traefik.http.routers.registry.middlewares=https_redirect, sslHeaders"
        - "traefik.http.routers.registry.service=registry"
        - "traefik.http.routers.registry.tls=true"
        - "traefik.http.routers.registry.tls.certresolver=le"
        - "traefik.http.services.registry.loadbalancer.server.port=5000"
        - "traefik.http.services.registry.loadbalancer.server.scheme=http"
        # - "traefik.http.middlewares.https_redirect.redirectscheme.scheme=https" # Standard move to default when traefik fixes behavior
        # - "traefik.http.middlewares.https_redirect.redirectscheme.permanent=true"
        # - "traefik.http.middlewares.sslHeaders.headers.framedeny=true"
        # - "traefik.http.middlewares.sslHeaders.headers.sslredirect=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSSeconds=315360000"
        # - "traefik.http.middlewares.sslHeaders.headers.browserXSSFilter=true"
        # - "traefik.http.middlewares.sslHeaders.headers.contentTypeNosniff=true"
        # - "traefik.http.middlewares.sslHeaders.headers.forceSTSHeader=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSIncludeSubdomains=true"
        # - "traefik.http.middlewares.sslHeaders.headers.STSPreload=true"
volumes:
  secrets:
    driver: local
    driver_opts:
      type: "none"
      o: "bind,rw"
      device: "/mnt/workspace/portus/secrets"
  traefik_certs_raw:
    driver: local
    driver_opts:
      type: "none"
      o: "bind,ro"
      device: "/mnt/workspace/traefik_certs_raw/"
  mariadb:
  registry:
networks:
  public:
    external: true
  portus:
```
@robgiovanardi so apply the environment variables I gave you, inserting them into your env_background file.
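A minimal way to apply that (a sketch; the real path comes from the docker run command earlier in the thread, and note that an --env-file uses plain KEY=value lines, without the leading `- ` of docker-compose syntax):

```shell
# Append the suggested variables to the file passed via --env-file.
# ENV_FILE defaults to a local file here; in this thread the real path
# is /srv/portus/config/env_background.
ENV_FILE=${ENV_FILE:-env_background}
cat >> "$ENV_FILE" <<'EOF'
PORTUS_DELETE_ENABLED=true
PORTUS_DELETE_GARBAGE_COLLECTOR_ENABLED=true
PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=30
PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5
EOF
# A container's environment is fixed at creation time, so a plain
# `docker restart` keeps the old env: remove and re-run the container instead.
```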
@Jean-Baptiste-Lasselle Hey - is there a reason why the latest garbage collection code (https://github.com/SUSE/Portus/commit/d8470717b228d56a780bb50983dea9b2d8577adb) isn't in the 2.4.3 release? As far as I can tell the release was made in May, but this code was merged in January? This is a pretty crucial patch. Can we get a 2.4.4 release?
Hi Matt @diranged, actually I am not an OpenSUSE engineer, and am not (yet?) part of the official portus support or dev team, so I can't make a release. Looks like you're going to have to make a personal release on your infrastructure, building portus from source (tagged 2.4.4-private-release?).
I think there's a docker image portus:2.5, though there is no 2.5 release yet.
Hi @Jean-Baptiste-Lasselle, thanks for your help.
About environment variables, you probably read the doc: http://port.us.org/docs/Configuring-Portus.html:

> In Portus we follow a naming convention for environment variables: first of all we have the PORTUS_ prefix, and then we add each key in uppercase. So, for example, the previous example can be tweaked by setting: PORTUS_FEATURE_ENABLED and PORTUS_FEATURE_VALUE

So using
```
- PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=5
```
is equal to using:
```yaml
delete:
  garbage_collector:
    keep_latest: 5
```
Anyway, I tested the env var but nothing changed: keep_latest was ignored and all tags were deleted.
@diranged You're right, that fundamental commit isn't in the 2.4.3 release. I'll try to build the latest code and come back with feedback.
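The naming convention quoted above can be applied mechanically; a small sketch (the helper name is mine, not a Portus tool): replace the dots between nesting levels with underscores, uppercase everything, and prepend the PORTUS_ prefix:

```shell
# Map a nested config key (dot-separated) to its Portus env variable name:
# dots become underscores, everything is uppercased, PORTUS_ is prepended.
to_env_name() {
  printf 'PORTUS_%s\n' "$(printf '%s' "$1" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
}

to_env_name "delete.garbage_collector.keep_latest"
# prints: PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST
```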
@robgiovanardi @diranged so thank you both of you I think you are right, the keep_latest
shallwork only when we get a release with that commit meaning until then, we have to make a build from source to benefit that feature.
If so :
Vítor Avelino committed on Jan 15, 2019
Jan 16, 2019
, Signed-off-by: Vítor Avelino vavelino@suse.com
, see https://github.com/SUSE/Portus/pull/20952.4.3
is from march 2019
: how can the commit not be in release 2.4.3
? @robgiovanardi so thank you about :
In Portus we follow a naming convention for environment variables: first of all we have the PORTUS_ prefix, and then we add each key in uppercase. So, for example, the previous example can be tweaked by setting: PORTUS_FEATURE_ENABLED and PORTUS_FEATURE_VALUE
So ok, we can infer which env. variables to use, from the config files descriptions :
2.4.x
, while I below prove it is definitely not (please tell me if I made any mistake you see in my analysis) :
- analysis date: 17/02/2020
- the pull request that brought `keep_latest` : https://github.com/SUSE/Portus/pull/2095
- the commit (the `keep_latest` option) : https://github.com/SUSE/Portus/commit/925bfc42a1b76bd294bb999cc93e789bdd06f384
- it was committed on a branch off `master`, and merged back to `master` on Jan 16, 2019, as shown in the portus repo github graph
- there is a `2.3.7` release, yet no `2.3.7` branch exists: there are branches named `v2.5`, `v2.4`, `v2.3`, and actually there is a branch for every major release, at least since version 2.0
- branch `v2.4` was created from master on October 2, 2018, long before @viovanov's pull request was merged on Jan 16, 2019 (you can check this using the https://github.com/SUSE/Portus/network graph, going back to October 2, 2018)
- and never since its creation was the `master` branch merged back into branch `v2.4` (as I, and you, expected)
So how could tag `2.4.3` contain the fix? Let's check:
```
jbl@poste-devops-jbl-16gbram:~$ mkdir crusty
jbl@poste-devops-jbl-16gbram:~$ cd crusty
jbl@poste-devops-jbl-16gbram:~/crusty$ date
Mon Feb 17 18:06:53 CET 2020
jbl@poste-devops-jbl-16gbram:~/crusty$ git clone "https://github.com/SUSE/Portus" .
Cloning into '.'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 38886 (delta 0), reused 0 (delta 0), pack-reused 38883
Receiving objects: 100% (38886/38886), 36.51 MiB | 22.24 MiB/s, done.
Resolving deltas: 100% (18381/18381), done.
jbl@poste-devops-jbl-16gbram:~/crusty$ git branch --contains tags/2.4.3
jbl@poste-devops-jbl-16gbram:~/crusty$ git tag -ln
1.0.0 Merge pull request #159 from jordimassaguerpla/fix_css_landing_page
1.0.1 teams: fixed regression where namespaces could not be created from team page
2.0.0 Version 2.0.0
2.0.1 First patch level release since 2.0.0
2.0.2 Fixed an issue regarding distribution 2.3 support
2.0.3 More fixes on the docker 1.10 & distribution 2.3 versions
2.0.4 Small fixes
2.0.5 Small fixes
2.1.0 2.1.0 release. Read the changelog in the CHANGELOG.md file
2.1.1 Important fixes and some small improvements
2.2.0 Final release of 2.2.0
2.2.0-rc1 First release candidate of the 2.2.0 release
2.2.0rc2 Added somes fixes to activities
2.3.0 2.3.0
2.3.1 2.3.1 security update
2.3.2 Security fixes
2.3.3 Bug fixes since 2.3.2
2.3.4 Added some fixes
2.3.5 Update on sprocket
2.3.6 Release with a couple of important fixes
2.3.7 Minor upgrades on vulnerable gems
2.4.0 Release 2.4.0
2.4.1 Bug fixes and gem upgrades
2.4.2 Minor fixes and support for registries 2.7.x
2.4.3 Minor patch-level release
jbl@poste-devops-jbl-16gbram:~/crusty$ git branch -a --contains tags/2.4.3
remotes/origin/v2.4
jbl@poste-devops-jbl-16gbram:~/crusty$
```
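The same containment question can also be asked per commit rather than per tag; a small sketch (the function name `commit_in_tag` is mine, the commit hash below is @viovanov's keep_latest commit from this thread):

```shell
# Answer "does release tag X contain commit Y?" inside any git clone.
# Usage: commit_in_tag <commit> <tag>
commit_in_tag() {
  local commit="$1" tag="$2"
  # merge-base --is-ancestor exits 0 iff <commit> is reachable from <tag>
  if git merge-base --is-ancestor "$commit" "$tag"; then
    echo "$tag contains $commit"
  else
    echo "$tag does NOT contain $commit"
  fi
}

# Against the Portus clone above, one would run:
# commit_in_tag 925bfc42a1b76bd294bb999cc93e789bdd06f384 2.4.3
# or list every tag that does contain the fix:
# git tag --contains 925bfc42a1b76bd294bb999cc93e789bdd06f384
```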
So:
- `2.4.3`, which is the latest available release of portus, can't ever contain @viovanov's garbage collector fix, if the suse team sticks to its current git workflow on the project.
- branch `v2.5` was created after @viovanov's contrib, so any future `v2.5.x` release will include it.
- but there is no `v2.5.x` release yet, so no release includes that fix yet.
So I think for now we have a proof: we need to build from source to get the `keep_latest` Garbage Collector feature in portus.
And I also see here an improvement opportunity with the portus CI/CD, as of Mon Feb 17 18:35:58 CET 2020 :
- no `2.5.x` release is available yet
- how about a release check that the `keep_latest` commit, https://github.com/SUSE/Portus/commit/925bfc42a1b76bd294bb999cc93e789bdd06f384 , is on the release branch? So that you are sure releases and related docs are in sync: the docs read as if the feature is in `2.4.x`, while I just proved it is definitely not.
Also can't help writing: what about adopting git-flow in the Portus project? (If it really is important to pull and merge to `master`, whatever the reason is, there are hotfix branches in git-flow ...)
hi @robgiovanardi Did you try the `portus:2.5` docker image, instead of building portus from source, to see if you get the `keep_latest` feature?
I mean :
- there is a `portus:2.5` docker image on docker hub
- yet there is no `2.5.*` release in https://github.com/SUSE/Portus/releases
- so did the suse team build that `portus:2.5` docker image on docker hub to push the feature in early access (though it's now a quite old "early-access")?
Hi @Jean-Baptiste-Lasselle I still have no time to do my tests, but thanks for pointing me to that docker release. This will speed up my tests
Very interesting test automation case though :
- dronie-*
- `PORTUS_DELETE_ENABLED=false` : the test spans 5 days; during the first 3 days any deletion is forbidden, and we push oci container images every day. At day 4, at midnight plus one minute, we set `PORTUS_DELETE_ENABLED=true` and restart the portus and background services. So garbage collection should start at day 6, midnight plus one minute, and should always keep the last 3 tags of any repository (OCI container image):
```
PORTUS_DELETE_GARBAGE_COLLECTOR_OLDER_THAN=2
PORTUS_DELETE_GARBAGE_COLLECTOR_KEEP_LATEST=3
PORTUS_DELETE_GARBAGE_COLLECTOR_TAG=''
```
test-setup :
- day 0 :
  - registry + portus provisioning.
  - also, create 2 users: the first a super admin, and another, beeio. Create team beebee, add beeio to beebee, and create the hive namespace for the beebee team. Finally create a token named buzz for the beeio user. All this using the Portus API.
- push 3 images every day, for 5 days, from day 0, using the beeio username and its buzz token
day 0 :
```shell
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
### NODE ON ALPINE
docker pull node:8-alpine
docker tag node:8-alpine $OCI_SERVICE/hive/node:8-alpine
# docker logged-in
docker push $OCI_SERVICE/hive/node:8-alpine
### HELM ON ALPINE
docker pull alpine/helm:3.1.1
docker tag alpine/helm:3.1.1 $OCI_SERVICE/hive/helm:3.1.1-alpine
docker push $OCI_SERVICE/hive/helm:3.1.1-alpine
### ATLANTIS TERRAGRUNT
docker pull exositebot/atlantis-terragrunt:version-1.4.1
docker tag exositebot/atlantis-terragrunt:version-1.4.1 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.1
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.1
```
* day 1 :
```shell
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in
### NODE ON ALPINE
docker pull node:9-alpine
docker tag node:9-alpine $OCI_SERVICE/hive/node:9-alpine
docker push $OCI_SERVICE/hive/node:9-alpine
### HELM ON ALPINE
docker pull alpine/helm:3.1.0
docker tag alpine/helm:3.1.0 $OCI_SERVICE/hive/helm:3.1.0-alpine
docker push $OCI_SERVICE/hive/helm:3.1.0-alpine
### ATLANTIS TERRAGRUNT
docker pull exositebot/atlantis-terragrunt:version-1.4.0
docker tag exositebot/atlantis-terragrunt:version-1.4.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.0
```
* day 2 :
```shell
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in
### NODE ON ALPINE
docker pull node:10-alpine
docker tag node:10-alpine $OCI_SERVICE/hive/node:10-alpine
docker push $OCI_SERVICE/hive/node:10-alpine
### HELM ON ALPINE
docker pull alpine/helm:3.0.3
docker tag alpine/helm:3.0.3 $OCI_SERVICE/hive/helm:3.0.3-alpine
docker push $OCI_SERVICE/hive/helm:3.0.3-alpine
### ATLANTIS TERRAGRUNT
docker pull exositebot/atlantis-terragrunt:version-1.3.0
docker tag exositebot/atlantis-terragrunt:version-1.3.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.3.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.3.0
```
* day 3 :
```shell
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in
### NODE ON ALPINE
docker pull node:11-alpine
docker tag node:11-alpine $OCI_SERVICE/hive/node:11-alpine
docker push $OCI_SERVICE/hive/node:11-alpine
### HELM ON ALPINE
docker pull alpine/helm:2.15.2
docker tag alpine/helm:2.15.2 $OCI_SERVICE/hive/helm:2.15.2-alpine
docker push $OCI_SERVICE/hive/helm:2.15.2-alpine
### ATLANTIS TERRAGRUNT
docker pull exositebot/atlantis-terragrunt:version-1.1.0
docker tag exositebot/atlantis-terragrunt:version-1.1.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.1.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.1.0
```
* day 4 :
```shell
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
# docker logged-in
### NODE ON ALPINE
docker pull node:12-alpine
docker tag node:12-alpine $OCI_SERVICE/hive/node:12-alpine
docker push $OCI_SERVICE/hive/node:12-alpine
### HELM ON ALPINE
docker pull alpine/helm:2.15.1
docker tag alpine/helm:2.15.1 $OCI_SERVICE/hive/helm:2.15.1-alpine
docker push $OCI_SERVICE/hive/helm:2.15.1-alpine
### ATLANTIS TERRAGRUNT
docker pull exositebot/atlantis-terragrunt:version-1.2.0
docker tag exositebot/atlantis-terragrunt:version-1.2.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.2.0
docker push $OCI_SERVICE/hive/atlantis-terragrunt:version-1.2.0
```
```shell
export OCI_SERVICE=docker.culturebase.org
# you access portus web ui through https://$PORTUS_SERVICE/
export PORTUS_SERVICE=portus.culturebase.org
export EXISTING_NAMESPACE
### NODE ON ALPINE
docker pull node:8-alpine
docker pull node:9-alpine
docker pull node:10-alpine
docker pull node:11-alpine
docker pull node:12-alpine
docker tag node:8-alpine $OCI_SERVICE/hive/node:8-alpine
docker tag node:9-alpine $OCI_SERVICE/hive/node:9-alpine
docker tag node:10-alpine $OCI_SERVICE/hive/node:10-alpine
docker tag node:11-alpine $OCI_SERVICE/hive/node:11-alpine
docker tag node:12-alpine $OCI_SERVICE/hive/node:12-alpine
### HELM ON ALPINE
docker pull alpine/helm:3.1.1
docker pull alpine/helm:3.1.0
docker pull alpine/helm:3.0.3
docker pull alpine/helm:2.15.2
docker pull alpine/helm:2.15.1
docker tag alpine/helm:3.1.1 $OCI_SERVICE/hive/helm:3.1.1-alpine
docker tag alpine/helm:3.1.0 $OCI_SERVICE/hive/helm:3.1.0-alpine
docker tag alpine/helm:3.0.3 $OCI_SERVICE/hive/helm:3.0.3-alpine
docker tag alpine/helm:2.15.2 $OCI_SERVICE/hive/helm:2.15.2-alpine
docker tag alpine/helm:2.15.1 $OCI_SERVICE/hive/helm:2.15.1-alpine
### ATLANTIS TERRAGRUNT
docker pull exositebot/atlantis-terragrunt:version-1.4.1
docker pull exositebot/atlantis-terragrunt:version-1.4.0
docker pull exositebot/atlantis-terragrunt:version-1.3.0
docker pull exositebot/atlantis-terragrunt:version-1.1.0
docker pull exositebot/atlantis-terragrunt:version-1.2.0
docker tag exositebot/atlantis-terragrunt:version-1.4.1 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.1
docker tag exositebot/atlantis-terragrunt:version-1.4.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.4.0
docker tag exositebot/atlantis-terragrunt:version-1.3.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.3.0
docker tag exositebot/atlantis-terragrunt:version-1.1.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.1.0
docker tag exositebot/atlantis-terragrunt:version-1.2.0 $OCI_SERVICE/hive/atlantis-terragrunt:version-1.2.0
```
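The per-day scripts above repeat the same pull/tag/push triple; a small helper (a sketch, `mirror_tag` and its arguments are my own naming, not from the test plan) keeps each day down to one line per image:

```shell
# Pull a public image, retag it into the test registry's hive namespace,
# and push it. Assumes `docker login $OCI_SERVICE` was already done.
# Usage: mirror_tag <source-image:tag> <target-repo:tag>
OCI_SERVICE="${OCI_SERVICE:-docker.culturebase.org}"

mirror_tag() {
  local src="$1" dst="$OCI_SERVICE/hive/$2"
  docker pull "$src"
  docker tag "$src" "$dst"
  docker push "$dst"
}

# Example: day 1 becomes three calls
# mirror_tag node:9-alpine node:9-alpine
# mirror_tag alpine/helm:3.1.0 helm:3.1.0-alpine
# mirror_tag exositebot/atlantis-terragrunt:version-1.4.0 atlantis-terragrunt:version-1.4.0
```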
- day 6, 9:00am :
  - generate an `oci-images.json` file using the registry's Catalog API endpoint (not the Portus API)
  - list the same images through Portus, and save that as json into `portus.oci-images.json`
  - prepare a file `expected-oci-inventory.day6.json`, containing the list of all images expected to be found in the registry on day 6, that is to say, all images but those pushed on day 0.
  - compare `expected-oci-inventory.day6.json` and `oci-images.json`, generate a nice diff report
  - compare `expected-oci-inventory.day6.json` and `portus.oci-images.json`, generate a nice diff report
- on day 0, day 1, day 2, day 3, day 4, and day 5, we run a similar test, using the same json diff technique
So I have files with huge lists of existing image tags :
# preparation of huge test dataset for
# tests like loead tests on portus
export namespace=library
export repo_name=ubuntu
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=notary
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=centos
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=debian
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=archlinux
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=registry
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=node
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=busybox
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=dkron
export repo_name=dkron
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=httpd
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags | awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}'>> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=tomcat
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=golang
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=python
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=rails
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
export namespace=library
export repo_name=ruby
echo '{' > all.${namespace}.${repo_name}.tags.json
curl -L -s https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags| awk -F '[' '{print $2}'| awk -F ']' '{print $1}'| awk -F '},' '{for (i=0;i<=NF;i++) { if ($i == $NF) {print $i;} else {print $i "}, "} }}' >> all.${namespace}.${repo_name}.tags.json
echo '}' >> all.${namespace}.${repo_name}.tags.json
ls -allh *.tags.json
# And we can have 1,2,5,10,20, 30, 50, 100, 150, 200, 300, etc...500 simultaneous docker clients constantly pulling and pushing any randomly picked tag among this list
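The dataset preparation above is the same few lines repeated per repository; a loop over namespace/repo pairs avoids the copy-paste (a sketch; `fetch_tags` is my name for it, and it saves the raw v1 response, leaving the awk post-processing from the snippets above to be re-applied per file):

```shell
# Fetch the docker hub v1 tag list for one repository into
# all.<namespace>.<repo>.tags.json
fetch_tags() {
  local namespace="$1" repo_name="$2"
  curl -L -s "https://registry.hub.docker.com/v1/repositories/$namespace/$repo_name/tags" \
    > "all.${namespace}.${repo_name}.tags.json"
}

# Example (network access required):
# for spec in library/ubuntu library/centos library/debian library/node dkron/dkron; do
#   fetch_tags "${spec%/*}" "${spec#*/}"
# done
# ls -alh ./*.tags.json
```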
Hi @Jean-Baptiste-Lasselle I tried to use portus:2.5 but got:
* #2197
* #2200
So I can't proceed with that. Anyway thanks for your support
Hi @robgiovanardi It's just a pleasure to support users who feed back so fast and share. Plus I love the general issue with the Portus project; it's very significant, I believe, of the revolution currently happening all over the world with the cloud.
So, ok, I duly note your results about `2.5`, and I definitely will feed back my further work on this issue: yes, we will get the `keep_latest` feature, should it take me taking over the whole portus project.
So next: I'll reproduce those two issues #2197 and #2200
I am not at all surprised by #2197, because while working on this issue I found out that there is (February 2020) no official rails image based on a ruby above `2.4`. I bet, without even reading it, that's exactly the problem in #2197: the rails framework version should be upgraded (surely something the `opensuse` developers do on a regular basis), because of whatever dependency requiring some minimum rails version.
One of the cloud's critical challenges: mastering the dependency hell.
Note there's here a distribution management problem of the `OpenSUSE` project `Portus`, that we are currently, along with our friend @diranged, clearly identifying.
The dream problem for a devops like me. :)
@robgiovanardi just to say I now read both issues.
All in all I'd say there's a 99% chance I provide you with a fix for your setup before the end of next week, and I'll do that with a repo here: https://github.com/pokusio
@robgiovanardi Hi, ok :
Portus is a `rails` web application. So, while waiting for a release with the `keep_latest` feature, a script, outside `portus`, does this :
- reach the `portus` infrastructure (machine is out there somewhere, we don't care, just need to be able to reach it)
- list images `OLDER_THAN=XX` days, filter the list to keep only images OLDER_THAN=XX days, and then keep only the latest N tags in the list. Let's call that list `.eligible.keep_latest.json`
- run a second `portus`, or a portus that has a much shorter (or longer?) `OLDER_THAN`. The first `portus` is reachable at `portus.mycompany.org/registry.mycompany.org`, this other `portus` at `freeze.portus.mycompany.org/freeze.registry.mycompany.org`
- make the images listed in `.eligible.keep_latest.json` available on `freeze.registry.mycompany.org`, and unrecommended/unguaranteed on `registry.mycompany.org`
- users use `freeze.registry.mycompany.org` to `docker pull` the said images. Note they can still keep on docker pushing new images to `registry.mycompany.org` during the garbage collection/curation period.
- set the `PORTUS_DELETE_GARBAGE_COLLECTOR_*` variables to turn on the garbage collector, and docker-compose restart. The `kept_latest` images stay available on `freeze.registry.mycompany.org`
I'd think, for a dream tool for that, of a K8S batch service manager / executor, with audit capabilities, inside a little K8S cluster, say 3 VMs on 2 different physical machines, plus dkron.io .
Hi @Jean-Baptiste-Lasselle There's a DB workaround: I noticed that tags with `marked = 1` are not taken into account by the garbage collector process, so we can run the following queries against the database:
1. given repository_id = 7, reset all its tags to marked = 0:
```sql
update tags set marked = 0 where repository_id = 7;
```
2. now mark the latest 15 with marked = 1:
```sql
update tags set marked = 1 where repository_id = 7 order by created_at desc limit 15;
```
3. restart the portus_background process to force a garbage collection: our latest 15 tags are now preserved, because they have the flag `marked = 1` and the garbage collection process will ignore them.
Now I need to write a query/stored procedure which updates the latest N rows for every repository_id; then that query can be scheduled by a cronjob
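A sketch of that per-repository query, assuming MariaDB >= 10.2 (window functions are needed; N = 15 is hardcoded, table and column names are taken from the workaround above):

```sql
-- Keep the latest 15 tags of every repository out of the collector's
-- reach (marked = 1), and expose everything older (marked = 0).
UPDATE tags t
JOIN (
  SELECT id,
         ROW_NUMBER() OVER (PARTITION BY repository_id
                            ORDER BY created_at DESC) AS rn
  FROM tags
) ranked ON ranked.id = t.id
SET t.marked = IF(ranked.rn <= 15, 1, 0);
```

On an older MariaDB without window functions, the same effect needs the two-statement loop per repository_id, driven from a cronjob as described above.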
f*** (please allow me) I really mean like a thank you there :
I really think it is very important that there is a serious alternative to harbor, and should it take weeks to tackle the entire software factory of Open SUSE / Portus, I will. Actually, I already know enough to probably re-develop the whole thing, but :
So thank you, before any test result, Rob
(The Open SUSE servers download.opensuse.org going down yesterday at 22:00, more or less... harsh)
And btw, great info on the db, it will save me some of my searching time, and :
- it means we can script around `keep_latest` and have a transition solution, before I get more serious.
- `keep_latest` is kind of like a topological frontier, in the math. sense of general topology. I can draw it much faster.
hashtag autobots all on portus ^^
Just to write it down, like that: I have a feeling openSUSE is conducting a huge migration on containers, so that it works with `podman`, or any other `#nobigfatdaemon` oci runtime ecosystem. I think that's what the remaining maintainers like @mssola are concerned about: migrating the portus distributed containers in that broader context, like what they have in mind constantly is `Kubernetes`.
Also to share with the community, the tools list I'm gonna test to manage batch jobs :
- for `big data`, with `spark`, see https://github.com/TommyLike/spark-operator-volcano-demo
- `styx` ("The service takes responsibility for triggering and possibly also re-triggering invocations until a successful exit status has been emitted or some other limit has been reached."), to see if it improves autoscaling of `styx`.

Hi @robgiovanardi I have news here :
- with the `portus:2.5` docker image, the `background` service is `KO`, and here it is in the logs :
```
background_1 | /usr/bin/bundle:23:in `load': cannot load such file -- /usr/lib64/ruby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle (LoadError)
background_1 | from /usr/bin/bundle:23:in `<main>'
```
- and `background` is the service running `sync` and the `garbage collector`, plus `content trust` support (search `notary` in the issue list ... )
And that, I note to bear it in mind :
- I hit the same kind of `ruby` error when I tried building from source, in a `debian` container. It revolves around `GEM_PATH` and `GEM_HOME`.
- I used, for both the `background` and `portus` services, the same image, built exactly like in release 0.0.1 in my repo; I double checked, triple checked my docker-compose, after every test.
- the `GEM_PATH` for `portus` is `/srv/Portus/vendor/bundle/ruby/2.5.3`
- the `GEM_PATH` for `background` is `/srv/Portus/vendor/bundle/ruby/2.6.0`
- in the `/init` script, when it is the `portus` service, we have log lines we don't have for the execution of the exact same `/init` script, but this time for the `background` :
```
portuscontainer | + export RACK_ENV=production
portuscontainer | + RACK_ENV=production
portuscontainer | + export RAILS_ENV=production
portuscontainer | + RAILS_ENV=production
portuscontainer | + export CCONFIG_PREFIX=PORTUS
portuscontainer | + CCONFIG_PREFIX=PORTUS
portuscontainer | + '[' -z '' ']'
portuscontainer | + export GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer | + GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
```
So why are `export RACK_ENV=production`, `export RAILS_ENV=production`, `export CCONFIG_PREFIX=PORTUS`, `export GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3` not defined for background? (Ok, the puma and LDAP related env vars are not relevant for `background`, though there might be background processes involved where LDAP data is imported into the portus db.)
The `/init` script, damn... Yes, that's the thing about that script: it runs a different thing depending on whether you want it to run as background or as portus, cf. https://github.com/pokusio/opensuzie-oci-library/blob/8fe1d9fb87fda6060627342f1b46959a68c85a2e/library/portus/init#L103 and the `file_env` function they copy pasted from the docker library's `postgres` definition, which messes up, ending up setting completely inconsistent configuration for the two cousin services `background` and `portus`. I have to debug that, and make the env stable/reliable/consistent (just apply the same damn values for config params shared between the two services):
```shell
if [ -z "$PORTUS_GEM_GLOBAL" ]; then
  export GEM_PATH="/srv/Portus/vendor/bundle/ruby/2.6.0"
fi
```
So I'll set `PORTUS_GEM_GLOBAL` to the correct value, so that it forces a fixed `ruby` version for both `background` and `portus`, to `2.5.3` (cause it works for portus, so set that too for background):
```shell
if [ -z "$PORTUS_GEM_GLOBAL" ]; then
  export GEM_PATH="/srv/Portus/vendor/bundle/ruby/2.5.3"
fi
```
Also: why `unset PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE` for the portus container, when it's `RAILS_ENV=production` ...? (secrets should ALWAYS be in files in production, never in env vars). I have to check the secret management process; there is only one point in this `/init` script where there is an `unset` command, in the `file_env` function that was copy pasted from https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh . And by the way, I have seen people having issues about LDAP integration; well, they might wanna know about that... I guess I have my tomorrow's TODO list.
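For reference, the `file_env` pattern under discussion looks roughly like this (a re-typed bash sketch of the well-known docker-library entrypoint helper, not the exact Portus `/init` code): it lets a secret arrive either as `VAR` or as `VAR_FILE` pointing at a file, and unsets the `_FILE` variant afterwards.

```shell
# Resolve VAR from either $VAR or the file named by $VAR_FILE.
# Refuses the ambiguous case where both are set.
file_env() {
  local var="$1"
  local fileVar="${var}_FILE"
  local def="${2:-}"
  if [ -n "${!var:-}" ] && [ -n "${!fileVar:-}" ]; then
    echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
    exit 1
  fi
  local val="$def"
  if [ -n "${!var:-}" ]; then
    val="${!var}"
  elif [ -n "${!fileVar:-}" ]; then
    val="$(< "${!fileVar}")"     # read the secret from the file
  fi
  export "$var"="$val"
  unset "$fileVar"               # the unset in question
}
```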
Funny thing
Check out this commit (and its commit message) of @mssola on https://github.com/opensuse/docker-containers , seen today (1st of March 2020) :
In order to keep Virtualization:containers:Portus cleaner, we have removed some packages from there and we are fetching them now from devel:languages:ruby. I've changed the code so the GPG key for this repo is handled as well.
Moreover, this commit also contains some needed changes on the init file, as for the migration to ruby 2.6.
Signed-off-by: Miquel Sabaté Solà msabate@suse.com
:) Note also the commit dates back to 10 months ago :
- on https://github.com/opensuse/docker-containers there is one branch per `portus` minor version: they have branches named `portus-2.1`, `portus-2.2`, `portus-2.3`, etc., up to `portus-2.5`, and it's the last commit on each of these branches that defines the Docker image for all portus updates in the same minor release. There is complexity with this choice, because it means the Dockerfiles on a given branch must never break any update in a minor release (no breaking change, or big new feature) of `portus`. A little less than that, because it is a recurrent suite, but it does not matter for our issue.
- in those Dockerfiles they don't `apt-get install -y portus`, they `zypper install portus`.
- to get the `portus` package they use two repositories:
  - the repo distributing `portus`, namely `obs://Virtualization:containers:Portus/openSUSE_Leap_15.1`
  - and, after a `zypper refresh`, `obs://devel:languages:ruby/openSUSE_Leap_15.1`
- btw I think in `zypper -ar $SUSE_LX_PKG_REPO_URI`, `ar` stands for add repository
So next, comparing the env of the `portus` and `background` services (and the `/init` script) in my `portus:2.5` repaired image:
repaired imagejibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f portus|more Attaching to portuscontainer portuscontainer | + mkdir -p /secrets/certificates portuscontainer | + mkdir -p /secrets/rails portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | PORTUS PKI-INIT portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo 'PORTUS PKI-INIT' portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + set -x portuscontainer | + mkdir -p /certificates portuscontainer | + cp /secrets/certificates/portus.crt /certificates portuscontainer | + cp /secrets/certificates/portus-oci-registry.crt /certificates portuscontainer | + cp /secrets/certificates/portus-background.crt /certificates portuscontainer | + update-ca-certificates portuscontainer | + set -e portuscontainer | + secrets=(PORTUS_DB_PASSWORD PORTUS_PASSWORD PORTUS_SECRET_KEY_BAS E PORTUS_EMAIL_SMTP_PASSWORD PORTUS_LDAP_AUTHENTICATION_PASSWORD) portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z portus ]] portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z 12341234 ]] portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z '' ]] portuscontainer | + file_env PORTUS_SECRET_KEY_BASE portuscontainer | + local var=PORTUS_SECRET_KEY_BASE portuscontainer | + local fileVar=PORTUS_SECRET_KEY_BASE_FILE portuscontainer | + local def= portuscontainer | + '[' '' ']' portuscontainer | + local val= portuscontainer | + '[' '' ']' 
portuscontainer | + '[' /secrets/rails/portus.secret.key.base ']' portuscontainer | + val=4e779b234f79de439e962b1f07991de41fe4baf611625545b5513405b7036 c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a portuscontainer | + export PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe 4baf611625545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a portuscontainer | + PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe4baf611 625545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a portuscontainer | + unset PORTUS_SECRET_KEY_BASE_FILE portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z '' ]] portuscontainer | + file_env PORTUS_EMAIL_SMTP_PASSWORD portuscontainer | + local var=PORTUS_EMAIL_SMTP_PASSWORD portuscontainer | + local fileVar=PORTUS_EMAIL_SMTP_PASSWORD_FILE portuscontainer | + local def= portuscontainer | + '[' '' ']' portuscontainer | + local val= portuscontainer | + '[' '' ']' portuscontainer | + '[' '' ']' portuscontainer | + export PORTUS_EMAIL_SMTP_PASSWORD= portuscontainer | + PORTUS_EMAIL_SMTP_PASSWORD= portuscontainer | + unset PORTUS_EMAIL_SMTP_PASSWORD_FILE portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z '' ]] portuscontainer | + file_env PORTUS_LDAP_AUTHENTICATION_PASSWORD portuscontainer | + local var=PORTUS_LDAP_AUTHENTICATION_PASSWORD portuscontainer | + local fileVar=PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE portuscontainer | + local def= portuscontainer | + '[' '' ']' portuscontainer | + local val= portuscontainer | + '[' '' ']' portuscontainer | + '[' '' ']' portuscontainer | + export PORTUS_LDAP_AUTHENTICATION_PASSWORD= portuscontainer | + PORTUS_LDAP_AUTHENTICATION_PASSWORD= portuscontainer | + unset PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE portuscontainer | + update-ca-certificates portuscontainer | + export PORTUS_PUMA_HOST=0.0.0.0:3000 portuscontainer | + PORTUS_PUMA_HOST=0.0.0.0:3000 portuscontainer | + 
export RACK_ENV=production portuscontainer | + RACK_ENV=production portuscontainer | + export RAILS_ENV=production portuscontainer | + RAILS_ENV=production portuscontainer | + export CCONFIG_PREFIX=PORTUS portuscontainer | + CCONFIG_PREFIX=PORTUS portuscontainer | + '[' -z '' ']' portuscontainer | + export GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3 portuscontainer | + GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3 portuscontainer | + '[' debug == debug ']' portuscontainer | + printenv portuscontainer | PORTUS_DB_PASSWORD=portus portuscontainer | PORTUS_DB_HOST=db portuscontainer | HOSTNAME=b586460b2733 portuscontainer | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060 portuscontainer | RAILS_SERVE_STATIC_ASSETS='true' portuscontainer | PORTUS_DB_POOL=5 portuscontainer | CCONFIG_PREFIX=PORTUS portuscontainer | PORTUS_KEY_PATH=/secrets/certificates/portus.key portuscontainer | PORTUS_LDAP_AUTHENTICATION_PASSWORD= portuscontainer | PWD=/ portuscontainer | PORTUS_PUMA_HOST=0.0.0.0:3000 portuscontainer | HOME=/root portuscontainer | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io portuscontainer | RAILS_SERVE_STATIC_FILES='true' portuscontainer | PORTUS_EMAIL_SMTP_PASSWORD= portuscontainer | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3 portuscontainer | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061 portuscontainer | RAILS_ENV=production portuscontainer | PORTUS_PASSWORD=12341234 portuscontainer | PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe4baf61162 5545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a portuscontainer | RACK_ENV=production portuscontainer | PORTUS_SERVICE_FQDN_VALUE=portus.pegasusio.io portuscontainer | PORTUS_LOG_LEVEL=debug portuscontainer | PORTUS_PUMA_TLS_CERT=/secrets/certificates/portus.crt portuscontainer | PORTUS_SECURITY_CLAIR_TIMEOUT=900s portuscontainer | SHLVL=2 portuscontainer | PORTUS_PUMA_TLS_KEY=/secrets/certificates/portus.key portuscontainer | PORTUS_DB_DATABASE=portus_production 
portuscontainer | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin portuscontainer | _=/usr/bin/printenv portuscontainer | + cd /srv/Portus portuscontainer | + '[' '!' -z '' ']' portuscontainer | + '[' -z '' ']' portuscontainer | + setup_database portuscontainer | + wait_for_database 1 portuscontainer | + should_setup=1 portuscontainer | + TIMEOUT=90 portuscontainer | + COUNT=0 portuscontainer | + RETRY=1 portuscontainer | + '[' 1 -ne 0 ']' portuscontainer | + case $(portusctl exec rails r /srv/Portus/bin/check_db.rb | grep DB) in portuscontainer | ++ portusctl exec rails r /srv/Portus/bin/check_db.rb portuscontainer | ++ grep DB portuscontainer | [WARN] couldn't connect to database. Skipping PublicActivity::Activ ity#parameters's serialization portuscontainer | Waiting for mariadb to be ready in 5 seconds portuscontainer | + '[' 0 -ge 90 ']' portuscontainer | + echo 'Waiting for mariadb to be ready in 5 seconds' portuscontainer | + sleep 5 portuscontainer | + COUNT=5 portuscontainer | + '[' 1 -ne 0 ']' portuscontainer | + case $(portusctl exec rails r /srv/Portus/bin/check_db.rb | grep DB) in portuscontainer | ++ grep DB portuscontainer | ++ portusctl exec rails r /srv/Portus/bin/check_db.rb portuscontainer | [WARN] table PublicActivity::ORM::ActiveRecord::Activity doesn't ex ist. Skipping PublicActivity::Activity#parameters's serialization portuscontainer | + '[' 1 -eq 1 ']' portuscontainer | + echo 'Initializing database' portuscontainer | + portusctl exec rake db:setup portuscontainer | Initializing database portuscontainer | [WARN] table PublicActivity::ORM::ActiveRecord::Activity doesn't ex ist. 
Skipping PublicActivity::Activity#parameters's serialization portuscontainer | Database 'portus_production' already exists portuscontainer | [schema] Selected the schema for mysql portuscontainer | (0.3ms) SET NAMES utf8, @@SESSION.sql_mode = CONCAT(CONCAT(@@sql_mode, ',STRICT_ALL_TABLES'), ',NO_AUTO_VALUE_ON_ZERO'), @@SESSION.sql_auto_is_null = 0, @@S ESSION.wait_timeout = 2147483 portuscontainer | [Mailer config] Host: portus.pegasusio.io portuscontainer | [Mailer config] Protocol: https:// portuscontainer | (0.3ms) SET NAMES utf8, @@SESSION.sql_mode = CONCAT(CONCAT(@@sql_mode, ',STRICT_ALL_TABLES'), ',NO_AUTO_VALUE_ON_ZERO'), @@SESSION.sql_auto_is_null = 0, @@S ESSION.wait_timeout = 2147483
jibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f background|more Attaching to compose_background_1 background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | PORTUS BACKGROUND PKI-INIT background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | + mkdir -p /certificates background_1 | + cp /secrets/certificates/portus.crt /certificates background_1 | + cp /secrets/certificates/portus-oci-registry.crt /certificates background_1 | + cp /secrets/certificates/portus-background.crt /certificates background_1 | + update-ca-certificates background_1 | PORTUS_DB_PASSWORD=portus background_1 | PORTUS_DB_HOST=db background_1 | HOSTNAME=694ac5463bed background_1 | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060 background_1 | PORTUS_DB_POOL=5 background_1 | CCONFIG_PREFIX=PORTUS background_1 | PORTUS_KEY_PATH=/secrets/certificates/portus-background.key background_1 | PWD=/ background_1 | PORTUS_PUMA_HOST=0.0.0.0:3000 background_1 | HOME=/root background_1 | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io background_1 | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.6.0 background_1 | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061 background_1 | RAILS_ENV=production background_1 | PORTUS_PASSWORD=12341234 background_1 | PORTUS_SECRET_KEY_BASE=4e779b234f79de439e962b1f07991de41fe4baf61162 5545b5513405b7036c67bd5e7a63719c1e917d84edc2f81bda6ebe643f52fd6aabbb97a4825dee07943a background_1 | RACK_ENV=production background_1 | PORTUS_LOG_LEVEL=debug background_1 | PORTUS_SECURITY_CLAIR_TIMEOUT=900s background_1 | SHLVL=2 background_1 | PORTUS_DB_DATABASE=portus_production background_1 | PORTUS_BACKGROUND=true background_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin background_1 | _=/usr/bin/printenv background_1 | [WARN] 
couldn't connect to database. Skipping PublicActivity::Activ ity#parameters's serialization background_1 | Waiting for mariadb to be ready in 5 seconds background_1 | [WARN] table PublicActivity::ORM::ActiveRecord::Activity doesn't ex ist. Skipping PublicActivity::Activity#parameters's serialization background_1 | /usr/bin/bundle:23:in `load': cannot load such file -- /usr/lib64/r uby/gems/2.6.0/gems/bundler-1.16.4/exe/bundle (LoadError) background_1 | from /usr/bin/bundle:23:in `' background_1 | Database ready background_1 | [schema] Selected the schema for mysql background_1 | (0.4ms) SET NAMES utf8, @@SESSION.sql_mode = CONCAT(CONCAT(@@sql_mode, ',STRICT_ALL_TABLES'), ',NO_AUTO_VALUE_ON_ZERO'), @@SESSION.sql_auto_is_null = 0, @@S ESSION.wait_timeout = 2147483 background_1 | [Mailer config] Host: portus.pegasusio.io background_1 | [Mailer config] Protocol: https:// background_1 | User Exists (0.4ms) SELECT 1 AS one FROM `user s` WHERE `users`.`username` = 'portus' LIMIT 1 background_1 | User Load (0.5ms) SELECT `users`.* FROM `users ` WHERE `users`.`username` = 'portus' LIMIT 1 background_1 | (0.4ms) BEGIN background_1 | User Update (0.5ms) UPDATE `users` SET `encrypt ed_password` = '$2a$10$EYWWJKtYCLV2MWePMBEa1OTnss/pdtX/s1znYb6jWKc1lhJJJ119.', `updated_at` = '2020-03-01 04:31: 16' WHERE `users`.`id` = 1 background_1 | (0.3ms) COMMIT --Plus--
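The `file_env` trace in the startup log above follows the common `*_FILE` secret convention: if `PORTUS_SECRET_KEY_BASE` is empty but `PORTUS_SECRET_KEY_BASE_FILE` points at a file, the secret is read from that file and the `_FILE` variable is unset. The real implementation is a bash function inside `/init`; the following is only an illustrative Ruby sketch of the same pattern:

```ruby
# Sketch (not Portus's actual code) of the *_FILE secret convention seen in
# the /init trace: prefer the direct env var, fall back to reading the file
# named by VAR_FILE, then unset VAR_FILE so the path does not leak further.
def file_env(var, default = "")
  file_var = "#{var}_FILE"
  if ENV[var].to_s.empty? && ENV[file_var] && File.exist?(ENV[file_var])
    ENV[var] = File.read(ENV[file_var]).strip
  elsif ENV[var].to_s.empty?
    ENV[var] = default
  end
  ENV.delete(file_var)
  ENV[var]
end
```

This is why, in the log above, `PORTUS_SECRET_KEY_BASE` appears populated even though only `PORTUS_SECRET_KEY_BASE_FILE` was set in the compose file.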
docker-compose config
:
networks:
pipeline_portus:
driver: bridge
services:
background:
depends_on:
- db
- portus
entrypoint:
- /bin/bash
- -c
- /init-pki && /bin/chmod +x /init && /init
environment:
CCONFIG_PREFIX: PORTUS
PORTUS_BACKGROUND: "true"
PORTUS_DB_DATABASE: portus_production
PORTUS_DB_HOST: db
PORTUS_DB_PASSWORD: portus
PORTUS_DB_POOL: '5'
PORTUS_KEY_PATH: /secrets/certificates/portus-background.key
PORTUS_LOG_LEVEL: debug
PORTUS_MACHINE_FQDN_VALUE: portus.pegasusio.io
PORTUS_PASSWORD: '12341234'
PORTUS_SECRET_KEY_BASE_FILE: /secrets/rails/portus.secret.key.base
PORTUS_SECURITY_CLAIR_HEALTH_PORT: '6061'
PORTUS_SECURITY_CLAIR_SERVER: http://clair.pegasusio.io:6060
PORTUS_SECURITY_CLAIR_TIMEOUT: 900s
extra_hosts:
- oci-registry.pegasusio.io:192.168.1.22
- portus.pegasusio.io:192.168.1.22
image: opensuzie/portus:2.5
links:
- db
networks:
pipeline_portus:
aliases:
- portus-backservice.pegasusio.io
volumes:
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/portus_background/init-pki:/init-pki:ro
clair:
build:
context: /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/oci/clair
command:
- -config
- /clair.yml
depends_on:
- postgres
entrypoint:
- /usr/bin/dumb-init
- --
- /clair.customized
image: oci-registry.pegasusio.io/pokus/clair:v2.1.2
links:
- postgres
networks:
pipeline_portus:
aliases:
- clair.pegasusio.io
ports:
- 6060:6060/tcp
- 6061:6061/tcp
restart: unless-stopped
volumes:
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/tmpclair:/tmp:rw
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/clair/clair.yml:/clair.yml:rw
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/clair/clair.customized:/clair.customized:rw
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets/certificates:/secrets/certificates:rw
db:
command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci
--init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
environment:
MYSQL_DATABASE: portus_production
MYSQL_ROOT_PASSWORD: portus
extra_hosts:
- oci-registry.pegasusio.io:192.168.1.22
- portus.pegasusio.io:192.168.1.22
image: library/mariadb:10.0.23
networks:
pipeline_portus:
aliases:
- db.pegasusio.io
volumes:
- /var/lib/portus/mariadb:/var/lib/mysql:rw
nginx:
extra_hosts:
- oci-registry.pegasusio.io:192.168.1.22
- portus.pegasusio.io:192.168.1.22
image: library/nginx:alpine
links:
- registry:registry
- portus:portus
networks:
pipeline_portus: null
ports:
- 80:80/tcp
- 443:443/tcp
volumes:
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/nginx:/etc/nginx:ro
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
- static:/srv/Portus/public:ro
portus:
container_name: portuscontainer
entrypoint:
- /bin/bash
- -c
- /init
environment:
PORTUS_DB_DATABASE: portus_production
PORTUS_DB_HOST: db
PORTUS_DB_PASSWORD: portus
PORTUS_DB_POOL: '5'
PORTUS_KEY_PATH: /secrets/certificates/portus.key
PORTUS_LOG_LEVEL: debug
PORTUS_MACHINE_FQDN_VALUE: portus.pegasusio.io
PORTUS_PASSWORD: '12341234'
PORTUS_PUMA_TLS_CERT: /secrets/certificates/portus.crt
PORTUS_PUMA_TLS_KEY: /secrets/certificates/portus.key
PORTUS_SECRET_KEY_BASE_FILE: /secrets/rails/portus.secret.key.base
PORTUS_SECURITY_CLAIR_HEALTH_PORT: '6061'
PORTUS_SECURITY_CLAIR_SERVER: http://clair.pegasusio.io:6060
PORTUS_SECURITY_CLAIR_TIMEOUT: 900s
PORTUS_SERVICE_FQDN_VALUE: portus.pegasusio.io
RAILS_SERVE_STATIC_ASSETS: '''true'''
RAILS_SERVE_STATIC_FILES: '''true'''
extra_hosts:
- oci-registry.pegasusio.io:192.168.1.22
- portus.pegasusio.io:192.168.1.22
image: opensuzie/portus:2.5
links:
- db
networks:
pipeline_portus:
aliases:
- portus.pegasusio.io
- portus
ports:
- 3000:3000/tcp
volumes:
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/portus/init:/init:ro
- static:/srv/Portus/public:rw
portus_secret_base_key_generator:
build:
args:
RAILS_VERSION: 5.0.1
RUBY_VERSION: 2.5.0
context: /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/oci/secrets/generators/rails_secret_base_key
environment:
PORTUS_SECRET_KEY_BASE_FILE_NAME: portus.secret.key.base
VAULT_ADDR: https://vault.pegasusio.io:8233
VAULT_KV_ENGINE: dev_culturebase_org
VAULT_KV_ENGINE_SECRET_KEY: secret_base_key
VAULT_KV_ENGINE_SECRET_PATH: production/portus/rails
VAULT_TOKEN_FILE: /secrets/portus_secret_base_key_generator/vault.token
image: railsecretmngr:0.0.1
volumes:
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets/rails:/usr/src/portusecretkeybase/share:rw
postgres:
environment:
POSTGRES_PASSWORD: portus
image: library/postgres:10-alpine
networks:
pipeline_portus:
aliases:
- pgclair.pegasusio.io
registry:
command:
- /bin/sh
- /etc/docker/registry/init
environment:
REGISTRY_AUTH_TOKEN_ISSUER: portus.pegasusio.io
REGISTRY_AUTH_TOKEN_REALM: https://portus.pegasusio.io:3000/v2/token
REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /secrets/certificates/portus.crt
REGISTRY_AUTH_TOKEN_SERVICE: oci-registry.pegasusio.io
REGISTRY_HTTP_TLS_CERTIFICATE: /secrets/certificates/portus-oci-registry.crt
REGISTRY_HTTP_TLS_KEY: /secrets/certificates/portus-oci-registry.key
REGISTRY_NOTIFICATIONS_ENDPOINTS: "- name: portus\n url: https://portus.pegasusio.io:3000/v2/webhooks/events\n\
\ timeout: 2000ms\n threshold: 5\n backoff: 1s\n"
extra_hosts:
- oci-registry.pegasusio.io:192.168.1.22
- portus.pegasusio.io:192.168.1.22
image: library/registry:2.6
links:
- portus:portus
networks:
pipeline_portus:
aliases:
- oci-registry.pegasusio.io
ports:
- 5000:5000/tcp
- 5001:5001/tcp
volumes:
- /var/lib/portus/registry:/var/lib/registry:rw
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/secrets:/secrets:ro
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/registry/config.yml:/etc/docker/registry/config.yml:ro
- /home/jibl/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose/registry/init:/etc/docker/registry/init:ro
version: '3.0'
volumes:
static:
driver: local
Now for the most interesting thing: with the stock `opensuse/portus:2.4.3` image, I checked the `background` logs, and there I got a completely different environment (so before mssola's update on the `/init` script, the ruby version was `2.5.0`):
background_1 | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.0
background_1 | RAILS_ENV=production
background_1 | RACK_ENV=production
Full `background` logs with the stock `opensuse/portus:2.4.3` image
: jibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f background|more Attaching to compose_background_1 background_1 | + mkdir -p /certificates background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | PORTUS BACKGROUND PKI-INIT background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | ++++++++++++++++++++++++++ background_1 | + cp /secrets/certificates/portus.crt /certificates background_1 | + cp /secrets/certificates/portus-oci-registry.crt /certificates background_1 | + cp /secrets/certificates/portus-background.crt /certificates background_1 | + update-ca-certificates background_1 | PORTUS_DB_PASSWORD=portus background_1 | PORTUS_DB_HOST=db background_1 | HOSTNAME=1f8001b2a341 background_1 | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060 background_1 | PORTUS_DB_POOL=5 background_1 | CCONFIG_PREFIX=PORTUS background_1 | PORTUS_KEY_PATH=/secrets/certificates/portus-background.key background_1 | PORTUS_LDAP_AUTHENTICATION_PASSWORD= background_1 | PWD=/ background_1 | PORTUS_PUMA_HOST=0.0.0.0:3000 background_1 | HOME=/root background_1 | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io background_1 | PORTUS_EMAIL_SMTP_PASSWORD= background_1 | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.0 background_1 | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061 background_1 | RAILS_ENV=production background_1 | PORTUS_PASSWORD=12341234 background_1 | PORTUS_SECRET_KEY_BASE=19af1ad1d3c58649ca6bf1ca4514f22388660855df6c f82d368f4d869554bf62de5bd92b273f7c6ed470961c510da7fda483ffb162b58cfb87b474bd9909fe08 background_1 | RACK_ENV=production background_1 | PORTUS_LOG_LEVEL=debug background_1 | PORTUS_SECURITY_CLAIR_TIMEOUT=900s background_1 | SHLVL=2 background_1 | PORTUS_DB_DATABASE=portus_production background_1 | PORTUS_BACKGROUND=true background_1 | 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin background_1 | _=/usr/bin/printenv background_1 | [Mailer config] Host: portus.pegasusio.io background_1 | [Mailer config] Protocol: https:// background_1 | User Exists (0.2ms) SELECT 1 AS one FROM `users` W HERE `users`.`username` = 'portus' LIMIT 1 background_1 | User Load (0.5ms) SELECT `users`.* FROM `users` WHERE `users`.`username` = 'portus' LIMIT 1 background_1 | (0.2ms) BEGIN background_1 | SQL (0.4ms) UPDATE `users` SET `encrypted_password` = '$2a$10$Cz7bYma5FEaaEH1QeuP6qeiCL7PSwZ29q8QPvvm6Xj.MT.GugcSm6', `updated_at` = '2020-03-01 07:47:19' WHERE `user s`.`id` = 1 background_1 | (0.3ms) COMMIT background_1 | User Exists (0.3ms) SELECT 1 AS one FROM `users` WHER E `users`.`username` = 'portus' LIMIT 1 background_1 | (0.1ms) SELECT COUNT(*) FROM `registries` background_1 | (0.4ms) SELECT COUNT(*) FROM `repositories` background_1 | [Initialization] Running: 'Registry events', 'Security scanning', ' Registry synchronization' background_1 | RegistryEvent Load (0.4ms) SELECT `registry_events `.* FROM `registry_events` WHERE `registry_events`.`status` = 2 ORDER BY `registry_events`.`id` ASC LIMIT 1000 0m background_1 | Tag Exists (0.6ms) SELECT 1 AS one FROM `tags` WHERE `tags`.`scanned` = 0 LIMIT 1 background_1 | (0.3ms) SELECT COUNT(*) FROM `repositories` background_1 | Registry Load (0.4ms) SELECT `registries`.* FROM `reg istries` ORDER BY `registries`.`id` ASC LIMIT 1000 background_1 | RegistryEvent Load (0.3ms) SELECT `registry_events `.* FROM `registry_events` WHERE `registry_events`.`status` = 2 ORDER BY `registry_events`.`id` ASC LIMIT 1000 0m background_1 | RegistryEvent Load (0.3ms) SELECT `registry_events`.*
With `opensuse/portus:2.4.3`, the `portus` service has the exact same environment as my repaired `opensuzie/portus:2.5`:
portuscontainer | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3
portuscontainer | RAILS_ENV=production
portuscontainer | RACK_ENV=production
And the full `portus`
service logs : jibl@poste-devops-typique:~/portus.autopilot.provision.XXXXXX/portus/official/compose/examples/compose$ docker-compose logs -f portus|more Attaching to portuscontainer portuscontainer | + mkdir -p /secrets/certificates portuscontainer | + mkdir -p /secrets/rails portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo 'PORTUS PKI-INIT' portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + echo ++++++++++++++++++++++++++ portuscontainer | + set -x portuscontainer | + mkdir -p /certificates portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | PORTUS PKI-INIT portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | ++++++++++++++++++++++++++ portuscontainer | + cp /secrets/certificates/portus.crt /certificates portuscontainer | + cp /secrets/certificates/portus-oci-registry.crt /certificates portuscontainer | + cp /secrets/certificates/portus-background.crt /certificates portuscontainer | + update-ca-certificates portuscontainer | + set -e portuscontainer | + secrets=(PORTUS_DB_PASSWORD PORTUS_PASSWORD PORTUS_SECRET_KEY_BAS E PORTUS_EMAIL_SMTP_PASSWORD PORTUS_LDAP_AUTHENTICATION_PASSWORD) portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z portus ]] portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z 12341234 ]] portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z '' ]] portuscontainer | + file_env PORTUS_SECRET_KEY_BASE portuscontainer | + local var=PORTUS_SECRET_KEY_BASE portuscontainer | + local fileVar=PORTUS_SECRET_KEY_BASE_FILE portuscontainer | + local def= portuscontainer | + '[' '' ']' portuscontainer | + local val= portuscontainer | + '[' '' ']' 
portuscontainer | + '[' /secrets/rails/portus.secret.key.base ']' portuscontainer | + val=dc997f32935707adb399dfe06a57041ce12a8dc96c00898feb016a742da46 d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b portuscontainer | + export PORTUS_SECRET_KEY_BASE=dc997f32935707adb399dfe06a57041ce12 a8dc96c00898feb016a742da46d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b portuscontainer | + PORTUS_SECRET_KEY_BASE=dc997f32935707adb399dfe06a57041ce12a8dc96c 00898feb016a742da46d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b portuscontainer | + unset PORTUS_SECRET_KEY_BASE_FILE portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z '' ]] portuscontainer | + file_env PORTUS_EMAIL_SMTP_PASSWORD portuscontainer | + local var=PORTUS_EMAIL_SMTP_PASSWORD portuscontainer | + local fileVar=PORTUS_EMAIL_SMTP_PASSWORD_FILE portuscontainer | + local def= portuscontainer | + '[' '' ']' portuscontainer | + local val= portuscontainer | + '[' '' ']' portuscontainer | + '[' '' ']' portuscontainer | + export PORTUS_EMAIL_SMTP_PASSWORD= portuscontainer | + PORTUS_EMAIL_SMTP_PASSWORD= portuscontainer | + unset PORTUS_EMAIL_SMTP_PASSWORD_FILE portuscontainer | + for s in "${secrets[@]}" portuscontainer | + [[ -z '' ]] portuscontainer | + file_env PORTUS_LDAP_AUTHENTICATION_PASSWORD portuscontainer | + local var=PORTUS_LDAP_AUTHENTICATION_PASSWORD portuscontainer | + local fileVar=PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE portuscontainer | + local def= portuscontainer | + '[' '' ']' portuscontainer | + local val= portuscontainer | + '[' '' ']' portuscontainer | + '[' '' ']' portuscontainer | + export PORTUS_LDAP_AUTHENTICATION_PASSWORD= portuscontainer | + PORTUS_LDAP_AUTHENTICATION_PASSWORD= portuscontainer | + unset PORTUS_LDAP_AUTHENTICATION_PASSWORD_FILE portuscontainer | + update-ca-certificates portuscontainer | + export PORTUS_PUMA_HOST=0.0.0.0:3000 portuscontainer | + PORTUS_PUMA_HOST=0.0.0.0:3000 portuscontainer | + 
export RACK_ENV=production portuscontainer | + RACK_ENV=production portuscontainer | + export RAILS_ENV=production portuscontainer | + RAILS_ENV=production portuscontainer | + export CCONFIG_PREFIX=PORTUS portuscontainer | + CCONFIG_PREFIX=PORTUS portuscontainer | + '[' -z '' ']' portuscontainer | + export GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3 portuscontainer | + GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3 portuscontainer | + '[' debug == debug ']' portuscontainer | + printenv portuscontainer | PORTUS_DB_PASSWORD=portus portuscontainer | PORTUS_DB_HOST=db portuscontainer | HOSTNAME=30b6885e771c portuscontainer | PORTUS_SECURITY_CLAIR_SERVER=http://clair.pegasusio.io:6060 portuscontainer | RAILS_SERVE_STATIC_ASSETS='true' portuscontainer | PORTUS_DB_POOL=5 portuscontainer | CCONFIG_PREFIX=PORTUS portuscontainer | PORTUS_KEY_PATH=/secrets/certificates/portus.key portuscontainer | PORTUS_LDAP_AUTHENTICATION_PASSWORD= portuscontainer | PWD=/ portuscontainer | PORTUS_PUMA_HOST=0.0.0.0:3000 portuscontainer | HOME=/root portuscontainer | PORTUS_MACHINE_FQDN_VALUE=portus.pegasusio.io portuscontainer | RAILS_SERVE_STATIC_FILES='true' portuscontainer | + cd /srv/Portus portuscontainer | PORTUS_EMAIL_SMTP_PASSWORD= portuscontainer | GEM_PATH=/srv/Portus/vendor/bundle/ruby/2.5.3 portuscontainer | PORTUS_SECURITY_CLAIR_HEALTH_PORT=6061 portuscontainer | RAILS_ENV=production portuscontainer | PORTUS_PASSWORD=12341234 portuscontainer | PORTUS_SECRET_KEY_BASE=dc997f32935707adb399dfe06a57041ce12a8dc96c00 898feb016a742da46d881991f404443c20f559c6cb993cadd6ab68c5c61f88cdc26399adcee6d302d75b portuscontainer | RACK_ENV=production portuscontainer | PORTUS_SERVICE_FQDN_VALUE=portus.pegasusio.io portuscontainer | PORTUS_LOG_LEVEL=debug portuscontainer | PORTUS_PUMA_TLS_CERT=/secrets/certificates/portus.crt portuscontainer | PORTUS_SECURITY_CLAIR_TIMEOUT=900s portuscontainer | SHLVL=2 portuscontainer | PORTUS_PUMA_TLS_KEY=/secrets/certificates/portus.key portuscontainer | 
PORTUS_DB_DATABASE=portus_production portuscontainer | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin portuscontainer | _=/usr/bin/printenv portuscontainer | + '[' '!' -z '' ']' portuscontainer | + '[' -z '' ']' portuscontainer | + setup_database portuscontainer | + wait_for_database 1 portuscontainer | + should_setup=1 portuscontainer | + TIMEOUT=90 portuscontainer | + COUNT=0 portuscontainer | + RETRY=1 portuscontainer | + '[' 1 -ne 0 ']' portuscontainer | + case $(portusctl exec rails r /srv/Portus/bin/check_db.rb | grep DB) in portuscontainer | ++ portusctl exec rails r /srv/Portus/bin/check_db.rb portuscontainer | ++ grep DB portuscontainer | + echo 'Database ready' portuscontainer | + break portuscontainer | + set -e portuscontainer | + portusctl exec 'pumactl -F /srv/Portus/config/puma.rb start' portuscontainer | Database ready portuscontainer | [64] Puma starting in cluster mode... portuscontainer | [64] * Version 3.10.0 (ruby 2.5.0-p0), codename: Russell's Teapot portuscontainer | [64] * Min threads: 1, max threads: 4 portuscontainer | [64] * Environment: production portuscontainer | [64] * Process workers: 4 portuscontainer | [64] * Preloading application portuscontainer | [Mailer config] Host: portus.pegasusio.io portuscontainer | [Mailer config] Protocol: https:// portuscontainer | User Exists (0.3ms) SELECT 1 AS one FROM `users` W HERE `users`.`username` = 'portus' LIMIT 1 portuscontainer | User Load (0.7ms) SELECT `users`.* FROM `users` WHERE `users`.`username` = 'portus' LIMIT 1 portuscontainer | (0.4ms) BEGIN
Next I'll test the `keep_latest` feature, with ansible, terraform, and Packer. Just to ask, in case you know about it: OpenSUSE Leap, it's just like their CentOS Atomic, isn't it?

About the `KEEP_LATEST` feature: I have to warn you, and the warning stands for both `KEEP_LATEST` and `SYNC`:

- the `SYNC` feature has serious unmanaged limitations on the batch jobs it executes: if the batch job is huge, it systematically fails, and boom, portus background is stuck starting, then failing, then stopping, then restarting... then failing again, and so on.
- like the `SYNC` feature, the `Garbage Collector` feature runs batch jobs, to clean up disk and persistence space (table collections in databases), in the `background` process.

(so, waiting for your results on keep_latest now :) )
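One common mitigation for the batch-job failure mode just described (purely illustrative, not Portus code) is to process the batch in bounded chunks and record per-item failures instead of letting one bad item abort, and crash-loop, the whole background job:

```ruby
# Illustrative only: run a large cleanup in bounded chunks, so a single
# failing item is recorded and skipped instead of killing the whole batch
# (which is the restart-loop behavior described above for SYNC).
def process_in_chunks(items, chunk_size: 100)
  failed = []
  items.each_slice(chunk_size) do |chunk|
    chunk.each do |item|
      begin
        yield item
      rescue StandardError
        failed << item # record and continue instead of dying
      end
    end
  end
  failed
end
```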
About the `/init` script fix: there is a `.ruby-version` file at the root of the portus repo; it is used inside the Docker container, with something called RVM (like nvm, but for ruby). In that `.ruby-version` file, on master, we have `2.6.2`, while the logs above show `2.5.0` and `2.5.3` for the `portus` and `background` services. The portus source code commit id must match, though, and the distribution channel publishing the pre-built docker images should guarantee that. Ok, the OpenSUSE Team use the https://github.com/SUSE/Portus source code repo to version control the pipeline recipe, in particular:

- packaging `portus` into a linux opensuse package, see https://github.com/SUSE/Portus/blob/master/packaging/suse/package_and_push_to_obs.sh
- building the `portus` image, installing portus into it using the zypper package manager:
https://github.com/SUSE/Portus/blob/3de71a6fb1f9865ca194fe015d8014d06c4f3ef2/.travis.yml#L4
https://github.com/SUSE/Portus/blob/3de71a6fb1f9865ca194fe015d8014d06c4f3ef2/.travis.yml#L5

Thought: I think they use RVM to normalize the set of executables involved in the portus ruby stack. They use it to make their portus `zypper` packages, the ones in the zypper repository added in the Dockerfile to install portus back inside containers, "for production": obs://Virtualization:containers:Portus/openSUSE_Leap_15.1.
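The version drift described above (a `.ruby-version` pinning `2.6.2` while containers run `2.5.x` bundles) can be expressed as a tiny check. This is a hypothetical helper, not part of Portus:

```ruby
# Sketch: compare the repo's .ruby-version pin against the interpreter
# actually running. A mismatch is exactly what produces GEM_PATH errors like
# "cannot load such file" for bundler, as seen in the background logs above.
def ruby_version_matches?(ruby_version_file_content, running = RUBY_VERSION)
  pinned = ruby_version_file_content.strip
  running.start_with?(pinned)
end
```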
@robgiovanardi Tested and re-tested, versioned and released, https://github.com/pokusio/opensuzie-oci-library/releases/tag/0.0.2
IMPORTANT UPDATE TO ANY READER WILLING TO USE PORTUS 2.5: the `Dockerfile`s are all based on opensuse, so that portus and all its dependencies are installed during the Docker build process, using the zypper package manager, rather than on `Debian` with a separate, standard ruby on rails execution and build environment. (end of update)
The last commit on `master` (I checked) is `b87d37e4e692b4fe5616b6f0970cb606688c344a`:
Cloning into 'Portus'...
remote: Enumerating objects: 9, done.
remote: Counting objects: 100% (9/9), done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 38892 (delta 4), reused 0 (delta 0), pack-reused 38883
Receiving objects: 100% (38892/38892), 36.50 MiB | 16.89 MiB/s, done.
Resolving deltas: 100% (18385/18385), done.
jibl@poste-devops-typique:~$ cd Portus
jibl@poste-devops-typique:~/Portus$ export GIT_COMMIT_ID='b87d37e4e692b4fe5616b6f0970cb606688c344a'
jibl@poste-devops-typique:~/Portus$ git branch --contains $GIT_COMMIT_ID
* master
jibl@poste-devops-typique:~/Portus$
jibl@poste-devops-typique:~/Portus$
and it is this one: https://github.com/SUSE/Portus/commit/b87d37e4e692b4fe5616b6f0970cb606688c344a — so, as of today, the exact last commit on master. So yeah, they always build from source from the last commit on master. I would never have accused the SUSE Team members of doing that without serious proof like this... Guys, just switch to git flow AVH Edition (in case they build from source the outdated Vincent Driessen edition, to make their suse git-flow package)...
The clair scanner didn't seem too busy when I was running `portus:2.4.3`, and now... even Clair works GREAAAAT :D With `b87d37e4e692b4fe5616b6f0970cb606688c344a`, I scanned the Rocket Chat official distribution for docker... full of vulnerabilities, my dear, as I expected ^ ^. Now, I can't wait for you to test that on your side :)
Hi, I'm a bit confused. I'm setting up an instance of Portus. I've been battling with it for quite a while, the example compose file gets you so far but there has been a lot of trial and error to get things working. Now I am looking at setting up the garbage collection and I stumbled on this bug. I'm running opensuse/portus:2.4. So what is the recommended way to set this up given the limitations that we know exist? I'm wondering if I can leave the keep_latest option and just use a combination of older_than and tag? I'm planning to use git flow and have a tag for each branch. If I set a regex for the tag option that only includes feature, release and hotfix branches then only these should be considered for GC, leaving master and develop safely alone. I can't tell from this bug report if this will work or not?
I can't tell from this bug report if this will work or not?
Hi @benthurley82, I only found your message today.
question 1: for this question, forget anything about garbage collection, or any deletion at all, of any OCI image. Do you have any `docker-compose.yml` with which you successfully run Portus? And by successfully, I mean that all three services come up and work together: `registry`, `background`, and `portus`.

question 2: does the text you will read below answer your question? (yes or no; I'll ask another question if no)
About emulating the `keep_latest` option with a combination of `tag` and `older_than`: it is logically impossible. With the `tag` config option, the only thing you can say is "Hey Portus, don't garbage collect any tag that matches exactly this regular expression" (for example `.*.*.*`).

Let `bensregexp` be a regular expression (pick one, write it on paper as you read this), and `MYSTERY_VERSION_NUMBER` any of your existing version numbers. There was an instant in time, call it `DTomega`, at which `MYSTERY_VERSION_NUMBER` was the latest version, because that was true at the very instant you created / tagged / released that new version number. If, at `DTomega`, `MYSTERY_VERSION_NUMBER` matched `bensregexp`, then `MYSTERY_VERSION_NUMBER` still matches `bensregexp` today: `bensregexp` will match all of your version numbers.
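The regular-expression argument above can be demonstrated in two lines (the tags and the pattern here are hypothetical examples):

```ruby
# A regexp loose enough to match one version tag matches them all: it cannot
# distinguish "latest" from "old", only an environment naming scheme.
bensregexp = /\Adev-\d+\.\d+\.\d+\z/ # example environment+version pattern
tags = ["dev-3.4.5", "dev-3.4.6", "dev-3.4.7"]
matching = tags.select { |t| t.match?(bensregexp) }
```

Every tag matches, so a `tag` filter plus `older_than` can never express "keep the latest N".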
The `tag` config option is used just to filter environments, not versions: dev is dev, and will all remain just dev, only new versions of it. In `dev-3.4.7`, `3.4.7` is the version number of a software, endowed with the `dev` label, used to "slip in" the info that this docker image is the `dev` execution environment for the `3.4.7` version of your software. In `alpine-12.0.4`, is `alpine` a version number?

That is what the `keep_latest` config option is for. Alright, so you HAVE to use the `keep_latest`
option. The problem: in every pre-built docker image of `portus`, either the `keep_latest` option is not available (the version is too old), or that config option is bugged. There is `portus` source code with a repaired `keep_latest` config option, but it sits on the `master` branch, is not tagged, and is not merged back to any other branch, like the `v2.5` branch for example. So to get a working `keep_latest` option, we had to build our own docker image of `portus`, starting from source.

Now: here is how we built a `portus` docker image, starting from the portus source code, which has a working `keep_latest` feature: https://github.com/SUSE/Portus/issues/2241#issuecomment-593370789 (pay special attention to the IMPORTANT UPDATE out there)

Thanks for all your contributions! This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Description
I Activated Garbage Collector on Portus Background process with
keep_latest: 5
and older_than: 100
But it deletes all images older_than 100, ignoring the keep_latest flag. As a result, my old repositories were completely wiped.

Steps to reproduce
Here initial logs:
Deployment information
Deployment method: Portus is deployed as a standalone Container (not Compose) which connects to local MariaDB and Registry.
Configuration:
Portus version: 2.4.3@5a616c0ef860567df5700708256f42505cdb9952
env_portus: environment file used for customizing Portus Foreground:
We are running portus with:
env_background: environment file used for customizing Portus Background:
Then we are running portus background:
Thanks in advance Roberto
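For reference, the behavior that `keep_latest: 5` plus `older_than: 100` is expected to produce (and which this whole thread reports as broken) can be sketched as follows. This is an illustrative selection function under that reading of the options, not Portus's actual code:

```ruby
require "time"

# Hypothetical tag record; Portus's real model is an ActiveRecord class.
Tag = Struct.new(:name, :updated_at)

# Expected semantics: among tags older than `older_than` days, delete all
# except the `keep_latest` most recently updated ones. The reported bug is
# that the "keep the newest N" step is skipped, so everything old is deleted.
def tags_to_collect(tags, older_than:, keep_latest:, now: Time.now)
  cutoff = now - older_than * 24 * 3600
  sorted = tags.sort_by(&:updated_at).reverse # newest first
  survivors = sorted.first(keep_latest)       # always keep the newest N
  (sorted - survivors).select { |t| t.updated_at < cutoff }
end
```

With eight tags aged 50 to 400 days, only the three oldest fall outside both the keep-latest window and the age cutoff; a buggy implementation that ignores `keep_latest` would instead collect every tag past the cutoff.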