spantaleev / matrix-docker-ansible-deploy

šŸ³ Matrix (An open network for secure, decentralized communication) server setup using Ansible and Docker
GNU Affero General Public License v3.0
4.88k stars · 1.04k forks

gnupg2 on Debian 11 missing #2448

Closed friedger closed 9 months ago

friedger commented 1 year ago

Describe the bug: I tried to run setup-all start and got the following error message:

TASK [galaxy/geerlingguy.docker : Ensure additional dependencies are installed (on Ubuntu < 20.04 and any other systems).] ***
fatal: [matrix.<domain>]: FAILED! => {"changed": false, "msg": "No package matching 'gnupg2' is available"}

I am using Debian 11

To Reproduce:
1. Set up a server with Debian 11
2. Set up Ansible, etc.
3. Run ansible-playbook -i inventory/hosts matrix-docker-ansible-deploy/setup.yml --tags=setup-all,start

Expected behavior: Matrix is deployed and started.


spantaleev commented 1 year ago

docker run -it --rm --entrypoint=/bin/bash docker.io/debian:11 -c 'apt update && apt install gnupg2' succeeds for me.

So at least this Debian-based Docker container can find gnupg2 in its repositories.

Maybe there's a problem with your Debian VPS or its apt repositories.

From what I gather, gnupg2 is an alias for gnupg (which provides the same version - 2.2.27-2+deb11u2 right now).

thigg commented 1 year ago

I ran into this problem on a fresh VM where apt update had never been executed before.

hungrymonkey commented 9 months ago
TASK [galaxy/docker : Ensure additional dependencies are installed (on Ubuntu < 20.04 and any other systems).] ***
fatal: [matrix.iopa.duckdns.org]: FAILED! => changed=false 
  msg: No package matching 'gnupg2' is available

I tested on a very fresh install. I only ran ansible-playbook -i inventory/hosts setup.yml --tags=install-all on Debian 12. The Ansible script will need to run apt-get update somehow.
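
Until the role accounts for this, one workaround (a sketch, not part of the playbook; it assumes the same inventory file, a Debian-family target, and passwordless sudo for the SSH user) is to refresh the apt cache once with an ad-hoc Ansible call before re-running the playbook:

```shell
# One-off apt cache refresh on every host in the inventory (runs as root via become).
ansible -i inventory/hosts all -m ansible.builtin.apt -a "update_cache=true" --become
```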

17c9c8a6ded366872a612f1511207aa1fa415a0c

hosts

[matrix_servers]
matrix.iopa.duckdns.org ansible_host=2600:1f14:53c:4801:f7bb:d69a:43fe:c563 ansible_ssh_user=admin become=true become_user=root 

vars.yml

---
matrix_domain: iopa.duckdns.org

matrix_homeserver_implementation: synapse

matrix_homeserver_generic_secret_key: 'ikRQFbjv0Svw2C64k97Nk2PSufF19ocNFCQeC1GNFqRCn0QrLFmK79Xqxt0nKdjW'

matrix_playbook_reverse_proxy_type: playbook-managed-traefik

devture_traefik_config_certificatesResolvers_acme_email: 'johndoe@example.com'

# A Postgres password to use for the superuser Postgres user (called `matrix` by default)
devture_postgres_connection_password: ''

I am junking these settings anyway.

spantaleev commented 9 months ago

We're using the geerlingguy/ansible-role-docker Ansible role for installing Docker, so you may wish to report the problem there. That repository is not very active, so reporting it may not go as smoothly as you might imagine.

The issue only seems to affect a few people - perhaps those on hosts where Debian's apt cache was never populated after the initial installation (or was populated incorrectly?). On most VPS installs, it seems one can start installing packages right away, without explicitly running apt update first.

So you're saying doing apt update manually at least once resolved the problem for you?

hungrymonkey commented 9 months ago

So you're saying doing apt update manually at least once resolved the problem for you?

Yeah. I think the Debian images were created without an apt cache; the solution is to run apt update. I noticed this issue on AWS.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

data "aws_ami" "debian" {
  most_recent = true

  filter {
    name   = "name"
    values = ["*debian-12-*"]
  }
  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["136693071363"] # debian

  tags = {
    Name = "terraform please delete test"
    user = "admin"
  }
}

resource "aws_key_pair" "deployer" {
  key_name   = "test-del"
  public_key = "ssh-rsa AAAAB3................... <.ssh/id_isa.pub me@hostname.com"
  tags = {
    Name = "test terraform delete me"
  }
}

resource "aws_vpc" "us-west-2" {
  enable_dns_support               = true
  enable_dns_hostnames             = true
  assign_generated_ipv6_cidr_block = true
  cidr_block                       = "10.0.0.0/16"

  tags = {
    Name = "test terraform delete me"
  }
}

resource "aws_subnet" "us-west-2" {
  vpc_id                  = aws_vpc.us-west-2.id
  cidr_block              = cidrsubnet(aws_vpc.us-west-2.cidr_block, 4, 1)
  map_public_ip_on_launch = true

  ipv6_cidr_block                 = cidrsubnet(aws_vpc.us-west-2.ipv6_cidr_block, 8, 1)
  assign_ipv6_address_on_creation = true
  availability_zone               = "us-west-2b"
}

resource "aws_internet_gateway" "us-west-2" {
  vpc_id = aws_vpc.us-west-2.id
}

resource "aws_default_route_table" "us-west-2" {
  default_route_table_id = aws_vpc.us-west-2.default_route_table_id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.us-west-2.id
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id      = aws_internet_gateway.us-west-2.id
  }
}

resource "aws_route_table_association" "us-west-2" {
  subnet_id      = aws_subnet.us-west-2.id
  route_table_id = aws_default_route_table.us-west-2.id
}

resource "aws_security_group" "us-west-2" {
  name   = "terraform-example-instance"
  vpc_id = aws_vpc.us-west-2.id
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    ipv6_cidr_blocks = ["::/0"]
  }
}

resource "aws_instance" "us-west-2" {
  ami                    = data.aws_ami.debian.id # the Debian 12 AMI looked up above
  key_name               = "test-del"
  instance_type          = "t3a.small"
  subnet_id              = aws_subnet.us-west-2.id
  ipv6_address_count     = 1
  vpc_security_group_ids = [aws_security_group.us-west-2.id]
  tags = {
    Name = "my-ipv6-test"
  }
  depends_on = [aws_internet_gateway.us-west-2]
}

output "us-west-2_IPv6" {
  value = aws_instance.us-west-2.ipv6_addresses
}
terraform init
terraform plan -out terraform.out
terraform apply terraform.out
terraform destroy
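
As an aside, the cidrsubnet() calls in the Terraform config above extend the VPC prefix by newbits and select the subnet at index netnum. A minimal Python sketch of the same IPv4 computation (standard library only; the function name mirrors Terraform's but this is an illustration, not Terraform itself):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Rough stand-in for Terraform's cidrsubnet() on IPv4 prefixes."""
    network = ipaddress.ip_network(prefix)
    # Enumerate the subnets produced by extending the prefix by `newbits`
    # and pick the one at index `netnum`, as cidrsubnet() does.
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

# cidrsubnet(aws_vpc.us-west-2.cidr_block, 4, 1) with cidr_block = "10.0.0.0/16":
print(cidrsubnet("10.0.0.0/16", 4, 1))  # 10.0.16.0/20
```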
hungrymonkey commented 9 months ago

https://github.com/geerlingguy/ansible-role-docker/issues/407

hungrymonkey commented 9 months ago

https://github.com/geerlingguy/ansible-role-docker/issues/407#issuecomment-1651104811

This will break idempotence; for my roles, I always manage apt caches at the play level, not at the role level. Otherwise every role I have, I would need to add in update_cache and manage a lifetime for the cache, which I'd rather not do since everyone has a different approach. See: https://github.com/geerlingguy/ansible-role-docker/blob/master/molecule/default/converge.yml#L6-L9
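
The converge.yml he links to manages the cache in pre_tasks, before any role runs. A sketch of that play-level approach (the host pattern and cache lifetime here are illustrative, not copied from his file):

```yaml
- hosts: all
  become: true
  pre_tasks:
    - name: Update apt cache if older than 10 minutes
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 600
      when: ansible_os_family == 'Debian'
  roles:
    - geerlingguy.docker
```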

spantaleev commented 9 months ago

His statement does make sense!

It's an edge-case to find a Debian-based system without an apt cache populated, but it sounds good to be able to handle it anyway. I've added some tasks that prewarm the cache to our playbook_help role, here.

Updating this playbook (e.g. `git pull`) and updating its roles (`just roles` or `make roles`) should give you a playbook that doesn't suffer from this problem anymore.