Closed greenpau closed 4 years ago
The nftables backend for iptables should work, even with nftables... @mccv1r0 PTAL
I can't recreate this on Fedora 30. The CNI config uses the nftables backend, and nftables is being used by Fedora 30.
$ sudo podman version
Version: 1.7.0
RemoteAPI Version: 1
Go Version: go1.12.14
OS/Arch: linux/amd64
$ rpm -q podman
podman-1.7.0-3.fc30.x86_64
$
What do you mean by "The nftables backend for iptables should work, even with nftables..."?
The system should be running what the cni config specifies.
@mccv1r0 , why would the following clause trigger the execution of iptables and not nft?
{
"type": "firewall",
"backend": "nftables"
}
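For context, a firewall entry like the one above normally appears as one element of a larger CNI conflist, alongside the plugin that creates the interface. The sketch below is illustrative only; the network name, bridge name, and subnet are assumptions, not taken from this thread:

```json
{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    {
      "type": "firewall",
      "backend": "nftables"
    }
  ]
}
```

Note that, as far as I can tell, the upstream firewall plugin at the time of this thread implemented only the iptables and firewalld backends, which is why setting "backend": "nftables" does not cause nft to be invoked.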
CNI uses what the underlying OS is already running.
@mccv1r0 , in my case I don't run iptables, only nftables. Should this issue belong to github.com/containernetworking/plugins?
failed to list chains: running [/usr/sbin/iptables -t nat -S --wait]: exit status 1: iptables v1.8.2 (nf_tables): table `nat' is incompatible, use 'nft' tool.
The error is coming from here. Deep dive begins ...
Opened a separate issue https://github.com/containernetworking/plugins/issues/461.
Upon research, it appears that CNI is being implemented via vendor/github.com/cri-o/ocicni/pkg/ocicni/ocicni.go, which in turn uses github.com/containernetworking/cni/libcni.
@mheon , do you know why github.com/containernetworking/plugins/pkg/utils/iptables.go is not in libpod's vendor/ directory?
There is a series of dependencies when it comes to podman and CNI:
go get github.com/containernetworking/plugins@47a9fd80c825
go get github.com/cri-o/ocicni@d2881573038f
go get github.com/containernetworking/cni@4fae32b84921
go get github.com/coreos/go-iptables@f901d6c2a4f2
Unfortunately, when attempting to compile, the vendor/ directory gets only a subset of the github.com/containernetworking/plugins files.
The files in https://github.com/containers/libpod/tree/master/vendor/github.com/containernetworking/plugins/pkg/utils and https://github.com/containernetworking/plugins/tree/master/pkg/utils do not match.
Most likely because we're not invoking it directly?
My understanding is that CNI is packaged as a series of plugins - small binaries that are executed separately, each doing part of the job of network setup. The heavy lifting, including IPTables, likely occurs there. Podman sends along instructions for configuring the network, but we do not directly invoke the relevant code, but instead separate executables.
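The invocation model described above can be sketched in Go. This is an illustrative reconstruction, not libcni's actual code: the runtime executes each plugin as a standalone binary, passes parameters through CNI_* environment variables, and writes the network configuration to the plugin's stdin. The plugin path, container ID, and netns path below are made-up examples:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildCNICmd sketches how a container runtime invokes a CNI plugin:
// the plugin is a separate executable, its parameters arrive via
// CNI_* environment variables, and the JSON config arrives on stdin.
func buildCNICmd(pluginPath, command, containerID, netns, ifname, config string) *exec.Cmd {
	cmd := exec.Command(pluginPath)
	cmd.Env = []string{
		"CNI_COMMAND=" + command, // ADD, DEL, or CHECK
		"CNI_CONTAINERID=" + containerID,
		"CNI_NETNS=" + netns,
		"CNI_IFNAME=" + ifname,
		"CNI_PATH=/opt/cni/bin",
	}
	cmd.Stdin = strings.NewReader(config)
	return cmd
}

func main() {
	cmd := buildCNICmd("/opt/cni/bin/firewall", "ADD", "abc123",
		"/var/run/netns/test", "eth0",
		`{"cniVersion":"0.4.0","name":"podman","type":"firewall","backend":"nftables"}`)
	fmt.Println(cmd.Env[0]) // prints CNI_COMMAND=ADD
}
```

This is why the iptables calls never show up in libpod's own call graph: they happen inside the plugin process.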
@mheon , let me try recompiling the CNI plugins.
@mheon , you are correct 👍 The plugins are being called indirectly.
Compiled github.com/containernetworking/plugins
and added trace to pkg/utils/iptables.go
diff --git a/pkg/utils/iptables.go b/pkg/utils/iptables.go
index b38a2cd..3ec931f 100644
--- a/pkg/utils/iptables.go
+++ b/pkg/utils/iptables.go
@@ -19,6 +19,9 @@ import (
"fmt"
"github.com/coreos/go-iptables/iptables"
+
+ "github.com/davecgh/go-spew/spew"
+ "github.com/sirupsen/logrus"
)
const statusChainExists = 1
@@ -26,6 +29,11 @@ const statusChainExists = 1
// EnsureChain idempotently creates the iptables chain. It does not
// return an error if the chain already exists.
func EnsureChain(ipt *iptables.IPTables, table, chain string) error {
+
+ //logrus.Errorf("EnsureChain() ipt: %s", spew.Sdump(ipt))
+ logrus.Errorf("EnsureChain() table: %s", spew.Sdump(table))
+ logrus.Errorf("EnsureChain() chain: %s", spew.Sdump(chain))
+
if ipt == nil {
return errors.New("failed to ensure iptable chain: IPTables was nil")
}
As part of the EnsureChain() function, there is a ChainExists() check. The following arguments are being passed to it; it checks for the existence of the CNI-FORWARD chain.
ERRO[0000] EnsureChain() table: (string) (len=6) "filter"
ERRO[0000] EnsureChain() chain: (string) (len=11) "CNI-FORWARD"
That is where it fails.
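A hedged sketch of what such a chain-existence check amounts to (illustrative, not the actual go-iptables implementation): list the table's rules with `iptables -t <table> -S` and scan the output for a policy line (`-P <chain> ...`) or a chain-creation line (`-N <chain>`) naming the chain:

```go
package main

import (
	"fmt"
	"strings"
)

// chainExistsInOutput scans `iptables -t <table> -S` output for a
// built-in chain policy line ("-P CHAIN ACCEPT") or a user-defined
// chain declaration ("-N CHAIN"). Sketch only; go-iptables does the
// equivalent internally after running the iptables binary.
func chainExistsInOutput(listing, chain string) bool {
	for _, line := range strings.Split(listing, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && (fields[0] == "-P" || fields[0] == "-N") && fields[1] == chain {
			return true
		}
	}
	return false
}

func main() {
	// The filter-table listing captured later in this thread.
	listing := "-P INPUT ACCEPT\n-P FORWARD ACCEPT\n-P OUTPUT ACCEPT\n"
	fmt.Println(chainExistsInOutput(listing, "CNI-FORWARD")) // prints false
}
```

In this environment the check never gets that far: the `iptables -t filter -S --wait` invocation itself exits non-zero with the nf_tables incompatibility error, so the failure surfaces as "failed to list chains".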
Another aspect of this issue: when I run iptables -t filter -S --wait as a regular user, it partially succeeds. However, when I run the same command as root, it fails.
$ iptables -t filter -S --wait
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
iptables: Permission denied (you must be root).
$ sudo iptables -t filter -S --wait
iptables v1.8.2 (nf_tables): table `filter' is incompatible, use 'nft' tool.
@mccv1r0 , @mheon , this is not a bug. It is more of a feature request in the upstream repo, i.e. containernetworking/plugins. Closing this one. Thank you for being responsive! 👍 😃
table `nat' is incompatible, use 'nft' tool - facing this issue while installing OpenStack via DevStack on Ubuntu 20.04
@ratulb , try using the plugins: https://github.com/greenpau/cni-plugins#getting-started
@greenpau the simplest solution is to use another table name rather than nat or filter. The actual nftables rules don't care about the table name; they use a hook. The main technical issue is that, for compatibility, it is allowed to create the filter/nat/raw/mangle tables directly via the nft tool. If such a table is created by the nft tool instead of iptables, I assume there is some data structure missing that iptables needs in order to run in parallel with nftables. (Again, that is an assumption rather than knowledge about a specific piece of code.) Simply naming/renaming the local nat table in nftables to main_nat, or any other name, will resolve the issue and will allow podman, and probably also docker, to run smoothly.
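The renaming suggestion above can be illustrated with a minimal nftables table sketch. The table name main_nat is the commenter's example; the interface name eth0 and the rule body are assumptions added for illustration. Because the NAT behavior is attached to the postrouting hook, the table's name is irrelevant to packet processing:

```
table ip main_nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifname "eth0" masquerade
    }
}
```

With the table named this way, iptables-nft can still create its own nat table without colliding with the locally defined one.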
To get rid of that libvirt error, my permanent workaround in Debian 11 (as a host) with the libvirtd daemon is to block the loading of iptables-related modules.
Create a file /etc/modprobe.d/nft-only.conf:
# Source: https://www.gaelanlloyd.com/blog/migrating-debian-buster-from-iptables-to-nftables/
#
blacklist x_tables
blacklist iptable_nat
blacklist iptable_raw
blacklist iptable_mangle
blacklist iptable_filter
blacklist ip_tables
blacklist ipt_MASQUERADE
blacklist ip6table_nat
blacklist ip6table_raw
blacklist ip6table_mangle
blacklist ip6table_filter
blacklist ip6_tables
The libvirtd daemon now starts without any error.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I use nftables; when starting a container I get:
Steps to reproduce the issue:
Describe the results you received:
Describe the results you expected:
No errors.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
Additional environment details (AWS, VirtualBox, physical, etc.):
Physical.