netbird 0.32.0 breaks K3s 1.32.2+k3s1 with flannel due to iptables conflicts #1439

Closed
opened 2025-11-20 05:30:20 -05:00 by saavagebueno · 12 comments
Owner

Originally created by @christian-schlichtherle on GitHub (Nov 21, 2024).

Describe the problem

We're operating an IoT project where some K3s nodes are placed at customer premises. We install Netbird 0.32.0 on each node first and then install K3s v1.32.2+k3s1 with flannel. When installing K3s, we pass flannel-iface=wt0 to make it use the Netbird interface for node-to-node communication.

This works to some extent, but there is a problem: when the Netbird service starts, it sets up its iptables rules, and flannel sets up its own. There seem to be conflicts between those rules, resulting in broken communication after every restart of the Netbird service, e.g. when installing an upgrade. As a workaround, I have to restart the k3s(-agent) service after every restart of the Netbird service.

Summing up, to restart all Netbird services in the cluster, I have to do something like this:

ansible k3s_server -b -m shell --forks 1 -a 'systemctl restart netbird && sleep 3 && systemctl restart k3s'
ansible k3s_agent -b -m shell -a 'systemctl restart netbird && sleep 3 && systemctl restart k3s-agent'

As you can imagine, this is not a sustainable solution, just a hacky workaround.
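Until there is a proper fix, one way to avoid the scripted double restart (my own assumption, not something suggested in this thread) is a systemd drop-in that propagates netbird restarts to k3s via PartOf=:

```ini
# /etc/systemd/system/k3s.service.d/10-netbird.conf (hypothetical drop-in)
[Unit]
# Start k3s after netbird, and propagate stop/restart of netbird.service
# to k3s.service, so "systemctl restart netbird" also restarts k3s.
After=netbird.service
PartOf=netbird.service
```

After a systemctl daemon-reload, restarting netbird should pull a k3s restart along automatically; on agent nodes the same drop-in would go under k3s-agent.service.d.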

Is this a known issue? What are my options? Wait for a fix or try another CNI like cilium?

To Reproduce

Steps to reproduce the behavior:

  • Install Netbird on a bunch of nodes
  • Install K3s on the nodes with flannel-iface=wt0
  • Restart only the netbird service and watch in-cluster communication break, e.g. you can't kubectl logs <any-pod> anymore.

Expected behavior

In-cluster communication should not break; Netbird should leave flannel's iptables rules alone.

Are you using NetBird Cloud?

Yes

NetBird version

0.32.0

NetBird status -dA output:

n/a

Do you face any (non-mobile) client issues?

Yes.

Screenshots

n/a

Additional context

See above.

saavagebueno added the client, triage-needed, k8s labels 2025-11-20 05:30:20 -05:00

@christian-schlichtherle commented on GitHub (Nov 21, 2024):

BTW: This is a long-standing problem, I just had no time to report it earlier.


@lixmal commented on GitHub (Nov 22, 2024):

Hi @christian-schlichtherle, can you post your iptables/nftables rules before and after your workaround?

iptables-save
nft list ruleset

You might need to install the nftables package for the nft tool to be available.


@christian-schlichtherle commented on GitHub (Nov 24, 2024):

@lixmal I have run these commands. Unfortunately, the output reveals too much sensitive information to share here, but to get a meaningful diff I processed the output as follows:

ansible my-worker-node -b -a 'iptables-save' > 10_iptables-save_before
ansible my-worker-node -b -a 'nft list ruleset' > 10_nft_list_ruleset_before
ansible my-worker-node -b -m service -a 'name=netbird state=restarted'
ansible my-worker-node -b -a 'iptables-save' > 20_iptables-save_after_netbird_restart
ansible my-worker-node -b -a 'nft list ruleset' > 20_nft_list_ruleset_after_netbird_restart
ansible my-worker-node -b -m service -a 'name=k3s-agent state=restarted'
ansible my-worker-node -b -a 'iptables-save' > 30_iptables-save_after_k3s_agent_restart
ansible my-worker-node -b -a 'nft list ruleset' > 30_nft_list_ruleset_after_k3s_agent_restart
for file in ??_iptables-save_*; do grep -v -e '^#' -e '^*' -e 'COMMIT' < $file | sort > $file.sorted; done
for file in ??_nft_list_ruleset_*; do grep -v -e '^#' -e '^\s*$' -e '^\s*table' < $file | sort > $file.sorted; done

This results in a set of *.sorted files which I could compare with a text diff. The result was that the only difference between the 10_*.sorted and 20_*.sorted files was in the packet counters. Yet pod-to-pod communication is definitely broken after restarting the netbird service. So now we know that it has nothing to do with iptables/nftables rules. I'm sorry for the misleading title of this issue.
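As a sanity check of that normalization, here is a synthetic example (made-up rules, not from the cluster): comments, the table header and COMMIT are dropped, while rule lines survive into the .sorted file. Note that chain declarations keep their [packets:bytes] counters, which is consistent with counters being the only diff observed.

```shell
# Synthetic iptables-save dump (made-up rules) to demonstrate the
# normalization used above: strip comments, table headers and COMMIT,
# then sort what remains.
cat > sample_iptables-save <<'EOF'
# Generated by iptables-save v1.8.9
*filter
:FORWARD ACCEPT [10:840]
-A FORWARD -i wt0 -j ACCEPT
-A FORWARD -i cni0 -j ACCEPT
COMMIT
EOF
grep -v -e '^#' -e '^\*' -e 'COMMIT' < sample_iptables-save | sort > sample.sorted
# sample.sorted now holds the three rule lines, including the chain
# declaration with its counters.
cat sample.sorted
```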

Another mistake I made in my original posting was to say that a restart of the netbird service breaks kubectl logs: that's not correct; the command still works (it doesn't require flannel). However, pod-to-pod communication is definitely broken. In our case, a client could not connect to another service anymore. After a final restart of the k3s-agent service, it worked again.

Summing up, a restart of the netbird service does break flannel, although the information in my original posting is not exactly correct. I hope this helps to reproduce the issue.


@nazarewk commented on GitHub (Apr 28, 2025):

Hello @christian-schlichtherle,

We're currently reviewing our open issues and would like to verify if this problem still exists in the latest NetBird version.

Could you please confirm if the issue is still there?

We may close this issue temporarily if we don't hear back from you within 2 weeks, but feel free to reopen it with updated information.

Thanks for your contribution to improving the project!


@christian-schlichtherle commented on GitHub (Apr 28, 2025):

We have abandoned Netbird for Netmaker.


@jitbasemartin commented on GitHub (Apr 30, 2025):

@nazarewk I don't know if it's related but I had this issue with netbird v0.43.0

Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: F0430 13:50:37.442393   52622 network_policy_controller.go:400] Failed to verify rule exists in FORWARD chain due to running [/var/lib/rancher/k3s/data/9c5025828cb319b4b3aed5bd59ed36df271b889c6ce6206e39e8a0f751afa816/bin/aux/iptables -t filter -C FORWARD -m comment --comment kube-router netpol - TEMCG2JMHZYE7H7T -j KUBE-ROUTER-FORWARD --wait]: exit status 3: Error: cmp sreg undef
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: iptables v1.8.9 (nf_tables): Parsing nftables rule failed
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: Perhaps iptables or your kernel needs to be upgraded.
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: panic: F0430 13:50:37.442393   52622 network_policy_controller.go:400] Failed to verify rule exists in FORWARD chain due to running [/var/lib/rancher/k3s/data/9c5025828cb319b4b3aed5bd59ed36df271b889c6ce6206e39e8a0f751afa816/bin/aux/iptables -t filter -C FORWARD -m comment --comment kube-router netpol - TEMCG2JMHZYE7H7T -j KUBE-ROUTER-FORWARD --wait]: exit status 3: Error: cmp sreg undef
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         iptables v1.8.9 (nf_tables): Parsing nftables rule failed
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         Perhaps iptables or your kernel needs to be upgraded.
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: goroutine 10939 [running]:
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: k8s.io/klog/v2.(*loggingT).output(0xb8a0ca0, 0x3, 0xc000a18c40, 0xc009eb0e70, 0x1, {0x911691d?, 0x2?}, 0xc00e0af4a0?, 0x0)
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/pkg/mod/github.com/k3s-io/klog/v2@v2.120.1-k3s1/klog.go:965 +0x734
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: k8s.io/klog/v2.(*loggingT).printfDepth(0xb8a0ca0, 0x3, 0xc000a18c40, {0x0, 0x0}, 0x1, {0x6b9425d, 0x32}, {0xc00b49c7c0, 0x2, ...})
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/pkg/mod/github.com/k3s-io/klog/v2@v2.120.1-k3s1/klog.go:767 +0x1f0
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: k8s.io/klog/v2.(*loggingT).printf(...)
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/pkg/mod/github.com/k3s-io/klog/v2@v2.120.1-k3s1/klog.go:744
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: k8s.io/klog/v2.Fatalf(...)
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/pkg/mod/github.com/k3s-io/klog/v2@v2.120.1-k3s1/klog.go:1655
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: github.com/cloudnativelabs/kube-router/v2/pkg/controllers/netpol.(*NetworkPolicyController).ensureTopLevelChains.func2({0x7a372e0, 0xc008f44460}, {0x6a2fa8d, 0x7}, {0xc014150660, 0x6, 0x6}, {0xc00be48ec0, 0x10}, 0x1)
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/pkg/mod/github.com/k3s-io/kube-router/v2@v2.2.1/pkg/controllers/netpol/network_policy_controller.go:400 +0x1b2
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: github.com/cloudnativelabs/kube-router/v2/pkg/controllers/netpol.(*NetworkPolicyController).ensureTopLevelChains(0xc00b20b7a0)
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/pkg/mod/github.com/k3s-io/kube-router/v2@v2.2.1/pkg/controllers/netpol/network_policy_controller.go:467 +0x1be9
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: github.com/cloudnativelabs/kube-router/v2/pkg/controllers/netpol.(*NetworkPolicyController).Run(0xc00b20b7a0, 0xc002e8d570, 0xc0010b3960, 0xc0063d1190)
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/pkg/mod/github.com/k3s-io/kube-router/v2@v2.2.1/pkg/controllers/netpol/network_policy_controller.go:168 +0x159
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]: created by github.com/k3s-io/k3s/pkg/agent/netpol.Run in goroutine 1
Apr 30 13:50:37 g-cd39ea4c86cb573f k3s[52622]:         /go/src/github.com/k3s-io/k3s/pkg/agent/netpol/netpol.go:184 +0x10c5
Apr 30 13:50:37 g-cd39ea4c86cb573f systemd[1]: k3s.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

Related issue: https://github.com/k3s-io/k3s/issues/11493


@mad73923 commented on GitHub (Jun 15, 2025):

Same problem here. Running 0.46.0


@mad73923 commented on GitHub (Jun 16, 2025):

Somebody wrote a patch related to this; I will check it:
https://gist.github.com/dkrhodes/040778116eedafe857ddc34a1ccdcfec


@mad73923 commented on GitHub (Jun 16, 2025):

This actually works (credits to @dkrhodes):

diff --git a/client/internal/routemanager/systemops/systemops_linux.go b/client/internal/routemanager/systemops/systemops_linux.go
index b48cfa24..94dd6af1 100644
--- a/client/internal/routemanager/systemops/systemops_linux.go
+++ b/client/internal/routemanager/systemops/systemops_linux.go
@@ -55,8 +55,9 @@ type ruleParams struct {
 
 func getSetupRules() []ruleParams {
        return []ruleParams{
-               {100, 0, syscall.RT_TABLE_MAIN, netlink.FAMILY_V4, false, 0, "rule with suppress prefixlen v4"},
-               {100, 0, syscall.RT_TABLE_MAIN, netlink.FAMILY_V6, false, 0, "rule with suppress prefixlen v6"},
+               // fix conflict with cilium rule
+               {105, 0, syscall.RT_TABLE_MAIN, netlink.FAMILY_V4, false, 0, "rule with suppress prefixlen v4"},
+               {105, 0, syscall.RT_TABLE_MAIN, netlink.FAMILY_V6, false, 0, "rule with suppress prefixlen v6"},
                {110, nbnet.ControlPlaneMark, NetbirdVPNTableID, netlink.FAMILY_V4, true, -1, "rule v4 netbird"},
                {110, nbnet.ControlPlaneMark, NetbirdVPNTableID, netlink.FAMILY_V6, true, -1, "rule v6 netbird"},
        }

My setup:

  • k3s
  • cilium
  • netbird

any chance to get this merged/fixed?
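For context on why bumping the priority helps (my reading of the patch above, not confirmed by the maintainers): NetBird installs its "suppress prefixlen" policy-routing rules at priority 100, which apparently collides with a rule Cilium installs at the same priority; moving NetBird's rules to 105 avoids the tie. The rule table can be inspected on an affected node with iproute2:

```shell
# Diagnostic sketch (assumes Linux with iproute2): list policy routing
# rules; the leading number is the priority. With both cilium and
# netbird running, look for distinct rules sharing priority 100.
ip rule show | sort -n
ip -6 rule show | sort -n
```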


@aldenparker commented on GitHub (Sep 16, 2025):

Which version solved this? I am getting something similar on 0.56.0. I love netbird, but I need to make it work with k3s.

Sep 16 20:57:30 y-6602280 k3s[34373]: panic: F0916 20:57:30.459805   34373 network_policy_controller.go:398] Failed to verify rule exists in FORWARD chain due to running [/nix/store/dll2vraq2bk5g01q898cgfjykkc3prnp-iptables-1.8.11/bin/iptables -t filter -C FORWARD -m comment --comment kube-router netpol - TEMCG2JMHZYE7H7T -j KUBE-ROUTER-FORWARD --wait]: exit status 3: Error: cmp sreg undef
Sep 16 20:57:30 y-6602280 k3s[34373]:         iptables v1.8.11 (nf_tables): Parsing nftables rule failed
Sep 16 20:57:30 y-6602280 k3s[34373]:         Perhaps iptables or your kernel needs to be upgraded.

@codedge commented on GitHub (Oct 5, 2025):

It is also unclear to me in which version this was solved. I ran into the same issue using netbird 0.59.2 with k3s 1.34.1+k3s1.


@mad73923 commented on GitHub (Oct 5, 2025):

Please reopen this!
After upgrading netbird from 0.58.2 to 0.59.2, with:

  • Cilium Helm chart version: 1.17.8
  • k3s version: v1.33.5+k3s1

the error (as stated above) is back:

E1005 15:10:57.113147 3201084 proxier.go:857] "Failed to ensure chain jumps" err=<
        error checking rule: exit status 3: Ignoring deprecated --wait-interval option.
        Error: cmp sreg undef
        iptables v1.8.11 (nf_tables): Parsing nftables rule failed
        Perhaps iptables or your kernel needs to be upgraded.
 > table="filter" srcChain="FORWARD" dstChain="KUBE-EXTERNAL-SERVICES"
I1005 15:10:57.113224 3201084 proxier.go:820] "Sync failed" retryingTime="30s"

After reverting the upgrade, the error is gone and the cluster services are reachable from outside again.

using the following k3s args:

--flannel-backend=none --disable-network-policy --prefer-bundled-bin
Reference: SVI/netbird#1439