Bug 12563 - Core 153 breaks outgoing video conferencing (zoom, fb messenger and whatsapp)
Summary: Core 153 breaks outgoing video conferencing (zoom, fb messenger and whatsapp)
Status: CLOSED FIXED
Alias: None
Product: IPFire
Classification: Unclassified
Component: ---
Version: 2
Hardware: unspecified All
Importance: - Unknown - Major Usability
Assignee: Michael Tremer
QA Contact:
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2021-01-07 21:53 UTC by clgedeon
Modified: 2022-03-30 07:09 UTC
CC List: 10 users

See Also:


Attachments
Screenshot of a Zoom session (2.51 MB, image/png)
2021-01-11 14:53 UTC, Michael Tremer
Screenshot of Zoom session from pc 1 (541.29 KB, image/png)
2021-01-11 19:07 UTC, clgedeon
Screenshot of Zoom session from pc 2 (415.48 KB, image/png)
2021-01-11 19:08 UTC, clgedeon
unbound lines extracted from /var/log/messages (11.40 KB, text/plain)
2021-01-15 22:26 UTC, clgedeon
log of unbound patch try (1.86 KB, text/plain)
2021-01-27 18:15 UTC, clgedeon
console log of rebuilding initrd (1.77 KB, text/plain)
2021-01-27 18:34 UTC, clgedeon
screen shot of patched rebuild kernel not booting (1.73 MB, image/jpeg)
2021-01-27 18:35 UTC, clgedeon
console log of correctly rebuilding initrd (2.46 KB, text/plain)
2021-01-29 02:05 UTC, clgedeon
discord log file (141.71 KB, text/plain)
2021-04-12 03:03 UTC, Jon
PCAPs of DTLS handshake failure on 153, success on 152 (7.04 KB, application/zip)
2021-04-25 18:42 UTC, Brian Pruss
PCAPs of Google Meet video failure on 156, success on 152 for comparison (1.90 MB, application/x-zip-compressed)
2021-06-21 00:12 UTC, Brian Pruss
PCAP of Google Meet video failure on 157, output from 'ip link', 'iptables', and 'netstat' (1.81 MB, application/x-zip-compressed)
2021-07-10 23:50 UTC, Brian Pruss
PCAPs of Google Meet video failure on c157 with the latest dev kernel, and on testing c159 (4.12 MB, application/x-zip-compressed)
2021-07-12 13:24 UTC, Brian Pruss
Proposed patch for dhcpcd (622 bytes, patch)
2022-02-22 12:03 UTC, Michael Tremer

Description clgedeon 2021-01-07 21:53:46 UTC
Overview:

After applying Core Update 153 on Dec. 22nd, communications with Zoom meetings, FB Messenger and WhatsApp broke. When clients connected from inside the LAN to Zoom (desktop or mobile alike), WhatsApp or Messenger, people outside the LAN would not see them, only a black screen. Not surprisingly, even people inside the LAN could not see each other.

Resetting modems, switches, and even the IPFire machine itself did not help.

Stopping Guardian, the IPS and location block (the only services that we use) did not help either.

Upgrading the Zoom client did not do the trick. Mobile apps are up to date.

We noticed, however, that video conferencing through a web browser did work, using Jitsi or Whereby. We also checked, using speedtest.net, the provider's speed tool and other sites, that the expected bandwidth was there and everything was OK. This rules out possible bandwidth problems.

Reverting to Core 152 last Monday solved the problem and now all video conferencing clients of various types work again both ways.

Others seem to have faced similar issues (https://community.ipfire.org/t/google-meet-stopped-working-after-core-update-152-153/4272) with Google Meet and Discord, and reverting to C152 has also solved the issue for them.

Steps to Reproduce:
1) Install C153
2) Connect from inside the LAN to a Zoom meeting

Actual Results:
For any other meeting member, the video of a user connected from inside the LAN appears completely black.

Expected Results:
The video of a user connected from inside the LAN should appear like that of other users.

Build Dates & Hardware:
Various Android versions and Linux builds alike
Comment 1 clgedeon 2021-01-07 23:00:37 UTC
As a follow-up: I realise that, in fact, we also use the time server and OpenVPN. However, the problem occurs whether or not there are OpenVPN clients connected to the LAN.
Comment 2 Jon 2021-01-08 17:53:30 UTC
FYI...
I have core 153. I just signed up for a Zoom account, installed the Zoom app on my iMac, and did a 5 minute video call with my son (he is at work). All worked A-OK on both sides. I did not try the Zoom browser version.

Jon iMac --> IPFire --> Cable Modem --> Internet ->> Son at work

So maybe there is something configured differently in the IPFire box??

My side:
Guardian - No
IPS - RUNNING on red only.
Location Block - enabled
Firewall Rules - all off except for DNS/NTP redirect (testing)
P2P networks - all disabled (not checked)
Default firewall behaviour - FORWARD & OUTGOING is Allowed
Web Proxy - Transparent
URL Filter - not enabled (not checked)
Comment 4 Michael Tremer 2021-01-11 14:09:12 UTC
Is anyone able to provide more technical detail?

We have been using Zoom on a regular basis at the office here without any problems.

How does Zoom connect to other clients on the same network?
Comment 5 Michael Tremer 2021-01-11 14:53:35 UTC
Created attachment 847 [details]
Screenshot of a Zoom session

I just hosted a zoom session on the local network, two computers behind an IPFire firewall and two phones on 4G joining the same session.

We did not see any problems. Therefore I cannot reproduce this problem. But I need to shave.
Comment 6 clgedeon 2021-01-11 19:07:37 UTC
Created attachment 848 [details]
Screenshot of Zoom session from pc 1
Comment 7 clgedeon 2021-01-11 19:08:04 UTC
Created attachment 849 [details]
Screenshot of Zoom session from pc 2
Comment 8 clgedeon 2021-01-11 19:09:04 UTC
Hello,

We just ran a session with the problematic C153: 2 PCs from green, 1 mobile from blue and 1 PC outside the LAN.

The first screenshot is from PC 1 and the second from PC 2. In both screenshots, the third icon is that of the external PC, and the fourth is the mobile.

As you can see, internal devices can't see each other. The external PC could only see black screens for the internal devices.

Screenshot 2 shows that the external connection is OK, while screenshot 1 shows that the internal connection is bad (!), as well as inconsistencies between the icons and the speaker view and in the identification of the speaker.

This strange behaviour never shows up with Core 152.
Comment 9 clgedeon 2021-01-12 02:17:27 UTC
This afternoon, I made a backup of the IPFire installation and then began the following process:
Make a fresh install of C153; add Google DNS as the ISP's is broken; allow GREEN
  to access RED.
--> Test zoom session: everything works as expected
Next restore config from backup
--> Test zoom session: everything works as expected
Next install mc package
--> Test zoom session: everything works as expected
Next install lshw package
--> Test zoom session: everything works as expected
Next install wio package
--> Test zoom session: everything works as expected
Next install guardian (which triggers install of three perl libraries)
--> Test zoom session: everything works as expected
Reboot ipfire box
--> Test zoom session: session is broken with black screens again
  Well, I noticed during the shutdown, that it was reported that IPS was not
  running. Perhaps I should have rebooted after restoring settings.
Uninstall guardian and three perl libraries and reboot
--> Test zoom session: everything works as expected
Install guardian and related libraries and reboot
--> Test zoom session: session is broken with black screens again
Disable guardian and reboot
- Test zoom session: everything works as expected

Obviously, Guardian is the culprit as far as the fresh install is concerned.

Yet, I must mention that between Dec. 22nd and Jan. 4th, I disabled Guardian, the IPS and location block altogether and rebooted. Maybe the fact that it was an upgrade from 152 makes the behaviour different?

Anyway, for the record the installation is as follows:

GREEN+RED+BLUE+ORANGE

Guardian - No (if yes, problems occur)
IPS - RUNNING on red only.
Location Block - enabled
Firewall Rules - allow access to red from lan partitions 
                 allow access to DNS and NTP on ipfire from green and blue
P2P networks - all disabled (not checked)
Default firewall behaviour untouched
Web Proxy - not enabled
URL Filter - not enabled (not checked)
Openvpn - Running on red and blue
dhcp - Running on blue
Comment 10 Michael Tremer 2021-01-12 10:34:02 UTC
That is a great testing report :) Thank you for investing that time.

Could you post any Guardian log files? Generally, Guardian should not interfere with this at all, and we haven't changed it in Core Update 153. Therefore I am very surprised; something else must have changed that affects this.

I will loop in Stefan who is the maintainer for Guardian.
Comment 11 Stefan Schantl 2021-01-12 17:49:02 UTC
Hello Michael, Hello @all,

I cannot imagine how Guardian could affect the audio/video stream here.

When activated, it only detects whether a brute-force attack against the local SSH login or WUI is running and blocks the offending host.

However, the easiest way to check whether Guardian really affects the streams in any way is to switch the "Logtarget" to "File" and set the "loglevel" for Guardian to "Debug".

Then please restart Guardian and run your testing scenario once again. If Guardian causes trouble here, there will be entries in the log file.

If so, please attach the logfile to this bug report for further debugging.

Thanks in advance,

-Stefan
Comment 12 clgedeon 2021-01-12 22:46:34 UTC
Hello,

Bad news today. I received an email from inside the LAN that the problem reappeared again this morning between two LAN-connected clients of a Zoom session. Yet I was also told that a Zoom meeting between 2 PCs, one inside and one outside, was OK. I have no more ideas on how to debug this.

Fortunately, I had installed C152 on a spare disk, and it is still there. We will revert to it. But I have saved all logs and will try to see if I can find a clue. Yet I am not a specialist.
Comment 13 Michael Tremer 2021-01-13 11:15:47 UTC
(In reply to clgedeon from comment #12)
> Bad news today. I received email from inside LAN that problem reappeared
> again this morning between two LAN connected client of a zoom session. Yet I
> was also told that a zoom meeting between 2 pc one inside and one outside
> was ok. I have no more ideas on how to debug this.

Do you still have any log files from the IPS? Without being able to reproduce this problem myself there isn't much that I can do.
Comment 14 clgedeon 2021-01-15 21:37:49 UTC
Hello,

Some feedback on the tests I performed yesterday. Well, I was totally unable to reproduce the problem that was reported to me (two LAN computers unable to see each other in a Zoom session). I even started Guardian again and did not see any of the behaviour I had during those tests: https://bugzilla.ipfire.org/show_bug.cgi?id=12563#c9. (I will comment on this in the next post.)

I decided to suspect an intermittent hardware failure, as the box is rather old (~12 years). Since we have for now, because of mandatory remote work, a number of unused, more recent and more powerful desktops, I decided to temporarily move IPFire's disk and NIC into one of them. I guess that if nothing nasty happens within two weeks, we can conclude it was a hardware failure, at least in our case.
Comment 15 clgedeon 2021-01-15 22:26:30 UTC
Created attachment 850 [details]
unbound lines extracted from /var/log/messages
Comment 16 clgedeon 2021-01-15 22:27:35 UTC
(In reply to Michael Tremer from comment #13)
 
> Do you still have any log files from the IPS? Without being able to
> reproduce this problem myself there isn't much that I can do.

I did a backup of the whole /var/log tree. Strangely, the IPS log (suricata?) contains only one file with a single line whose timestamp is totally irrelevant to the problem. So, obviously, nothing relevant there. The Guardian log is empty.

However, I noticed strange unbound lines in /var/log/messages corresponding exactly to the sequence:
   Reboot ipfire box
1  --> Test zoom session: session is broken with black screens again
     Well, I noticed during the shutdown, that it was reported that IPS was not
     running. Perhaps I should have rebooted after restoring settings.
   Uninstall guardian and three perl libraries and reboot
2  --> Test zoom session: everything works as expected
   Install guardian and related libraries and reboot
3  --> Test zoom session: session is broken with black screens again
   Disable guardian and reboot
4  - Test zoom session: everything works as expected
described in https://bugzilla.ipfire.org/show_bug.cgi?id=12563#c9.

In both steps 1 and 3, unbound seems to have problems with the zoom.us domain. But these problems disappear afterwards; even though people reported problems on January 12th, nothing shows up anymore until we switched back to C152, late in the afternoon on Jan 12th.

In https://bugzilla.ipfire.org/attachment.cgi?id=850 you have the extracted unbound lines. I have indicated here where the steps occurred. By the way, I did stop the box after step 4 to close the lid (shown as step 5).

=================== step 1
Jan 11 16:43:19 passerelle2 unbound: [2138:0] notice: init module 0: validator
Jan 11 16:43:19 passerelle2 unbound: [2138:0] notice: init module 1: iterator
...
=================== step 2
Jan 11 16:43:35 passerelle2 unbound: [2138:0] notice: init module 0: validator
Jan 11 16:43:36 passerelle2 unbound: [2138:0] notice: init module 1: iterator
...
=================== step 3
...
=================== step 4
Jan 11 16:50:46 passerelle2 unbound: [2140:0] notice: init module 0: validator
Jan 11 16:50:46 passerelle2 unbound: [2140:0] notice: init module 1: iterator
...
=================== step 5
Jan 11 17:04:47 passerelle2 unbound: [2139:0] notice: init module 0: validator
Jan 11 17:04:47 passerelle2 unbound: [2139:0] notice: init module 1: iterator
Comment 17 Michael Tremer 2021-01-25 21:37:57 UTC
Thank you for looking into this. I am not sure whether it is related or not, because there are no Zoom hosts that could not be resolved.

However, there is a patch that will fix this by making unbound keep trying to reach the DNS servers:

> https://patchwork.ipfire.org/patch/3786/

Why did you lose connectivity, though? The IPS could have dropped the requests, or the link could have dropped them, or the DNS resolvers actually did not respond. Do you have an unstable internet connection by any chance?
Comment 18 Arne.F 2021-01-27 11:13:19 UTC
I found a patch in the kernel changelog that may be related to this:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=v4.14.217&id=98e7abff10db3c791e133a45bc40d83f64ac4e70

You can try an experimental kernel update:
https://people.ipfire.org/~arne_f/highly-experimental/udp-ip-problem/

Untar it to the root of the IPFire system (tar xvf FILE -C /), rebuild the initrd and update GRUB:
grub2-mkconfig -o /boot/grub/grub.cfg
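For clarity, a minimal sketch of the whole procedure, assuming the kernel in the tarball is 4.14.217-ipfire (matching the mkinitrd invocation later corrected in comment 26); KERNEL_TARBALL is a placeholder for the file downloaded from the URL above:

```
# unpack the experimental kernel over the running system
tar xvf KERNEL_TARBALL -C /

# rebuild the initrd for the new kernel version (assumed to be 4.14.217-ipfire)
mkinitrd /boot/initramfs-4.14.217-ipfire.img 4.14.217-ipfire

# regenerate the GRUB configuration so the new kernel is offered at boot, then reboot
grub2-mkconfig -o /boot/grub/grub.cfg
reboot
```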
Comment 20 clgedeon 2021-01-27 18:15:30 UTC
Created attachment 854 [details]
log of unbound patch try
Comment 21 clgedeon 2021-01-27 18:22:31 UTC
(In reply to Michael Tremer from comment #17)
> Thank you for looking into this. I am not sure whether it is related or not
> because there are no Zoom hosts that could not be resolved.
> 
> However, there is a patch that will fix that unbound keeps trying to reach
> the DNS servers:
> 
> > https://patchwork.ipfire.org/patch/3786/
> 
> Why did you loose connectivity though? The IPS could have dropped the
> requests, or the link could have dropped them, or the DNS resolvers actually
> did not respond. Do you have an unstable internet connection by any chance?

Hello,
I did try the patch this morning, but unbound could not start, complaining about the unknown infra-keep-probing option, as shown in the console output in attachment https://bugzilla.ipfire.org/attachment.cgi?id=854

Now I noticed that if requests are sent too rapidly after IPFire startup, those lines do show up, but after, say, 2 minutes, they don't show up anymore. So it might well not be related.
Comment 22 Michael Tremer 2021-01-27 18:26:54 UTC
Hello,

For testing, you can simply remove line 64 in /etc/unbound/unbound.conf and restart the daemon. We will investigate this in another bug.
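A minimal sketch of that, assuming a stock IPFire layout where unbound is controlled by the /etc/init.d/unbound init script:

```
# keep a backup, drop line 64 (the offending option), then restart unbound
cp /etc/unbound/unbound.conf /etc/unbound/unbound.conf.bak
sed -i '64d' /etc/unbound/unbound.conf
/etc/init.d/unbound restart
```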

Did you install the kernel that Arne uploaded?
Comment 23 clgedeon 2021-01-27 18:34:36 UTC
Created attachment 855 [details]
console log of rebuilding initrd
Comment 24 clgedeon 2021-01-27 18:35:54 UTC
Created attachment 856 [details]
screen shot of patched rebuild kernel not booting
Comment 25 clgedeon 2021-01-27 18:36:16 UTC
(In reply to Arne.F from comment #18)
> I found a patch in the kernel changelog that is maybee related to this:
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/
> ?h=v4.14.217&id=98e7abff10db3c791e133a45bc40d83f64ac4e70
> 
> You can try a experimental kernel update:
> https://people.ipfire.org/~arne_f/highly-experimental/udp-ip-problem/
> 
> untar it to the root of the IPFire (tar xvf FIILE -C / ) and rebuild the
> initrd.
> grub2-mkconfig -o /boot/grub/grub.cfg

Hello,
I tried the kernel with no luck.
1) Sorry, I'm a newbie as far as kernel manipulations are concerned. I downloaded it and tried to figure out how to rebuild the initrd (see the attachment for the console output) and updated GRUB. However, the kernel would not boot.
2) I decided to replace the initrd I had rebuilt with the one downloaded and try it without rebuilding. IPFire did start correctly. A first test of a Zoom session did work. However, all following ones showed black screens as initially reported. No luck.
Comment 26 Michael Tremer 2021-01-27 18:39:19 UTC
(In reply to clgedeon from comment #25)
> (In reply to Arne.F from comment #18)
> > I found a patch in the kernel changelog that is maybee related to this:
> > https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/
> > ?h=v4.14.217&id=98e7abff10db3c791e133a45bc40d83f64ac4e70
> > 
> > You can try a experimental kernel update:
> > https://people.ipfire.org/~arne_f/highly-experimental/udp-ip-problem/
> > 
> > untar it to the root of the IPFire (tar xvf FIILE -C / ) and rebuild the
> > initrd.
> > grub2-mkconfig -o /boot/grub/grub.cfg
> 
> Hello,
> 1) Sorry. Im a newbe as far as kernel manipulations are concerned. I
> dowloaded and tried to figure out howto rebuild initrd. (see attachement for
> console output) and updated grub. However, the kernel would not boot.

No worries, you have a typo in the version number of the kernel:

> [root@myhost boot]# mkinitrd /boot/initramfs-4.14.217-ipfire.img 414.217-ipfire

The last "414" should be "4.14" and then the kernel should boot.
Comment 27 clgedeon 2021-01-27 19:10:30 UTC
Some updates concerning tests I did last week.

Actually, 10 days ago, I decided to see if it could be a hardware problem and put the disk and NICs in more recent hardware that was unused because of the lockdown. Problems did however show up, ruling out the hypothesis of a hardware failure.

We also tried this: after having verified that the problem did occur, we replaced IPFire with a TP-Link Archer 50 router. The problem stopped. We removed the TP-Link router and put IPFire back. The problem reappeared! This rules out internal network problems, as does the fact that C152 works pretty well...

I tried stopping OpenVPN altogether (previously, I had only checked that having OpenVPN clients connected or not did not influence the result). But here too, the problem persisted with the OpenVPN service stopped.

I also verified the internet connection speed when the problem occurred, and got 50/200 up/down, which is what it is supposed to be.

Finally, last Friday I did a new install of C153, but redid all the configuration by hand instead of restoring it from backup, on the assumption that maybe, over the course of successive upgrades, something weird had crept into some file without showing up until now. With this install in place since then, we had a Zoom meeting last Sunday with about 15 connections, one of which was from inside the LAN, and everything worked as expected. People inside the LAN, however, are on vacation this week, so I won't be able to certify that it consistently works for about another 10 days.
Comment 28 clgedeon 2021-01-27 19:13:38 UTC
(In reply to Michael Tremer from comment #26)
> No worries, you have a typo in the version number of the kernel:
> 
> > [root@myhost boot]# mkinitrd /boot/initramfs-4.14.217-ipfire.img 414.217-ipfire
> 
> The last "414" should be "4.14" and then the kernel should boot.

Thank you! I will try this the next time I can access the LAN physically. I prefer to have physical access to the box, just in case I make some other mistake...
Comment 29 Michael Tremer 2021-01-27 21:15:24 UTC
May I ask for the output of "ip link" on your IPFire system? It looks like your system is fragmenting packets and that might be broken, but I am not sure why that is yet.
Comment 30 clgedeon 2021-01-29 02:04:08 UTC
(In reply to Michael Tremer from comment #29)
> May I ask for the output of "ip link" on your IPFire system. It looks like
> your system is fragmenting packets and that might be broken, but I am not
> sure why that is, yet.

Here it is:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: red0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:14:2a:7a:01:75 brd ff:ff:ff:ff:ff:ff
3: green0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether d8:47:32:89:f8:f3 brd ff:ff:ff:ff:ff:ff
4: blue0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether f0:b4:d2:5a:d2:f2 brd ff:ff:ff:ff:ff:ff
5: orange0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0a:cd:35:ae:ea brd ff:ff:ff:ff:ff:ff
6: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/none 

Note: we have set up orange zone but it is not used for the time being.
Comment 31 clgedeon 2021-01-29 02:05:03 UTC
Created attachment 857 [details]
console log of correctly rebuilding initrd
Comment 32 clgedeon 2021-01-29 02:33:07 UTC
Some comments about testing the patched kernel. It seems that this kernel made internet browsing extremely slow. A simple Google query (normally 1 or 2 seconds) took more than 10 s. A page like https://ici.radio-canada.ca/ was less than 25% complete after 2 minutes. I tested a Zoom session which, surprisingly, seemed to work.

After that, I performed some tests on the fresh, unpatched, hand-configured C153. Simple test: mobile on blue, desktop on green. I noticed that when the Zoom session is initiated from blue, the session seems to always work OK, while when initiated from green it sometimes works OK and is sometimes broken. Before leaving work, to be sure, I ran 5 sessions in a row initiated from blue. All worked. Then I ran 5 sessions in a row initiated from green. The 2nd and 5th were broken. I didn't have time for more testing. However, if this proves to be consistent behaviour, it might explain why the session I report at the end of https://bugzilla.ipfire.org/show_bug.cgi?id=12563#c27 worked, because, in fact, that session was initiated from outside the LAN. I wonder, though, whether it is relevant and can really make sense.
Comment 33 Michael Tremer 2021-01-29 11:06:29 UTC
(In reply to clgedeon from comment #30)

> Here it is:
> 2: red0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode
> DEFAULT group default qlen 1000
>     link/ether 00:14:2a:7a:01:75 brd ff:ff:ff:ff:ff:ff

This looks as expected. So the kernel is not fragmenting any UDP packets here. At least not the IPFire kernel. It is quite likely that this is happening further down the line.
Comment 34 Jon 2021-03-29 16:26:50 UTC
From community: "I can also confirm that these problems appeared after the update with the following applications for me: Microsoft Teams, Discord, Fortnite.

I updated from core 152 to 155 today."

https://community.ipfire.org/t/after-core-153-discord-cam-no-longer-working/4182/14?u=jon
Comment 35 Michael Tremer 2021-03-29 19:57:28 UTC
I still do not have any evidence to investigate this any further...
Comment 37 Michael Tremer 2021-04-12 09:09:58 UTC
I am going to ignore this report until someone is able to capture a packet trace showing this problem. I cannot reproduce this and I cannot work with "It just doesn't work".

The discord log file shows nothing useful whatsoever.
Comment 38 Jon 2021-04-12 18:58:59 UTC
Can someone put together an example command to capture a packet trace?

I didn't want the user to create a HUGE file by typing in the wrong trace command.  
Maybe:

tshark -f "host <clientIP>" -w capture-output.pcap
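As a rough sketch (not a tested recipe), the same capture with an explicit interface and autostop conditions so the file cannot grow without bound; the interface name and client address below are placeholders:

```
# capture only traffic to/from the affected client on the internal interface,
# and stop automatically after 5 minutes or roughly 50 MB, whichever comes first
tshark -i green0 -f "host 192.168.1.50" \
       -a duration:300 -a filesize:51200 \
       -w /root/capture-output.pcap
```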
Comment 39 Brian Pruss 2021-04-25 18:39:44 UTC
(In reply to Michael Tremer from comment #37)
> I am going to ignore this report until someone is able to capture a packet
> trace showing this problem. I cannot reproduce this and I cannot work with
> "It just doesn't work".
> 
> The discord log file shows nothing useful whatsoever.

I posted a set of PCAPs in the forum thread a while ago: https://community.ipfire.org/t/google-meet-stopped-working-after-core-update-152-153/4272/39?u=prussbw . I will attach the file to this report as well.

These have been scrubbed/anonymized, but I can send you the originals directly if you need them.
Comment 40 Brian Pruss 2021-04-25 18:42:51 UTC
Created attachment 881 [details]
PCAPs of DTLS handshake failure on 153, success on 152
Comment 41 Scott McCaffrey 2021-06-08 00:16:25 UTC
Thanks to all who have contributed over the years; I am still having the same issues. I don't know how to help as I am more of a user and less of a contributor.

Teams, Meet and Zoom all break on any update past 152, up to 156, and on 157 Experimental as well.

What I have noticed is that other users outside of my location are unable to see me, and my video towards them freezes. Reverting back to 150 or 152 and restoring an old backup fixes the issue.
Comment 42 Michael Tremer 2021-06-08 09:07:17 UTC
Hmm, okay. Thank you all for your feedback.

There is a new kernel in 157, which could have made a change. Has anyone tried the new 5.10.x kernel yet?

(In reply to Brian Pruss from comment #40)
> PCAPs of DTLS handshake failure on 153, success on 152

Those look like they were taken on the GREEN/BLUE interface. Would you be able to capture on the RED interface simultaneously, to see what kind of packet does not make it through?
Comment 43 Brian Pruss 2021-06-21 00:10:50 UTC
Update: I’ve captured a success and failure case, with traces taken from both the Green and Red subnets each time. I’ve found that there is a difference in the fragmentation of outgoing packets between 152 and 156.

I’ve recently managed to get a Red/Green capture of a failed Google Meet session running through Core Update 156, and then rolled back to 152 to get a Red/Green capture of successful session. Some notes:

    - In all cases, I set up a Meet session on my phone connected to LTE (not connected to WiFi and not going through IPFire). I then connected to that session from a PC connected to IPFire via wired Ethernet.
    - The fail-case symptoms I noted were slightly different from my original issue. Previously, I was unable to send or receive audio or video on the PC. This time, I was able to receive video and send audio, but was still unable to send video. This is similar to the issue I saw with Discord video chat.
    - I also no longer see the DTLS handshake failure. I don’t have an explanation for that - perhaps something changed on Google’s side that affected it.
    - The difference I’ve found between 152 and 156, is that in 156, UDP packets over a certain size (>580 bytes) are getting fragmented when traversing the firewall, whereas in 152 they are passed whole. In both cases the packets are smaller than most common MTUs, so I don’t have an explanation for the change.
    - I theorize that these larger packets are part of the outgoing video stream, and Google is rejecting them when fragmented. The smaller packets are likely the audio stream and are not fragmenting, thus why I can get audio out but not video.

The captures that I will upload shortly have again been trimmed and anonymized. Please reach out to me directly if you need the originals.
Comment 44 Brian Pruss 2021-06-21 00:12:00 UTC
Created attachment 912 [details]
PCAPs of Google Meet video failure on 156, success on 152 for comparison
Comment 45 Michael Tremer 2021-06-21 16:43:10 UTC
Thank you, Brian, for your effort. I would agree with your assessment; however, I do not have an idea what could cause this.

Today we have released c157, which comes with an updated kernel. Could you please install it and test?
Comment 46 Brian Pruss 2021-07-10 23:50:23 UTC
Created attachment 927 [details]
PCAP of Google Meet video failure on 157, output from 'ip link', 'iptables', and 'netstat'

I’ve just tried 157, and have confirmed that I am still seeing the issue.

I’m attaching a new set of PCAPs, but they just show the same fragmentation issue as before. I’ve also thrown in the output from ‘ip link’, ‘netstat -s’, ‘iptables -L’ and ‘iptables -S’ in case you can find anything useful in them. All I’ve found is that the MTU is still set to 1500 as expected, so I can’t figure out why fragmentation is being applied. (I also did a “grep -IR /var/log/” and verified that nothing showed up.)

I’ll leave 157 in place until tomorrow evening local time (US), so if there’s anything you want me to try, please let me know. I’ll need to roll it back to 152 by Monday morning because I need Meet for work.
Comment 47 Michael Tremer 2021-07-11 13:18:09 UTC
Thank you. Could you please as well test c159 which brings a new kernel:

> https://nightly.ipfire.org/next/latest/

The whole IPFire team had a video conference the other day, with a Jitsi server hosted behind an IPFire instance in the data center, and every one of us was behind their own IPFire firewall, too. We did not have any issues whatsoever.
Comment 48 Brian Pruss 2021-07-11 21:07:03 UTC
(In reply to Michael Tremer from comment #47)
> Thank you. Could you please as well test c159 which brings a new kernel:
> 
> > https://nightly.ipfire.org/next/latest/
> 
> The whole IPFire team had a video conference the other day with a Jitsi
> server being hosted being an IPFire instance in the data center and everyone
> of us was behind their own IPFire firewalls, too. We did not have any issues
> whatsoever.

I have tried running c159, but I saw the same behavior. Before that I also tried updating the c157 kernel to 5.10.45 using the package at https://people.ipfire.org/~arne_f/highly-experimental/kernel-5.10/. Unfortunately that did not make a difference either. In both cases I captured a set of PCAPs showing the same fragmentation behavior, but at this point I don't think posting them will add much to the discussion. (Let me know if you want me to post them anyway.)

I need to roll back to 152 now, but I'm still open to trying again later.

Thanks again for your attention on this.
Comment 49 Michael Tremer 2021-07-12 10:34:13 UTC
(In reply to Brian Pruss from comment #48)
> (In reply to Michael Tremer from comment #47)
> > Thank you. Could you please as well test c159 which brings a new kernel:
> > 
> > > https://nightly.ipfire.org/next/latest/
> > 
> > The whole IPFire team had a video conference the other day with a Jitsi
> > server being hosted being an IPFire instance in the data center and everyone
> > of us was behind their own IPFire firewalls, too. We did not have any issues
> > whatsoever.
> 
> I have tried running ca159, but I saw the same behavior. Before that I also
> tried updating the c157 kernel to 5.10.45 using the package at
> https://people.ipfire.org/~arne_f/highly-experimental/kernel-5.10/.
> Unfortunately that did not make a difference either. In both cases I
> captured a set of PCAPs showing the same fragmentation behavior, but at this
> point I don't think posting them will add much to the discussion. (Let me
> know if you want me to post them anyway.)

Thanks for testing. If you collected the traces, please post them. If not, I guess that is fine, too.

> I need to roll back to 152 now, but I'm still open to trying again later.
> 
> Thanks again for your attention on this.

I guess we can conclude that we are probably not looking at a kernel issue anymore, because I do not think that this problem could have survived for that long.

It could be a side effect of other features. Did you try disabling certain things like QoS, the IPS, etc.? Basically anything that deals with the packets?
Comment 50 Brian Pruss 2021-07-12 13:24:43 UTC
Created attachment 928 [details]
PCAPs of Google Meet video failure on c157 with the latest dev kernel, and on testing c159
Comment 51 Brian Pruss 2021-07-12 13:41:13 UTC
(In reply to Michael Tremer from comment #49)
> Thanks for testing. If you collected the traces, please post them. If not, I
> guess that is fine, too.

Done.

> I guess we can conclude that we are probably not looking for a kernel issue
> anymore, because I do not think that this problem can survive for that long.
> 
> It could be a side-effect of any other features. Did you try to disable
> certain things like QoS, IPS, etc.? Basically anything that deals with the
> packets?

I did not try that this time, but I did try that on 153 back when I first started seeing the issue. I am running QoS and IPS, but no Proxy. OpenVPN is configured but disabled (not using it for the time being but probably will again in the future). I only have one extra firewall rule, which blocks WUI access from the Blue subnet (which is what I use for guest wireless devices).
Comment 52 Scott McCaffrey 2021-08-04 15:02:24 UTC
I started from scratch with 158 and am still seeing the issues, even after updating to testing 159. Now my Wyze Cams cannot connect externally and I am still unable to share with Microsoft Teams and other video connections. I will try to help; let me know.
Comment 53 Michael Tremer 2021-08-05 09:35:26 UTC
I have tried to reproduce this again and wasn't successful.

However, I have a couple of ideas that could help us to get to the bottom of this:

* Could everyone who can please share their fireinfo profile on here?

I am starting to believe that we are not looking at a general software issue. The vast majority of users isn't affected, so something needs to trigger this. We have, however, disabled loads of features that deal with those packets, so I am not very sure what else we can switch off.

* Could you also send the output of "ethtool -k red0" and let me know which network interface is assigned to red0 (I am interested in the chipset).

* Could you then run the following command and test again: ethtool -K red0 gso off gro off tso off
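As a reference for testers, a sketch of that check-and-disable sequence; note that settings changed with ethtool -K are not persistent, so they need to be re-applied after a reboot or reconnect:

```
# show the current offload settings on the RED interface
ethtool -k red0

# temporarily disable segmentation and receive offloads for testing
ethtool -K red0 gso off gro off tso off
```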
Comment 54 B N 2021-09-21 21:36:32 UTC
I was looking at my routing tables using the “ip route” command. I got the following response with cleaned up IP addresses.

```
default via ##.##.##.## dev red0 proto dhcp src ##.##.##.## metric 202 mtu 576
##.##.##.0/20 dev red0 proto dhcp scope link src ##.##.##.## metric 202 mtu 576
192.168.7.0/24 dev green0 proto kernel scope link src 192.168.7.1
192.168.8.0/24 dev blue0 proto kernel scope link src 192.168.8.1
```

The MTU was the problem. I had been having issues with my phone's VPN. After clearing it so that the routes look like the following, I was able to get the VPN to work again. Tomorrow I will check whether this resolves things for other services as well. I am not 100% sure about these settings. My hardware still says 1500 for the MTU.

```
default via ##.##.##.## dev red0 proto dhcp src ##.##.##.## metric 202
##.##.##.0/20 dev red0 proto dhcp scope link src ##.##.##.## metric 202
192.168.7.0/24 dev green0 proto kernel scope link src 192.168.7.1
192.168.8.0/24 dev blue0 proto kernel scope link src 192.168.8.1
```

I upgraded to 159 and the settings still seem to work, as long as the change is made manually every time the connection is re-established. I don't know which config is responsible for setting this MTU.
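For reference, a sketch of that manual workaround with placeholder addresses standing in for the redacted values; `ip route change` rewrites the route, and attributes that are not repeated on the command line (here `mtu 576`) are dropped:

```
# rewrite the two red0 routes without the "mtu 576" attribute
# (203.0.113.1, 203.0.112.0/20 and 203.0.113.55 are placeholders for the real values)
ip route change default via 203.0.113.1 dev red0 proto dhcp src 203.0.113.55 metric 202
ip route change 203.0.112.0/20 dev red0 proto dhcp scope link src 203.0.113.55 metric 202

# confirm that the mtu attribute is gone
ip route show dev red0
```

As noted above, this has to be repeated whenever the connection is re-established.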
Comment 55 Brian Pruss 2021-10-04 05:12:27 UTC
A few new data points:

 - I have found that this issue can be replicated in a simpler way by using ping packets with large payloads. To demonstrate the issue with ping, I can just use 'ping -s 1000 8.8.8.8' (or 'ping -l 1000 8.8.8.8' on Windows) from one of the hosts on the green subnet. On Core 152, I get a normal response back (albeit truncated to a 68-byte payload). On later cores with the issue I'm seeing, the outgoing ICMP echo request packet gets fragmented (as confirmed with packet captures) and 8.8.8.8 ignores it.

 - While still running Core 152, I also checked the 'ip route' command output and saw something very similar to what @bjaminn saw, with the MTU set to 576. I did a small experiment and saw some interesting results: If I send a large ping as described above from a host on the green interface subnet, it gets routed out the red interface without being fragmented. However, if I send the ping from the IPFire gateway itself, it gets fragmented and ignored. If I manually modify the routing table to remove the 'mtu 576' setting (as @bjaminn did), then large pings from inside IPFire work again. 

So - the erroneous MTU setting appears to have been present prior to Core 153, when the issues started being seen. However, something else seems to have changed starting at 153, such that it also started affecting packets being routed out from internal subnets. That may be a red herring, though - the real question seems to be why the MTU setting is getting put into the routing table in the first place.

I did a quick search (grepping through /etc/ for things like "route add", "mtu", and "576"), and while I can find the place in the init.d scripts where the routes are getting added, I can't see anywhere where the MTU is set. If we can fix that, I think we may have this problem solved.

I have not yet tried upgrading to 159, but I have no reason to believe that I would see anything different from what @bjaminn saw.
Comment 56 Brian Pruss 2021-12-11 23:59:40 UTC
I believe I may have a fix (or at least a workaround) for this problem, and it seems to be related to dhcpcd. After making a manual change to /var/ipfire/dhcpc/dhcpcd.conf , I was able to upgrade to Core 161 and use Google Meet successfully. I am no longer seeing fragmentation on large UDP or ICMP packets. 

The change I made was to comment out the "option interface_mtu" line in dhcpcd.conf under /var/ipfire/dhcpc/. After that, I rebooted, and verified that the "mtu 576" option was no longer present in the output of "ip route". I was able to perform large pings from both the Green subnet and IPFire itself. 
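A minimal sketch of that edit, assuming the option sits on its own, unindented line as described above (and keeping in mind, per the next paragraph, that updates may regenerate this file, so the change can need repeating after an upgrade):

```
# disable dhcpcd's interface_mtu option so the bogus 576 MTU is no longer applied
sed -i 's/^option interface_mtu/#option interface_mtu/' /var/ipfire/dhcpc/dhcpcd.conf

# after a reboot (or a RED reconnect), verify that "mtu 576" no longer appears
ip route show dev red0
```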

At this point I upgraded from Core 152 to Core 159. After the upgrade and reboot, I saw that the dhcpcd.conf change had been reverted, so I once again commented out "option interface_mtu", rebooted, and tested both pings and Google Meet. All tests were successful. I then upgraded to Core 161. This time the dhcpcd.conf change was not clobbered, and I was able to once again run my tests successfully.

The odd thing about this is, after capturing a DHCP exchange between IPFire and my ISP, I don't see the MTU option (26) being requested or sent at all. It's obviously dhcpcd that's passing the setting to the rest of the system, but I can't see why it's doing that. It appears to have been doing this since at least Core 152 if not earlier, it's just that some other change in the packet flow appears to have made the route MTU setting affect Green<->Red packets starting in Core 153.

At this point I'm finally able to get current on my router software, so I'm satisfied for the moment. I don't know what the best way would be to fix this problem in the IPFire core, but I'm willing to be a tester for any changes the developers may propose.
Comment 57 Michael Tremer 2022-02-15 16:24:14 UTC
Hello Brian,

apologies for picking this one up again so long after.

(In reply to Brian Pruss from comment #56)
> I believe I may have a fix (or at least a workaround) for this problem, and
> it seems to be related to dhcpcd. After making a manual change to
> /var/ipfire/dhcpc/dhcpcd.conf , I was able to upgrade to Core 161 and use
> Google Meet successfully. I am no longer seeing fragmentation on large UDP
> or ICMP packets. 
> 
> The change I made was to comment out the "option interface_mtu" line in
> dhcpcd.conf under /var/ipfire/dhcpc/. After that, I rebooted, and verified
> that the "mtu 576" option was no longer present in the output of "ip route".
> I was able to perform large pings from both the Green subnet and IPFire
> itself. 

This is great news. I was wondering why we are not seeing any smaller MTU on the interface itself, and the explanation seems to lie in this script:

> https://git.ipfire.org/?p=ipfire-2.x.git;a=blob;f=config/dhcpc/dhcpcd-hooks/10-mtu;hb=4b94860d07b5124e90711c802e87cce8547c3043#l20

When it is set to only 576, we ignore it. However, after studying the source of dhcpcd, I found a couple of places where it inserts routes using the MTU, which then do not get corrected:

> https://git.ipfire.org/?p=thirdparty/dhcpcd.git;a=blob;f=src/ipv4.c;hb=88d1590824689f47fc3e3ceff0bdaa7dad6a34f5#l355

> At this point I upgraded from Core 152 to Core 159. After the upgrade and
> reboot, I saw that the dhcpcd.conf change had been reverted, so I once again
> commented out "option interface_mtu", rebooted, and tested both pings and
> Google Meet. All tests were successful. I then upgraded to Core 161. This
> time the dhcpcd.conf change was not clobbered, and I was able to once again
> run my tests successfully.

Yes, we consider this configuration file a system configuration file and it will be overwritten if there were any changes in dhcpcd.

> The odd thing about this is, after capturing a DHCP exchange between IPFire
> and my ISP, I don't see the MTU option (26) being requested or sent at all.

That I find surprising. It must be there.

I studied the source of dhcpcd and there is no default value for the MTU nor does the number 576 show up in the code at all.
Comment 58 Michael Tremer 2022-02-15 17:14:09 UTC
Hello,

could you please apply these changes to /etc/init.d/networking/functions.network

> https://git.ipfire.org/?p=people/ms/ipfire-2.x.git;a=commitdiff;h=b92b38c45eef3038db03319d451e91a181706292

Then, please run "setup" and in the address settings for the RED interface, set an MTU of 1500. The system should then reconnect and the routes should be configured correctly.

This is now working around a change that has been introduced to dhcpcd in 2015. I have no idea how this didn't cause any problems before.

However, there still is the problem that dhcpcd receives an incorrect MTU (presumably) from the cable modem and just configures it as instructed. This is now making a workaround possible.

If you could please confirm that this works for you, I will submit patches to the mailing list.
Comment 59 Brian Pruss 2022-02-21 03:10:59 UTC
(In reply to Michael Tremer from comment #58)
> Hello,
> 
> could you please apply these changes to
> /etc/init.d/networing/functions.network
> 
> > https://git.ipfire.org/?p=people/ms/ipfire-2.x.git;a=commitdiff;h=b92b38c45eef3038db03319d451e91a181706292
> 
> Then, please run "setup" and in the address settings for the RED interface,
> set an MTU of 1500. The system should then reconnect and the routes should
> be configured correctly.
> 
> This is now working around a change that has been introduced to dhcpcd in
> 2015. I have no idea how this didn't cause any problems before.
> 
> However, there still is the problem that dhcpcd receives an incorrect MTU
> (presumably) from the cable modem and just configures it as instructed. This
> is now making a workaround possible.
> 
> If you could please confirm that this works for you, I will submit patches
> to the mailing list.

Thanks again for looking into this.

I tried applying your changes (I manually updated /etc/init.d/networking/functions.network according to the diff you posted), but dhcpcd threw an error whenever I forced the Red MTU to 1500: "dhcpcd[xxxx]: invalid MTU 1500". If I removed the MTU setting, the error went away but of course the 576 MTU was back.

I've also double checked the DHCP Offer and ACK that I'm getting from my ISP on the Red interface, and I can re-confirm that the MTU option is not present. 

Let me know if there's anything else I can check.
Comment 60 Michael Tremer 2022-02-21 11:15:46 UTC
(In reply to Brian Pruss from comment #59)
> Thanks again for looking into this.

Thanks for testing.

> I tried applying your changes (I manually updated
> /etc/init.d/networking/functions.network according to the diff you posted),
> but dhcpcd threw an error whenever I forced the Red MTU to 1500:
> "dhcpcd[xxxx]: invalid MTU 1500". If I removed the MTU setting, the error
> went away but of course the 576 MTU was back.

Hmm, this is odd. There is only one line where this could have happened:

> https://git.ipfire.org/?p=thirdparty/dhcpcd.git;a=blob;f=src/if-options.c;hb=88d1590824689f47fc3e3ceff0bdaa7dad6a34f5#l1177

Interestingly MTU_MAX is smaller than 1500 bytes and therefore the value is out of range:

> https://git.ipfire.org/?p=thirdparty/dhcpcd.git;a=blob;f=src/dhcp-common.h;hb=88d1590824689f47fc3e3ceff0bdaa7dad6a34f5#l51

I used 1400 for my testing, just because I needed something else than 1500.

Could you try setting 1472 and see if the size of the UDP header is being added up again later? I do not see any reason to limit this at all.

Weirdly, the minimum value is 604 (576 + 28) which makes it even more surprising that 576 was set in the first place.

> I've also double checked the DHCP Offer and ACK that I'm getting from my ISP
> on the Red interface, and I can re-confirm that the MTU option is not
> present. 

No it is. In the dump that you sent me, the DHCP request packet has option 57 set with an MTU of 1472. Looks like parsing that goes wrong somewhere again.

The name of option 57 is Maximum DHCP Message Size. This is parsed here:

> https://git.ipfire.org/?p=thirdparty/dhcpcd.git;a=blob;f=src/dhcp.c;hb=88d1590824689f47fc3e3ceff0bdaa7dad6a34f5#l936

I cannot see how this would result in a value of 576.

But it looks like we are getting closer :) As a workaround, I would say that you can set your MTU to 1472 and that should run fine. Not 100% ideal, but 99.99%.
Comment 61 Brian Pruss 2022-02-22 03:13:05 UTC
(In reply to Michael Tremer from comment #60)

> 
> Could you try setting 1472 and see if the size of the UDP header is being
> added up again later? I do not see any reason to limit this at all.
> 
> Weirdly, the minimum value is 604 (576 + 28) which makes it even more
> surprising that 576 was set in the first place.

I tried this as you described - updates on functions.network and setting the Red MTU to 1472. I saw MTU 1472 on my default route. ("default via xx.xx.xx.xx dev red0 proto dhcp src xx.xx.xx.xx metric 1003 mtu 1472") 

However, this is constraining my network in a way that I'd prefer not to have to do. Without the MTU 1472 setting, I can run "ping -s 1472 8.8.8.8" successfully, but with the setting the maximum is "ping -s 1444 8.8.8.8". I'd be concerned that other applications might not handle the fragmentation correctly.
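(As an aside, the arithmetic behind those two payload limits, assuming IPv4 without options: with no route MTU the ceiling is 1500 - 20 bytes IP header - 8 bytes ICMP header = 1472 bytes of ping payload, while with the route clamped to 1472 it becomes 1472 - 28 = 1444 bytes.)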

> 
> > I've also double checked the DHCP Offer and ACK that I'm getting from my ISP
> > on the Red interface, and I can re-confirm that the MTU option is not
> > present. 
> 
> No it is. In the dump that you sent me, the DHCP request packet has option
> 57 set with an MTU of 1472. Looks like parsing that goes wrong somewhere
> again.
> 
> The name of option 57 is Maximum DHCP Message Size. This is parsed here:
> 
> > https://git.ipfire.org/?p=thirdparty/dhcpcd.git;a=blob;f=src/dhcp.c;hb=88d1590824689f47fc3e3ceff0bdaa7dad6a34f5#l936
> 

Unless I'm mistaken, option 57 should only apply to the DHCP messages themselves (https://datatracker.ietf.org/doc/html/rfc2132#section-9.10). A network MTU would be set using option 26 (https://datatracker.ietf.org/doc/html/rfc2132#section-5.1), which isn't being sent by my ISP's DHCP server.

> But it looks like we are getting closer :)

I hope so as well! :)
Comment 62 Michael Tremer 2022-02-22 12:03:01 UTC
Hey,

(In reply to Brian Pruss from comment #61)
> (In reply to Michael Tremer from comment #60)
> 
> > 
> > Could you try setting 1472 and see if the size of the UDP header is being
> > added up again later? I do not see any reason to limit this at all.
> > 
> > Weirdly, the minimum value is 604 (576 + 28) which makes it even more
> > surprising that 576 was set in the first place.
> 
> I tried this as you described - updates on functions.network and setting the
> Red MTU to 1472. I saw MTU 1472 on my default route. ("default via
> xx.xx.xx.xx dev red0 proto dhcp src xx.xx.xx.xx metric 1003 mtu 1472") 

Okay, so it is good to know that this mechanism generally works - with a couple of limitations.

> However, this is constraining my network in a way that I'd prefer not to
> have to do. Without the MTU 1472 setting, I can run "ping -s 1472 8.8.8.8"
> successfully, but with the setting the maximum is "ping -s 1444 8.8.8.8".
> I'd be concerned that other applications might not handle the fragmentation
> correctly.

Yes, we are still fragmenting packets. However, this is not a problem at this point because 1472 is in the range of the MTU of a standard DSL line. Mobile connections are even lower and so this won't cause any problems with video conferencing at all.

The only caveat is that you won't fill packets all the way; there is still space for another 28 bytes. However, video streaming won't fill them anyway. Large downloads would, and you would have a slightly lower throughput, by about (1500 / 1472 =) 1.9%. Nothing I would worry about, because any other factor is larger than this.

However, this is still a workaround for now.

> > > I've also double checked the DHCP Offer and ACK that I'm getting from my ISP
> > > on the Red interface, and I can re-confirm that the MTU option is not
> > > present. 
> > 
> > No it is. In the dump that you sent me, the DHCP request packet has option
> > 57 set with an MTU of 1472. Looks like parsing that goes wrong somewhere
> > again.
> > 
> > The name of option 57 is Maximum DHCP Message Size. This is parsed here:
> > 
> > > https://git.ipfire.org/?p=thirdparty/dhcpcd.git;a=blob;f=src/dhcp.c;hb=88d1590824689f47fc3e3ceff0bdaa7dad6a34f5#l936
> > 
> 
> Unless I'm mistaken, option 57 should only apply to the DHCP messages
> themselves (https://datatracker.ietf.org/doc/html/rfc2132#section-9.10). A
> network MTU would be set using option 26
> (https://datatracker.ietf.org/doc/html/rfc2132#section-5.1), which isn't
> being sent by my ISP's DHCP server.

That is what I thought as well. The option for the actual MTU is 26.

So why do we get this value then?

I googled around and there is a large number of devices, DHCP clients, etc. affected by their ISP setting the MTU to 576. So it *must* come from there. After studying the source of dhcpcd much more, I found no other place where the MTU is handled.

The obvious option is to disable the interface_mtu option like you did earlier. However, that causes the problem that we would ignore the MTU everywhere, which is not what we want. AWS, for example, uses the MTU option to configure their servers.

The only path we have left is to make this configurable: either dhcpcd entirely ignores the MTU and leaves it at 1500, or we allow users to set their own. I prefer the second option, because it is more flexible.

This also works (see above), but with the limitation that we cannot set anything larger than 1472. This was implemented in this commit:

https://git.ipfire.org/?p=thirdparty/dhcpcd.git;a=commitdiff;h=416a319e23031e0dff2d5c5f95d73ddcde052cd4

Sadly, there is no explanation of why MTU_MAX was decreased. If there needs to be some room for another packet header, why is that value added to MTU_MIN and not taken off as well?

Looking at older revisions of dhcpcd, there was a reason to limit the MTU. However, MTU_MAX is no longer used. I would therefore propose a patch that allows us to freely set the MTU.

Because testing this is a little bit more complicated, and since we have already tested this in theory, I would post this straight to the list and ask you to test the nightly build which will contain the patches.

If this works well, I would like to upstream the patch to dhcpcd.

> > But it looks like we are getting closer :)
> 
> I hope so as well! :)
Comment 63 Michael Tremer 2022-02-22 12:03:23 UTC
Created attachment 984 [details]
Proposed patch for dhcpcd