After updating to Core 150, our networking was no longer functioning. We could not even reach the web interface/GUI of IPFire. Access could only be restored by whitelisting all connections from GREEN in iptables' CUSTOMINPUT chain. (https://community.ipfire.org/t/core-150-geoblocking-blocks-everything-from-green/3430)
After two hours of searching, it turned out that everything worked again once GeoIP blocking was turned off. On the iptables page in the web GUI, one can see that the LOCATIONBLOCK chain listens on all interfaces ('*'). I think it should only listen on RED.
Other users have the same issue: https://community.ipfire.org/t/core-150-geoblocking-blocks-everything-from-green/3430
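For anyone hitting the same lockout, here is a minimal sketch of the kind of workaround rule described above. It is a config fragment, not the exact rule from the report: the interface name green0 and the rule position are my assumptions, so verify them against your own setup before applying.

```shell
# Temporary workaround sketch (assumes green0 is the GREEN interface):
# accept everything arriving on GREEN before any later rule can drop it.
iptables -I CUSTOMINPUT 1 -i green0 -j ACCEPT

# Inspect the result; LOCATIONBLOCK matching on all interfaces ('*')
# is the symptom discussed in this report.
iptables -L -nv
```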
Could you please post your list of countries that you are blocking and the output of "iptables -L -nv"?
I had the same issue and reverted to core149, so I am not sure whether my "iptables -L -nv" output is useful.
I am blocking all countries except Austria and Germany.
Nothing has changed with regard to iptables between those two releases, but we have changed libloc, which creates the database that is loaded into the kernel.
Packets from GREEN should never be passed through the location filter anyway, but I want to confirm that.
Created attachment 788 [details]
core149 iptables -L -nv with active location-filter
Created attachment 789 [details]
core150 iptables -L -nv with active location-filter
For giggles (it is a homelab-firewall after all) I updated again to core150 and posted the iptables -L -nv output before and after.
When I am fast (using an already established connection), it is possible to toggle the location filter and reload the rules from the GUI before it locks me out, but pings to Google from systems behind the firewall fail whenever the location filter is enabled.
Could you try forcing an update of the location library?
It looks like there is some garbage in there.
To be clear, this also happens on a clean core150 install (no upgrade)...
I will again upgrade the device to test your hypothesis :)
How would I go about forcing this update of the location library?
As advised, I upgraded and then ran 'location update' from the CLI. I got the database from Tue, 13 Oct 2020 08:21:56 GMT.
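For anyone following along, the manual refresh boils down to the following. `location update` is the command used above; the `location version` check is my assumption about libloc's CLI, so treat it as a sketch:

```shell
# Force a fresh download of the location database from the CLI.
location update

# Assumed libloc subcommand: print the version/timestamp of the
# installed database to confirm the download actually happened.
location version
```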
With this database, the location filter works as intended as far as I can see: no GUI or SSH lockout, and traffic flows as it should.
This looks like a dud database in the upgrade and/or install files, combined with insufficient checks on the validity of the database, given that such a thing can bring down all connectivity...
Afterthought: the upgrade from 149 to 150 ran while the location filter was disabled. Then, first thing after the reboot, I updated the database and re-enabled the location filter.
I forgot to mention: As a quick fix, please simply unselect the EU and reload.
I looked at the patch, unfortunately my perl is really bad so it is quite possible I misread it.
Will this exclusion of private address blocks disable location-filter in the case of a double NAT? At the moment I run my port forwarding into a DMZ-LAN (WAN -> 10.0.20.1/24[DMZ-Net] -> 10.0.20.254[RED] -> 10.0.0.237[GREEN]) and rely on IPFire to geoblock the forwarded traffic.
It will work on all connections coming in on RED.
I thought it was already doing this, but since the source IP address (which is what we are checking here) of any local network (GREEN, BLUE, or ORANGE) is never globally routable (or, if it is, it should not be blocked), we do not need to check this.
So, the behaviour of the filter hasn’t changed in this patch.
Thank you for the explanation :)
How come 'location update' resolved my issues yesterday? A side effect?
What does "location update" do? I guess it refreshes the IP-to-country matching? In that case it should run on a daily cron job, shouldn't it?
On the other hand, why isn't LOCATIONBLOCK limited to the RED interface?
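If the chain were bound to RED only, the rules feeding packets into LOCATIONBLOCK might look roughly like this. This is purely a sketch of the question being asked: red0 as the RED interface name and the INPUT/FORWARD hook points are my assumptions, and the actual firewall scripts may wire the chain up differently.

```shell
# Sketch: only send packets arriving on RED through the location filter,
# so traffic originating on GREEN/BLUE/ORANGE never reaches it.
iptables -A INPUT   -i red0 -j LOCATIONBLOCK
iptables -A FORWARD -i red0 -j LOCATIONBLOCK
```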
"location update" simply updates the database when there is a new version available.
But that does not automatically load it into the kernel. That is done by another script called update-location-database:
Would I be right on the money that a borked (default/upgraded) database led us here?
I'm just wondering about root cause and a total lack of problems so far on my system 24 hours in :)
(In reply to hudri wudri from comment #18)
> Would I be right on the money that a borked (default/upgraded) database led
> us here?
Partly. We have rolled out a couple of changes which were pushed into the database two days ago. It is a lot more accurate now.
The downside is that it has a couple of rather stupid entries like 192.168.0.0/15 and 0.0.0.0/5. They are allocated address space, so the changes are technically correct.
The problem is only that 192.168.0.0/16 was reserved by RFC 1918 for private use, and we do not have those networks in the database. Since the database is organised as a tree structure, we cannot have any gaps.
However, for performance reasons there is no need to search through hundreds of lists for the private address space. Hence the proposed patch.
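The short-circuit for private address space can be illustrated with a small, self-contained shell sketch. This is my own illustration, not the actual IPFire patch: RFC 1918 sources can be recognised by pattern alone, so no database lookup is needed for them.

```shell
#!/bin/sh
# Illustration only: succeed (return 0) for RFC 1918 private addresses,
# which a location filter can skip without consulting the database.
is_rfc1918() {
    case "$1" in
        10.*)                                   return 0 ;;
        172.1[6-9].*|172.2[0-9].*|172.3[01].*)  return 0 ;;
        192.168.*)                              return 0 ;;
        *)                                      return 1 ;;
    esac
}

for ip in 192.168.1.10 172.20.5.5 8.8.8.8; do
    if is_rfc1918 "$ip"; then
        echo "$ip: private, skip location lookup"
    else
        echo "$ip: public, check database"
    fi
done
```

Matching these three well-known prefixes up front avoids walking the whole country tree for traffic that, as noted above, should never be location-filtered in the first place.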
For the record: I am currently working on additional sanity checks for filtering those networks...
> For the records: I am currently working on additional sanity checks for filtering those networks...
Those are filed as #12500 and #12501 and appear to be solved on our testing/QA location machine. As noted before, #12506 is currently blocking further progress from my point of view.