Created attachment 1156 [details]
kernel error messages when trying to download a large backup file

This was flagged up by a couple of people on the forum. No bug was raised after two weeks, so I am raising it. I have confirmed the effect.

On CU173 all of my backup files could be downloaded without problems, irrespective of size. On CU174, trying to download a backup file of 99 MB or greater caused an OOM killer event that killed the backup.cgi process once 100% of memory and a large amount of swap were in use. A 61 MB file downloaded without problems. I don't know where between 61 MB and 99 MB the actual boundary lies, as I have no backups in that size range. When the problem occurs, the download dialog asking whether to open the file or save it to a specific location never appears.
I see the same issue. I'd be happy to help test.
So this can be easily fixed, as the problem is here:

> https://git.ipfire.org/?p=ipfire-2.x.git;a=blob;f=html/cgi-bin/backup.cgi;hb=78218433ad12bd4e34e50fac8f72668eac988eb2#l357

In this function, we load the entire backup file into memory and then try to send it to the client. I suppose some change in the allocator makes Perl panic or something. Generally, loading 100 MiB of data should not be a problem at all... However, what we can do is the following:

> open(FILE, "$file");
> binmode(FILE);
> binmode(STDOUT);
>
> while (<FILE>) {
>     print;
> }
> close(FILE);

That will open the file, read a bit, send that to the client, then read the next bit, and so on... I forgot who volunteered to work on this in yesterday's call, so I wasn't sure whether I should just go ahead :)
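For readers less familiar with Perl, the same streaming idea can be sketched in Python (an illustration of the technique only, not the actual backup.cgi code; the path and function name are hypothetical): read the file in fixed-size chunks and write each chunk out before reading the next, so peak memory stays bounded by the chunk size rather than the file size.

```python
import sys

CHUNK_SIZE = 64 * 1024  # 64 KiB per read; memory use is bounded by this


def stream_file(path, out):
    """Copy a file to an output stream chunk by chunk."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:  # empty read means end of file
                break
            out.write(chunk)


# Example: stream a backup to stdout, as a CGI script would
# (hypothetical path for illustration):
# stream_file("/var/ipfire/backup/backup.ipf", sys.stdout.buffer)
```

This way a 200 MB backup never occupies more than one chunk of memory at a time, which is exactly what avoids the OOM killer.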
(In reply to Michael Tremer from comment #2) > > I forgot who volunteered to work on this in yesterdays, call. So I wasn't > sure whether I should just go ahead :) I don't think a specific name was defined. It was just said that someone should fix it for CU175.
I suggest volunteering Michael, since the:

> while (<FILE>) {
>     print;
> }
> close(FILE);

... loop makes no sense to me!
(In reply to Jon from comment #4) > ... loop make no sense to me! Oh yeah, that is Perl for you :) Most of it makes very little sense. @Stefan: Would you like to take this on, please?
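For the record, the Perl loop does make sense once you know the idioms: `<FILE>` reads the next "line" (everything up to and including the next newline byte, even in binary data) into the implicit variable `$_`, and a bare `print` writes `$_` out. A rough Python equivalent of that line-by-line version (illustrative only, not the committed fix):

```python
def stream_lines(path, out):
    # Mirrors Perl's `while (<FILE>) { print; }`: each iteration reads
    # up to and including the next newline byte and writes it straight
    # out, so only one "line" is held in memory at a time.
    with open(path, "rb") as f:
        for line in f:  # iterating a binary file yields newline-delimited chunks
            out.write(line)
```

The chunk boundaries are arbitrary for binary data, but the output is byte-for-byte identical to the input, and memory use stays small.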
https://patchwork.ipfire.org/project/ipfire/patch/20230510095203.2567-1-stefan.schantl@ipfire.org/
https://git.ipfire.org/?p=ipfire-2.x.git;a=commit;h=c797789c1f45dc76f4cf933ad3e3d24376c2b76e Thank you very much indeed, Stefan! :-)
I have tested this out on my VM testbed using Core Update 175 Development Build: next/ccd793b3 and can confirm that large files that failed with the OOM in CU174 are again being downloaded successfully.
Core Update 175 Testing has been released. https://blog.ipfire.org/post/ipfire-2-27-core-update-175-is-available-for-testing
Tested downloading a large backup file on my VM testbed with CU175 that previously triggered the OOM killer under CU174. With the fix, a 210 MB file downloaded successfully.
CU175 released https://blog.ipfire.org/post/ipfire-2-27-core-update-175-released