It doesn’t run a job; it waits for your jobs to end. You can set the default wait time. It’s the same thing as on Windows, where it asks programs to close before shutting down. If a critical application got stuck, systemd has nothing to do with it.
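For reference, the wait time in question is DefaultTimeoutStopSec in /etc/systemd/system.conf (the 10s below is just an example value, not a recommendation):

    # /etc/systemd/system.conf
    [Manager]
    # how long a stop job may run before the unit is force-killed (shipped default: 90s)
    DefaultTimeoutStopSec=10s

After editing, systemctl daemon-reexec (or a reboot) makes the manager pick up the new value.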
I know what it is. But it literally says “A stop job is running”, and since English is not my first language, I had no better way to express the technicalities of it in a short sentence.
As for it having nothing to do with systemd:
I am dual-booting Arch and Artix because I am currently in the middle of transitioning. I have the exact same packages on both installs (plus some extra OpenRC packages on Artix).
About 30% of the shutdowns on Arch do the stop-job thing. It happens randomly, without any changes on my part between sessions.
0% of the shutdowns on Artix take more than 5 seconds.
I know that I can configure it. But why is 90 seconds the default? It is utterly unreasonable. You cite Windows doing it, but compare it instead to macOS, which powers up and shuts down extremely fast.
And back to the technicalities: OpenRC doesn’t say “a stop job is running”, so who runs the stop job if not systemd?
The question you should be asking is what’s wrong with that job that it runs long enough for the timeout to have to kill it.
systemd isn’t the problem here; all it’s doing is making it easy to find out which process is slowing down your shutdown, and making sure the shutdown doesn’t stall forever.
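If you want a name instead of a guess: the console message itself says which unit the stop job belongs to, and, assuming you keep a persistent journal (Storage=persistent in /etc/systemd/journald.conf), the whole previous shutdown is in the log:

    # logs from the previous boot, newest first; the shutdown is at the top
    journalctl -b -1 -r | grep -iE 'stop job|sigkill|timed out'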
I will not debug third-party apps. I don’t even want to think about my OS, let alone ask any questions about it. I want to use a PC and do my job. That includes it shutting down asap when I need it to shut down asap.
systemd default - shutdown not always asap
OpenRC default - shutdown always asap
whatever the heck macOS’s init system is - shutdown always asap
It may not be the “fault” of systemd, but neither does it do anything helpful to align itself with my needs.
The default is as long as it is because most people value not losing data, or avoiding corruption, or generally preserving the proper functioning of software on their machine, over 90 seconds during which they could simply walk away.
Especially when those 90 seconds only even come up when something isn’t right.
If you feel that strongly that you’d rather let something malfunction, then you’re entirely at liberty to change the configuration. You don’t have to accept the design decisions of the package maintainers if you really want to do something differently.
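And if it’s always the same unit that stalls, you don’t even need to touch the global default; a per-unit drop-in does it (example.service here is a placeholder for whatever unit the shutdown message names):

    # creates /etc/systemd/system/example.service.d/override.conf
    systemctl edit example.service

    # contents of the override:
    [Service]
    TimeoutStopSec=5s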
Also, if you’re that set against investigating why your system isn’t behaving the way you expect, then what the hell are you doing running Arch? Half the point of that distro is that you get the bleeding edge of everything, and you’re expected to maintain your own damn system.
If an app didn’t manage to shut down in 90 seconds, it is probably hanging, and there will be “DaTa LoSs” no matter whether you kill it after 2 seconds or after 90.
Been running Arch for over 5 years now.
I track all my hours, and for Arch maintenance I’ve spent a grand total of ~41 hours (desktop + laptop, including sitting there and staring at the screen while an update is running). The top three longest sessions were:
btrfs data rescue after I deleted a parent snapshot of my rollback (~20h)
grub update (~2h)
JDK update which was fucky (~30 min)
It’s about 8.2 hours per year (or ~10 minutes per week), which is less than I had to spend on Windows maintenance (~22 h/y AFAIR; about half of that time was manually updating apps by going to their websites and downloading newer versions).
Ubuntu also fared worse for me, with two weekends of maintenance in a year (~32h), because I need the bleeding edge and some weird-ass packages for work, and it resulted in a Frankenstein of PPAs and self-built shit which completely broke on every release upgrade.
btrfs data rescue after I deleted a parent snapshot of my rollback
Can you expand a bit on that? I thought it didn’t matter if you deleted parent snapshots because the extents required by the child would still be there.
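For instance, a quick sanity check on a throwaway filesystem (loop device, nothing to do with your real setup) behaves exactly the way I’d expect:

    truncate -s 1G /tmp/scratch.img
    mkfs.btrfs /tmp/scratch.img
    sudo mount -o loop /tmp/scratch.img /mnt
    sudo btrfs subvolume create /mnt/parent
    echo hello | sudo tee /mnt/parent/file
    sudo btrfs subvolume snapshot /mnt/parent /mnt/child
    sudo btrfs subvolume delete /mnt/parent   # the child keeps its own extent refs
    cat /mnt/child/file                       # still prints “hello”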
Honestly, I have no idea why it went wrong or why it let me do that. Also, my memory is a bit fuzzy since it’s been a while, but as best I can remember, this is what I did step by step:
fucked around with power management configs
using the btrfs-assistant GUI app, rolled back to before that
btrfs-assistant created an additional snapshot, called backup-something (I didn’t really pay attention)
reboot, all seemed good
used btrfs-list to take a look; the subvolume that was the current root / was a child of the aforementioned backup subvolume
started btrfs-assistant and deleted the backup subvolume
system suddenly read only
reboot, still read only
btrfs check reported broken refs and some other errors
I tried to let btrfs check fix the errors, which made it worse; now I couldn’t even mount the drive anymore because btrfs was completely borked
used btrfs rescue, which got all files out onto an external drive successfully
installed Arch again and rsynced the rescued files over the new install; everything works as before, all files are there
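For anyone who finds this thread later: the device names below are placeholders, and going by what actually worked I’m fairly sure the extraction step was btrfs restore rather than one of the btrfs rescue subcommands. Roughly:

    # read-only check first; don’t reach for --repair as the first move
    sudo btrfs check --readonly /dev/sdX

    # pull the files off the broken fs onto a healthy external drive
    sudo btrfs restore /dev/sdX /mnt/external/rescued

    # after reinstalling, copy everything back over the new install
    rsync -aAX /mnt/external/rescued/ /mnt/newroot/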
I expect it to not run a stop job for 90 seconds by default every time I want to quickly shut down my laptop. /s
You can shut down any computer in ten seconds by holding the power button.
The best solution!
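For what it’s worth, systemd ships a software version of that too; one --force skips the stop jobs (processes are still killed and filesystems unmounted), and a second --force powers off immediately without even unmounting, with the same data-loss risk as pulling the plug:

    systemctl poweroff --force           # skip stop jobs, still kill processes and unmount
    systemctl poweroff --force --force   # power off right now, no cleanup at all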