• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 23rd, 2023



  • My experience with C++ dates from when C++ was a relatively new thing. Practically the only notable feature the standard library provided was that unholy abuse of the bit shift operators for I/O (small illustration below). No standard collections or any other data types.

    And every compiler considered something different valid C++ code, or interpreted the same code differently.

    I am a little bit prejudiced since then… and that is probably where the author is coming from too.

    Then things just kept getting more complicated (templates and other new syntax quirks) to fill the holes in the attempts to make C a ‘high level language’.
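
    To illustrate that bit-shift I/O: the same `<<` token is an arithmetic shift on integers and, overloaded by `<iostream>`, a stream write. A minimal example:

    ```cpp
    #include <iostream>

    int main() {
        int flags = 1 << 4;            // << as an actual bit shift: 16
        std::cout << "flags = "        // << overloaded as "write to stream"
                  << flags << '\n';
        return 0;
    }
    ```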



  • Jajcus@kbin.social to linuxmemes@lemmy.world · Htop too · 9 months ago

    Well-behaved programs give control back to the kernel as soon as they are done with what they are doing. If they don’t, control is forcefully taken away after some assigned time.

    It looks something like this:

    Something happens – e.g. a key is pressed – and a process waiting for this event is woken up and gets e.g. 100 ms to do its stuff. If it can handle the key press in 50 ms, the kernel notes it used 50 ms of CPU time and can give control to another process waiting for an event or busy with other work. If the key press triggered a long computation, the process won’t be done in 100 ms; the kernel notes it used 100 ms of CPU time and gives control to other processes with pending events or other work.
    After one second the kernel may have noted:

    Process A: used 50 ms, then nothing, then 100 ms, another 100 ms and another 100 ms
    Process B: was constantly busy doing something, so it got allocated 6 * 100 ms in that one second
    Process C: just got one event and handled it in 50 ms
    Process D: was not woken at all

    So a total of 1000 ms was used – the CPU was 100% busy.
    Of that, 60% was process B, 35% process A and 5% process C. (That bookkeeping is sketched in code below.)
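
    The percentages are just each process’s share of the accounting window; a toy sketch of the arithmetic, using the numbers from the example above:

    ```cpp
    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
        // CPU time (ms) charged to each process during a one-second window
        std::map<std::string, int> used_ms = {
            {"A", 50 + 100 + 100 + 100},  // woken several times: 350 ms
            {"B", 6 * 100},               // constantly busy: 600 ms
            {"C", 50},                    // one event: 50 ms
            {"D", 0},                     // never woken
        };
        const int window_ms = 1000;       // the one-second accounting window
        for (const auto& [name, ms] : used_ms)
            std::printf("process %s: %3d ms -> %d%%\n",
                        name.c_str(), ms, 100 * ms / window_ms);
    }
    ```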

    And then that information is read from the kernel by top and displayed.
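
    On Linux that reading happens through procfs. A minimal sketch of the kind of read top starts from – the cumulative CPU counters in /proc/stat (field meanings per proc(5); top samples this twice and works with the deltas):

    ```cpp
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // First line of /proc/stat looks like:
        //   cpu  user nice system idle iowait irq softirq ...
        // The values are cumulative jiffies of CPU time since boot.
        std::ifstream stat("/proc/stat");
        std::string label;
        long user, nice, system, idle;
        stat >> label >> user >> nice >> system >> idle;
        std::cout << "user=" << user << " nice=" << nice
                  << " system=" << system << " idle=" << idle << '\n';
    }
    ```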

    “How does the OS even yank the CPU away from the currently running process?”

    Interrupts. The CPU has means of triggering an interrupt at a specific time. An interrupt means the CPU stops what it is doing and runs a selected piece of kernel code. That piece of kernel code can save the current state of the user process’s execution and do something else, or restore the saved execution state of another process.
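
    A real timer interrupt can’t be demonstrated from user space, but POSIX timer signals are a close analogy: in the sketch below (assuming a POSIX system) a busy loop gets “preempted” every 100 ms by a handler, just like the kernel’s timer interrupt preempts a running process.

    ```cpp
    #include <csignal>
    #include <cstdio>
    #include <sys/time.h>

    volatile sig_atomic_t ticks = 0;

    // Runs asynchronously, like an interrupt handler: the busy work in
    // main() is suspended, this executes, then the work resumes.
    void on_alarm(int) { ticks = ticks + 1; }

    int main() {
        std::signal(SIGALRM, on_alarm);

        itimerval tv{};
        tv.it_value.tv_usec = 100000;      // first "interrupt" after 100 ms
        tv.it_interval.tv_usec = 100000;   // then every 100 ms
        setitimer(ITIMER_REAL, &tv, nullptr);

        while (ticks < 10) { /* busy work, periodically interrupted */ }
        std::printf("interrupted %d times\n", static_cast<int>(ticks));
    }
    ```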


  • Have you ever worked with a computer running a modern general-purpose OS like Linux but with no RTC? It sucks. It is not strictly necessary – you can live without it – but you need workarounds for basic stuff like timestamps in log files or in the file system. At least for the minute until an NTP connection is established, but maybe longer when an internet connection is not available. And when are routers rebooted most often? When troubleshooting a broken internet connection. That is exactly when properly timestamped logs would be useful.

    And a battery-backed RTC is cheap. It doesn’t fit on a Raspberry Pi board, but it can easily fit into a router case. No excuse for omitting it.
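
    For reference, reading that battery-backed clock on Linux is one ioctl on the RTC device – essentially what hwclock does. A minimal sketch (the device may be /dev/rtc or /dev/rtc0 depending on the system):

    ```cpp
    #include <cstdio>
    #include <fcntl.h>
    #include <linux/rtc.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main() {
        int fd = open("/dev/rtc0", O_RDONLY);
        if (fd < 0) { perror("open /dev/rtc0"); return 1; }

        rtc_time tm{};                          // struct from <linux/rtc.h>
        if (ioctl(fd, RTC_RD_TIME, &tm) < 0) {  // read the hardware clock
            perror("RTC_RD_TIME");
            close(fd);
            return 1;
        }
        std::printf("RTC: %04d-%02d-%02d %02d:%02d:%02d\n",
                    tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                    tm.tm_hour, tm.tm_min, tm.tm_sec);
        close(fd);
    }
    ```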





  • Doesn’t sound like the ‘cheap small computer you can run your hobby electronics project on’ that the original Pi used to be. It is not as cheap, and it’s a power hungry beast – still small, though. More and more like a PC, and less and less a small cheap embedded platform. For some people that is a plus (I guess for most people here), for some not so much.

    I tend to build my projects on the Raspberry Pi Pico now, but sometimes I need something more powerful, and a Raspberry Pi 5 would be too much.


  • The idea is that you package the software once and it works forever, because all its dependencies are provided in the exact right versions. And the dependencies may include things that would not be included in the base system (like very new versions of some important libraries).

    That is true, but that is also the problem: both the package and all its dependencies may never be updated again.

    In a traditional Linux distribution, like Debian, every package must be built within the same system, which usually means specific versions of all the key libraries. When the key libraries are upgraded, some packages compiled for the older versions won’t work, and a package might not even compile with the newer version of the libraries. And it is often not possible or practical to provide multiple different versions of a library (or other shared system components). The result is that distribution developers have a lot of hard work updating all the packages. When there is no one to fix a package for the next release of the distribution, the package is removed from the distribution. That happens when the package is not maintained upstream and/or no one cares enough to maintain it in the distribution. In that case – is it worth keeping at all?

    Snap makes packaging applications much easier and more decoupled from the operating system ‘core’. Less maintenance is needed… but that also means less maintenance will be done, which is not necessarily a good thing.

    On the other hand, Snap allows an application to be updated more rapidly than the distro core – in that case it can make things safer: fixes to applications and their dependencies can be shipped faster than they could be through the normal Debian release process. But that depends on the maintainers of the specific snap and its dependencies.




  • The differences between 2.4 and 2.6 were quite big; I don’t think there has been another change that big in any kernel release since. But that was also the time when Linux was transitioning from a hobby project (already useful for serious stuff) to a serious professional operating system – the last moment for major refactoring.

    The Linux kernel is still changing and being constantly refactored, but now the changes tend to be more gradual, and version numbers matter much less.





  • Every major distro uses systemd because, before it, it was nearly impossible to properly implement things that distros have to provide.
    Most startup scripts were an incredible set of hacks to make services behave. They were very inefficient (they could not be efficient, being shell scripts calling other commands for various simple repetitive tasks) and would often break when circumstances were different from ideal.

    Systemd just makes building a Linux distribution much easier, and the resulting system is more reliable, more consistent and more flexible. Why would distro developers choose anything else?



  • Kopia or Restic. Both do incremental, deduplicated backups and support many storage services.

    Kopia provides a UI for the end user and has integrated scheduling. Restic is a powerful CLI tool that you build your backup system on, but usually one does not need more than a cron job for that. I use a set of custom systemd jobs and generators for my restic backups.

    Keep in mind that a backup on local, constantly connected storage is hardly a backup. When the machine fails hard, the backups are lost together with the original data. So timeshift alone is not really a solution. Also: test your backups.