previously @jrgd@lemm.ee, @jrgd@kbin.social

Lemmy.zip

  • 2 Posts
  • 29 Comments
Joined 7 months ago
Cake day: June 3rd, 2025

  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · 1 point · 15 hours ago

    It does depend on the connection type, but the general rule is not completely, barring some connection types like DSL. Given it sounds like you have Fiber, DOCSIS, or similar, you likely fall under the general rule. That said, you can absolutely tune and test above the typical 10-15% safety margin many guides start with, without actually incurring any noticeable bufferbloat. The 10-15% is usually a good value for ISPs that fluctuate heavily in available bandwidth to the customer, but for more consistent connections (or for those that overprovision high enough that the bandwidth fluctuations sit out of range of what the customer is actually paying for), you can absolutely get much closer to your rated connection speed, if not meeting or even passing it.

    The general process is to tune one value at a time (starting with the bandwidth allocations for your pipes), apply the change while noting the previous value, and perform a bufferbloat test with Waveform’s or others’ testing tools. Optionally (this will drastically slow down the process, but can be worth it), you can hammer the network with actual load for a good few hours while testing some real-world applications that are sensitive to bufferbloat. Doing this between tweaked values will help expose how stable or unstable your ISP’s connection truly is over time.
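    The loop above can be sketched concretely. This is a hedged example for OpenWRT’s luci-app-sqm; the rated speeds, margin, and the `wan` section/interface names are illustrative assumptions, and the `uci` calls are left commented since they only make sense on the router itself:

```shell
# Illustrative sketch of cake SQM tuning on OpenWRT (luci-app-sqm).
# Rated speeds and the 'wan' names below are example assumptions.
RATED_DOWN=1000   # rated download, Mbps
RATED_UP=50       # rated upload, Mbps
MARGIN=90         # start near 90% and nudge upward between bufferbloat tests

# SQM takes its bandwidth values in kbit/s.
DOWN_KBIT=$((RATED_DOWN * 1000 * MARGIN / 100))
UP_KBIT=$((RATED_UP * 1000 * MARGIN / 100))
echo "shaping to ${DOWN_KBIT} kbit down / ${UP_KBIT} kbit up"

# On the router itself (uncomment there), changing one value per test run:
# uci set sqm.wan=queue
# uci set sqm.wan.interface='wan'
# uci set sqm.wan.qdisc='cake'
# uci set sqm.wan.script='piece_of_cake.qos'
# uci set sqm.wan.download="$DOWN_KBIT"
# uci set sqm.wan.upload="$UP_KBIT"
# uci set sqm.wan.enabled='1'
# uci commit sqm && /etc/init.d/sqm restart
```

    From there, bump MARGIN up a few points between bufferbloat tests until latency under load starts to climb, then back off.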


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · 2 points · 15 hours ago

    Yeah, not having cake sqm is the one thing that will probably kill OPNsense as a choice for some people. That’s not to say you cannot get excellent results with fq_codel, because you absolutely can (I actively use both OpenWRT and OPNsense for different network applications personally). It is definitely more work to get good results, though. OPNsense’s WireGuard support has been excellent for a number of years now, and it’s exclusively what I use for tunneling in a VPC I rent.

    If you’re particularly constricted on host hardware and need a lightweight router to manage multiple other VMs on said host, I could definitely see the benefits of running a minimal OpenWRT over OPNSense in that case.


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · 6 points · 1 day ago

    I mean, the mini PCs don’t come with a managed switch, and often lack the good wireless connectivity that most home routers come equipped with. So with Wi-Fi APs and a decent switch added, it’s definitely more than €100 in total.

    Also unrelated, but if you’re running an x86 system with gigabytes of RAM, why not run OPNsense at that point?


  • jrgd@lemmy.zip to Selfhosted@lemmy.world · OpenWRT router · 6 points · edited · 1 day ago

    Looking up the router, it was allegedly produced in 2024, according to the OpenWRT wiki. Barring any outliers, OpenWRT generally only sunsets hardware when a new version has higher hardware requirements than a device provides. The supported devices page lists out the hard requirements as well as recommendations. Currently 8 MiB flash storage is the minimum, with 16+ MiB recommended (for additional functions, user addons, etc.). 64 MiB is the minimum RAM target, with 128+ MiB recommended. According to the router’s wiki page, your chosen router exceeds both recommended requirements. Overall, the router should be suitable for a good while, barring any severe hardware- or bootloader-level exploitable vulnerabilities being discovered in the device. There is no explicit date for when your router will no longer be supported, but you can check the history of the supported devices page to get the general trend of when OpenWRT bumps up the minimum requirements. For instance, it was just 4/8+ MiB flash storage and 32/64+ MiB RAM in early 2017.

    Depending on what you want to do with the router, getting something with more RAM and a stronger CPU could be beneficial for various tasks (e.g. adblock-fast, cake sqm, etc.). Definitely do research on what you want your router to do though before choosing to go with higher specs or not.


  • With LosslessCut, I’ve had good success with doing keyframe cuts with h.264 footage in MKV containers. Frame cuts end up in broken outputs pretty much every time. There’s also Avidemux, which might be worth a try. More than likely though, if you want frame-precision in your cuts, you’ll have to re-encode, at which point you could use something as minimal as Handbrake or a full NLE editor like Kdenlive.


  • jrgd@lemmy.zip to Linux@programming.dev · *Permanently Deleted* · +2 / −1 · 21 days ago

    In reference to the article we’re discussing, I am not entirely talking about vulnerabilities in the implementations, but specifically about the lack of standard security features allegedly not present by design in D-Bus. Namely things like namespace reservation, access controls, and fully-defined transport encryption implementations.

    In an environment where desktop security is starting to be taken seriously (see Wayland, freedesktop protocols), D-Bus is lacking by comparison. Pulling from the article, any userland application that implements its own access to the user D-Bus can just dump the contents of your keychain (browser-stored passwords, Signal encryption keys, user contacts, manually stored secrets, etc.). I’d argue that any untrustworthy application (deliberately run or not) shouldn’t be able to do something like that or otherwise tamper with any application it may feel like.

    Flatpak does seem to have ways to limit what applications can access through D-Bus, though I am not entirely sure of the extent of what limits are enforceable. I’ll have to read more into Flatpak’s D-Bus filtering to know exactly what it can and cannot do.
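    For reference, Flatpak expresses that filtering as per-name session bus permissions. A hedged sketch of a per-app override file (the app ID is a made-up placeholder; the group name and the none/see/talk/own values are Flatpak’s own):

```ini
# ~/.local/share/flatpak/overrides/com.example.App  (app ID illustrative)
[Session Bus Policy]
# Block the keyring interface entirely for this app...
org.freedesktop.secrets=none
# ...while still letting it send desktop notifications.
org.freedesktop.Notifications=talk
```

    Apps without blanket `--socket=session-bus` access go through a filtering proxy, so bus names not granted in the manifest or an override can’t simply be called.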

    Additionally, D-Bus policies are indeed a form of access control. Unfortunately there are limitations. The first is that they are statically defined config files. If an application desires D-Bus access restriction, the only way for that to happen is if a D-Bus policy file is installed via package manager with the software. Applications are not allowed control over access to their endpoints through D-Bus itself. Applications can absolutely build an authentication or access control layer on top of their D-Bus endpoints, though without a defined standard this quickly gets into ‘vendor-specific behavior is encouraged’ territory. (To note, KDE Wallet does this exact thing with an optional access control panel with snitching ability when applications access the user keyring.)
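    To illustrate what those static files look like, here is a hedged sketch of a packaged policy (the bus name is a placeholder; the busconfig format is D-Bus’s own):

```xml
<!-- e.g. installed to /usr/share/dbus-1/session.d/ by a package -->
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <policy context="default">
    <!-- Without a rule like this, the session bus default is allow-all -->
    <deny send_destination="com.example.SecretService"/>
  </policy>
</busconfig>
```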

    As for the default user session policy (where applications like the user keyring are accessed), things aren’t looking that great. At least on OpenSUSE Leap 16, the session policy is left completely open with zero restrictions by default. This does mean that instead of being standard, every application that wants to use D-Bus is largely left to fend for itself, which no doubt means that the level of security afforded can vary wildly between application sets (GNOME, KDE, Hyprland, COSMIC, Cinnamon, etc.). I’d argue this shouldn’t be the case, and application developers shouldn’t have to work around D-Bus in the goal of securely interfacing with it.


  • jrgd@lemmy.zip to Linux@programming.dev · *Permanently Deleted* · 6 points · 22 days ago

    To be fair, D-Bus is a protocol. Proper documentation and standards are half of implementation. Without well-defined standards, a protocol is essentially useless and/or lawless. While not every case of non-compliance is the fault of D-Bus, the generally lax way endpoints are defined, as well as the incompleteness of the actual standards applications should adhere to, is a significant factor in why many applications are the way they are. The severe security flaws in D-Bus could in principle be addressed with extensions to the protocol, becoming a new standard. Though if the problems are as deeply rooted as they seem, it’s not entirely out of the question to create another standard that isn’t D-Bus.




  • To note, even if the claim of ‘more cheaters than Linux players’ at the end of the lifecycle is true, it is a blatant lie by omission. I played Rust from 2016 until shortly after the game went out of Early Access. I stopped playing because Facepunch had completely ruined the Linux builds of the game by removing the long-standing OpenGL output and forcing the new (at the time, to Unity) and completely untested Vulkan output as the only option on Linux. For anyone unfortunate enough to experience playing Rust at the end of its Linux run, the game would regularly have major graphical glitches and various rendering errors, including graphical artifacting that would be seizure inducing. If you are prone to epilepsy or otherwise sensitive to bright or flashing lights, please do not click this link. To note, the attached video is a mild case of what commonly happened when playing. That is, if the crashes and the many hardware configurations that could no longer launch the game properly didn’t impede playing in the first place.

    Given all of that, I genuinely wouldn’t be surprised if the only “people” running the Linux client were actually cheat bots because there is no way many people were actually still playing the absolute rugpull of a game toward the end of its life.


  • Actually, you don’t have to use the terminal! For OpenSUSE, you can use YaST to enable Packman, and RPMFusion provides instructions to download the primary repo packages in a browser. Additionally, there is a more generic and slightly more technical way of providing repo URLs and managing additional repos from within PackageKit frontends like Discover. There is currently the point against RPMFusion that the Appstream data isn’t automatically configured upon update after adding the repos, due to a bug in dnf5. Supposedly this is fixed now, but I haven’t verified the functionality again in a fresh setup. I’ll update this post later if it is indeed fixed.

    Edit:

    Tested Fedora 43 and Tumbleweed in VMs for quirk updates.

    Tumbleweed’s third-party repos (NVidia, Packman at least) still don’t have Appstream data, meaning packages have to be installed through YaST, but can be updated through PackageKit frontends.

    The particular DNF5 bug is fixed and functional, but PackageKit frontends don’t actually pull the appropriate packages in (perform group updates). This does mean that unfortunately there is at least one terminal command needed (dnf update @core) before jumping back to GUI and going from there.

    So, mostly terminal-free on Fedora and still terminal-free on OpenSUSE, just with little freedom of installer choice.


  • If you happen to remember, which DEs/WMs did you use back when testing with your NVidia cards (particularly the 2080 and 3070)? I’ve been trying to gauge a lot of differences in DE usability and driver versions. In my recent testing, one user on Fedora KDE 42 with the NVidia-open drivers and a 4070 has had a nearly flawless experience, pretty much on par with AMD or Intel. Meanwhile, a 1080 Ti user genuinely had major problems with both KDE and GNOME on the same distro with the standard proprietary drivers.

    As for how much the average user needs to use the terminal on modern distros, especially with some of the graphical tools available, it genuinely is very little, if any at all. I think the bigger problem is how many guides go for the least-common-denominator approach of straight terminal commands for every tweak or fix somebody might look up. It has gotten to the point where I might start attempting to write a series of guides and/or short-form videos for a lot of the more common ‘how-to’ topics and frequent problems that many users might encounter, both for GNOME and KDE at least.


  • I definitely forgot about it when writing, but having some sort of MAC component (SELinux or AppArmor) with configured policies available in the distro repos was definitely a criterion for me when choosing my current desktop distro and my lineage of server distros. While it could be argued that a MAC component isn’t that necessary for desktop, I do believe that with the rising marketshare of the Linux desktop, having that second stage of exploit protection will help mitigate some of the more severe malware attacks.

    I do wonder about PikaOS and CachyOS as recommendations, specifically for how the packaging and rollback availability is handled on them. I’ll be taking a look at both later in VMs to see how they function for an end-user. CachyOS seems to rebuild the Arch packages for newer x86 architecture levels and other optimizations, among other tweaks such as the modified kernel. Then there is PikaOS, which is based on Debian Sid but apparently carries patches on top of it. I am not currently sure how extensive the patching is, or whether the project is attempting to catch breakages and regressions that make it into Sid.

    There is the other point I have about more ‘niche’ distros like PikaOS, CachyOS, and Nobara, and Bazzite to a lesser extent: I do wonder about the longevity of many of them. If not from developer burnout, finances, or the other standard culprits, then from much of what makes these distros currently unique being absorbed by more mainstream distros. The work that projects like CachyOS, Nobara, and PikaOS do is certainly important, but I feel that things like the higher x86 build targets, kernel patches, etc. will eventually make it into the upstream projects as well. PikaOS will probably have a longer lifespan than, say, CachyOS, since Debian will likely be among the last distros to drop support for older x86_64 processors, but I think the point does stand. Will the current ‘testbed’ distros still remain in, say, 5-10 years?


  • My big thing with recommending Arch and ‘direct’ derivatives (those that don’t repackage the Arch repositories with their own package versions) is that Arch explicitly recommends users always read the latest release notes on the archwiki homepage before any upgrade, due to breakages sometimes being let in. This either means that every user will need to be their own system maintainer and apply their judgement to each update, or will need snapshots to restore to and the hope that breaking changes will eventually sort themselves out, if they don’t want to reconfigure parts of their OS themselves. If direct derivatives implement automatic btrfs system snapshots that can be selected from like OpenSUSE Tumbleweed does, I think such a derivative could be recommended to more experienced computer users in lieu of other distros like Fedora or Tumbleweed.
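    To sketch what that snapshot safety net looks like on Arch and direct derivatives: the snap-pac package ships a maintained version of this, but a hand-rolled pacman hook would look roughly like the following (file path and description are illustrative):

```ini
# /etc/pacman.d/hooks/05-snapper-pre-upgrade.hook  (name illustrative)
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Creating pre-upgrade btrfs snapshot with snapper
When = PreTransaction
Exec = /usr/bin/snapper create --type pre --print-number --description "pacman pre-upgrade"
AbortOnFail
```

    Paired with a matching post-transaction snapshot and something like grub-btrfs to expose snapshots in the boot menu, this gets close to the Tumbleweed-style rollback selection.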



  • I do believe I wasn’t specific enough about what I meant in some places. I did add the ‘(yet)’ portion for Pop! OS and Linux Mint because I am fairly aware of, and tracking, the efforts of COSMIC and Cinnamon (Wayland). While in the grand scheme it’s not going to be a major point, I do think System76 missed the mark on providing a 24.04 release in a timely manner. The current stable release target is earlier than I would have expected, but it will still land much closer to upstream 26.04 than to upstream 24.04; some effort in porting Pop! Shell to 24.04’s GNOME version without any feature changes (essentially maintenance mode) would have been better than what will be a ~1 year gap for LTS users to upgrade to the latest LTS release. Linux Mint, by comparison, still shipping regular releases while they work on the Wayland version of the Cinnamon desktop is arguably a better route for their active userbase. By all means, when Pop! OS and Linux Mint get their Wayland releases out, I will be adding them, possibly not as top recommendations depending on Wayland protocol inclusion, but decently high (and probably above straight GNOME).

    Also, for my LTS recommendations, I should clarify that I intend to recommend LTS specifically for those who aren’t going to care about the latest features, will probably have the install done by myself anyways, and won’t want to be hassled by regular feature upgrades. A lot of my older family members, who would have happily kept using Windows XP if I hadn’t made them stop connecting those systems to the internet for security reasons, are the target audience of ‘basic computing’ that an LTS distro serves well. When recommending for gamers, creatives, developers, and other more involved users who need more out of their computers and likely have newer hardware, I swing heavily toward Fedora and OpenSUSE Tumbleweed, because their needs and desires will genuinely be hindered by the older packages of most LTS releases unless they can rely solely on Flatpaks, in which case they could even get away with using Aurora or Bazzite.






  • I recently dug through a sampled list of UE5 games released on Steam. It is shocking how many have such poor reviews (often for reasons not beholden to the engine). Sifting through, I did find a handful of games that didn’t seem to have major performance or graphical flaws based on reviews and forum posts, though to note, some of them also didn’t seem to leverage much of UE5’s technologies to begin with (Lumen, Nanite, etc.). Some games did seem to leverage UE5 tech and still had minimal to no complaints.

    Sadly, a large portion of the released games I pulled from this list had reported performance issues or other major issues that tanked the Steam review score. I unfortunately didn’t note down my findings for the handful that didn’t, but if you want to look for yourself, the list I searched from is linked.