• 0 Posts
  • 117 Comments
Joined 30 days ago
Cake day: December 20th, 2025


  • My brother or sister, this thread is literally about how the “solutions” to the “problem” you describe break one of the most common expectations users have of computers.

    The fact that python (and javascript!) creates terrible dependency clashes is not a defense of static linking, it’s an indictment of those languages and the people who develop, maintain and use them.

    “Oh yeah? Try using the terrible software that breaks the computer!” isn’t the powerful argument you think it is.

    Users hated Java because seeing its splash popup was the loading screen for what would inevitably be a barely functional pile developed by the lowest-paid person in the company, and because it was confusing to deal with, not because there were version conflicts. I remember Java being decent about that once the 2000s hit, at least: you might need to upgrade the jre, but never downgrade.


  • Ah, so back2nt4 like I said earlier?

    You don’t need to insult and attack in every reply. This isn’t reddit.

    It doesn’t make any sense to bring up avoiding dependencies in the context of personal computing (the context of this thread), because nowadays the user never sees them. Either deps are handled by the package manager or they’re shipped with the target software, except that shipping static libraries breaks the environment now, so it’s the worse option.

    People don’t care if dependencies are installed, they care if the environment breaks. They care if the thing you just described, potential interference with normal operation, happens!

    Again: this was a solved problem for decades and now people are opening up the wound to implement stuff that’s only appropriate for use cases narrower than general purpose personal computing. It’s astounding and truly hard to explain.

    And no one but the poor schmuck computer janitor cares about making IT work easier. She’s being paid to do that work, and the total extent of concern given to making it easier is an equation that accounting solves each quarter. It’s the same as the countertops in the bathrooms: first, are they what the company wants? Second, do they meet the requirements? Distant, unconsidered third: are they gonna cost too much to clean?


    Rather than doing what you are asking about, why not swap them over to the 21h2 ltsc iot version of windows 10, which will receive updates until 2032?

    Doing that will improve their lives by rolling the computer back to what they expect and are familiar with, avoiding the problems 11 is having, and still keeping them up to date.

    It’s probably best to do something like that instead of evangelizing linux to people who only want the computer to function in expected ways as opposed to learning a bunch of new stuff.


    I’m gonna go out on a very stable limb here and recognize that containers, immutability and atomic(ism?) are solutions to wildly different problems, and that the set of circumstances that allowed them to be viewed as acceptable approaches stems from the cost and reliability of storage and bandwidth, not from some form of correctness.

    Now that, at the very least, storage and the memory required to page it are getting expensive, you can expect people to become more vocal about how badly implemented these solutions are, whether or not they’re able to actually articulate it in the face of you stamping your feet and saying “nuh-uh”, as I have.

    I can tee you up, holy warrior of containerization, immutability and atomicisation: any vague gestures towards security from the aforementioned technologies are made redundant by two frameworks and invalidated by the compromised web of trust our entire world relies upon using identity as authentication.


  • Static linked libraries shipped with software exchange dependency hell for environment inconsistency.
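
    A minimal sketch of that trade, assuming gcc and a toy hello.c (every filename here is hypothetical):

    ```bash
    # Hypothetical sketch: the same toy program built two ways.
    gcc -o hello_dynamic hello.c           # resolves libc from whatever the host ships
    gcc -static -o hello_static hello.c    # bakes its own copy of libc into the binary

    ldd hello_dynamic    # lists the shared libraries the host must provide
    ldd hello_static     # prints "not a dynamic executable"
    ```

    The static build dodges dependency hell, but it now carries a libc that can drift out of sync with the host it runs on, which is the inconsistency above.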

    Extensive handlers and api calls can work around that, but then you start building the windows nt system all over again.

    The reason atomic/immutable became popular is because two generations looked out upon the plains and wept because there were no more useful programming problems to solve but had to suck it up and manufacture some so they could solve them to pad their resumes in order to get faang internships.


  • Not to get too awful off topic, but you can do that and it doesn’t work good.

    There are two problems with what you’re suggesting being fast. The first is that there are elements of motion or color change in the video that you don’t wanna trigger on, like the shadows of leaves blowing in the wind or the colors slowly going orange as the sun sets, and afaik there’s no good method to figure out whether the change in bitrate you’re catching is an imperceptible swarm of gnats moving under a street lamp and pushing the sidewalk exactly one bit down the chroma scale, or a man in a trenchcoat stepping out from behind the pole that lamp is mounted on. The second is that the video is already compressed, so you gotta decompress it to figure out what’s on the frame, then figure out whether the change from that frame to the next is enough to call “motion” and start flagging.

    It’s one of the reasons you super want to be working with raw or minimally compressed video when you’re editing something: once everything has to be decoded first, even simple operations get much harder from a computational perspective.

    E: the ffmpeg script another person posted checks every few seconds, so it doesn’t actually crunch through all the frames; sampling and search-style shortcuts like that can be used to make it faster. A sketch of that approach is below.
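
    A rough sketch of that sampling idea using ffmpeg’s built-in scene-change score (the filename and 0.1 threshold are assumptions to tune, not gospel):

    ```bash
    # Hypothetical sketch: sample one frame per second, keep only frames that
    # differ enough from the previous sampled frame to score as a "scene
    # change", and log their timestamps via showinfo.
    ffmpeg -i footage.mp4 \
      -vf "fps=1,select='gt(scene,0.1)',showinfo" \
      -f null - 2>&1 | grep Parsed_showinfo
    ```

    The scene score then only gets computed on the sampled frames. Decoding still happens for everything, so it’s a partial shortcut, not a free one.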



  • Eh, I think that’s a pretty bad use case.

    A long time ago my high school Spanish textbook used excerpts from some novel we had all been forced to read in earlier grades as practice reading sections. After I got to the end of the second or third level, I went back to read the first parts and realized that, in order to make the text appropriate for what early learners had under their belts, the translation had taken some liberties recognizable even to a high school student with a weak grasp of the language.

    Of course, there’s a good reason for that! Translation for the purposes of education is different from translation for the purposes of conveying the text’s meaning.

    So it would seem like a tool intended to translate a text that’s relatively difficult for native speakers to read into one that’s easy for native speakers to read wouldn’t be the best option for language learners.

    And rather than just go off that one experience, I can corroborate it with advice from language teachers to choose texts that aren’t above my own level.

    So I don’t think it’s a good tool for a language learner.


    Op, one of the reasons it’s frustrating for me to see so much focus put on flatpaks, snaps, docker images and the like is that they manage to excel at doing their one expected thing, but leave everything else by the wayside.

    Frankly I think their prominence is a direct result of the way their goal is structured: make sure the “🚀getting started” section of the git/wiki works 100% of the time.

    It’s a distillation of the poison ethos of technology companies dripping into the open source world. We are now moving fast and breaking things. Oh, the things we broke are the user’s environment? Well, it just so happens that we sell a premium product that integrates properly, for a small subscription fee.


    Yeah, it’s always sucked, and tbh the only place drag and drop has ever worked close to predictably is on the mac.

    As someone who uses linux, mac and windows, if you rely on drag and drop working right you’re probably best served by using macos. Windows is a distant second but if you get familiar with its eccentricities then it’s doable too.


  • Hey, someone already gave you the right answer, which is ffmpeg.

    I handle dozens of cameras and their video. I may be able to help you set your expectations appropriately.

    You do not need a gui. You would not feel more comfortable with a gui. There are so many options, methods and processes available just within the ffmpeg package that you would be overwhelmed.

    “Reasonably fast” to me is seven times faster than the source material. At that rate it would take ffmpeg a day to go through a week’s worth of footage and dump out the parts with motion, and that takes a very fast computer with lots of ram.

    Consider locating some footage with a few different sections of motion, feeding it into ffmpeg and making sure you get the output you want (files appropriately sized, time stamped, etc) then calculating how long it would take to do all your footage that way.

    Not only will working with a smaller, “known” section help you figure out how to do it and get it right, it will help you figure out if you need to rent time on a server or something to get the whole job done faster.

    E: I am trying to get you to do a test section in order to find out how fast your system will perform. Different factors like media speed, ram size, hardware acceleration and system load will have a significant impact.
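
    A hedged sketch of that test run (the clip name and 0.05 threshold are placeholders to tune):

    ```bash
    # Hypothetical sketch: time a scene-change pass over a short known clip,
    # then scale the wall-clock time up to estimate the whole job.
    time ffmpeg -i test_clip.mp4 \
      -an -vf "select='gt(scene,0.05)',showinfo" \
      -f null -
    ```

    For illustration only: if a 10 minute clip finishes in about 85 seconds, you’re running at roughly 7x real time, so a week of footage lands at about a day, as above.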


    In answer to your precise, specific question: yes, use the macos podcasts app in a vm or something. If you want something “simpler”, use podcast-dl or yt-dlp with the podcast rss link (a sketch is at the end of this comment). You can find the rss link from the podcast author or publisher.

    Here’s how this dumb shit works:

    When your software wants to get a podcast you’ve subscribed to, it uses the rss link to dial out to some web server that returns a list of episodes. Your software then chooses which ones it’s gonna ask for based on your settings or whatever and asks that server for the episodes themselves. The server could, if it were sufficiently motivated, store different versions of the episodes for each advertising zone and give the requester episodes with regional ads appropriate to their public ip inserted in place of the ad-free versions.
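
    To make that first hop concrete, here’s a minimal sketch (the feed url is a placeholder) of pulling a feed and listing whichever episode files the server decided to hand you:

    ```bash
    # Hypothetical sketch: the "dial out and get the list of episodes" step.
    # Each <enclosure> tag carries the url of one episode's media file.
    curl -s "https://example.com/podcast/feed.rss" \
      | grep -o '<enclosure[^>]*url="[^"]*"'
    ```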

    Now your podcast from Botswana has ads for the biggest ford dealership in the tri state area in it. Weird!

    If the server were extremely motivated, it could split the file up and insert ads from the closest advertiser to your ip willing to pay a premium for geotargeted ads in that particular podcast then put it all back together in real time as it sends it to you.

    Now your podcast from Botswana has an ad for weird Jim’s fried chicken sandwich restaurant that just opened across town. Alarming!

    Some ways to avoid this:

    Just be from a place no one advertises in. Someone said to set your vpn to Portugal; that can work. If nothing else, using a vpn will prevent accurate geolocation data from being recorded when you dl.

    Use a software. Someone said sponsorblock extensions for yt-dlp may catch and remove ads.

    Use a pirate feed of the paid or unpaid ad-free versions. They’re out there.

    Pay for the ad free versions.

    Set up an rss feed server that your device subscribes to instead of using the ad-inserting ones. You still have to get the content somehow, though; that’s an exercise for the reader.
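
    And the sketch promised up top, assuming podcast-dl’s --url flag (check its readme; the feed url is a placeholder):

    ```bash
    # Hypothetical sketch: point podcast-dl straight at the rss link and let
    # it pull every episode the feed currently offers.
    npx podcast-dl --url "https://example.com/podcast/feed.rss"
    ```

    Run it behind the vpn from the list above if you care about the geolocation angle.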




    MIT/apache/bsd are bad licenses and people who defend them are bad people. The effects of those licenses are bad.

    Arguing that non-free licenses are too popular is assuming nothing can change.

    Arguing that the kernel isn’t free enough to count arbitrarily sets the goalposts up and kicks right through em.

    Bad licenses are part of the infrastructure that allow the bad effects we see in the world to occur. Opposing them is good.

    You can hate hippies for their smell and unwillingness to get with the fucking program but they do be handing out Ls sometimes.