• 407 Posts
  • 1.83K Comments
Joined 7 years ago
Cake day: August 24th, 2019





  • In Germany men aged 18-45 apparently can’t leave the country for more than 3 months without submitting documents. It’s a new law.

    Shortly before (in 2025) they apparently approved a ‘hybrid’ system where 18 year olds are pre-drafted. Prior to that they had a fully volunteer force since 2011. There is also a mechanism in that law that the federal gov can introduce conscription if enlistment targets are not met.

    It’s getting really dire. They’re preparing for WW3; I’m not sure how else to interpret it. But this time it’ll be against anti-imperialism.

    If it starts getting too real skip the country and never ever come back tbh. Tell them you’ll come back in 3 months lol.





  • That’s even bigger than the 60k tons I last heard! It’s about a third of the yearly rice production in China.

    Reminder that China is not an oil exporting nation and has never been, they are a net importer (11m barrels imported vs 4m domestic production per day). There is no sense in China buying oil to try and send to Cuba, they would just be slapping a Chinese sticker on Russian oil that they already buy (China is Russia’s biggest buyer of oil and LNG).

    It’s better if they focus on providing what they do actually own and produce, such as solar panels and rice.

    Napkin math is easy enough: a portion of dry rice for one person is around 60g when used as a side dish. There are 11 million people living in Cuba, and I’m assuming that’s 90k metric tons, not imperial.

    At two portions a day, that’s enough rice to feed the entire country for ~68 days. At three portions a day (or with slightly larger portions), it would last ~45 days. And it’s not just rice, of course; 60g is good for a side dish, or if you’re really, really stretching it. A “full” portion, if you eat only rice and nothing else, might be 100-120g (around 360kcal in 100g of dry rice).
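    The napkin math above is easy to sanity-check in a few lines, using only the figures from this comment (90k metric tons, 11 million people, 60g portions):

```python
# Rice-supply napkin math using the figures above.
total_g = 90_000 * 1_000_000    # 90k metric tons, in grams
population = 11_000_000         # people living in Cuba
portion_g = 60                  # grams of dry rice per side-dish portion

def days_of_supply(portions_per_day):
    """How many days the shipment lasts at a given daily ration."""
    daily_need_g = population * portions_per_day * portion_g
    return total_g / daily_need_g

print(round(days_of_supply(2)))  # → 68
print(round(days_of_supply(3)))  # → 45
```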




  • Project Gutenberg has a copy, but it’s an older translation/edition. There’s a newer edition that flows much better imo; you can find it by just looking for a pdf, though it remains a dense read. I recommend the first book (maybe even reading it twice if needed) and the second book. The first book is still required reading in modern military academies. I think I stopped at book two; I don’t remember what the third book is about right now, but the fourth is probably not that interesting anymore since it deals with tactics in specific situations (like attacking or defending a river). I’m sure that was fire in the 1800s, but today somehow I don’t think it applies as much lol. Although you never know.



  • There are technically different ways to train models, and they work differently, but in the end they’re all neural networks operating on layers. What I mean is that ‘genAI’ isn’t really a thing beyond a vague boogeyman, singled out as some unique ‘evil’ because detractors have to concede there are actual uses for AI while still wanting to retain their apprehension toward it. It doesn’t name the actual problem they have: either with big tech companies, or with the loss of their sense of superiority for not using AI. But if we have a problem with OpenAI, Anthropic, Amazon etc., then we should be able to name them and study them without lumping everything into the ‘genAI’ label.

    As an example, when you use a sentence-transformer to turn a sentence into an embedding (a vector in N dimensions that captures the sentence’s semantic meaning in pure numbers), you’re using genAI… if genAI had an objective, measurable definition. The sentence-transformer generates that vector from your prompt, based on how the model was trained.

    Yet you can use sentence-transformers for a lot of things that are not necessarily ‘generative’. Making a search engine, for example, which I did for a hobby project. I wouldn’t say Google is ‘generative’ though.
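    That search-engine use boils down to nearest-neighbor lookup over embeddings. A minimal sketch, with made-up 3-dimensional vectors and hypothetical documents (a real sentence-transformer would output a few hundred dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- invented for illustration, not real model output.
docs = {
    "how to cook rice": [0.9, 0.1, 0.0],
    "gpu ray tracing": [0.0, 0.2, 0.95],
    "steaming vegetables": [0.5, 0.5, 0.2],
}

query = [0.85, 0.2, 0.05]  # pretend this is the encoding of "rice recipes"
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # → how to cook rice
```

    Swap the toy vectors for real model outputs and that’s the core of an embedding search engine; nothing about the lookup itself is ‘generative’.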

    So what is genAI? It’s whatever one doesn’t like. That way they can distance themselves from ‘genAI’ while conceding the actual use cases of AI, because there are indeed objectively beneficial uses for it, and they can’t keep denying that reality forever, lest they look like fools (like when twitter didn’t understand how image generation worked early on and tried to claim it was pasting together pieces from thousands of different pictures. They moved on from that very quickly once they learned about noise diffusion).

    I know I’m a bit all over the place because I haven’t synthesized this on paper yet, but basically I don’t like the distinction because it creates a divide between socially acceptable AI use and socially unacceptable AI use. But that difference doesn’t exist; bullying people into compliance is idealist and will not lead to lasting change, material conditions will.

    This leads us to being able to talk about electricity/water consumption. I don’t doubt MIT’s findings, though I will say estimates are only ever estimates, and calculating actual, final energy use is difficult even when you have all the data available.

    However, like I often say: if we united all the countries of the world, we could have the largest GDP in the universe. What I mean is that we must not miss the forest for the trees. One hour of running a microwave seems like a lot because we usually don’t run the microwave for more than 3 minutes at home, but you know who runs microwaves all day long without a care in the world? The fossil fuel industry. Golf courses. The meat industry. A single grocery store throwing away hundreds of kilograms of perishable food has done more environmental harm than my microwave ever could over its lifetime of heating up my food.

    Even gaming takes more power than running a local neural network, whether an LLM or an image-diffusion model. Youtube is hosted in datacenters too, and some years back it was all the rage on Linkedin to try and shame proles for watching too much youtube: “watching one hour of youtube consumes as much power as leaving the lights on when you don’t use them! So think about that when you leave it on as background noise!”
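    For scale, a back-of-envelope comparison; every wattage below is my own rough assumption for illustration, not a measured figure:

```python
# Back-of-envelope energy comparison. All wattages are rough
# assumptions for illustration, not measurements.
GAMING_GPU_W = 300   # high-end GPU under sustained gaming load
MICROWAVE_W = 1100   # typical household microwave
LOCAL_LLM_W = 250    # GPU running a local model at partial load

def kwh(watts, hours):
    """Energy in kilowatt-hours for a device drawing `watts` for `hours`."""
    return watts * hours / 1000

print(kwh(GAMING_GPU_W, 1.0))    # 1 h of gaming: ~0.3 kWh
print(kwh(MICROWAVE_W, 3 / 60))  # 3 min of microwave: ~0.055 kWh
print(kwh(LOCAL_LLM_W, 0.25))    # 15 min of local inference: ~0.0625 kWh
```

    Under these assumptions, an hour of gaming costs several times the energy of a typical microwave run, which is the point: the scary-sounding comparisons depend entirely on what you compare against.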

    We have to move away from individual citizen responsibility (i.e. instilling a sense of moral failure in people for not living up to some standard we impose on ourselves and each other) and towards systemic, structural change. There is no ethical consumption under capitalism; people are allowed to watch Netflix and drive cars, and they will do it regardless of how many managers on Linkedin disapprove. That’s nothing compared to a billionaire flying a private jet for a 15-minute trip or the meat industry making a beef patty.

    That’s not to say there aren’t issues with the way AI is treated in the West. The US, in its usual way, has given AI companies carte blanche to do whatever they want regardless of the law. This is why datacenters pollute; people like Elon Musk buy gas turbines to power their datacenters because the US grid could not power them even if it wanted to. Gas turbines normally need EPA approval, but they just don’t care, because they can absorb the fine and they figure they won’t even be hit with one. And so far that’s been true. Musk’s datacenter in Memphis has 12 turbines when deploying even one is already a huge deal. But that’s the US; it’s not new, and it’s not the only way of doing things, it’s just theirs. In China they are installing the US grid’s equivalent of solar every 18 months, so it’s very likely that a substantial portion of Deepseek or GLM (z.ai) is powered by solar (I tried to look for more information once before, but it doesn’t really seem to exist). If we limited ourselves to saying “oh, that’s just how genAI is, genAI is bad for the environment”, we would miss all of that and never study the problem deeper.

    We agree overall though: education on anything to do with AI is lacking, and it’s going to be important to teach people (both in school and outside of it) about AI. I wanted to add this comment to answer @burlemarx@lemmygrad.ml’s comment as well.


  • War is simply the continuation of politics by other means; to win a war, you have to ‘shatter the enemy’s will to resist’, to the point where the enemy is no longer able or willing to fight. Clausewitz adds that the simplest way to win is to disarm the opponent so that he cannot prevent you from imposing your will (you can see the dialectical thinking, and even though Clausewitz was an idealist, there is clearly a material reality here).

    If Iran can achieve the above, they can win anything. It depends on who gets their will ‘shattered’ first, and to what extent. When it comes to disarming, I think the US is well on its way there, given the interceptor shortage and the fact that invading Iran is complete nonsense from the get-go. Saddam tried, and he had a full land border with Iran! I do want them to try invading though lol, just for them to taste absolute defeat. Kharg is probably a misdirection; they would land well before the Strait, in a relatively flat area, Chabahar. But then they would be confronted by harsh deserts - Iran is either mountains or flat deserts. They won’t make it 20 kilometers inland, and I suspect that if we do see an invasion, Iran will let them land relatively unchallenged so they can trap them there more easily. We shall see.

    But anyway. There is precedent in Vietnam, for example. Not only through the two principles outlined above, but also through Clausewitz’s point that war progresses dialectically: both parties don’t immediately commit the totality of their forces, they gradually build them up, and it snowballs as each needs to commit more and more to outdo the adversary.

    In Vietnam the war became costly. That might be the typical liberal analysis of it, but it’s the one I have lol. It was costly both in terms of money and equipment drained, and in loss of life. I don’t know how much the protests in the US contributed; I think that argument is often used as white-savior reasoning, i.e. “even when Vietnam won, it was because we let them win”. When Vietnam won, they forced the US to withdraw fully within 60 days, and then seized the comprador southern state shortly after, unopposed.

    But right now the will to fight is very high in the US. It’s going to be difficult to knock them down from their pedestal. But when that happens, Iran can firstly very easily end the sanctions against it, at least for a time, and pursue its nuclear program freely in the way it wants. We both know the UN is a tool of imperialism and will just go along with whatever the US wants.

    The bases around the Gulf are completely destroyed and keep being pummeled, so it’s entirely possible the US won’t even want to build them back up. It will take 10+ years by some estimates to rebuild some of the radars alone. They might want to rebuild them partially, with a scaled back presence. But the damage is done.

    With that, I think it will be possible for them to charge a toll through the Strait. Who would oppose them? The other Gulf states are refusing to get involved beyond harshly worded letters; they know they don’t have any defenses left if Iran decided to go after them.

    “Israel” is a tougher case for me to analyze. I know that Iran is heavily shelling the entity, especially with cluster munitions - these are meant more for soft (i.e. fleshy) targets. They do pack a punch, but you also don’t really control where they fall, so their utility is in saturating an area and preventing congregation or passage through it. But Iran has shown they can easily target whatever they want, especially with cheap Shaheds, so at this point I think the clusters’ use is more psychological, aimed at the settlers. And if you haven’t kept up with them: there are a ton being used. Every day I see new videos.


  • When this came out, westerners were crying about ‘muh AI’ and how this was a terrible decision - because only they, through actively refusing to understand AI, actually understand AI (don’t laugh!). The school board of the third-biggest city in the country, 21 million people, is immediately and irrevocably wrong, because westerners have decided that if they don’t like AI, then nobody should like AI either. This project got its start with the Beijing school board before being scaled up nationally, as is often the case in China with pilot projects.

    Westerners, including some ‘communists’, want schools to be places where kids only learn manual skills, like how to file taxes, parallel park or cook a meal, and nothing intellectual whatsoever. Beijing is not saying that kids will go into an LLM career and nothing else; they are giving them a little more of a taste of what exists in the world, what it has to offer. Some of them will build prosthetics powered by AI, win a prize, and discover a career in engineering.

    But to us, everything has to justify its own cost and profit-making ability, even schooling. Rail has to be self-sufficient within five years to be even tentatively approved, and the economic stimulus it provides is not considered in the equation at all: it has to cover its own cost of operation. If there is no immediate gain from it, then we don’t want it. And not only that, we don’t want others to have it either.




  • Okay, I ended up doing a bunch of research and writing and rewriting this comment a few times lol but I think we got to the bottom of it.

    In the final analysis, I think what dlss 5 shows is that nvidia is betting on moving away from ‘traditional’ GPUs towards tensor-architecture processing units made especially for running AI models.

    So instead of rendering ray tracing, subsurface scattering, hair physics etc. directly on the GPU all at once, they would have an AI model running on a TPU (tensor processing unit) render those effects onto the frame/geometry. This would give back some breathing room in computing power, if it pans out.

    This would cement their status as a monopoly or near-monopoly in TPUs, but it would also bypass the bottleneck of current tech, which is not scalable indefinitely - the 5090 is already pushing manufacturing capabilities. The new dlss does help performance compared to the same ‘native’ options, especially on TPUs if they go that way, but even on the GPU.

    This could work, but it’s very early. How it will pan out in practice is still anyone’s guess; it’s too early to be sure that we’ll just have to settle for TPUs and ‘slop’.

    Keep in mind nvidia is already the leader in GPUs, and games are made to their specifications for their hardware (for the most part). They’re the ones who released PhysX and then shelved it. They’re the ones who make ray tracing and HDR that people can’t run, and that’s tech we already have. So I don’t necessarily see the move to TPUs, if it even ends up happening, as wholly different from what we’ve been living with for 20+ years.

    In my opinion this showcase was more of a developers’ demo, though it seems I was right that nvidia engineers applied the model to the games without going through the devs - the artists who worked on some of those games were surprised that their game was in the video. The engineers used an aggressive setting; capable devs could instead use it sparingly, or however fits their art direction, and others won’t care and will just press the ‘enable everything’ button.

    However, the fact remains that most people won’t be able to run it. They have announced that dlss 5 will initially require a single 5090, which is simply out of most people’s price range, and even if it were affordable, not everyone could use it. So they’re sowing the seeds now, knowing that devs will end up using the SDK and thus get ‘locked’ into nvidia - like they’ve been doing for the past 20 years, of course.

    If this pans out for nvidia, hardware requirements will go down, and along with that the model will be expanded to allow for more use cases, like LoRAs or fine-tuning on the devs’ part to get it to look the way they want. In general, AI is in the same situation Photoshop was in its time: people don’t know how to approach it at first and think it’s taking away their intent, then they get comfortable with it and find ways to still show intent even with different tools. Companies started making digital drawing surfaces etc. It’ll be similar here, but in the final analysis what we see is capitalism doing its monopoly thing. I know it’s a ‘duh’ moment lol, but it’s interesting seeing it play out perfectly from just a showcase video.

    What will likely happen in the short term is that studios will use ‘captured in-game (*with dlss 5)’ disclaimers in their trailers, and they will include it in the game, but just like motion blur it’s something people won’t turn on - not that they could, for at least a good few years lol. From my research I found that graphics are a big selling point even when people won’t be able to run the game at max settings. Of course we knew graphics sell, but it’s interesting that it doesn’t seem to matter whether the customer can actually run the graphics - they just like that it looks good, even knowing they won’t get those graphics out of it.

    tl;dr: a new paradigm shift that shakes up the market lenin-pointing . But yeah, the big takeaway is a complete shift from GPUs to TPUs, with all that entails.