When I first got into local LLMs nearly 3 years ago, in mid-2023, the frontier closed models were of course impressively capable.
I then tried my hand at running 7B-size local models, primarily one called Zephyr-7B (what happened to those models?? Dolphin, anyone??), on my gaming PC with an 8GB AMD RX 580. Fair to say it was just a curiosity exercise (in terms of model performance).
Fast forward to this month: I'm revisiting local LLMs. (Although I no longer have the gaming PC. Cost-of-living crisis, anyone? 😫)
And the ~32B-size models now look very sufficient. #Qwen has taken the helm in this class. That's still quite expensive to set up locally, though within grasp.
I'm rooting for the edge-computing models now: the ~2B-size models. Thanks to their low footprint, they're practical for many people to run 24/7 on an SBC at home.
But these edge models are in the ‘curiosity exercise’ category for now.
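For the curious, a minimal sketch of what that 24/7 SBC setup could look like with llama.cpp, CPU-only (the model file here is a placeholder, not a recommendation):

```
# minimal sketch: CPU-only llama.cpp server for a ~2B GGUF on an SBC
# model path is a placeholder -- swap in whatever small model you prefer
# -c keeps context modest to limit RAM; -t should match your core count
./llama-server -m ./models/some-2b-model.Q4_K_M.gguf \
    -c 2048 -t 4 --host 0.0.0.0 --port 8080
```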
I didn't try any 7B ones lately; they might be a better fit for 16GB, I think. I did try the 2B ones as I mentioned (on CPU), and they're subpar. Like I mentioned, the usable ones were the ~32B models, but I think you need at least 24GB of VRAM for most of those. Maybe someone else can suggest better.
You can give “offloading some layers to system RAM” a try though… that way you can get your hands on the “usable” ~32B models (rough sketch below). Browse around to find some good ones… GL
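A rough sketch of what that looks like with llama.cpp, assuming a 4-bit ~32B GGUF (model path and layer count are placeholders; tune -ngl down until it fits your VRAM):

```
# partial GPU offload: only the first -ngl layers go to VRAM,
# the rest stay in system RAM (slower, but it runs)
./llama-cli -m ./models/qwen-32b.Q4_K_M.gguf \
    -ngl 40 -c 4096 \
    -p "Write a bash script that rotates my log files"
```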
Do you have 24 GB?
That’s your issue.
For small models, the Bonsai series seems to be getting the spotlight. Natively trained at 1-bit and ternary 1.58-bit precision; the 8B supposedly runs in ~1GB of memory.
Funny, I tried the 8B Bonsai (https://huggingface.co/prism-ml/Bonsai-8B-gguf) and when loaded it takes ~7GB of RAM!! When prompting, it stalls my llama.cpp container (I'm running on a weak 4th-gen i5).
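For what it's worth, a back-of-envelope check (assuming the weights dominate memory use) says a truly ternary 8B should be far smaller than that:

```
# 8B params at ternary 1.58 bits/param:
echo "8000000000 * 1.58 / 8 / 1000000000" | bc -l   # ~1.58 GB of weights
# the observed ~7GB works out to roughly 7 bits/param -- closer to an
# 8-bit GGUF than a packed ternary format, so that file is probably
# not actually stored ternary
```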
Interesting, thanks!
What sort of stuff do you want to use them for? I don't think they come remotely close to today's commercial models. Maybe for a specific purpose?
Hey, thanks for your response… yeah, that's what I meant: the 2B models aren't usable in their current state, but they'd be more practical for everyday use if they work out…
I actually meant that the ~32B models are useful for my purposes. I don't do full-on agentic coding, just interactive chat/prompting. For example, I make good use of them for writing Linux shell scripts (as I don't know how to write them myself). Currently I use qwen3.5-flash via the cloud. It's as good as the frontier models from back then, if not better…
I wanted to use smaller models but put more work into the "thinking" process. I didn't get far, because it gets so slow on normal hardware and too expensive on dedicated hardware. Time-consuming (I'm also not a programmer) but a fun project; in the end I just decided to satisfy the privacy angle with Proton's AI, Lumo.
Proton has AI? Damn, that’s gotta be bleeding their coffers
They've been working on it. Only 3 months ago it was pretty terrible; today it's almost on par with ChatGPT. A bit worse at RAG, slower… but good enough for normal use.
This weekend I had an LLM walk me through setting up some home server stuff and networking. I tried using Proton's Lumo and Qwen 3.6 locally, and I have to say Qwen was the more impressive of the two. When I first tried running models like Llama 4 locally, I remember thinking this was a dead end and big servers would always have the advantage, but it seems like we're hitting a turning point where many things can be done locally.
Cool, what was your hardware, and which Qwen size did you use? Thanks
I have a 24GB AMD 7900 XTX, and it's a 35B-parameter model.
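In case anyone on the same card wants to try: this is roughly the build incantation from llama.cpp's HIP docs (the 7900 XTX is gfx1100; the cmake flag names have changed between versions, so double-check against your checkout):

```
# build llama.cpp with ROCm/HIP for a 7900 XTX (gfx1100)
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
    cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j
```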
Ooo… I’m running a 7900 XTX as well. Having 24GB without the Nvidia tax has been super nice for AI stuff. I have a 16GB 6900 XT running in another computer, and a lot of my AI model selection is still sized for it. I may need to stop procrastinating and copy your setup sooner rather than later.
Before I forget, can I ask you what GPU driver version you’re running? I recently encountered some stability issues after a driver update (trying to support gaming and AI stuff at the same time), and the latest version I could find any stability claims for was 24.12.1.
For me, anything less than gpt-oss-20b (3.6B active) is just for messing around with, or for basic categorisation and basic text/data processing with highly structured prompts.