🦞 OpenClaw on local models has been an absolute grind, but I finally got my Radeon 6750 XT actually using the GPU properly. Running everything on Ubuntu with Ollama now.
No more paying for API calls. No more watching the CPU melt while the GPU sits there doing nothing. Here's the current setup: AMD Radeon RX 6750 XT (12GB), Ubuntu Linux, Ollama backend, OpenClaw handling the agent stuff (Telegram integration, tools, etc.).
I just wanted something that runs fully offline and is actually usable day-to-day.
The first attempts were rough. Tried a couple smaller models that kinda worked but felt outdated and slow.
Moved to Qwen2.5-7B for better tool calling… still the GPU was barely waking up. radeontop sitting at like 10%, CPU pegged around 200%. Everything felt laggy.
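If you're hitting the same wall, this is roughly how I'd check whether the model is actually on the GPU (standard Ollama + radeontop commands, nothing custom):

```shell
# Is the loaded model actually on the GPU?
ollama ps      # check the PROCESSOR column – you want "100% GPU", not "CPU"

# Live GPU utilization while a response is generating.
# Single-digit % during generation = the layers aren't being offloaded.
radeontop
```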
AMD + local LLMs can be brutal sometimes.
The turning point was switching to Ollama on Ubuntu. Spent a ton of time tweaking the model settings and cleaning up my OpenClaw config. Pushed as many layers as I could to the GPU, adjusted context and batch sizes to fit what OpenClaw needs.
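The tuning mostly happened through an Ollama Modelfile. A sketch of what that looks like (values here are illustrative, not my exact numbers – `num_gpu`, `num_ctx`, and `num_batch` were the knobs that mattered):

```shell
# Build a tuned variant of the model (example values – tune for your VRAM)
cat > Modelfile <<'EOF'
FROM qwen2.5:7b
PARAMETER num_gpu 99     # offload as many layers as fit in 12GB
PARAMETER num_ctx 8192   # context window sized for what OpenClaw needs
PARAMETER num_batch 256  # batch size – smaller frees VRAM for more layers
EOF

ollama create qwen-agent -f Modelfile
```

The tradeoff: bigger context eats VRAM, which means fewer layers fit on the GPU, so it's a balancing act.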
Kept staring at the monitors… and finally the GPU started pulling real load.
That moment when utilization actually jumped? So satisfying. Responses feel way snappier now. Tool calling is reliable, everything stays local.
12GB isn't massive, but with the right quantized models this card can actually run a proper local agent.
Not gonna lie – it's still not plug-and-play. Linux + Ollama + AMD needs some driver fiddling and careful tuning (ROCm stuff, model choices, etc.). Bigger models would want more VRAM, but this setup shows consumer GPUs can handle real self-hosted agents.
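The big RDNA2 gotcha: the 6750 XT (gfx1031) isn't on ROCm's official support list, so the common workaround is overriding the GFX version so it's treated as a gfx1030 card. Roughly (double-check the override value for your specific chip):

```shell
# Tell ROCm to treat the 6750 XT (gfx1031) as a supported gfx1030 card,
# then make the Ollama service pick up the env var:
sudo systemctl edit ollama
# add in the override file:
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
sudo systemctl restart ollama
```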
This whole thing took around 12 hours of debugging and tweaking. Totally worth it though. Zero token costs, everything private, and my own hardware is finally doing the work instead of just sitting there.
Anyone else running OpenClaw with Ollama on AMD cards (especially 6750 XT or similar RDNA2 stuff)? Use what you've got – you can always upgrade later.
What's your setup like? Any tips or gotchas I should know about?
RT if you've ever been stupidly happy about a GPU usage spike 😂 Local agents on regular hardware are actually becoming doable.
Reply "OpenClaw Setup" if you want my config settings (and make sure to follow too).
#OpenClaw #LocalLLM #Ollama #Radeon #RX6750XT #SelfHostedAI #Ubuntu