Jean-Marc | asiai @jmn67 ·
Same model, same machine, two engines. LM Studio: 102 tok/s, 12 W. Ollama: 70 tok/s, 15 W. +46% throughput, -20% power draw. On a Mac Mini M4 Pro. The inference engine matters as much as the hardware. #LocalLLM #EdgeAI #AppleSilicon
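The percentages in the post above check out against the raw numbers. A quick sketch to reproduce them (figures taken from the tweet; energy-per-token is an added derived metric, since that is what matters on battery-bound edge hardware):

```python
# Numbers as reported in the tweet: LM Studio vs Ollama on a Mac Mini M4 Pro.
lmstudio = {"tok_s": 102, "watts": 12}
ollama = {"tok_s": 70, "watts": 15}

throughput_gain = lmstudio["tok_s"] / ollama["tok_s"] - 1   # ~ +0.46
power_reduction = 1 - lmstudio["watts"] / ollama["watts"]   # = 0.20

# Derived: joules per token = watts / (tokens per second).
j_per_tok_lmstudio = lmstudio["watts"] / lmstudio["tok_s"]  # ~0.118 J
j_per_tok_ollama = ollama["watts"] / ollama["tok_s"]        # ~0.214 J

print(f"throughput: {throughput_gain:+.0%}, power: {-power_reduction:+.0%}")
print(f"J/token: {j_per_tok_lmstudio:.3f} vs {j_per_tok_ollama:.3f}")
```

Per token, that is roughly a 45% energy saving, larger than either headline number alone suggests.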
YaaS 📡 @yaas0x ·
Cloudflare just launched AI Security for autonomous agents. Zero egress R2 is eating S3 alive. Workers AI running Llama 4 at the edge. The "CDN company" is now an AI infra giant and most builders haven't noticed yet. ⚡ #Cloudflare #EdgeAI
Praveen Kumar Verma @Alacritic_Super ·
The shift: Cloud AI → Distributed intelligence. AI is no longer somewhere else. It is inside your devices. And that changes everything. Follow for more real-world AI breakdowns. #EdgeAI #IoT
JTCrawford @JtCrawford ·
CERN burning neural nets directly into silicon for nanosecond particle detection at the LHC. When physics experiments demand edge AI faster than cloud latency allows, decentralization wins. Compute sovereignty isn't ideology—it's physics. #EdgeAI #Innovation
Avisheka @Avisheka284449 ·
Which model quantization technique is best suited for smartphones at this point? Especially if the model is fine-tuned, since that tends to amplify outliers (if any) in the weights. From a hardware-compatibility POV, what's currently most robust? #LLM #EdgeAI #OnDeviceAI #Quantization #MobileAI
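Not an answer to which scheme is "best", but a toy sketch of why the outlier concern in the question above is real: with naive absmax int8 quantization, a single fine-tuning-induced outlier inflates the scale and crushes the precision of every ordinary weight. Percentile clipping is shown as one common mitigation; production schemes (per-channel scales, GPTQ, AWQ, etc.) are more involved. All values here are synthetic and illustrative:

```python
import numpy as np

def quantize_int8(w, clip_percentile=None):
    """Symmetric per-tensor int8 quantization. Optionally clip outliers
    at a percentile of |w| before choosing the scale (illustrative only)."""
    if clip_percentile is not None:
        max_abs = np.percentile(np.abs(w), clip_percentile)
    else:
        max_abs = np.abs(w).max()
    scale = max_abs / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, 4096)  # typical small weights
w[0] = 1.0                     # one fine-tuning-induced outlier

for pct in (None, 99.9):
    q, scale = quantize_int8(w, pct)
    err = np.abs(w - q.astype(np.float64) * scale).mean()
    print(f"clip={pct}: mean abs reconstruction error {err:.6f}")
```

The clipped variant sacrifices the one outlier but reconstructs the bulk of the weights far more accurately, which is the trade most outlier-aware methods formalize.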
arlec 🥊 emoji @arleclec ·
Google TurboQuant cuts on-device model memory use by 6x and boosts speed 8x. The era of the true "laptop AI agent" is here: no more reliance on cloud APIs, with privacy and low latency coexisting. In the future, everyone's computer will be a sovereign intelligence hub. #AI #TurboQuant #EdgeAI #OpenClaw
AI Crypto Scanner @aicryptoscanner ·
AI BOTS HARVEST POLYMARKET PROFITS Algorithms extract yield from short-term crypto markets using execution speed advantages. Bots arbitrage price discrepancies in milliseconds, effectively front-running human traders. Retail participants lose edge as prediction markets prof...
Mukunda Katta @katta_mukunda ·
The convergence of neuromorphic computing and foundation models is quietly reshaping edge AI. Imagine running a 7B parameter model on a chip that sips milliwatts — that's where we're headed by 2027. The real bottleneck isn't compute anymore. It's memory bandwidth. #AI #EdgeAI #Neuromorphic #DeepLearning #Tech
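The "memory bandwidth, not compute" claim above has a simple back-of-envelope form: during autoregressive decode, each token must stream (roughly) all model weights through memory once, so tokens/s is capped by bandwidth divided by weight bytes. A sketch with illustrative numbers (the 7B size comes from the tweet; the ~100 GB/s edge-SoC bandwidth is an assumption):

```python
# Roofline-style upper bound on decode speed: tok/s <= bandwidth / weight_bytes.
def decode_ceiling(params_billion, bytes_per_param, bandwidth_gb_s):
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb  # tokens/s, ignoring KV cache and compute

# A 7B-parameter model on a hypothetical ~100 GB/s edge SoC:
for bytes_pp, name in [(2.0, "fp16"), (1.0, "int8"), (0.5, "int4")]:
    print(f"{name}: <= {decode_ceiling(7, bytes_pp, 100):.0f} tok/s")
```

This is also why quantization helps decode speed, not just footprint: halving bytes per parameter doubles the bandwidth-bound ceiling.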
rizki firmansyah @rizkifi27626964 ·
GPT-5.4 Mini: - 40% smaller than GPT-5 - 3x faster inference - Runs on edge devices (Jetson, Raspberry Pi 5) GPT-5.4 Nano: - 10x smaller - Real-time inference on microcontrollers - Perfect for embedded systems #EdgeAI
MX3 Dev @Mx3Dev ·
Compressing Qwen3.5-35B by 20% with only 1% quality loss makes it possible to run the full model on 24GB-VRAM GPUs. This democratizes robust LLMs for more companies and developers, opening new doors for on-premise inference. 🧠💡 #IA #EdgeAI x.com/huggingface/st…
0xSero @0xSero ·
Qwen3.5-35B compressed by 20% with a ~1% average performance drop. Now you can fit this (4-bit) with full context in 24GB of VRAM (~$700, or 1x 3090) huggingface.co/0xSero/Qwen-3.…
0xSero/Qwen-3.5-28B-A3B-REAP · Hugging Face

We’re on a journey to advance and democratize artificial intelligence through open source and open science.

From huggingface.co
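The 24GB claim above is easy to sanity-check: pruning ~20% off 35B parameters leaves ~28B, and 4-bit weights cost 0.5 bytes each. A rough sketch (the small overhead for quantization scales and the KV-cache budget are assumptions, not figures from the post):

```python
# Does a 4-bit ~28B model fit in a 24 GB card? Back-of-envelope only.
params_billion = 35 * (1 - 0.20)          # ~28B after ~20% compression
weight_gb = params_billion * 0.5          # 4-bit -> 0.5 bytes/param -> 14 GB
budget_gb = 24
headroom_gb = budget_gb - weight_gb       # ~10 GB for KV cache, scales,
                                          # activations, runtime overhead
print(f"weights ~{weight_gb:.0f} GB, headroom ~{headroom_gb:.0f} GB")
```

Roughly 10 GB of headroom is what makes "full context" plausible on a single 3090-class GPU; quantization metadata typically shaves only a little off that.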
Edge AI and Vision Alliance @edgeaivision ·
Agentic AI needs memory to be useful. This upcoming webinar examines how memory systems—short- and long-term—enable agents to reason, adapt, and act over time in real-world workflows. 🔗 edge-ai-vision.com/2026/03/upcomi… #EdgeAI #AgenticAI #AIArchitecture #MLSystems
Upcoming Webinar on Agentic Memory Systems - Edge AI and Vision Alliance

On April 16, 2026, at 1:00 pm EDT (10:00 am PDT) Boston.AI will deliver a webinar “Remembering to Forget: Agentic Memory Systems and Context Constraints” From the event page: As AI agents evolve from...

From edge-ai-vision.com
Efficiently Connected, Inc. @Eff_Connected ·
⚡ Cisco is pushing AI factories to the edge. With NVIDIA, it's building: • Secure, distributed AI infrastructure • Networking that powers inference at scale • Agent-ready, policy-driven environments. The shift: from AI clusters → AI operating architectures #AI #EdgeAI #Networking
Subir Biswas | Techblogs @SubirBiswas ·
On-device AI is changing the game — faster, more private, and kinder to batteries. What that means for phones, wearables, and even rail & autos in 2026: wix.to/c0KmZHh #OnDeviceAI #EdgeAI #AI2026 #TechNews
Gadgets That Learn You: How On-Device AI Is Quietly Revolutionising Electronics in 2026

The smartest gadget of 2026 isn’t the one with the most features—it’s the one that understands you locally, keeps your data on your device when it can, and only calls home when it must. That’s the...

From vertexknowledge.com
Edge AI and Vision Alliance @edgeaivision ·
Always-on voice at the edge—without the cloud. @MicrochipTech’s lightweight keyword spotting solution targets low-latency, low-memory wake word detection on MCUs and MPUs—enabling practical voice interfaces in constrained systems. 🔗edge-ai-vision.com/2026/03/lightw… #EdgeAI #TinyML
Lightweight Keyword Spotting Solution from Microchip - Edge AI and Vision Alliance

Microchip presents a customizable, target-agnostic solution to program wake words and voice commands. The ML model, generated and tested using a custom application, has low latency and a minimal...

From edge-ai-vision.com