Today's Cloudflare outage brought much of the internet to its knees, taking down critical AI services along with it.
It's a stark reminder of the question: what happens when the infrastructure we depend on fails?
Here's a thought worth considering: what if we ran AI models locally instead?
This is exactly what Liquid Foundation Models are pioneering. They're bringing state-of-the-art AI directly to your device.
No cloud dependency. No single point of failure. Just powerful AI that works when you need it, where you need it.
The future of AI might not be in massive data centers, but right in your hands.
This weekend, more than 70 engineers, researchers, and builders convened at our SF office for Hack the Edge, a 48-hour sprint co-hosted with AMD Developer.
Equipped with AMD mini-PCs powered by Ryzen™ AI, Liquid Foundation Models and the ROCm™ stack, teams built, fine-tuned, and deployed edge-native applications entirely on-device.
They produced more than 20 applications spanning multimodal search, sensor intelligence, real-time audio understanding, and low-latency agents – demonstrating what’s possible when speed, efficiency and capability converge on the edge.
Congratulations to our Hack the Edge cash prize winners!
🥇 1st place: Semantic Video Search - On-device video clip retrieval with natural-language semantic search powered by Liquid Vision Models, by Nihar Palem, Rohan Sharma, and Angelo C.
🥈 2nd place: Liquid Sense - A private smart-home agent enabling camera-based scene understanding and function calling, by Raahul Vignesh and Sarthak Mohanty
🥉 3rd place: Echo Finder - A visual-audio memory system helping users locate lost items through wearable video and multimodal reasoning, by Reva A., Ayush Goel, and Arjun Kohli
Read more about the weekend: https://lnkd.in/ezDwVYda
What could you build with LEAP? https://leap.liquid.ai/
We just wrapped an incredible two-day hackathon where 20 teams joined us in our SF office to build real-world applications using Liquid Foundation Models running on AMD mini-PCs.
The range of ideas and level of innovation was inspiring — creative builds, sharp conversations, and big ideas pushing the edge forward.
Thank you to our partners at AMD Developer and all our hackers who made this weekend such a blast!
Stay tuned to meet the winners and see what they built 👀
From hands-on building to big conversations about the future of AI, this weekend’s Hack the Edge event was full of energy and innovation.
A huge thank you to everyone who joined us, and to our friends at Liquid AI for bringing brilliant minds together at their office in San Francisco to explore what’s possible at the edge. ⚡️
Today, we’re announcing our partnership with Shopify to bring Liquid Foundation Models (LFMs) to core commerce experiences. Shopify will license LFMs to enhance search and recommendations, improving relevance, conversions, and customer experience at scale.
The first production deployment is a sub‑20ms LFM that enhances search.
Shopify and Liquid have also co-developed a generative recommender model with a novel HSTU architecture. In controlled tests, the model beat the previous stack, leading to higher conversion rates from recommendations.
Read the full story: https://lnkd.in/eeKMAQcg
Watch a conversation on the topic by Mikhail Parakhin (Shopify CTO) and Ramin Hasani (our CEO): https://lnkd.in/ekysG6j5
Hack the Edge by AMD × Liquid AI
Join the AMD and Liquid teams at the Liquid AI office in SF for an exclusive hackathon, Nov 15-16.
Over these two days you will build unique local, private, and efficient AI applications directly on AMD hardware — with guidance from Liquid and AMD researchers.
The challenge will be revealed on site. Winners receive their share of $5K.
Apply to join: https://luma.com/smik3k94
Start building with LEAP: leap.liquid.ai
Our first in-person Liquid AI x Weights & Biases x Lambda Hackathon in Tokyo was a blast! 🇯🇵 We could really feel the passion of Tokyo's developers!
Over two days, 20+ incredible teams explored Liquid AI’s Nanos models and built apps ranging from saving lives to translating manga, showing what’s possible when efficient tiny models are deployed on-device by creative builders.
Thank you to everyone who participated and made the event so much fun and congratulations to our winners!
🥇 1st Place - SafeGuide: An AI-powered disaster-guidance system that delivers clear, reliable safety instructions locally, even without connectivity, by Changyu Hu
🥈 2nd Place - AI Bike: A multimodal AI that embodies a bicycle - an exploration of embodied intelligence inspired by Disney's Cars, by Hayato Hongo, Léo-Paul MARTIN, and 六花牡丹
If you’re in San Francisco, keep an eye out for what’s next 👀
Get started building with LEAP: https://leap.liquid.ai/
Read how we think about a prosperous future for humanity as we build better machines through the eyes of our cofounder, Daniela Rus:
https://lnkd.in/dn9rAaGH
Ever wondered what an AI researcher actually does all day? (apart from staring at loss curves going down) Today is your lucky day ⬇️
In less than three hours, the team at Liquid AI will be hosting an Ask Me Anything session.
Bring your questions and shoot!
Join the event ⬇️
https://lnkd.in/eHjkQt2K
Default to LFM2-ColBERT for any on-device, latency-critical embedding tasks!
Especially if you care about multilingual capabilities! 🇪🇸🇵🇹🇰🇷🇯🇵🇮🇹🇩🇪🇫🇷🇦🇪🇸🇦🏴🇬🇧
Meet our newest nano model: LFM2-ColBERT-350M ⚛️
At only 350M parameters, LFM2-ColBERT-350M lets you store documents in one language and retrieve them in many languages with high accuracy, while matching the inference speed of models a fraction of its size.
> Best cross-lingual retriever in the sub-500M class
> Outperforms larger models in German, Arabic, Korean, Spanish, Portuguese, Italian, French and Japanese
> Performs on par with much larger models in English
> Compact 350M design ready for large-scale and on-device retrieval
> Scales linearly with batch size, sustaining over 1K docs/sec in document encoding
While most retrieval research focuses on bi-encoders or re-rankers, LFM2-ColBERT-350M uses late interaction, which combines the strengths of both: it keeps the efficiency of separate encoders while restoring token-level precision (see the sketch after the list below). This method:
> Preserves fine-grained interactions without full cross-attention
> Supports pre-computed document embeddings for scale
> Balances accuracy and speed in multilingual retrieval
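To make the late-interaction idea concrete, here is a minimal MaxSim scoring sketch in plain PyTorch. The random tensors stand in for the token embeddings a model like LFM2-ColBERT-350M would produce; it illustrates the mechanism only, not the model's actual API (see the HF links below for that).

```python
# Minimal late-interaction (MaxSim) sketch in plain PyTorch.
# Random tensors stand in for real model outputs; this shows the
# scoring mechanism, not the LFM2-ColBERT-350M API.
import torch
import torch.nn.functional as F

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """For each query token, take the max cosine similarity over all
    document tokens, then sum over query tokens.
    query_emb: (n_query_tokens, dim); doc_emb: (n_doc_tokens, dim)."""
    q = F.normalize(query_emb, dim=-1)   # unit-normalize so dot = cosine
    d = F.normalize(doc_emb, dim=-1)
    sim = q @ d.T                        # (n_query_tokens, n_doc_tokens)
    return sim.max(dim=-1).values.sum()  # MaxSim per query token, summed

# Documents are encoded once and stored (the bi-encoder-style efficiency);
# only the query is embedded at search time, then scored token-by-token.
torch.manual_seed(0)
doc_index = [torch.randn(128, 64) for _ in range(3)]  # 3 pre-computed docs
query = torch.randn(16, 64)                           # query token embeddings

scores = torch.stack([maxsim_score(query, d) for d in doc_index])
print("best match: doc", scores.argmax().item())
```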
Full Blog: https://lnkd.in/evbtttFr
HF: https://lnkd.in/eisNatiX
HF Demo: https://lnkd.in/eZTRaAv4