HuggingSnap app serves Apple’s best AI tool, with a convenient twist

Machine learning platform Hugging Face has released an iOS app that makes sense of the world around you as seen by your iPhone’s camera. Just point it at a scene, or snap a picture, and it deploys an AI to describe the scene, identify objects, perform translation, or pull out text-based details.

Named HuggingSnap, the app takes a multimodal approach to understanding the scene around you, and it’s now available for free on the App Store. It is powered by SmolVLM2, an open AI model that can handle text, images, and video as input.


The overarching goal of the app is to let people learn about the objects and scenery around them, including plant and animal recognition. The idea is not too different from Visual Intelligence on iPhones, but HuggingSnap has a crucial leg-up over its Apple rival.

It doesn’t require internet to work

SmolVLM2 running on an iPhone

All it needs is an iPhone running iOS 18 and you’re good to go. The UI of HuggingSnap is not too different from what you get with Visual Intelligence. But there’s a fundamental difference here.

Apple relies on ChatGPT for Visual Intelligence to work. That’s because Siri currently can’t act as a generative AI tool in the vein of ChatGPT or Google’s Gemini, both of which have their own knowledge banks. Instead, Apple offloads all such user requests and queries to ChatGPT.

That requires an internet connection, since ChatGPT can’t work in offline mode. HuggingSnap, on the other hand, works just fine without one. Moreover, an offline approach means no user data ever leaves your phone, which is always a welcome change from a privacy perspective.

What can you do with HuggingSnap?

HuggingSnap identifying a perfume bottle.
Nadeem Sarwar / DigitalTrends

HuggingSnap is powered by the SmolVLM2 model developed by Hugging Face. So, what can the model behind this app accomplish? Quite a lot. Aside from answering questions based on what it sees through an iPhone’s camera, it can also process images picked from your phone’s gallery.

For example, show it a picture of any historical monument and ask it for travel suggestions. It can interpret the data on a graph, or make sense of a photo of an electricity bill and answer queries based on the details it has picked up from the document.
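For readers curious to try the underlying model outside the app, here is a minimal sketch of querying SmolVLM2 through Hugging Face’s transformers library. The checkpoint name and loading calls below are assumptions based on Hugging Face’s published SmolVLM2 releases, not details of HuggingSnap itself, which ships its own on-device runtime for iOS.

```python
# Hypothetical sketch: pairing an image with a question for SmolVLM2
# via the transformers chat-template message format. The checkpoint
# name below is an assumption, not taken from the HuggingSnap app.

def build_vision_prompt(question: str, image_path: str) -> list:
    """Build a chat message that pairs one image with one text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": image_path},
                {"type": "text", "text": question},
            ],
        }
    ]

# The model-loading half of the sketch (commented out because it
# downloads a multi-gigabyte checkpoint; calls are assumptions):
#
# from transformers import AutoProcessor, AutoModelForImageTextToText
# model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # assumed checkpoint
# processor = AutoProcessor.from_pretrained(model_id)
# model = AutoModelForImageTextToText.from_pretrained(model_id)
# inputs = processor.apply_chat_template(
#     build_vision_prompt("What monument is this?", "monument.jpg"),
#     add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# )
# out = model.generate(**inputs, max_new_tokens=128)
# print(processor.batch_decode(out, skip_special_tokens=True)[0])

messages = build_vision_prompt("What monument is this?", "monument.jpg")
print(messages[0]["role"])  # prints "user"
```

The same message structure would carry any of the article’s examples, such as a photo of an electricity bill with a question about its details.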

It has a lightweight architecture and is particularly well-suited for on-device AI applications. On benchmarks, it outperforms Google’s competing open PaliGemma (3B) model and rubs shoulders with Alibaba’s rival vision-capable Qwen models.

Running the HuggingSnap app on an iPhone.
Nadeem Sarwar / DigitalTrends

The biggest advantage is that it requires fewer system resources to run, which is particularly important in the context of smartphones. Interestingly, the popular VLC media player is also using the same SmolVLM2 model to provide video descriptions, letting users search through a video using natural language prompts.

It can also intelligently extract the most important highlight moments from a video. “Designed for efficiency, SmolVLM can answer questions about images, describe visual content, create stories grounded on multiple images, or function as a pure language model without visual inputs,” says the app’s GitHub repository.
