Private LLM - Local AI Chat

Local Offline Private AI Chat

Published by: Numen Technologies

Description

Discover the Ultimate Privacy-Focused AI Assistant on iOS: Private LLM
Unlock a new realm of productivity and creativity on your iPhone and iPad with Private LLM, the premier AI assistant designed with your privacy in mind. Available for a one-time purchase, it offers a range of AI capabilities without needing a subscription. Experience advanced on-device AI that keeps your interactions confidential and offline.
Why Private LLM is Your Go-To AI Companion:
- Exclusive AI Model Selection: Choose from a diverse set of open-source LLM models optimized for performance and perplexity on iOS with state-of-the-art OmniQuant quantization, including models from Llama 3.2, Llama 3.1, Google Gemma 2, Microsoft Phi-3, Mistral 7B, StableLM 3B, and many more. Whether you need help with creative brainstorming, coding, or daily questions, customize your AI experience to meet your unique needs.
- Integrated with Siri & Shortcuts: Enhance your AI interactions with Siri commands and customizable Shortcuts. Private LLM seamlessly fits within your Apple ecosystem, making your digital assistant more accessible.
- Customizable Interactions: Tailor your AI's responses and interactions with customizable system prompts to match your preferences and needs.
- Uncompromised Privacy and Security: With Private LLM, your conversations stay confidential and on your device. Our advanced on-device AI performs robust computing without risking data compromise or needing an internet connection.
- Family Sharing & Offline Capabilities: Benefit from a one-time purchase that includes Family Sharing. Download models as needed and enjoy the full functionality of your AI assistant, even without internet access.
Supported LLM Model families:
- DeepSeek R1 Distill Based Models
- Phi 4 Based Models
- Qwen 2.5 Based Models (0.5B, 1.5B, 3B and 7B)
- Qwen 2.5 Coder Based Models (0.5B, 1.5B, 3B, 7B and 14B)
- Llama 3.1 8B Based Models
- Llama 3.2 1B and 3B Based Models
- Google Gemma 2 2B and 9B Based Models
- Mistral 7B Based Models
- Yi 6B Based Models
For a full list of supported models, including detailed specifications, please visit privatellm.app/models.
Private LLM is not just a chatbot; it's a comprehensive AI companion designed to respect your privacy while providing versatile, on-demand assistance. Whether you're enhancing your creative writing, tackling complex programming challenges, or just seeking answers, Private LLM adapts to meet your needs while keeping your data secure. Start your journey with Private LLM today and elevate your productivity and creative projects with the most private AI assistant for iOS devices.
Private LLM is a better alternative to generic llama.cpp and MLX wrapper apps like Ollama, LLM Farm, LM Studio, RecurseChat, etc., on three fronts:
1. Private LLM uses a faster mlc-llm based inference engine.
2. Models in Private LLM are quantized using the state-of-the-art OmniQuant and GPTQ quantization algorithms, while competing apps use naive round-to-nearest quantization (see the sketch after this list).
3. Private LLM is a fully native app built using C++, Metal, and Swift, with deep integrations with iOS and iPadOS, while many of the competing apps are non-native, bloated Electron-based apps.
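To make the "naive round-to-nearest quantization" point concrete, here is a minimal sketch of what round-to-nearest (RTN) weight quantization looks like. It is generic illustration code written for this comparison; the function names, the per-row scaling layout, and the 4-bit setting are assumptions for the example, not code from Private LLM, mlc-llm, OmniQuant, or GPTQ.

```python
# Minimal sketch of naive round-to-nearest (RTN) 4-bit weight quantization.
# Illustrative only: not code from Private LLM, mlc-llm, OmniQuant, or GPTQ.
import numpy as np

def rtn_quantize(weights: np.ndarray, n_bits: int = 4):
    """Symmetric per-output-channel (per-row) round-to-nearest quantization."""
    qmax = 2 ** (n_bits - 1) - 1                      # 7 for signed 4-bit
    scale = np.abs(weights).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)          # guard against all-zero rows
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

# Toy example: quantize a random "weight matrix" and measure the error RTN introduces.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16)).astype(np.float32)
q, s = rtn_quantize(w)
print("mean absolute quantization error:", float(np.abs(w - dequantize(q, s)).mean()))
```

Calibration-based methods such as GPTQ and OmniQuant start from a similar low-bit representation but use a small calibration set to adjust the quantized weights (and, in OmniQuant's case, learned clipping and scaling parameters) so that layer outputs, rather than individual weights, stay close to the full-precision model, which is why they generally preserve perplexity better than plain RTN at the same bit width.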


Private LLM Frequently Asked Questions

  • Is Private LLM free?

    No. Private LLM is a paid app sold as a one-time purchase, with no in-app purchases or subscriptions.

  • Is Private LLM legitimate?

    🤔 The quality of the Private LLM app is mixed. Some users are satisfied, while others report problems. Consider reading individual reviews for more context.


  • How much does Private LLM cost?

    Private LLM is a one-time purchase; one UK reviewer mentions paying £9.99.

  • How much revenue does Private LLM generate?

    To get estimated revenue for the Private LLM app and further App Store insights, you can sign up for the AppTail Mobile Analytics Platform.

User Rating

4 out of 5

40 ratings in the United Kingdom

5 stars: 24
4 stars: 6
3 stars: 2
2 stars: 2
1 star: 6

Private LLM Reviews

Please add a little more functionality

TonyHughesDesign

United Kingdom

Would love to see voice to chat, RAG, vector knowledge, chat with documents.. and a little polishing of the UI..

Slow and crashes

MangleKuo

United Kingdom

It's slow to load and crashes often. Apollo works a lot faster and is more stable.

A Powerful and Versatile App: Simple to Use, But Be Mindful of System Requirements

Tony the Vampire

United Kingdom

If you invest in this really awesome app, you get two versions: you can run it either on your Mac or on your iPad. Very simple to use, with a variety of models to try out that you wouldn't have known existed. I like the fact that the developer adds a star alongside models that they recommend. Keep in mind you need to have the right amount of memory. For example, running Smaug Llama 70B requires a laptop with at least 64 GB. The smaller 8B models require 32 GB, but you can get by with at least 16 GB. Message to developer: it would be awesome to see larger models available that will run on Apple silicon Macs with higher RAM, for example 128 GB of memory. Note to users: you must quit the app after use to free up memory! Suggestion for a future update: if you could get this app to talk to your documents, wow, it certainly would be a game-changer, potentially justifying a price increase. Would also be nice to be able to print out the chat content. It would be nice if there were some YouTube tutorials. I couldn't get the option to chat to my screenshots; I don't know if this was even possible. But if you did make some tutorials, this would benefit the users, and likely benefit your endeavours. Keep up the excellent work!

App crashes when downloading llama3

FintechCompared

United Kingdom

I tried to download all 5 listed llama3 models in this app, but it keeps crashing before 7% of the download has completed!

Amazing!

Tony the Vampire

United Kingdom

Can’t believe my iPad is so powerful!! Works a charm on my M1. I downloaded Phi-3 no problem. You can also get it to talk by clicking on the text, then speech. I then downloaded another model, which wasn’t shown in the list of installed models. I had to quit the app and go back into it to see the new models, then voila! [It may seem obvious but worth mentioning: some users may not quit the app, and are quick to leave negative feedback.] Can I make a request? Can you add the best model of Aya23 for translations?

Room for improvement....

ctrl+alt+dileep

United Kingdom

Review of the Mac app and not the LLMs, as that's outside of the dev's control. The app should have the option to close the window, but not the app, when you click on X. Model selection should be more intuitive, showing which models are the latest. Better integration / system-wide controls would be awesome.

Not good for coding

temocj

United Kingdom

The built-in model (StableLM Zephyr 3B 4-bit OmniQuant) is not good at code generation. It doesn't follow instructions closely. When I downloaded WhiteRabbitNeo-13B-v1 4-bit OmniQuant and tried to switch to it, the app crashed. Another drawback is that it's not possible to have separate chats for the same LLM version. Everything is in a single chat log, which doesn't allow you to "configure" the chatbot with certain instructions for different topics. Also, I can't see a blinking text cursor, so I can't easily edit my prompt before submitting it. I have the same issue here in the App Store review textbox, weirdly enough. The nice thing about this app is that it's private and saves you the hassle of figuring out how to install these chatbots by hand. For £9.99 that's not a bad deal for me, but given the aforementioned limitations I will probably continue to research the manual installation process, also because I wanted to try the new Llama 3 version, which this app doesn't seem to support.

Not worth the money

GoblinPlonk

United Kingdom

Quite simply the app is nowhere near useful nor polished enough to warrant the price. For some more specific feedback to the developer:
- there have been better-performing low-parameter models available for some time; why are these not used?
- unless you are using sane defaults, at least make other models available
- the UI for downloading larger models is poorly thought out; surely you can think of better ways of presenting this?
- history would have been simple enough to implement
Running these things locally is certainly desirable, and feasible, but it seems we will have to wait some time until it’s packaged nicely.

just crashes constantly

hiburn8

United Kingdom

It worked on the default model until I tried downloading another one. Then it crashed, and now it crashes whenever I try to open it. Weirdly, I deleted the whole app and tried again: same thing. On an M1 Pro/16 GB. I'd like a refund.

No updates

vtard

United Kingdom

Hasn’t updated to the newer models

Store Rankings

Ranking History
App ranking history not yet available

Category Rankings
Top Paid: rank 1 in five charts

This page includes copyrighted content from third parties, shared solely for commentary and research in accordance with fair use under applicable copyright laws. All trademarks, including product, service, and company names or logos, remain the property of their respective owners. Their use here falls under nominative fair use as outlined by trademark laws and does not suggest any affiliation with or endorsement by the trademark holders.