Please add a little more functionality
Would love to see voice to chat, RAG, vector knowledge, chat with documents.. and a little polishing of the UI..
Yes, Private LLM is completely free and contains no in-app purchases or subscriptions.
🤔 The quality of the Private LLM app is mixed. Some users are satisfied, while others report problems. Consider reading individual reviews for more context.
Private LLM is free.
4 out of 5
40 ratings in the United Kingdom
It's slow to load and crashes often. Apollo works a lot faster and is more stable.
If you invest in this really awesome app, you get two versions: you can run it on your Mac or your iPad. Very simple to use, with a variety of models to try out that you wouldn’t have known existed. I like the fact that the developer adds a star alongside models that they recommend. Keep in mind you need to have the right amount of memory. For example, running Smaug Llama 70b requires a laptop with at least 64 GB. The smaller 8b models require 32 GB, but you can get by with at least 16 GB.

Message to developer: It would be awesome to see larger models available that will run on silicon Macs with higher RAM, for example 128 GB of memory.

Note to users: You must quit the app after use to free up memory!

Suggestion for a future update? If you could get this app to talk to your documents. Wow, it certainly would be a game-changer, potentially justifying a price increase. It would also be nice to be able to print out the chat content, and to have some YouTube tutorials. I couldn’t get the option to chat to my screenshots; I don’t know if this was even possible. But if you did make some tutorials, this would benefit the users, and likely benefit your endeavours. Keep up the excellent work!
I tried to download all 5 listed llama3 models in this app, but it keeps crashing before 7% of the download has completed!
Can’t believe my iPad is so powerful!! Works a charm on my M1. I downloaded Phi3 no problem. You can also get it to talk by clicking on text then speech. I then downloaded another model, which wasn’t shown in the list of installed models. I had to quit the app and go back into it to see the new models, then voila! [It may seem obvious but worth mentioning: some users may not quit the app, and are quick to leave negative feedback.] Can I make a request? Could you add the best model of Aya23 for translations?
Review of the Mac app and not the LLMs, as that's outside of the dev's control. The app should have the option to close the window but not the app when you click the X. Model selection should be more intuitive, showing which models are the latest. Better integration / system-wide controls would be awesome.
The built-in model (StableLM Zephyr 3B 4-bit OmniQuant) is not good at code generation; it doesn't follow instructions closely. When I downloaded WhiteRabbitNeo-13B-v1 4-bit OmniQuant and tried to switch to it, the app crashed. Another drawback is that it's not possible to have separate chats for the same LLM version. Everything is in a single chat log, which doesn't allow you to "configure" the chat bot with certain instructions for different topics. Also, I can't see a blinking text cursor, so I can't easily edit my prompt before submitting it. I have the same issue here in the App Store review textbox, weirdly enough.

The nice thing about this app is that it's private and saves you the hassle of figuring out how to install these chat bots by hand. For £9.99 that's not a bad deal for me, but given the aforementioned limitations I will probably continue to research the manual installation process, also because I wanted to try the new Llama 3 version, which this app doesn't seem to support.
Quite simply, the app is nowhere near useful nor polished enough to warrant the price. Some more specific feedback for the developer:

- There have been better-performing low-parameter models available for some time; why are these not used?
- Unless you are using sane defaults, at least make other models available.
- The UI for downloading larger models is poorly thought out; surely you can think of better ways of presenting this?
- History would have been simple enough to implement.

Running these things locally is certainly desirable, and feasible, but it seems we will have to wait some time until it’s packaged nicely.
It worked on the default model until I tried downloading another one. Then it crashed, and now it crashes whenever I try to open it. Weirdly, I deleted the whole app and tried again: same thing. On an M1 Pro/16 GB. I'd like a refund.
Hasn’t been updated to the newer models.
| Chart | Category | Rank |
|---|---|---|
| Top Paid | | 1 |
| Top Paid | | 1 |
| Top Paid | | 1 |
| Top Paid | | 1 |
| Top Paid | | 1 |