Technology fan, Linux user, gamer, 3D animation hobbyist

Also at:

linuxfan@tube.tchncs.de

linuxfan@cheeseburger.social

  • 3 Posts
  • 33 Comments
Joined 1 year ago
Cake day: July 24th, 2023

  • Probably better to ask on !localllama@sh.itjust.works. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.

    The only issue is that you asked for a smart model, which usually means a larger one, and the RAG portion consumes even more memory on top of that - possibly more than a typical laptop can handle. Smaller models also have a higher tendency to hallucinate, i.e. produce incorrect answers.

    Short answer: yes, you can do it. It’s just a matter of how much RAM you have available and how long you’re willing to wait for an answer. A rough sketch of the pieces is below.
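
    Here’s a minimal sketch of the RAG idea using the ollama Python package (pip install ollama). The model names are just examples - substitute whatever you’ve pulled locally, and it assumes the Ollama server is already running.

        # Minimal RAG sketch. Assumes a running Ollama server and that the
        # example models have been pulled (ollama pull mxbai-embed-large / llama3).
        import ollama

        documents = [
            "Our return policy allows refunds within 30 days of purchase.",
            "Support hours are 9am to 5pm, Monday through Friday.",
        ]

        def embed(text):
            # Turn a string into an embedding vector via a local embedding model.
            return ollama.embeddings(model="mxbai-embed-large", prompt=text)["embedding"]

        def cosine(a, b):
            # Cosine similarity between two vectors.
            dot = sum(x * y for x, y in zip(a, b))
            norm_a = sum(x * x for x in a) ** 0.5
            norm_b = sum(y * y for y in b) ** 0.5
            return dot / (norm_a * norm_b)

        # Embed the dataset once up front.
        doc_vectors = [embed(d) for d in documents]

        question = "When can I get a refund?"
        q_vec = embed(question)

        # Retrieval: pick the document most similar to the question, then hand
        # it to the chat model as context (the "augmented generation" part).
        best_doc = max(zip(doc_vectors, documents), key=lambda p: cosine(q_vec, p[0]))[1]
        reply = ollama.chat(
            model="llama3",
            messages=[{
                "role": "user",
                "content": f"Using this context: {best_doc}\n\nAnswer this question: {question}",
            }],
        )
        print(reply["message"]["content"])

    A real setup would keep the embeddings in a vector database instead of a Python list, and that index plus the embedding model is where the extra memory overhead mentioned above comes from.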

  • The number one use case for companies right now is “we can replace a lot of people and save a ton of money” - just look at the chatbot assistants you see on websites. Once they get the kinks worked out there, I guarantee they’ll have a talking version that will replace call center workers. And that’s only the beginning.

    They’ve already run the numbers and figured the upfront costs are worth it. Occasional maintenance/cooling/upgrades/tech support is still going to be cheaper than FICA/Medicare/401(k) matching/PTO/maternity leave/overtime/workers’ comp/running a huge HR department/family day barbecues, etc.

    Just trading one type of equipment for another, in their eyes.