boem@lemmy.world to Technology@lemmy.world · English · 1 year ago
People are speaking with ChatGPT for hours, bringing 2013's Her closer to reality (arstechnica.com)
cross-posted to: technology@lemmy.world
kamenLady.@lemmy.world · 1 year ago
Gonna look into that - thanks
NotMyOldRedditName@lemmy.world · edited 1 year ago
Check this out: https://github.com/oobabooga/text-generation-webui
It has a one-click installer and can use llama.cpp. From there you can download models and try things out.
If you don't have a really good graphics card, maybe start with 7B models. Then you can try 13B and compare performance and results.
llama.cpp will spread the load across the CPU and as much GPU as you have available (controlled by the number of layers, which you can set on a slider).
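For context on that layer slider: llama.cpp offloads some number of the model's transformer layers to the GPU and runs the rest on the CPU (the `--n-gpu-layers` flag in the CLI). Here's a rough back-of-the-envelope sketch of the split; the helper functions and the per-layer memory figure are illustrative assumptions, not llama.cpp APIs or real benchmarks:

```python
# Sketch of llama.cpp-style layer offloading: a model's transformer
# layers are divided between GPU (fast) and CPU (slower), based on how
# many layers you ask to offload. These helpers are hypothetical, for
# illustration only.

def split_layers(total_layers: int, gpu_layers: int) -> tuple[int, int]:
    """Clamp the requested GPU layer count and return (on_gpu, on_cpu)."""
    on_gpu = max(0, min(gpu_layers, total_layers))
    return on_gpu, total_layers - on_gpu

def vram_needed_gib(on_gpu: int, gib_per_layer: float) -> float:
    """Rough VRAM estimate: offloaded layers times memory per layer."""
    return on_gpu * gib_per_layer

# A 7B LLaMA-style model has 32 transformer layers; assume ~0.12 GiB
# per layer at 4-bit quantization (an illustrative figure).
on_gpu, on_cpu = split_layers(total_layers=32, gpu_layers=20)
print(on_gpu, on_cpu)                          # 20 layers on GPU, 12 on CPU
print(round(vram_needed_gib(on_gpu, 0.12), 2)) # ~2.4 GiB of VRAM
```

The practical upshot of the slider: more layers on the GPU means faster generation but more VRAM use, so on a weaker card you dial it down until the model fits.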