An on-demand recording of our Microsoft event keynote will be made available at 2pm PT today. An update with the link will be made to this blog post at that time. Today, at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI,...
I know most people are over AI, as am I, but if we’re gonna have it, I’m glad to see there’s a focus on it being local
It isn’t though
Copilot is a cloud service
The goal is to make it run on-device within the next 4 years. That’s the point of an “AI PC”
Why would Microsoft want that? They make money from the cloud. There is a reason they want you to move to Azure.
They want you to foot the electric bill for the LLM processing, they’re still going to collect your data. Double-win for MS!
Because they don’t want to pay for running AI in the cloud.
But they DO want that sweet sweet customer data. If you think there’s not going to be some sort of user data and behavior profiling bullshit going on at the very least, I’ve got a bridge to sell you.
It’s not going to be private lol
The MS implementations won’t, but once they build the capability, we can make our own
That’s not how that works. Also, we have our own. It’s called ollama
How would one set this up with ollama?
On which platform?
Basically you need three things: the ollama software, an LLM such as mistral, and a front end like openwebui.
Ollama is pretty much just a daemon that exposes a web API apps can use to query LLMs.
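For example, once the daemon is running you can hit that API directly with curl, something like this (assuming the default port 11434 and that you’ve already pulled mistral):

```bash
# Ask the local ollama daemon for a one-shot completion
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```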
Linux, specifically nobara (a gaming-focused Fedora distro) for me
Do you have any guides you would recommend?
Actually it is pretty easy. You can either run it in a VM or you can run it in podman.
For a VM, you could install virt-manager and then Debian. From there you of course need to do the normal setup: enable SSH and disable root login.
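On a Fedora base like nobara, getting virt-manager going is roughly this (assuming the standard Fedora packages; newer Fedora splits libvirt into modular daemons, but libvirtd still works):

```bash
# Install virt-manager and start the libvirt daemon it talks to
sudo dnf install virt-manager
sudo systemctl enable --now libvirtd
```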
Once you have a Debian VM you can install ollama and pull down llava and mistral. Make sure you give the VM plenty of resources, including almost all of your cores and at least 8 GB of RAM. To set up ollama you can follow the official guides.
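Inside the VM the install is roughly this (that’s the official install script from ollama.com, so check it before piping it to sh):

```bash
# Install ollama, then pull the two models
curl -fsSL https://ollama.com/install.sh | sh
ollama pull mistral
ollama pull llava
```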
Once you have ollama working you can then set up openwebui. I had to use host networking (network: host) with the ollama environment variable pointed at 127.0.0.1 (loopback).
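With docker inside the VM, that looks something like the following (the OLLAMA_BASE_URL variable and image path are what the openwebui docs use; adjust to match your setup):

```bash
# Run openwebui on the host network, talking to ollama on loopback
docker run -d --network=host \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```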
Once that’s done you should be able to access it at the VM’s IP on port 8080. The first time it runs you need to click Create Account.
Keep in mind that a blank screen means that it can’t reach ollama.
The alternative to this setup would be podman. You could theoretically create an ollama container and an openwebui container attached to the same internal network. It would probably be simpler to run, but I haven’t tried it.
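Untested, like I said, but a sketch of it would look something like this (the network and container names are just examples):

```bash
# Shared network so the containers can resolve each other by name
podman network create llm-net

# ollama container (pull models with: podman exec -it ollama ollama pull mistral)
podman run -d --name ollama --network llm-net \
  -v ollama:/root/.ollama docker.io/ollama/ollama

# openwebui container, pointed at the ollama container by name
podman run -d --name open-webui --network llm-net -p 8080:8080 \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -v open-webui:/app/backend/data ghcr.io/open-webui/open-webui:main
```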
Yeah, that is a big deal for privacy reasons. There is no reason one needs to send such information to companies.