I recently posted about upgrading my media server and migrating off Windows to Proxmox. I’ve been following an excellent guide from TechHut on YouTube but have run into issues migrating my media into the new Proxmox setup.
Both my old Windows machine and the new Proxmox host have 2.5Gb NICs, are connected through a 2.5Gb switch, and sit on the same subnet. Following the guide, I’ve created a ZFS pool from 7x14TB drives and an Ubuntu LXC that runs Cockpit to manage the Samba shares.
When transferring files from Windows, I’m only seeing 100MB/s on the initial transfer, and every transfer after that caps out below 10MB/s until I reboot the Cockpit container, at which point the cycle repeats.
I’m not very knowledgeable about Proxmox or Linux, but I have run iperf3 tests between Windows > Proxmox and Windows > Cockpit container, and both show roughly 2.5Gb, yet I’m still limited when transferring files.
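For reference, the iperf3 runs were roughly the following, with the Proxmox host (and then the Cockpit container) acting as the server; the IP is just a placeholder for my setup:
# on the Proxmox host or the Cockpit container
iperf3 -s
# on the Windows machine (192.168.1.50 stands in for the host/container IP)
iperf3 -c 192.168.1.50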
Googling the issue brings up some troubleshooting steps, but I don’t understand a lot of them. One suggested fix was to disable IPv6 in Proxmox (I don’t have IPv6 set up on my network), which worked but didn’t fix anything: I no longer see it in the ‘ip a’ output on the Proxmox host, though I do still see it in the SMB container.
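For context, the IPv6 step I followed was roughly this on the Proxmox host (going from memory, so the exact file and keys may differ slightly):
# appended to /etc/sysctl.conf, then applied with: sysctl -p
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1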
Does anybody have any experience with this that can offer a solution or path toward finding a solution? I have roughly 40TB of media to transfer and 8MB/s isn’t going to cut it.
What’s your throughput to another Windows box from your source machine? Samba isn’t known as a fast transfer protocol. You could also enable NFS services on the Windows box and export your shares to compare.
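If you do try NFS, and assuming your Windows edition can actually run an NFS server (Server for NFS is a Windows Server role), the Linux side would mount the export with something like this; the IP, export path, and mount point are placeholders:
# on the Proxmox host or in the container
apt install nfs-common
mkdir -p /mnt/winmedia
mount -t nfs 192.168.1.50:/media /mnt/winmedia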
You might also enable jumbo frames if you’re mostly moving large files.
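If you do go the jumbo-frame route, every hop has to agree (Windows NIC, switch, Proxmox bridge). A quick, non-persistent test on the Proxmox side would look roughly like this; enp3s0 and vmbr0 are example names, and the IP is a placeholder:
# temporarily raise the MTU on the physical port and the bridge
ip link set dev enp3s0 mtu 9000
ip link set dev vmbr0 mtu 9000
# verify the path actually passes ~9000-byte frames without fragmentation
ping -M do -s 8972 192.168.1.50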
Lots of variables here, we would need to pare them down:
- architecture of the ZFS pool? This can be done well or very poorly; ZFS gives you a lot of rope, if you know what I mean.
- memory?
- have you tuned Samba?
- are you trying to do anything fancy like jumbo frames?
- what networking equipment lies between the two?
- do you have the correct, and correctly configured, 2.5Gb driver for Linux? Intel 2.5Gb NICs had some issues for a while.
- what does the Windows net hardware look like?
In troubleshooting transfer speeds there are, as you can see, so many variables.
Start with the network and reduce variables until you have a likely source of the problem; a couple of quick checks for the ZFS and Samba questions are sketched below.
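To knock out the ZFS and Samba questions quickly, something along these lines on the host / in the container should do it (replace <poolname> with your actual pool):
# pool layout and health
zpool status
# a few ZFS properties that matter for big sequential writes
zfs get compression,recordsize,sync,atime <poolname>
# current Samba configuration, with defaults filled in
testparm -s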
I, like you, will be migrating from Windows to Proxmox soon. Can you give a link to the guide you mentioned?
Does the Proxmox host have the driver installed for your 2.5Gb NIC? Can’t use it if it’s not installed. Connect to the host and run
ethtool <device>
It should show the link speed on the Speed: line.
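If you’re not sure of the device name, something like this should get you there; enp3s0 is just an example name:
# list interfaces briefly, then query the one wired to the 2.5Gb port
ip -br link show
ethtool enp3s0 | grep Speed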
“I have roughly 40TB of media to transfer and 8MB/s isn’t going to cut it.”
If you need ultra-fast transfers, why not do a 100GbE switch with JBOD? 40TB isn’t a small amount of data. Generally, no matter what setup you have, it’s going to take a significant amount of time to move that much data.
I haven’t installed any drivers on the Proxmox machine. That one has the 2.5Gb NIC built into the motherboard, so I probably misspoke when I called it a “card” in the OP, if that makes a difference. I’ll try this when I get home, but I have run ‘lspci’ and it shows the NIC on both the host and the container (Intel Killer 3100 2.5Gb, though it’s listed as Killer 3000 2.5Gb), plus my iperf3 tests were showing ~2.3Gb speeds between both machines and the container.
As far as the 100GbE switch goes, I only need to transfer the media off the old machine once, so I was just trying to go with something inexpensive, since standard 1Gb Ethernet should be fine for most things after this initial transfer.
Linux doesn’t have drivers like Windows does. The kernel either supports the hardware or it doesn’t.
This is categorically untrue. The kernel includes most open-source drivers; however, it does not include proprietary drivers (or even all open-source drivers), which require recompiling the kernel itself or installing secondary headers…
I’ve used many networking cards in the past that required you to recompile the kernel for them to work properly…
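Either way, the practical check on the Proxmox host is whether a driver is actually bound to the NIC, along these lines:
# show the Ethernet controller and the kernel driver currently in use
lspci -nnk | grep -iA3 ethernet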
Can you do a speed test of your zpool?
dd if=/dev/zero of=<mount-point>/test.file bs=1G count=10 status=progress
dd writes from /dev/zero are not an accurate indicator of performance unless you only have one disk in your pool. fio is a much more accurate tool for ZFS pool testing.
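If you go the fio route, a simple sequential-write test against the pool’s mount point would be something like this; the directory, size, and job name are just examples:
# 10GiB sequential write in 1MiB blocks, with an fsync at the end so the result reflects actual disk writes
fio --name=seqwrite --directory=/tank/media --rw=write --bs=1M --size=10G --numjobs=1 --ioengine=libaio --end_fsync=1 --group_reporting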
Better to use iostat
zpool iostat
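For example, while a transfer is running (pool name is a placeholder):
# per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v <poolname> 5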
I don’t know about the Ubuntu LXC. I don’t do containers much.
I’d do this with a virtual machine and TrueNAS. Those are just the tools I like to use. The TrueNAS Scale ISO will install qemu-guest-agent, so you don’t need to worry about drivers. Make sure to build it with the Virtio SCSI Single disk controller. Use one 50GB OS disk for the install. Add the huge data disk(s) after the install.
Proxmox disk options … SSD emulation, Discard, IO Thread, No cache … and I use Write Back (unsafe). Use the Virtio NIC. And try it again. Hopefully faster this time.
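For reference, those options map to roughly this from the Proxmox shell; the VM ID (100), storage name (local-zfs), and disk size are placeholders:
# controller type, a data disk with the options above, and a Virtio NIC
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi1 local-zfs:500,discard=on,iothread=1,ssd=1,cache=writeback
qm set 100 --net0 virtio,bridge=vmbr0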
What’s the benchmark on the disks? Are these SAS drives?