Virtual Machine Manager (virt-manager) is one of the best virtualization front ends available for the Linux desktop. It takes virtualization on your Linux desktop to the next level.
If you run Windows guests, make sure to install the virtio drivers.
A bare-metal OS is an OS running outside of a hypervisor. Virt-manager is a Type 1 hypervisor that allows you to host guest operating systems (run VMs).
Hey, sorry for the confusion. What I meant is: Proxmox is considered a bare-metal hypervisor and virt-manager is a hypervisor inside an OS, right?
Technically no, both use KVM virtualization, which is included in the Linux kernel, so both are “bare-metal hypervisors”, otherwise known as Type 1 hypervisors. Distinctions can be confusing 😂
Oh dear… I really thought I understood what bare metal means… But looks like this is beyond my tech comprehension
Bare metal is “kernel running on hardware” I think. KVM is a kernel feature, so the virtualization is done in kernel space (?) and on the hardware.
Well this can be a starting point of a rabbit hole. Time to spend hours reading stuff that I don’t really understand.
TL;DR: use what is in the kernel, without strange out-of-tree kernel modules like VirtualBox's, and use KVM, i.e. on Fedora:
virt-manager qemu qemu-kvm
They both use KVM in the end, so they are both Type 1 hypervisors.
Loading the KVM kernel module turns your kernel into the bare-metal hypervisor.
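KVM also needs hardware virtualization support from the CPU, which shows up in /proc/cpuinfo as the vmx flag (Intel VT-x) or svm (AMD-V). A minimal sketch of checking a cpuinfo dump for those flags (the function name is invented for illustration; it parses sample text rather than a live system):

```python
import re

def hw_virt_flags(cpuinfo_text: str) -> set:
    """Return the hardware-virtualization flags found in a /proc/cpuinfo
    dump: 'vmx' for Intel VT-x, 'svm' for AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        # Each CPU's feature list lives on a line starting with "flags".
        if line.startswith("flags"):
            flags.update(re.findall(r"\b(vmx|svm)\b", line))
    return flags

# Trimmed flags line as it might appear on an Intel machine:
sample = "flags\t\t: fpu vme de pse tsc msr pae vmx ssse3 sse4_1"
print(hw_virt_flags(sample))  # {'vmx'}
```

If the set comes back empty on real hardware, KVM guests will fall back to slow pure emulation (or virtualization is disabled in the firmware).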
Virt-manager is an application that connects to libvirtd on the back end. Think of it as a web browser or file manager for VMs.
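The browser analogy extends to how virt-manager addresses hypervisors: it dials libvirt connection URIs such as qemu:///system (the system-wide libvirtd) or qemu+ssh://host/system (a remote host tunnelled over SSH). A small sketch of how those URIs are composed (the helper function is hypothetical; the URI formats themselves are libvirt's own):

```python
def libvirt_uri(driver: str = "qemu", host: str = None,
                session: bool = False) -> str:
    """Build a libvirt connection URI like the ones virt-manager uses.

    qemu:///system          -> local system-wide libvirtd
    qemu:///session         -> per-user session daemon
    qemu+ssh://h/system     -> remote libvirtd over SSH
    """
    path = "session" if session else "system"
    if host:
        return f"{driver}+ssh://{host}/{path}"
    return f"{driver}:///{path}"

print(libvirt_uri())              # qemu:///system
print(libvirt_uri(host="lab01"))  # qemu+ssh://lab01/system
```

The same URIs work with the virsh CLI, e.g. virsh -c qemu:///system list.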
Proxmox VE is an entire OS built for virtualization on dedicated servers. It also has support for clusters and live VM migrations between hosts. It is in essence a server OS designed to run in a data center (or homelab) of some kind. It is sort of equivalent to vSphere, but they charge you per CPU socket for enterprise support and stability.
Well, this thread clearly established that I neither have technical knowledge nor pay attention to spelling…
Jokes aside, this is a good explanation. I have seen admins using vSphere and it kind of makes sense. I'm just starting to scratch the surface of homelab, and have started out with a Raspberry Pi. My dream is a full-fledged, self-sustaining homelab.
If you ever want to build a Proxmox cluster, go for 3-5 identical machines. I have 3 totally different machines and it creates headaches.
What kind of headaches are you having? I've been running two completely different machines in a cluster with a Pi as a QDevice to keep quorum, and it's been incredibly stable for years.
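The quorum arithmetic behind that setup: Proxmox's cluster stack (corosync) requires a strict majority of votes, so a two-node cluster (2 votes) cannot survive losing a node, but adding a QDevice's vote makes 3 total, and one surviving node plus the QDevice still holds a majority. A toy sketch of the majority rule (function name invented for illustration):

```python
def has_quorum(votes_present: int, total_votes: int) -> bool:
    """Strict-majority quorum rule used by corosync-style clusters:
    more than half of all configured votes must be present."""
    return votes_present > total_votes // 2

# Two nodes, no QDevice: losing one node leaves 1 of 2 votes -> no quorum.
print(has_quorum(1, 2))  # False
# Two nodes + QDevice (3 votes): one node + QDevice = 2 of 3 -> quorum holds.
print(has_quorum(2, 3))  # True
```

This is also why odd cluster sizes (3 or 5 nodes) are the usual recommendation: they tolerate node failures without an external tiebreaker.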
One device decided to be finicky and the biggest storage array is all on one system.
It really sucks you can't do HA with Btrfs. Its kernel support is more reliable than ZFS's due to licensing.
What’s the licensing part you mentioned? Can you elaborate a little?
OpenZFS is not GPL-compatible, so it can never be baked into the kernel the way Btrfs is. I've run into issues where I needed to downgrade the kernel, but if I do, the system won't boot.
Btrfs also doesn't need any special software to work, as it is completely native and baked into the kernel.
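One way to see the difference on a running system is /proc/filesystems, which lists every filesystem the current kernel supports: an in-tree filesystem like btrfs appears there out of the box, while zfs only shows up once its out-of-tree module has been built (e.g. via DKMS) and loaded. A hedged sketch that parses that file's format, using sample text rather than a live system (the helper name is invented):

```python
def kernel_knows_fs(proc_filesystems: str, name: str) -> bool:
    """Check a /proc/filesystems dump for a filesystem name.
    Each line is '[nodev]\\t<fsname>'; the last field is the name."""
    return any(line.split()[-1] == name
               for line in proc_filesystems.splitlines() if line.strip())

# Typical excerpt: pseudo-filesystems are tagged 'nodev',
# block-device filesystems have an empty first column.
sample = "nodev\tsysfs\nnodev\tproc\n\text4\n\tbtrfs\n"
print(kernel_knows_fs(sample, "btrfs"))  # True
print(kernel_knows_fs(sample, "zfs"))    # False
```

On a machine where the ZFS module failed to rebuild for a new kernel, "zfs" would be missing from this list, which is exactly the boot problem described above.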