• Homelabbin'

    From poindexter FORTRAN@21:4/122 to All on Fri Dec 9 08:13:00 2022
    I spent the day yesterday trying to install Nutanix CE, a hyperconverged virtualization platform I'm using at work. It's a great package for a remote
    office: you get firewall, networking and virtual machines in a highly available form factor - 3 nodes in 2U.

    I couldn't get it to install in a VM under Proxmox or on a spare laptop I had - both times it failed fatally because it couldn't light a chassis light. Odd.

    I took the time to add a 1 TB NVMe drive to my new homelab server, an SFF OptiPlex 3050 with an i5 and 32 GB of RAM. Nice little box. Unfortunately, it doesn't support an i7, but this'll do.

    I wanted to move my VMs over to it, and most pointers online suggested backing them up, moving the snapshots and importing them into the new
    server. For kicks, I tried joining the two servers into a cluster. I could migrate the VMs just fine, and it appeared that if I broke the cluster properly, the servers would end up at the destination.
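    For anyone wanting to try the same experiment, here's a minimal sketch of the cluster-then-migrate approach, assuming two Proxmox hosts; the cluster name, IP address, node name and VMID 100 are all placeholders:

    ```shell
    # On the first (existing) node: create a new cluster
    pvecm create homelab

    # On the second (new) node: join it, pointing at the first node's IP
    pvecm add 192.168.1.10

    # Check quorum and membership from either node
    pvecm status

    # Migrate VM 100 to the new node (--online attempts a live migration,
    # which needs shared storage or local-disk migration support)
    qm migrate 100 newnode --online
    ```

    Breaking the cluster cleanly afterwards is the fiddly part - the Proxmox docs cover removing a node with `pvecm delnode`, and it's worth reading them before trying this on VMs you care about.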

    It might be worth a try. For now, my current server is holding its own, but
    it would be nice to move to a proper desktop, get a faster disk subsystem
    and faster CPU.


    ... Repetition is a form of change
    --- MultiMail/DOS v0.52
    * Origin: realitycheckBBS.org -- information is power. (21:4/122)
  • From vorlon@21:1/195.1 to poindexter FORTRAN on Sat Dec 10 10:30:07 2022
    Hi poindexter FORTRAN,

    I wanted to move my VMs over to it, and most pointers online
    referenced backing them up, moving the snapshots and importing them
    into the new server. For kicks, I tried adding the two servers into a cluster. I could migrate the servers just fine, and it appeared that
    if I broke the cluster properly, the servers would end up at the destination.

    I've not looked at the clustering in Proxmox. But I'd shut down the VMs,
    use the backup function, copy the backup file to the new host and then
    restore the VM... I've done that in the past, all on the command line.
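    That shutdown/backup/restore route can be sketched like this on the Proxmox command line - VMID 100 and the hostname are placeholders, and the actual archive name will vary with the date and compression settings:

    ```shell
    # On the old host: shut the VM down, then take a full backup with vzdump
    qm shutdown 100
    vzdump 100 --compress zstd --dumpdir /var/lib/vz/dump

    # Copy the resulting archive to the new host
    scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst root@new-host:/var/lib/vz/dump/

    # On the new host: restore it, keeping the same VMID (or picking a new one)
    qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 100
    ```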



    \/orlon
    aka
    Stephen


    --- Talisman v0.46-dev (Linux/m68k)
    * Origin: Vorlon Empire: Amiga 3000 powered in Sector 550 (21:1/195.1)
  • From poindexter FORTRAN@21:4/122 to vorlon on Sat Dec 10 10:06:00 2022
    vorlon wrote to poindexter FORTRAN <=-

    I've not looked at the clustering in Proxmox. But I'd shut down the VMs, use the backup function, copy the backup file to the new host and then restore the VM... I've done that in the past, all on the command line.

    Clustering looks nice - apparently, if you use ZFS filesystems and Ceph, you can do high availability with Proxmox - turn off one server and all of your VMs reappear running on the other.
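    As a sketch of what that looks like in practice - assuming a working cluster with shared or replicated storage already in place, and VMID 100 as a placeholder - a VM gets put under HA management with `ha-manager`:

    ```shell
    # Register VM 100 as an HA resource that the cluster should keep running
    ha-manager add vm:100 --state started

    # Show current HA resource status across the cluster
    ha-manager status
    ```

    One caveat: Proxmox HA wants a quorum of votes, so a plain two-node cluster generally isn't enough on its own - three nodes, or two plus an external QDevice, is the usual setup.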

    I wish I had something like this when I was setting up barebones infrastructure for startups way back when - one decent system could host
    the infrastructure for a small company nicely, and two could provide almost complete high availability.



    ... Adding on
    --- MultiMail/DOS v0.52
    * Origin: realitycheckBBS.org -- information is power. (21:4/122)
  • From vorlon@21:1/195.1 to poindexter FORTRAN on Tue Dec 13 13:49:02 2022
    Hi poindexter FORTRAN,

    I've not looked at the clustering in Proxmox. But I'd shutdown

    Clustering looks nice - apparently, if you use ZFS filesystems and
    CEPH, you can do high availability with Proxmox - turn off one server
    and all of your servers re-appear running on the other.
    [...]
    the infrastructure for a small company nicely - and two could provide
    almost complete high-availability.

    The only reason for not trying it is that I have no need at home! (Plus
    my main VM server is still on ESXi 6). I make sure all the VMs have
    backups going back at least a week. SSDs for the VMs, HDs for the
    NAS/backups, with the controller passed through via VT-d.

    I do have a spare Dell R210 1U that was taken out of a datacentre, but it's
    a noisy bugger and is maxed out at 16 GB of RAM... The ESXi host has 32 GB.





    \/orlon
    aka
    Stephen


    --- Talisman v0.46-dev (Linux/m68k)
    * Origin: Vorlon Empire: Amiga 3000 powered in Sector 550 (21:1/195.1)
  • From poindexter FORTRAN@21:4/122 to vorlon on Tue Dec 13 06:58:00 2022
    vorlon wrote to poindexter FORTRAN <=-

    The only reason for not trying it is that I have no need at home!
    (Plus my main VM server is still on ESXi 6). I make sure all the VMs
    have backups going back at least a week. SSDs for the VMs, HDs for
    the NAS/backups, with the controller passed through via VT-d.

    I use vSphere at work; I would have loved to run it at home, but the
    hardware requirements are strict. I had some server hardware that wasn't that old and was supported under 6.0 but not by 6.5 - and hardware
    supported by 6.5 that wasn't supported by 7.0.

    I've got my new lab server up and running, and migrated all of my VMs to the new server last night. I got NUT (Network UPS Tools) working so my new server will shut down when the UPS goes on battery, and this afternoon I'm planning on swapping them
    out.
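    For reference, the on-battery shutdown side of a NUT setup lives mostly in `upsmon.conf`; a minimal sketch, with the UPS name, user and password as placeholders:

    ```
    # /etc/nut/upsmon.conf (fragment)
    # Watch one locally attached UPS; the "1" is the power value this
    # system draws from it, and "master" means we control the shutdown
    MONITOR myups@localhost 1 monuser mypassword master

    # Command upsmon runs once the UPS reports on-battery and low charge
    SHUTDOWNCMD "/sbin/shutdown -h +0"
    ```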

    Old:
    i7-6600U CPU (2.6 GHz, 2 cores, 4 threads)
    16 GB RAM
    500 GB SATA SSD
    1x gigabit Ethernet

    New:
    i5-6500 CPU (3.2 GHz, 4 cores, 4 threads)
    32 GB RAM
    1 TB NVMe SSD
    2x gigabit Ethernet

    As much as the new system will be a nice speed bump, I'm going to miss my
    old laptop. I bought it as parts-only and it's been running for a couple of years now without a hitch.




    ... The tape is now the music
    --- MultiMail/DOS v0.52
    * Origin: realitycheckBBS.org -- information is power. (21:4/122)
  • From vorlon@21:1/195.1 to poindexter FORTRAN on Wed Dec 14 10:41:20 2022
    Hi poindexter FORTRAN,

    The only reason for not trying it is that I have no need at
    home! (Plus my main VM server is still on ESXi 6). I make sure all
    the VMs have backups going back at least a week. SSDs for the
    VMs, HDs for the NAS/backups, with the controller passed through via VT-d.

    I use vSphere at work; I would have loved to run it at home, but the hardware requirements are strict. I had some server hardware supported
    under 6.0 that wasn't that old, but wasn't supported by 6.5 - and
    hardware that was supported by 6.5 not supported by 7.0.

    A lot of people got pissed at VMware for doing that... I was one of them.
    My old server had all the supported specs except for the CPU, so a jump
    onto eBay and an Intel S1200BTL with an E3-1240 Xeon CPU was the new
    server. Adding 32 GB of RAM was a nice feature though!

    Some of us don't need the latest whizbang high-end system for home use.

    I've got my new lab server up and running, and migrated all of my VMs
    to the new server last night. Got NUT working so my new server will
    shut down when the UPS goes on battery, and this afternoon I'm
    planning on swapping them out.

    Our power is generally pretty good here, but every now and again some
    doofus will do something stupid....

    I got a notification Monday morning that power had gone out, and the
    UPSes (two) kept things running for the half hour until it came back. (I
    wasn't home.)

    New:
    i5-6500 CPU (3.2 GHz, 4 cores, 4 threads)
    32 GB RAM
    1 TB NVMe SSD

    You'll see the improvement just in core count! The SSD will make it fly
    like there's no tomorrow.


    \/orlon
    aka
    Stephen


    --- Talisman v0.46-dev (Linux/m68k)
    * Origin: Vorlon Empire: Amiga 3000 powered in Sector 550 (21:1/195.1)