I’ve been working and learning for the past month or two with my four home lab servers. So far I’ve been able to:
- figure out how to convert the servers from BIOS booting with DOS (MBR) disk labels to UEFI with GPT. Fun experience – NOT!
- write scripts to build a base LXC Linux Container image, making service container creation easier via LVM copy-on-write block volume cloning. Copy-on-write means a clone only allocates its own disk space for blocks it actually writes, so each new container starts out with all the “base” packages already installed. Setting up the base is a site-dependent task, since different sites undoubtedly use different authentication and security schemes.
- become much more familiar with systemd, systemctl, and journalctl
- experiment by creating two base images to compare the LVM volume manager plus XFS filesystems against BTRFS, which combines volume manager and filesystem. BTRFS is nice, but I still need to figure out quotas, so I decided to proceed with the more familiar LVM and XFS for now.
- install the basic requirements for OpenStack (EPEL and OpenStack Liberty repos) in the base container
- install software for each OpenStack service in its own separate container:
  - MariaDB as an SQL DB
  - MongoDB as a NoSQL DB
  - RabbitMQ as a Queue Manager
  - an rsyslogd centralized logging container
  - the OpenStack Keystone identity management controller
  - the OpenStack Glance image storage controller
  - the OpenStack Nova compute controller
- set up two OpenStack Nova compute nodes
- next up – the OpenStack Neutron networking controller and agents
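On the centralized logging container: the client side of that setup amounts to a one-line rsyslog forwarding rule in each service container. A minimal sketch, assuming the logging container listens on TCP 514 at a hypothetical address 10.0.0.11:

```
# /etc/rsyslog.conf (client container) -- forward all messages to the
# central rsyslogd container; "@@" means TCP, a single "@" would be UDP.
*.* @@10.0.0.11:514
```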
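The base-image-plus-clone workflow above can be sketched in shell. The volume group and LV names (`vg_lxc`, `base-c7`), sizes, and paths here are hypothetical placeholders, not my actual scripts; the `DRY_RUN` guard just lets you preview the commands without root or LVM.

```shell
#!/bin/sh
# Sketch of the base-image / clone workflow described above.
# Volume group "vg_lxc", base LV "base-c7", sizes, and paths are
# hypothetical placeholders. Set DRY_RUN=1 to print commands instead
# of running them (the real commands need root and LVM).
set -e

run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

# One-time: create the base volume and format it XFS. Installing the
# "base" packages into it is omitted -- that part is site-dependent.
make_base() {
    run lvcreate --size 10G --name base-c7 vg_lxc
    run mkfs.xfs /dev/vg_lxc/base-c7
}

# Per-service: a copy-on-write snapshot of the base. The clone only
# consumes space from its 2G pool for blocks it actually writes;
# unmodified blocks are read through from the base volume.
clone_container() {
    name="$1"
    run lvcreate --snapshot --size 2G --name "$name" /dev/vg_lxc/base-c7
    run mkdir -p "/var/lib/lxc/$name/rootfs"
    run mount "/dev/vg_lxc/$name" "/var/lib/lxc/$name/rootfs"
}
```

With `DRY_RUN=1`, `clone_container keystone` just prints the lvcreate/mount commands, which is handy for reviewing what a real run would do before committing to it.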
It’s fun and educational, and hopefully I can someday use this knowledge to get a better job.
I am positive we could use Linux Containers and open source cloud technology inside our company today to run our existing application architecture (X clients, Y servers per farm) on far fewer physical computing resources. We could run 50% more JVMs per 3-node cluster, improve server efficiency, add redundancy for physical resources, automate DR, and simplify app-level OS security through verified base images.
And once we’re on a cloud architecture, these new farms could later add auto-scaling load balancer features, so we never run out of resources and never run excess servers where a farm’s load is actually lighter (for example, when it’s new). Data centers separate load by application and only allow certain applications to run on certain resources, so the clouds can grow to handle the load our users currently demand, and much more.
Conversion could be piloted and handled in a rolling, farm-by-farm fashion. As existing farms are migrated and their physical servers are freed up, those servers can be converted into additional OpenStack compute nodes, providing capacity for the next phase of growth.