MariaDB + Galera

I got a bit distracted. I just learned about MariaDB 10.1, which now ships with the Galera multi-master replication package built in. I had to make it work, of course, and with three containers it wasn’t too hard. The documentation is a little weak at this time, presumably because they want to sell their support contracts, but it’s not too bad.

This is what I had to add to /etc/my.cnf.d/server.cnf on CentOS 7:


[galera]
wsrep_on=ON
# this varies on different versions and OSs
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="gcache.size=512M"
# your cluster's name
wsrep_cluster_name="cluster1"
# who belongs to the cluster
# use the next line on the first server only, the very first time only
#wsrep_cluster_address="gcomm://"
# after the first time, and on every other server, use the whole cluster
wsrep_cluster_address="gcomm://192.168.1.67,192.168.1.68,192.168.1.74"
# define the state-transfer method (rsync requires the rsync package be installed)
wsrep_sst_method=rsync
# you need to create this user with this password on node 1,
# after bringing it up the first time and before joining node 2
# to the cluster, for wsrep to work
wsrep_sst_auth=wsrepuser:wsrep16secretpass
# put the node's unique shortname here
wsrep_node_name="mariadb1"
#wsrep_node_name="mariadb2"
#wsrep_node_name="mariadb3"
# this is where the wsrep server will bind
# put the node's storage network IP address here
wsrep_node_address="192.168.1.67"
#wsrep_node_address="192.168.1.68"
#wsrep_node_address="192.168.1.74"
# this is where mysql service will be offered
# put the node's production IP address here
bind-address="192.168.1.67"
#bind-address="192.168.1.68"
#bind-address="192.168.1.74"
query_cache_size=0
query_cache_type=0
# Galera only replicates InnoDB and requires row-based binlogs
# and interleaved autoincrement locks
binlog_format=row
default_storage_engine=innodb
innodb_autoinc_lock_mode=2

# Then you have to start the first node, the first time only, with:

/etc/init.d/mysql start --wsrep-new-cluster
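
# (MariaDB 10.1 also ships a galera_new_cluster wrapper for
# systemd-managed installs; if that's how your install is managed,
# this should be the equivalent bootstrap command:)
galera_new_cluster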

# Then you set things up:
$ mysqladmin -u root password
# enter the new root password twice

# do it again for the public interface
$ mysqladmin -u root -h mariadb1 password

$ mysql -u root -p
mysql> use mysql;
mysql> delete from user where User='';
mysql> create user wsrepuser;
mysql> grant all on *.* to 'wsrepuser'@'%' identified by 'wsrep16secretpass';
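
# sanity-check that the user landed (we're still in the mysql db)
mysql> select User, Host from user where User='wsrepuser';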

# check your wsrep process
mysql> show status like 'wsrep_cluster_size';
$ less /var/log/mysql/mysqld.log

# join nodes two and three (same config, uncomment that node's name and IP addresses)
# just start them like normal, then check the cluster size again and their mysqld.logs
# change the gcomm line in node 1, and restart it normally. You're up!
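
# once all three are joined, a quick health check I'd run on any node,
# using standard wsrep status variables:
mysql> show status like 'wsrep_ready';                -- should be ON
mysql> show status like 'wsrep_local_state_comment';  -- should be Synced
mysql> show status like 'wsrep_incoming_addresses';   -- should list all three nodes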

OpenStack Cloud Software

I’ve been working and learning for the past month or two with my four home lab servers. So far I’ve been able to:

  1. figure out how to convert the servers from BIOS boot and DOS disk labels to UEFI and GPT. Fun experience – NOT!
  2. wrote scripts to build a base LXC Linux Container image, making service container creation easier via LVM copy-on-write block volume cloning (see the sketch after this list). Copy-on-write means a clone only allocates its own disk space for blocks it actually writes, so each new container starts out with all the “base” packages installed for free. Setting up the base is a site-dependent task, as different authentication and security schemes are undoubtedly in use.
  3. become much more familiar with systemd, systemctl, journalctl
  4. experimented by comparing the LVM volume manager + XFS filesystem against BTRFS’s combined volume manager + filesystem, by creating two base images. BTRFS is nice, but I still need to figure out quotas (also sketched after this list), so I decided to proceed with LVM and XFS for now, because they are more familiar.
  5. install basic requirements for OpenStack (EPEL repos, OpenStack Liberty repos) in the base container
  6. install software for each OpenStack controller in its own container:
    1. MariaDB as an SQL DB
    2. MongoDB as a NoSQL DB
    3. RabbitMQ as a Queue Manager
    4. rsyslogd centralized logging container
    5. OpenStack Keystone identity management controller
    6. OpenStack Glance image storage controller
    7. OpenStack Nova compute controller
  7. Set up two OpenStack Nova compute nodes
  8. Next up – OpenStack Neutron networking controller and agents
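
For the curious, here’s roughly what the copy-on-write cloning from item 2 looks like. A minimal sketch with made-up names (volume group vg0, base logical volume lxc-base); the real scripts do more:

# snapshot the base image; blocks are shared until the clone writes to them
lvcreate --snapshot --name web1 --size 5G /dev/vg0/lxc-base
# mount the clone as the new container's root filesystem
mkdir -p /var/lib/lxc/web1/rootfs
mount /dev/vg0/web1 /var/lib/lxc/web1/rootfs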
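And the BTRFS equivalent from item 4, where a subvolume snapshot gives the same copy-on-write behavior, plus the quota commands I still need to work through (paths are hypothetical and must live on a BTRFS filesystem):

# copy-on-write clone of a base subvolume
btrfs subvolume snapshot /var/lib/lxc/base /var/lib/lxc/web1
# quotas: enable qgroups on the filesystem, then cap the clone's growth
btrfs quota enable /var/lib/lxc
btrfs qgroup limit 5G /var/lib/lxc/web1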

It’s fun and educational, and hopefully I can some day use this knowledge to get a better job.

I am positive we could use Linux Containers and open source cloud technology inside our company today to run our existing application architecture (X number clients, Y number servers per farm) on far fewer physical computing resources. We could run 50% more JVMs per 3-node cluster, improving server efficiency, adding redundancy for physical resources, automating DR, and simplifying app-level OS security via verified base images.

And once on a cloud architecture, these new farms could later add auto-scaling load balancer features, so we never run out of resources and never run excess servers on farms whose load is actually lighter (for example, when a farm is new). Data centers separate load by application and only allow certain applications to run on certain resources, so the clouds can grow to handle the load our users currently demand, and much more.

Conversion could be piloted, and handled in a rolling, farm-by-farm fashion. As existing farms are migrated and physical servers are freed up, those servers get converted into additional OpenStack compute nodes, providing capacity for the next phase of growth.