New Turntable – AT-LP120

For my birthday, I received several 33-1/3 rpm LPs, but we didn't have a turntable to listen to them. So I did a little research and ordered a nice turntable, the Audio-Technica AT-LP120. It's way heavier and fancier than I imagined, but I really like it. The thing that attracted me most was the direct drive motor.

And I love my new albums – Yes: Fragile, Rick Wakeman: The Six Wives of Henry VIII, and Jesus Christ Superstar: The Original Broadway Soundtrack. I know I have the Rolling Stones' Sticky Fingers somewhere around here.

Limitations of LXD and DNAT

I am learning about some limitations of using LXD to containerize apps. I mean, LXD is nice, and it works great as designed. I have tried the default configuration, with the lxdbr0 bridge and filesystem backing, but found I get more use out of a setup where I run the host's ethernet interface as a slave to bridge br0, and add br0 (unmanaged) to the LXD default profile instead of lxdbr0, so each container gets peer-level networking on its host. I also attach an external disk as LVM backing for LXD, so each of my containers can take advantage of LVM snapshots.
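A rough sketch of that setup, assuming br0 already exists on the host and the spare disk happens to be /dev/sdb (both names are just examples):

# create an LVM-backed storage pool on the spare disk and point the default profile at it
lxc storage create lvmpool lvm source=/dev/sdb
lxc profile device remove default root
lxc profile device add default root disk path=/ pool=lvmpool

# swap the managed lxdbr0 NIC for the unmanaged host bridge br0
lxc profile device remove default eth0
lxc profile device add default eth0 nic nictype=bridged parent=br0 name=eth0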

You have to be careful with firewall rules with LXD, but you do have the option of 1) simple ufw rules on each container, or 2) host-level ufw rules that control all the forwarding. You already have to add some lines to /etc/ufw/before.rules and before6.rules to NAT all outbound IP traffic and DNAT inbound ports to the services you run. I found the DNAT rules painful for IPv6 addresses, and think I probably need to enable both DHCPv6 support and static IPv4 and IPv6 addresses for each container.
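For reference, this is the general shape of those before.rules additions; the 10.0.3.x subnet, eth0, and the container address are placeholders for whatever your setup actually uses:

# added near the top of /etc/ufw/before.rules, above the *filter section
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# DNAT inbound web traffic to the container running nginx
-A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.3.10:80
-A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.3.10:443
# NAT all outbound traffic from the container subnet
-A POSTROUTING -s 10.0.3.0/24 -o eth0 -j MASQUERADE
COMMIT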

Or, I could go old school, rebuild the host, and instead of LXD run LXC version 1 to containerize the applications using the host's native IP network stack, meaning no NAT or DNAT would be needed. Just plain simple ufw rules at the host level. LXC is more complex to set up, but is more flexible in some ways.
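As a sketch of what that looks like in LXC 1.x, a container config can simply opt out of network namespacing so the container shares the host's stack (the container name and paths here are hypothetical):

# /var/lib/lxc/mailhost/config
lxc.utsname = mailhost
lxc.rootfs = /var/lib/lxc/mailhost/rootfs
# share the host's network namespace - no veth, no bridge, no NAT/DNAT
lxc.network.type = none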

Or, I could go REALLY old school, and do it 80s style, where postfix, mysqld, and nginx all run natively on the host. That's the absolute least amount of overhead, only one set of iptables rules, which might make more sense on a 1 vCPU / 1 GB RAM / 25 GB disk VPS, but it's somewhat less secure.
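In that case the host firewall really can stay simple; something along these lines would cover the services mentioned above (assuming mysqld only listens on localhost and needs no rule):

ufw allow OpenSSH
ufw allow 25/tcp     # postfix
ufw allow 80/tcp     # nginx
ufw allow 443/tcp    # nginx (TLS)
ufw enable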

Finally got my old blog online

I migrated from my old ISP Hostmonster to a VPS in the Digital Ocean cloud. You can read more details at my new blog, https://whistl.me/.

Anyway, I finally managed to restore my old database and public_html directory. I'm pretty happy with the results.

NCSA Mosaic 25th anniversary, 1993 Memories

According to the Daily Tech News Show headlines yesterday, 25 years ago, on April 22, 1993, the NCSA Mosaic v1.0 web browser was released to the Internet. Back then I was Sr. System Manager and Facilities Manager at a small compiler company, KAI. I was in charge of all the Unix/OSF/Linux/VMS servers, workstations, X terminals, and Windows PCs.

In addition to all the computers, I also ran the facilities for the whole office building, not just the computer room: all the electrical power, HVAC, the burglar alarm system, combination door locks, and vendor management. I handled all the computer networking too, from running cable, soldering AUI connectors for thick-net, crimping coax for thin-net, and crimping RJ45 connectors for 10baseT or 100baseT, all the way to writing my own protocol analyzer to track down what turned out to be ARP issues with old ethernet controller hardware.

I recall compiling Mosaic on multiple workstations to get it working. Windows PCs didn't have a native Internet Protocol stack built in yet, and NCSA Mosaic had multiple plugin drivers for specific ethernet cards (3Com 3C501 ring a bell, anyone?), so you had to buy a card carefully. I remember buying some PC IP stack, Chameleon NFS, because we were very much a UNIX shop: we had no PC servers, we had NFS and NIS, and the Internet was long-distance dialup to UUNET. Did you know Microsoft IE was originally based on NCSA Mosaic? I believe it's still in their license agreement somewhere.

I remember when I set up a terminal room in the unused office next to the computer room, so people could play solitaire and mahjong after work and at lunch. I cut holes in the wall and installed three workstation-class systems on a long table along the wall, with the cables run through the wall to the 19″ monitors, keyboards, and mice in the terminal room. There was a deskside workstation with multiple 68020s, an HP-UX workstation, and a DEC Alpha workstation joining in.

My email signature at KAI used to include “if it ends in *ix, we’ve got one”, because we had 45 *IX systems running 23 entirely different operating systems. One guy challenged me once, “How about Okidata?” I smiled and replied “Okix? Yep, we have one workstation from them.”

KAI had the world's smallest Intel Paragon supercomputer – 4 nodes. Even the system engineers would do a double take when they heard how small it was. We also had a much larger multi-cabinet installation from Kendall Square Research, which was fun to get installed. We had multiple Sequent servers, one FX/8 from Alliant, and multiple systems from Silicon Graphics, HP, Apollo, Sun, IBM, Motorola, Data General, and a couple of smaller government system suppliers which I shall not name.

I recall getting systems that were ripped from someone's desk, slapped in a box, and mailed to my office. No manuals, no root password, no info on the software management processes, like backups. I had to coax the root password out of business partners multiple times so I could install the software upgrades they sent me. Some business partners treated us really well and sent us great hardware, like Sequent and SGI. Most just lent us a small workstation. A few tried to sell us their hardware, so we could support our software on their platform. One traded us a big multiprocessor server for a compiler, but we ended up paying them back for yearly support, so eh.

We were lucky enough to have a DEC Alpha workstation with serial number 14, running alpha Alpha OS software (hee hee). Back then OpenVMS required license keys to be typed in to unlock the compiler updates every month – what a PITA DEC licensing was. Ugh. All the Alpha systems we had were dual-boot – OpenVMS and OSF/1, which was later renamed Digital UNIX. I had to set up setuid root programs on both OSes so the developer/QA testers could switch from one to the other.

Back then, we didn't give root access to mere users. Nobody had root access to anything, not even their own desktop workstation, except me. My boss had the root password in a sealed envelope in a locked cabinet drawer in his office; otherwise he didn't want to know either. We took product security very seriously. Every time we hired a new kid from college, I'd have the same argument. "But I NEEED root access!" "No you don't. This is NOT 'your' workstation, this is the company's hardware, and you aren't the only user. If you want something installed or changed, that's my job, I'll take care of it for you." "Nooo, you don't understand…" Every time. They got used to it, mostly.

At one point the bosses got cheap and decided I didn't deserve an expensive SGI Indigo graphics workstation on my desktop, because I wasn't a developer, I was *JUST* a System Manager. You can see their point: a VT220 clone was $400, a 19200 baud port on a terminal server was already paid for, but a new Indigo was about $20k. So I gave mine up and went back to using the screen program on a 24×132 VT220 clone terminal. Because I no longer had the same workstation as the developers, I could no longer experiment with SGI software or update and improve the SGI user experience. After a few months of missing that, the developers spoke up and informed management that I deserved an Indigo just like them, and that they needed my regular improvements and my ability to resolve their issues quicker. That was pretty cool, and I got my own Indigo again.

I worked on lots of open source software back then, and really gave back a bunch. My name is in the manpage or the README for C-Kermit, tcsh, perl, screen, and a few others. I've published a few packages of my own, but nothing really major. I collaborated with another developer on a multicast FTP program that we used to distribute huge libraries and binaries to all the developer workstations every night.

When I started in 1985, KAI used to do weekly compiles of their own product, because a full build took about 24 hours on a DEC VAX-11/750. A couple of years later, we were loaned a Sequent S81 server, we upgraded our Balance 8000 to an S27, and we moved all our dev and QA efforts to them. By 1993, we had a system set up where each developer's Indigo workstation would poll a central server across the network to get the next filename to compile. It would then process it, return the resulting object file to the server, then repeat the whole process. The parallel sync of hundreds of files of the latest checked-in version of the source code took about 10 minutes, the distributed Indigo compile step only took about 30 minutes, the link step on the server was another 30 minutes, and the parallel distribution of the huge resulting link library to all the workstations for the next day's dev efforts added a final 20 minutes. From 24 hours on a single 3.125 MHz CPU VAX-11/750 running VMS, down to under 4 hours on a 10-CPU 20 MHz 80386 Sequent S81 running Dynix, down to 100 minutes on a cluster of 20+ SGI 33 MHz MIPS R3000 Irix workstations.

Nerding out

This weekend I had my own little private nerd fest. Just hung out at home and played with technology for two straight days. The wife understands, she gets the same way in a Michael's store.

Yesterday was dedicated to learning more about Docker Swarm, Nginx reverse proxies, and trying out Xubuntu 18.04 beta 2. Today was way more practical. I set up a "hidden" private DNS master server for one of my lesser-used domains, and a reverse proxy I can use for any websites I choose to set up.
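For anyone curious, the core of a hidden master in Bind9 is just a zone that notifies and allows transfers to the public secondaries while staying out of the NS records itself; something like this sketch, with example.org and 203.0.113.10 standing in for the real domain and secondary address:

// /etc/bind/named.conf.local on the hidden master
zone "example.org" {
    type master;
    file "/etc/bind/zones/db.example.org";
    notify yes;
    also-notify { 203.0.113.10; };      // public secondary
    allow-transfer { 203.0.113.10; };   // only the secondary may AXFR
};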

I intend to start over, but this time record and edit the whole ordeal into a series of brief YouTube instructional videos:

1) Building an Ubuntu LXD (virtual) server and the very basics of using LXD
2) Installing your own domain's Bind9 DNS server in an LXD container
3) Installing your own ISC DHCPd in an LXD container
4) Installing your own Nginx reverse-proxy web server in an LXD container, complete with an auto-renewing Let's Encrypt SSL certificate
5) Setting up Shorewall Firewall on the LXD host
6) Setting up your own private Docker test swarm using Ubuntu and LXD on a VM

I also want to do things like documenting the build of a POP3/IMAP4 email server for my domain, with cloud-based MX servers to prevent email from being lost or delayed when my home Internet is down.
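The DNS side of that idea is just MX priorities: a lower-priority cloud host queues and relays mail whenever the primary at home is unreachable. A sketch, with example.org and the hostnames and addresses purely as placeholders:

; zone file fragment for example.org
example.org.       IN  MX  10 mail.example.org.   ; primary, at home
example.org.       IN  MX  20 mx2.example.org.    ; cloud backup, queues and relays
mail.example.org.  IN  A   198.51.100.5
mx2.example.org.   IN  A   203.0.113.25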

I even intend to document building a team/department/organization communications hub, to support its members communicating via text chat, voice calls, even telephone calls, video chat, group calls, screen-sharing, everything people need to communicate effectively.

I also want to branch into Slack chat, in particular chatbot integration. It would be awesome to build on the previous lessons and create a chatbot that configures and controls your master DNS server, and monitors and controls your Docker swarm.

The more content I produce, the more real my personal dream becomes. I just need to get comfortable recording myself and talking about what I'm doing at the same time, and start over.

Maybe I should just stream the whole thing live; I do have a Twitch account.

Digital Ocean

Ever heard of Digital Ocean? They are yet another cloud computing service. They offer a free trial of $100 in service credit for 60 days, so I signed up. They only have data centers in a few international locations, nothing like AWS, but I really haven't reviewed reliability and rate comparisons between all the various cloud services. Digital Ocean has some really intuitive dashboard software that helped me get off the ground quite fast.

Unfortunately, the online tutorial on launching your first container said it would show how to use the open source Terraform product to launch a whole multi-container, load-balanced application, but it cuts off after revealing about 10% of the process. Disappointing.

Linux & cloud computing educational opportunities

The other day, I offered on the Facebook group "The Tadpool" to teach a beginner cloud computing course to any takers. All I really want is some curious, hopefully smart, students to ask good questions and make sure I'm teaching what they need to know. I need more experience doing podcasts so I'll feel more comfortable doing them, and I'd like to meet more people, so it's kind of a win-win situation here.

I realized after the anemic response that maybe I need to set my sights a little lower, so I was thinking that maybe I should first offer some beginner Linux user courses. Get people out of their horribly tiny Windows GUI world, and back into the command line world, where open source software offers so much power and opportunity to everyone.

I'm thinking I should design a list of podcasts, a sort of curriculum, one intended to pass on as much of my Linux and cloud computing knowledge as I can, in a series of short, consumable 1–2 hour shows.

I guess I should first create a video on installing the free Oracle VirtualBox VM software on your Windows, Mac, or Linux computer (assuming you haven't already paid for some commercial VM product), to help people easily play with Linux anytime, at home. I can follow with a short, newbie-level Linux command line user class, and maybe some basic system admin tips. I can eventually branch off into teaching Perl or Python programming basics, for that matter.

Running a Docker cluster on your home computer

I want to do some experiments with Docker on my home computer, but I don't want to use a single node like Docker for Mac provides. I want to run a full 5-node Docker swarm, but building and running 5 Linux virtual machines is a pain in the ass, so I used Ubuntu and LXD.

You can run Docker inside LXD containers, so all I had to do was create a single 4-CPU, 16 GB VirtualBox VM with two 64 GB virtual disks, running Xubuntu 18.04 beta, install LXD there, and use /dev/sdb for LXD container storage. On that Xubuntu host, I wrote a few shell scripts to help me build my Docker swarm automatically.

build.sh launches 5 LXD containers running Ubuntu:16.04 and the latest Docker-CE (a rough sketch of it appears below)
stop.sh stops all 5 LXD containers
delete.sh deletes all 5 LXD containers from disk
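
A minimal sketch of what that build.sh could look like; the container names, the security flags, and the get.docker.com install method are just one way to do it:

#!/bin/bash
# build.sh - create 5 LXD containers and install Docker-CE in each
set -e
for i in 1 2 3 4 5; do
    # nesting (and, in this sketch, privileged mode) lets Docker run inside LXD
    lxc launch ubuntu:16.04 docker$i \
        -c security.nesting=true \
        -c security.privileged=true
done
sleep 15    # give the containers time to bring up networking
for i in 1 2 3 4 5; do
    lxc exec docker$i -- sh -c "curl -fsSL https://get.docker.com | sh"
done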

You have to run "lxc exec docker1 bash" to start a shell on the Docker master node to run normal docker commands, or else prefix each of them with "lxc exec docker1 --" to run docker from the LXD host. For example, "lxc exec docker1 -- docker info | less".
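
From there, forming the actual swarm is just a few more of those prefixed commands; using docker1 as the manager and the 10.0.3.11 address are, again, only how this example is laid out:

# initialize the swarm on the manager node
lxc exec docker1 -- docker swarm init --advertise-addr 10.0.3.11

# grab the worker join token, then join the other four nodes
TOKEN=$(lxc exec docker1 -- docker swarm join-token -q worker)
for i in 2 3 4 5; do
    lxc exec docker$i -- docker swarm join --token $TOKEN 10.0.3.11:2377
done

# confirm all 5 nodes are in the swarm
lxc exec docker1 -- docker node ls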