Sadly, this doesn’t surprise me.
My internet hosting provider (not my ISP; the people who host this website and my email) sent me a bill for over $520 for two years of service. I cancelled my auto-renewal because, honestly, I can't afford that much right now. $17/mo for email and a low-traffic blog just isn't feasible anymore.
My options include transferring my domain to a new hosting provider (even Hostmonster offers new customers a $5/mo plan), or setting up and managing my own servers. For the latter, I inquired about getting a static IP via my ISP, but found that a /29 block of 8 static IP addresses would add $15/month to my service. Not much cheaper, and I'd have to manage and back up all my own servers. I have decided to move my domain to Amazon Web Services, for several reasons.
Partly because AWS charges only for usage, I can host my services anywhere in the world, and my email and web users are VERY low traffic, so it'd probably average less than $12/mo. But mostly because my employer is moving more toward AWS cloud services, and I need that kind of experience to make smarter decisions and build useful tools at work.
I was approached a few years back by representatives of the Whistl delivery company in the UK, who were interested in obtaining the whistl.com domain. My knee-jerk reaction was to tell them "no way", but if they approached me again now, I think I'd probably let them have it for a reasonable price. I don't really need whistl.com, especially since I get so much spam as it is, and also because I own the whistl.us domain, which I hardly use, because *.us domains are mostly spambots.
Holy cow! You don’t hear about Apple getting hacked by insiders very often.
They were using Apple’s own internal computer system to gather the information before selling it on in a scam worth $7.3 million.
Desktop browser choice is pretty much a user-defining thing. The average user just uses whatever browser came pre-installed. Power users, however, pick their browser and install it if it's not the default. Most pick Google Chrome, others Firefox, yet others Opera. Those paranoid about privacy choose the Tor browser (based on Firefox).
The Opera browser was purchased a while ago by a Chinese marketing company, so I can't help but think they have an interest in collecting as much user behavior data as possible. Chrome is made by Google, an American marketing company – same thing. Chrome also has a security weakness: it refuses to verify whether SSL certificates have been revoked. Firefox has other weaknesses, but far fewer.
I left off Safari and Internet Explorer because they are not cross-platform browsers, and they are usually not worth considering, except to download a better browser. I've only heard about Microsoft's newest browser, Edge, which I think is specific to Windows 10. It may be the fastest browser on that platform, and it is the only browser they are going to allow on the Windows 10 S low-end tablets and systems. Microsoft has itself turned into a huge information-mining operation; it only needs to market and sell that anonymized data for untold profit.
Well, right now, on my Mac, I'm using Google Chrome to watch video streams on my second screen, because it works well, and Firefox doesn't. There was something chipmunky about the audio under Firefox. But I am using Firefox to browse, because it's not created by a marketing company, and Firefox with Privacy Badger is a pretty safe browsing environment.
I’d love to hear your thoughts. Just tweet @whistl034
Last year, when we switched from Comcast to AT&T U-verse, we already had a four-tuner TiVo OTA DVR and an antenna in the bedroom, so we were fine there. For our office TV, I installed a TiVo Mini I already had, and we were golden for TV and DVR.
Last week, the TiVo Mini bit the dust mid-program. It won't boot; just a sad yellow 'sorry boss' LED. I think it was $130 and it lasted four years, so I got my money's worth.
Today, I set up an Ubuntu server running the Plex Media Server software. I already had a free Plex account, so I was already able to run the Plex Media Server to stream my own media library. I can also run the free Plex player client on my Mac. Their new DVR function, however, requires a Plex Pass, which costs money: one year for $40, or a lifetime pass for $120. Once purchased, the Plex player client reveals additional functions, like setting up the DVR.
Plex gives away its two programs. The Plex Media Server is where you store your DVR recordings, so think of it as the DVR. You can have multiple DVRs, each controlling a different tuner, if you like. The second program is the Plex Media Player. They have clients for just about every device and OS (it's hard to find one they don't cover), and you can just use the web interface. If you open up the port to the Internet, you can watch your video away from home too.
Oh, but where to get the TV signal and channel guide? Last year, I bought an HDHomeRun two-tuner box from Amazon for about $100, and an HDTV flat-panel antenna for my office window for about $45. I decided I wasn't a fan of the HDHomeRun Mac software and DVR, so I wasn't using the HDHomeRun until now. Plex automatically locates the HDHomeRun box, and lets you set up the TV channel guide too. It presents the guide in a unique format that I'm not used to.
So, I'm giving it a shot. I have some recordings set up, and it's working fine. Video playback is smooth, and not a whole lot of disk is being used. Win!
So, AT&T makes an “error” when updating 911 software which blocks 12,600 people from reaching 911 services, across 14 states, over an untold number of hours. But they get no fine. “Unacceptable” says Ajit Pai, the FCC chairman. “That’s really just too bad, somebody might have died, but it was nobody’s fault, so we’ll just ignore it.” implied the FCC chairman.
News: Last March, AT&T suffered a massive 911 outage that prevented customers across fourteen states from being able to call 911. While it was buried under tod…
I am taking some of my well-earned vacation time, and spending it nerding out at home.
I updated the Homebrew packages on my iMac, and ran brew install hercules. Hercules is an IBM mainframe emulator, capable of emulating anything from a System/360 to a current System z mainframe. And if you have access to any mainframe OS software, you can run your own four-CPU z890. I downloaded the Debian Linux for s390x installer, and am having at it now.
There are several articles about installing Linux under Hercules, but not all of them explain it clearly. I read at least three, but the most helpful was http://www.josefsipek.net/docs/s390-linux/hercules-s390.html. Thanks to the original author for getting me over the hurdle.
To get tun/tap networking to work on my Mac, I first needed to download and install tuntap software for macOS Sierra. http://tuntaposx.sourceforge.net/ to the rescue.
I had to use Homebrew again to install squid and dnsmasq, to provide HTTP and DNS proxy services for my point-to-point attached mainframe virtual machine. I still need to find the macOS commands to NAT the mainframe's tun0 connection, but for now, keeping it behind a proxy isn't really a bad thing.
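For what it's worth, here's a sketch of what the NAT setup might look like using macOS's built-in pf firewall. The outbound interface name en0 is an assumption (adjust to your uplink); the 192.168.5.0/24 subnet comes from the CTCI line in my Hercules config.

```shell
# Sketch only: NAT the mainframe's point-to-point subnet out through en0 via pf.

# Enable IP forwarding (this resets on reboot):
sudo sysctl -w net.inet.ip.forwarding=1

# Add a NAT rule to /etc/pf.conf, something like:
#   nat on en0 from 192.168.5.0/24 to any -> (en0)

# Then reload the ruleset and enable pf:
sudo pfctl -f /etc/pf.conf -e
```

This is untested on my setup; the proxy approach works fine in the meantime.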
My goal is to get LXC guests running under Debian GNU/Linux on an s390x server. Just 'cuz I can. I honestly think this would be a way to convert my work environment from LPARs running z/VM hosting multiple z/Linux guests, each running multiple JVMs, into LPARs running native z/Linux that runs each guest as an LXC container instead, which then runs the JVMs. We get all the benefits of containers, and can forego the overhead of z/VM and multiple z/Linux instances, each scheduling tasks and managing memory and swap independently.
Combine this with a strategy of using overlayfs to add a writable filesystem on top of a read-only base OS filesystem, and you end up with easily upgradable systems, and shared memory and shared swap space advantages too. You’d get most of the performance advantages of running JVMs inside Linux Containers, but continue to operate, manage, and monitor your software the same way you do today. Call it a half step towards converting everything towards micro-services.
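The overlayfs idea can be sketched like this. All the paths are hypothetical, and this needs root on a Linux kernel with overlay support:

```shell
# Sketch: a read-only base OS layer plus a per-container writable layer.
mkdir -p /srv/base /srv/guest1/upper /srv/guest1/work /srv/guest1/root

# Merge the layers: reads fall through to /srv/base, writes land in upperdir.
mount -t overlay overlay \
  -o lowerdir=/srv/base,upperdir=/srv/guest1/upper,workdir=/srv/guest1/work \
  /srv/guest1/root

# /srv/guest1/root now presents the base OS; upgrading the base image
# upgrades every container that layers on top of it.
```

Each container gets its own upper layer, while the base filesystem (and the page cache backing it) is shared.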
I'd love to do a proof-of-concept project with one decent-sized LPAR in our dev lab, where we can try to operate all the same existing development environments, and see how much faster and more efficient Linux containers are compared to the z/VM solution.
Figuring out just the right hercules configuration file was the hardest part.
First I had to create a couple of virtual disks for the OS. The IBM 3390 disk drive comes in several models with different capacities. I chose a 3390-1 for the /boot partition, and a 3390-54 as a second drive to format with LVM to hold the root, swap, and /home partitions.
Then I launched Hercules and IPLed from the card reader to boot the installer. Then I configured the network, switched to the SSH installer, picked the disk layout I wanted, and sat back for a couple of hours. The Hercules emulator is fairly slow, the install process is entirely single-threaded, and it downloads everything from the network.
dasdinit -lfs ./0110.3390 3390-1 LX0110    # small volume for /boot
dasdinit -lfs ./0111.3390 3390-54 LX0111   # large volume for LVM (root, swap, /home)
hercules -f hercules.cfg                   # start the emulator with the config below
Here’s my config file:
CNSLPORT 3270 # TCP port number to which consoles connect
CPUMODEL 2086 # Z890
LOADPARM 0120.... # IPL parameter
PANRATE SLOW # Panel refresh rate (SLOW, FAST)
PANTITLE "LPAR HERC1"
# .-----------------------Device number
# | .-----------------Device type
# | | .---------File name and parameters
# | | |
# V V V
#--- ---- --------------------
0009 3215-C / noprompt
000C 3505 ./kernel.debian ./parmfile.debian ./initrd.debian autopad eof
000E 1403 ./printer.txt crlf
# created with: dasdinit -lfs ./0110.3390 3390-1 LX0110
0110 3390 ./0110.3390
# created with: dasdinit -lfs ./0111.3390 3390-54 LX0111
0111 3390 ./0111.3390
# point-to-point tunnel from zlinux to host
1000.2 CTCI -n /dev/tun0 -t 1500 192.168.5.2 192.168.5.1
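Once Hercules is up, you attach a 3270 console to the CNSLPORT defined at the top of the config. Using c3270 here is just one option I'd expect to work; any tn3270 client should do:

```shell
# Connect a terminal emulator to the Hercules console port (3270, per CNSLPORT).
# c3270 is an assumption -- substitute your preferred tn3270 client.
c3270 localhost:3270
```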
I am impressed with T-Mobile's purchase of a large chunk of the available 600 MHz spectrum. It will become an important part of their future.
Dish, Comcast, and US Cellular also bought plenty of 600 MHz spectrum.
Ubuntu Linux is available to download in many different ISO files. The main website gives you version choices (14.04, 16.04, 16.10), hardware architecture choices (armhf, i386, x86_64), and GUI front-end choices (GNOME, KDE, LXDE, XFCE, none). But there's one special version of Ubuntu I wanted to let you know more about – the minimum distro. The other ISO files you choose from actually only differ in the lists of packages they put on the ISO, and the selection of packages they install by default.
The wonderful thing about the minimum distro (mini.iso) is that it lies at the root of every single one of them. It doesn't include ANY packages on the ISO file, not even the full installer; you are forced to download them from an Internet repository. You have to download it all, but you also get the latest bug-fixed version of everything, and spend no time on updates. It actually takes longer to download everything than it does, for example, to install the minimum packages from the 16.04 ISO and then update, but it is also safer, and like I said, you avoid having loads of unneeded packages installed.
When you get to the part of the install where the base OS is installed and you choose which additional packages to install, that menu includes every one of the other Ubuntu distros as options. In other words, using mini.iso, you can not only install a minimal server, you can install a normal Ubuntu desktop, a Lubuntu desktop, or a Kubuntu desktop, all from one boot image.
Another thing I like about the minimum distro is its truly tiny footprint. I re-created my Mac's lab VM and reinstalled the OS on all my physical lab servers, and the base OS of each uses only about 2 GB of disk. Each LXD or Docker container only adds another 2 to 4 GB of disk usage. It makes my firewall's 64 GB SSD seem plenty big, and the servers' 256 GB SSDs just enormous.
So now, instead of installing the standard Ubuntu Server distro on my physical servers and then having to turn off some of the services I don't want running, I install only the minimum distro, and none of the junk ever gets installed. No NetworkManager, no Avahi daemon, nothing.
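If you want to try it, the mini.iso lives in the netboot area of the Ubuntu archive rather than on the main download page. The exact path below is my best guess for 16.04 (xenial) on amd64; check the Ubuntu installer pages for your release and architecture:

```shell
# Sketch: fetch the 16.04 amd64 mini.iso from the netboot tree.
# Path is an assumption -- verify against the Ubuntu installer documentation.
wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/mini.iso
```

The image is only a few tens of megabytes, since everything else comes from the repository at install time.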
I also finally figured out the VirtualBox networking issue that was preventing me from using LXC, LXD, or Docker guests bridged to my home LAN. I finally found the guest's Network Adapter Advanced settings, where I needed to set Promiscuous Mode to "Allow All" AND power-cycle the VM, not simply reboot it. The VirtualBox process must exit and restart for the setting to take effect. That's it. Suddenly I can use my 32 GB iMac more effectively.
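The same setting can also be flipped from the command line with VBoxManage, which is handy if you script your VMs. The VM name "Lab" and the adapter number are placeholders:

```shell
# Sketch: set promiscuous mode to allow-all on the VM's first network adapter.
# "Lab" is a placeholder VM name; run this while the VM is powered off.
VBoxManage modifyvm "Lab" --nicpromisc1 allow-all
```

Either way, the VM has to be fully powered off and started again for the change to stick.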
If you have to ask, it’s probably bullshit. But we came with citations anyway.