Author Archives: whistl

Browser Extensions

How many browser extensions do you have installed in each of your browsers? I was watching Security Now 681, where Steve Gibson shares the results of a study of browser extensions, or rather their overuse.

It turns out I’m in the minority, only having 2 extensions in each of my Chrome and Firefox browsers. I have Safari too, but zero extensions there.

How about you? Do me a favor, and check how many extensions are running in your browser right now. Why should you care? Recent security reviews of extensions in the Chrome and Firefox stores have revealed dozens of popular extensions that actually include spyware, which collects and sends your data off to cloud servers, too often located in China. They’re collecting things like your browser history, all of your website credentials, what files/applications you download, etc.

The two extensions I rely on come from fairly reputable sources. LastPass is a website account manager that automatically fills in your website username and a long, secure password for each site. I only need to enter my “last pass password” the first time I use it each day, or if I log in to LastPass on a different device, like my phone.

The other extension I trust is Privacy Badger, which comes from the Electronic Frontier Foundation (EFF). Privacy Badger isn’t an ad-blocker, but it does block the “tracking cookies” that advertisers use to track your viewing habits. Doing this causes some websites to complain that you should turn off your ad-blocker, or pay them ransom so that you may be permitted to access their content.

Privacy Badger is smart too. It’s not configured by the EFF with who to block, and it doesn’t download any rules from anyone at all. It simply watches you browse, and learns which sites use tracking cookies. At first, it lets everything through. Once it starts to notice the same cookie being called for from multiple sites, it kicks into gear and marks it as a tracking cookie, which gets handled differently than regular cookies from that point on.

While you can “whitelist a site” (i.e. disable Privacy Badger) if you don’t want to pay, when I happen upon those sites, I just close that tab. Since they have decided to not support my belief in privacy in favor of enhancing their own profits, I have decided I don’t care to consume their content. There are plenty of news sources that don’t charge, or whine about me blocking their tracking cookies.

Mostly this only affects me reading articles in Wired, Washington Post (which I DO pay for, on my phone), New York Times (which I ALSO pay for, on my phone), and other magazines and newspapers.

Really, they COULD serve regular non-personalized ads to privacy-minded people (like they used to do for ALL of us), and sure, maybe make a little less money, but they really want all the money they get by selling your personal information to other businesses, and they aren’t afraid to piss you off. They think they’re so special, we’ll have to give in.

I want to see a new Internet TV protocol standard

I want to see a new standard internet protocol for streaming audio, video, and text media content.
Then we could let TV set manufacturers just build smart monitors with a single standard open source operating system and player program installed. Or, in consideration of my last article, maybe use a minimally featured monitor, or the monitor you already own, and a new set-top box. Make the new thing support 10/100/1000BASE-T Ethernet and the newest high-speed and MIMO WiFi standards, like 802.11ax.

Imagine it, you could have a TV that comes with a smart phone app, to authenticate to ANY video provider. HBO, Amazon, Apple, CBS, anyone.

The new standard communication protocol will include some kind of DNS-style distributed content-publisher location service, allowing individual sets to subscribe to a variety of internet video sources. Imagine if your cable company, your DSL provider, HBO, Netflix, Hulu, Youtube, your DVR, a TV tuner box, and everything else used a standards-based HD or UHD player, and everyone’s TV set only had to worry about presenting the highest resolution signal it can stream. Since it’s a smart player, it’ll be able to negotiate for content in a resolution appropriate for the local device (so 4K TVs get the UHD version if it’s available and the bandwidth will support it), and it’ll vary based on network throughput, so for everyone “it just works”.
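The negotiation step above could be sketched roughly like this. This is just a guess at the shape of the logic, with invented variant names, resolutions, and bitrates; a real protocol would carry this in its stream manifest.

```python
# Hypothetical sketch: how a standard player might pick a stream variant.
# The variant list and bandwidth numbers are invented for illustration.

def pick_variant(variants, device_max_height, measured_bps):
    """Pick the highest-resolution variant the device can display
    and the measured network throughput can sustain (with headroom)."""
    usable = [v for v in variants
              if v["height"] <= device_max_height
              and v["bitrate_bps"] * 1.2 <= measured_bps]  # 20% headroom
    if not usable:
        # Fall back to the cheapest stream rather than failing outright.
        return min(variants, key=lambda v: v["bitrate_bps"])
    return max(usable, key=lambda v: v["height"])

variants = [
    {"name": "480p",  "height": 480,  "bitrate_bps": 1_500_000},
    {"name": "1080p", "height": 1080, "bitrate_bps": 6_000_000},
    {"name": "2160p", "height": 2160, "bitrate_bps": 16_000_000},
]

# A 4K TV on a fast link gets UHD; the same TV on a slow link degrades gracefully.
print(pick_variant(variants, 2160, 25_000_000)["name"])  # 2160p
print(pick_variant(variants, 2160, 4_000_000)["name"])   # 480p
```

Re-running the selection as measured throughput changes is what makes “it just works” possible.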

There also needs to be some kind of RSS ordered content list, so people can consume playlists, podcasts, binge-watchable-seasons of netflix, etc. Imagine being able to subscribe to “Mr Robot” and being presented with any content published on that channel, as soon as it’s published and you’re available to view it.
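The ordered content list could plausibly reuse the existing RSS/podcast model. Here is a tiny sketch of that idea; the show, episode titles, and URLs are made up for illustration.

```python
# Sketch of the RSS-style ordered content list idea: a tiny feed with
# episodes, parsed into a playlist in published order.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Mr Robot</title>
  <item><title>eps1.0_hellofriend.mov</title>
        <enclosure url="https://example.com/s1e1" type="video/mp4"/></item>
  <item><title>eps1.1_ones-and-zer0es.mpeg</title>
        <enclosure url="https://example.com/s1e2" type="video/mp4"/></item>
</channel></rss>"""

def playlist(feed_xml):
    """Return (show_title, [(episode_title, stream_url), ...])."""
    channel = ET.fromstring(feed_xml).find("channel")
    episodes = [(item.findtext("title"), item.find("enclosure").get("url"))
                for item in channel.findall("item")]
    return channel.findtext("title"), episodes

show, eps = playlist(FEED)
print(show, len(eps))  # Mr Robot 2
```

A player that polls such a feed gets exactly the “new episode appears when published” behavior podcasts already have.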

Now, if all smart TVs run the same standards-based OS, and they use Debian Linux style distributed internet repositories for OS and player updates, TV manufacturers won’t need to worry as much about future security holes that nobody patches; they can just focus on their own video and audio device drivers. And I’m sure as soon as some smart vendor sells the same driver to every manufacturer in exchange for a support contract, we’ll all be better off.

If anyone on the Internet desires, they should be able to copy and fork the new standards based OS+player, and anyone who wants should be able to use alternative distros or OSs (Ew, I think I smell a future WinblowsTV). As long as they all support the same standards, and those standards are allowed to evolve naturally, everything should be fine.

I envision the new standard communication protocol supporting something like ssh tunnels containing multiple concurrent streams such as video+audio(eg. mpeg4), video-only, audio-only, text-only, and every permutation of the three. Picture in picture could be two concurrent video streams that let players choose which to view, or watch both side by side. Maybe some future version of ESPN will sell you access to a stream of an F1 race with 10 concurrent HD channels, so you can watch any in-car cameras you like, from each of the top 5 drivers, various corners, and the main produced broadcast view.

I suppose that means we’ll add a new DHCP configuration parameter, so home routers, as well as cable and DSL providers, can automatically configure all your devices with your content subscriptions.

I even think we need to standardize the authorization/authentication/subscription protocols, so I can buy HBO once, pay for it legally, and from then on, consume it on any device I want, anywhere I want, any time I want. I shouldn’t need to pay twice to watch on my TV and my computer, or have to waste time confirming my account every time I switch between my personal cellphone, the TV in my living room, and my laptop, regardless of location (home, a hotel, or your favorite Starbucks).

If you want to limit me to one or two streams at a time, fine, but stop fucking me over on the medium. My DVD player, my so-called Smart TV, my TiVo, my laptop, my ChromeBook, my iPhone: let me use any of them to watch any of the content I already paid for, including the stuff I paid for 10 years ago. We don’t need 75+ authentication systems, with every company managing their own databases. Create a federated system using a trusted standard. Define the standard for features purchased and expiration dates, and let content providers rely on it to control users. If we could get an SSL client certificate generated by, say, HBO when we subscribe to their service, and store that certificate after authenticating one time, that would be great.
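The core of that federated-entitlement idea is a claim signed once by the provider and verifiable offline by any player. A real deployment would use X.509 client certificates or JWT-style signed tokens; this HMAC version is only a sketch of the shape, with an invented key and subscriber name.

```python
# Hedged sketch of a signed entitlement: the provider (say, HBO) signs a
# claim once; any standard player can verify it with the provider's key.
import hmac, hashlib, json

PROVIDER_KEY = b"hbo-demo-key"  # invented for illustration

def issue(subscriber, features, expires):
    """Provider side: sign a claim listing purchased features and expiry."""
    claim = json.dumps({"sub": subscriber, "features": features,
                        "exp": expires}, sort_keys=True)
    sig = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def verify(claim, sig, now):
    """Player side: reject tampered claims and expired subscriptions."""
    expected = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(claim)["exp"] > now

claim, sig = issue("whistl", ["hbo:all"], expires=1735689600)
print(verify(claim, sig, now=1700000000))        # True
print(verify(claim + " ", sig, now=1700000000))  # False (tampered)
```

Once such a claim (or certificate) is stored on a device, no per-device re-authentication is needed until it expires.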

There is also a concurrent need for a standard for publishing content schedules, aka TV guides.

One problem is, you need to serve two types of content: upcoming scheduled (live) content, and the back catalog of produced content and past live shows. The former might be regularly refreshed by every device, maybe once every 3 days, to keep the local box’s guide up to date. The latter needs to serve every box out there on demand, answering as people search for an actor’s name or a series name, or just browse what’s new and hot, like the Netflix interface.

Then everybody’s device guide can download content directly from the content publishers. Broadcast TV signals have digital data channels; they could include a link to the guide, so devices self-configure based on the channels they receive.

Make the guide data stream a basic XML file, with version 1.0 including plenty of things everyone will want, like Channel, Start Time, End Time, Reasonable Flexible End Time (for live sports), Program Name, Season/Episode Number, Show Title, Cast, Episode Synopsis, Streaming Content URL, Multicast IP Address, License Type, License Owner, Year Produced, Length, Format, Encryption In Use, Authentication Website, things like that. The standard should include a reference cloud-based server that content providers can use to provide the service to everyone on the Internet who wants to consume media.
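One version-1.0 guide entry might look something like this. The element names, channel, and URLs are illustrative guesses, not a defined schema; the point is just that a flat XML record covers the fields listed above.

```python
# Building a hypothetical guide entry from a subset of the fields above.
import xml.etree.ElementTree as ET

def guide_entry(**fields):
    """Serialize one program record as a flat XML <program> element."""
    prog = ET.Element("program")
    for name, value in fields.items():
        ET.SubElement(prog, name).text = str(value)
    return ET.tostring(prog, encoding="unicode")

entry = guide_entry(
    channel="ESPN-HD",                       # invented channel name
    start_time="2018-09-16T13:00:00Z",
    end_time="2018-09-16T16:00:00Z",
    program_name="F1 Singapore Grand Prix",
    stream_url="https://guide.example.com/streams/f1-singapore",
    multicast_ip="239.1.2.3",                # administratively scoped range
    license_type="subscription",
)
print(entry.startswith("<program>"))  # True
```

Any device that can parse XML could then build its channel guide from a stream of such records.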

If we can standardize the format, device manufacturers will be able to make devices that play anyone’s content on any device. No one will be locked into a brand name to consume media content. You want to use a tablet app, a phone app, a laptop app, a desktop app, a tv set top box, go right ahead. They’re all the exact same thing to the content providers. Just data queries and streams.

I wonder whatever happened to Multicast streams? I figured someone would have tried to set up a wide area network Multicast stream for live events by now. With Multicast, instead of each user maintaining an individual connection all the way across the Internet to the content provider’s server, you tell your ISP that you subscribe to a particular Multicast IP address, which is pretty much the equivalent of a TV channel. Each channel has its own Multicast IP address. To tune in, you simply “subscribe”. If your ISP doesn’t already have anyone else tuned in, they have to subscribe to their upstream ISP on the same channel. And again, each provider between the source and the destination subscribes to that same channel, and a single stream is established across the internet. Now, here is the cool part: when a second subscriber “tunes in” to the same channel on the same ISP, it’s one connection and they’re done. The ISP then has two downstream connections, and still just the same one upstream connection. Even if an ISP has 10,000 viewers on the same Multicast IP address, there is only going to be a single upstream connection needed between them and the source.
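The savings described above can be put in back-of-envelope numbers. The ISP subscriber counts here are invented; the model just contrasts one-stream-per-viewer unicast with one-stream-per-ISP multicast fan-in.

```python
# Toy model of multicast fan-in: each ISP needs exactly one upstream
# stream per channel, no matter how many downstream subscribers it has.

def upstream_streams(subscribers_per_isp):
    """Compare upstream streams needed at the source side:
    unicast = one per subscriber; multicast = one per ISP with viewers."""
    unicast = sum(subscribers_per_isp.values())
    multicast = sum(1 for n in subscribers_per_isp.values() if n > 0)
    return unicast, multicast

# Hypothetical viewer counts for one live channel:
isps = {"isp-a": 10_000, "isp-b": 2_500, "isp-c": 0}
print(upstream_streams(isps))  # (12500, 2)
```

12,500 unicast streams collapse to 2 multicast ones, and the same collapse repeats at every hop up the ISP tree.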

Imagine if CBS, NBC, ABC, and the others all agreed to use a Multicast system to distribute their live content streams. They could deliver video to an entire city, even the entire world, and they would only require enough ISP bandwidth at the source for one stream to each of their ISPs. The ISPs and BGP would control the actual flow of each stream, optimizing it for redundancy and latency. I wonder if that would end up requiring a new standard for Multicast Channel Management, to manage the upstream/downstream connections. Hmm. That might be pretty cool.

I was thinking of another box to take advantage of this new video standard I’ve proposed: a local tuner, connected to a window-mounted or external TV or radio antenna, that scans the assigned radio frequencies to see what channels exist, and uses the digital side channel feature to find the content guide data webserver, so each box can auto-configure. The box offers a Multicast streaming video service to the standard players I described above, on a standard local-site Multicast channel (one that Internet routers won’t forward). So anyone in your house can tune into a live channel, and your LAN carries a single video stream for each live channel being watched in the house.

The advantage is, instead of a TV set with a VHF/UHF antenna everywhere you want to watch video (and screw you on the laptop and tablet), you get at least one TV tuner with an antenna, connected to your home LAN, plus high-res dumb video monitors connected to smart boxes on the same home LAN, which can use mDNS to locate all local video sources and the channels available from each. Maybe tuners should publish signal quality numbers, so players can choose the tuner with the best signal. Any way you look at it, it’s a single Multicast stream throughout the home LAN, so the LAN isn’t saturated. Then you have “dumb” smart TVs that only need to understand the local mDNS/Multicast way of finding local tuners, and instead of apps, you select from a list of common video source URLs (or enter a custom one), plus a username, password, 2FA passcode, whatever you need to connect to the video sources you pay for. You could even purchase a box to tune into Netflix, Amazon Prime Video, all the streaming services you use, and let all the dumb smart TVs just use the local mDNS/Multicast method of tuning into content.

Okay, maybe I’m describing a method for boxes like the Roku or Apple TV to tune into various streaming content sources and distribute them to local Multicast channels, again so multiple TVs in the house can tune into the same channels without wasting LAN bandwidth. Apple/Amazon/Google/Sony could also make a little box that provides content you purchase in their ecosystem to all the players on your LAN. Your TV doesn’t need to connect to each box; you only need the LAN, mDNS, and a Multicast channel for each live stream.

Tin foil hat time. If every local video stream is distributed via Multicast, it will be possible for parents to monitor what their kids are watching quite easily. To make everything transparent, players can just transmit a multicast message so anyone local who is listening will learn what you tune into. If you don’t care, nothing is lost. If you do care, you set up a server to log it all. Businesses can set up servers using anti-porn and other content filters.

Tin foil hat 2: The standard video player OS should never report viewing habits, which is why I would never buy a tuner box from Google. Their entire business model is based upon them snooping on everything you do.

Learned A New Linux Command Today

Try it:  sudo findmnt

It’s a more user-friendly way to display filesystem mount points on the current running system.

The filesystem point of view of objects is one of the greatest inventions of the UNIX architecture. Every physical device is presented as a filesystem object, and can be passed commands via its device name. Many newer virtual devices (e.g. cgroups, securityfs, debugfs) make driver statistics and configuration options available via virtual files inside virtual directories presented by the associated driver. You can tweak the configs by writing to the virtual filesystem. Neat.
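That everything-is-a-file model means kernel state can be read like ordinary files. A small sketch (Linux-only paths; on other systems these files simply won’t exist, so the helper returns None):

```python
# Reading kernel state through /proc virtual files, like any other file.
import os

def read_virtual(path):
    """Read a /proc or /sys virtual file, or return None if absent."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip()

print(read_virtual("/proc/sys/kernel/ostype"))   # e.g. "Linux"
print(read_virtual("/proc/sys/kernel/hostname")) # this machine's hostname
```

Writing to the writable ones (as root) is how you tweak live kernel settings; `sysctl` is essentially a front-end for exactly this.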

Making everything visible in the filesystem, then virtualizing the filesystem via adjustable cgroup namespaces has really changed the security game, but I suspect the best features are yet to come. So far, mostly only LXC, LXD and Docker are using cgroups to isolate container processes. But cgroups make so much MORE possible.

I have yet to play with LXC combined with the “host” network namespace. In this mode, containers can access the regular system IP interfaces, but everything else is isolated: users, groups, processes, vcpus, memory, disk. That would let me run each daemon in an isolated environment, ignorant of the others, while still communicating with the other processes via the shared localhost IP interface.

I’d have to figure out some mount-fu to get the same shared directory mounted on both the dovecot IMAP server and the postfix master server, so postfix SASL authentication for SMTP users would continue to work. Or figure out a different SASL solution that doesn’t require dovecot access.

Or just install dovecot in the same smtp container as amavis, spamassassin, and clamd. I mean, it’s still more isolated than the current implementation. Right now, they’re all on one host in a single namespace anyway, along with the web, fpm, jabber, openvpn, and a few other services. At least isolating all the email daemons in one namespace would keep them separate from all of the other unrelated services.

root@whistl:/var/log# findmnt
TARGET                                SOURCE      FSTYPE      OPTIONS
/                                     /dev/xvda1  ext4        rw,relatime,discard,data=ordered
├─/sys                                sysfs       sysfs       rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security              securityfs  securityfs  rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup                    tmpfs       tmpfs       ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/systemd          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
│ │ ├─/sys/fs/cgroup/memory           cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,memory
│ │ ├─/sys/fs/cgroup/pids             cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,pids
│ │ ├─/sys/fs/cgroup/cpu,cpuacct      cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/freezer          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/blkio            cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/hugetlb          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/devices          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,devices
│ │ ├─/sys/fs/cgroup/cpuset           cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,cpuset
│ │ └─/sys/fs/cgroup/perf_event       cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,perf_event
│ ├─/sys/fs/pstore                    pstore      pstore      rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/debug                 debugfs     debugfs     rw,relatime
│ └─/sys/fs/fuse/connections          fusectl     fusectl     rw,relatime
├─/proc                               proc        proc        rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc          systemd-1   autofs      rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct
├─/dev                                udev        devtmpfs    rw,nosuid,relatime,size=1015348k,nr_inodes=253837,mode=755
│ ├─/dev/pts                          devpts      devpts      rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm                          tmpfs       tmpfs       rw,nosuid,nodev
│ ├─/dev/mqueue                       mqueue      mqueue      rw,relatime
│ └─/dev/hugepages                    hugetlbfs   hugetlbfs   rw,relatime
├─/run                                tmpfs       tmpfs       rw,nosuid,noexec,relatime,size=204672k,mode=755
│ ├─/run/lock                         tmpfs       tmpfs       rw,nosuid,nodev,noexec,relatime,size=5120k
│ └─/run/user/1001                    tmpfs       tmpfs       rw,nosuid,nodev,relatime,size=204672k,mode=700,uid=1001,gid=1001
├─/snap/core/4917                     /dev/loop0  squashfs    ro,nodev,relatime
├─/snap/core/5145                     /dev/loop1  squashfs    ro,nodev,relatime
├─/snap/core/5328                     /dev/loop2  squashfs    ro,nodev,relatime
├─/snap/amazon-ssm-agent/295          /dev/loop3  squashfs    ro,nodev,relatime
├─/snap/amazon-ssm-agent/495          /dev/loop4  squashfs    ro,nodev,relatime
└─/var/lib/lxcfs                      lxcfs       fuse.lxcfs  rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other

Podcast Recommendations

Here is a list of the podcasts I regularly enjoy:

  • DTNS –
  • TMS, Current Geek –
  • Security Now –
  • TekThing –
  • TWiET –
  • Coverville –


Build Your First Serverless Web App | Amazon Web Services

This is a serious game changer for Linux System Administrators the world over. We don’t need to wait for AI to replace us, when AWS is doing that right now! Serverless implementations are simply “virtual applet servers”, as opposed to renting a virtual machine and running your own lightweight Docker containers. They completely abstract system/pod/cloud management from app management, and scale automatically according to very flexible criteria you define. I’m not sure what app language(s) or format(s) the serverlets support, but you can set it to automatically scale up or down based on the number of threads on each serverlet, web response time, memory usage, average cpu usage, things like that. Cool!
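To make the “no server to manage” point concrete, here is roughly what a serverless function looks like on AWS Lambda in Python: you supply a single handler, the platform handles machines, scaling, and routing. The event shape below assumes an API Gateway proxy integration; the message content is invented.

```python
# Minimal sketch of a serverless handler (AWS Lambda, Python runtime).
import json

def handler(event, context):
    """Respond to an HTTP request routed in by the platform."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoked locally the same way the platform would invoke it:
resp = handler({"queryStringParameters": {"name": "whistl"}}, None)
print(resp["statusCode"], resp["body"])  # 200 {"message": "hello, whistl"}
```

There is no daemon, no listener socket, and no OS to patch in your code; the platform spins up as many copies of `handler` as traffic demands.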