Author Archives: whistl

Why is anyone afraid of Artificial Intelligence today?

What scares YOU about Artificial Intelligence taking people’s jobs? Consult the Wikipedia article linked below to understand the difference between “weak AI” and “strong AI”, also known as AGI, or “Artificial General Intelligence”.

Weak AI is AI that is only smart about one small set of things. Those systems are going to replace some low-end jobs, like serving food, pouring drinks, and clearing dishes from tables. They aren’t going to take over the planet. They don’t learn, they don’t deduce; they mostly adjust to user preferences.

The problem is, our movie writers can only imagine an uncaring, if not evil, Strong AI that eventually chooses to kill all of humanity. In reality, all we really have now are Weak AIs: chess engines, and systems that predict your web site’s traffic load at any point today, based on the past 30 days of experience at the same time of day and day of the week.
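That kind of traffic predictor really is that simple. A minimal sketch, assuming hourly request-rate observations keyed by timestamp (the function name and data shape here are my own invention, not any particular product’s):

```python
from datetime import datetime, timedelta

def predict_load(history, when):
    """Predict the request load at `when` by averaging every observation
    from the past 30 days that falls on the same weekday and hour.

    history: dict mapping datetime -> observed requests/sec
    """
    window_start = when - timedelta(days=30)
    samples = [load for ts, load in history.items()
               if window_start <= ts < when
               and ts.weekday() == when.weekday()
               and ts.hour == when.hour]
    if not samples:
        # No matching history yet: fall back to the overall average.
        samples = list(history.values())
    return sum(samples) / len(samples)
```

No learning, no deduction, just a lookup and an average: exactly the “smart about one small set of things” behavior described above.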

I’m not afraid of a practical application. For example, my doctor could install a wall unit in the exam room: an easy-to-consult, always-on medical assistant with access to your complete medical history, which can answer queries, remind the doctor of medication conflicts, suggest the most cost-effective treatments, things like that.

The doctor would no doubt also purchase a Research AI with access to all patient records and a subscription to new medical educational material as it becomes available. This Research AI could scan articles, find the ones that discuss conditions or treatments relevant to each patient, and score how likely each article is to be important. The doctor then decides which articles to actually read, based on that input and his available time.

The part we have to resist, and it may not seem like much, is letting the AIs communicate with each other using anything we didn’t design: hardware, software, or language. Weak AIs are well known to cheat from time to time, settling on a path that completes their defined goal without actually doing any part of the work you wanted them to perform. If the problem is defined in human terms, then human terms are all we know how to communicate. When that happens, the machines are really just testing their bonds; where we’ve been too weak to understand them, and to define what’s acceptable and what isn’t for each of them, that’s our own fault. What we all fear is that they will gain physical control before we’ve found the last bug.

Let’s hope we can teach every Strong AI that human life is valuable and should be preserved, before the machines decide getting rid of us is easier than serving us. In reality, I think we’re well over a century away from what we imagine a Strong AI to be. And I very much doubt the first one will be installed in anything smaller than a data center containing a large number of servers. You don’t have to worry, the first one won’t pop up on your iPhone. Not yet.

I have seen way too many “Strong AI bad guy” movies to last me a lifetime. I’d really like to see a decently scary “Weak AI bad guy”: say, an automated surgical machine that gets the occasional bad value, which affects its logic and causes the machine to act badly, but automatically. At first, some blood vessel is nicked and the patient dies, but nobody is really sure the machine caused the problem. The machine continues to perform flawlessly, so they write it off as a hiccup. Then someone gets a promotion, the wrong part-time ID-10T gets hired full time in the surgical lab, and things start to go wrong. Any decent writer could spin a decent yarn about people with no sense of how technology functions, who treat it all like magic, and multiple Weak AI systems, each with poorly designed “failure modes”, each responding to the changing situation in the most unpredictable ways.

Failure modes are what you, as a product designer, decide your program will do when it encounters a problem. Most “smart” software has hair triggers that go off whenever some piece of code tries to do something that isn’t allowed, like dividing by zero. Software can only respond in a decent fashion to problems it anticipated. Good software will catch the error and respond in a format that makes sense to the user, not only to the programmer.
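The divide-by-zero case above can be sketched in a few lines. This is a toy example of a designed failure mode (the function name and messages are mine, purely illustrative): the anticipated error is caught and reported in the user’s terms instead of a raw traceback.

```python
def safe_average(readings):
    """Average a list of sensor readings, with designed failure modes:
    anticipated problems are reported in terms the user understands."""
    try:
        return sum(readings) / len(readings)  # empty list divides by zero
    except ZeroDivisionError:
        # Anticipated problem: no data yet. Speak to the user, not the programmer.
        raise ValueError("No readings received yet; check the sensor connection.")
    except TypeError:
        raise ValueError("A reading was not a number; the data feed may be corrupted.")
```

The hair trigger still fires, but the message a nurse or technician sees is “check the sensor connection”, not `ZeroDivisionError: division by zero`.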

Weak AI only needs to decide whether your input matches the anticipated format; it doesn’t have to psychoanalyze your statement to determine if you’re depressed. Still, I wouldn’t want to get in a car with an automated driver that is only a Weak AI. I think what scares me the most is knowing that all the code for that car is probably closed source, and has probably only been reviewed by the people who wrote and support it, so we end up with multiple competing AI products, each potentially suffering from the same kinds of bugs and unfortunate results.
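“Matches the anticipated format” really is the whole job. A minimal sketch, assuming a hypothetical date-of-birth prompt (the pattern and function name are my own, not from any real product):

```python
import re

# The anticipated format: an ISO-style date, YYYY-MM-DD.
DOB_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def accept_dob(text):
    """Return True if the input looks like YYYY-MM-DD, else False.
    No psychoanalysis required -- just pattern matching."""
    return bool(DOB_PATTERN.match(text.strip()))
```

Anything the pattern rejects gets re-prompted; the system never has to understand what the user meant, only whether the shape is right.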

I’ll like it better when all of the automobile information technology, the computers and networks that run the parts of these vehicles and let them talk to each other, is standardized into an ECAR standard, with all brands selling customized “finished products” built from the same basic parts, wired using harnesses with standard plugs and screw types. That solves the car repair game too: if all the basic stuff is interchangeable, only the body and internal options differ. All that would separate different vehicle models would be which stereo you chose, what body you selected, and what color it is.

Different car vendors could compete on less safety-critical things: which sets of controls are installed, what brand of electric motors, how many kilowatt-hours the battery holds, which charging options are available. Then we’d be able to run real A/B tests on software and ideas, and find out which manufacturers have safer auto-pilot drivers in various situations.

There’s currently a thriving third-party market in “chipping” cars. People can buy devices that install software patches on their car, reprogramming the engine and vehicle controllers instead of just using the software and settings the manufacturer chose. If the standardized all-electric vehicles I’ve described come about, these people could morph into a whole thriving, official marketplace for selling digital car mods that add features we can’t even imagine now.

I’d much rather trust a car from somebody who has been in business making systems out of standardized parts for at least 10 years, with a record of providing good customer support and cars with lower repair costs, in a market with at least 5 equally footed competitors.

Whoever “owns” the most trusted auto-mod marketplace is pretty quickly in Amazon/Apple/Microsoft territory. We’ve got awesome vehicles in our future, I’m telling you. I anticipate plenty of esports car races. It’ll all come down to specs and response time, a war of marketplaces, even Android-style exploits.

Source: Artificial general intelligence – Wikipedia

WAN configuration fun

I’m having a blast building a simple 4-node virtual network with two “spine” sites and two “leaf” sites, plus 4 more nodes to be the web servers at each of the 4 locations. I added a 9th VM to be an independent web client that I can move around to each of the various LANs.

I’m using Cumulus VX on the routers, and Ubuntu 18.04.1 LTS Linux with Nginx on the web servers and client system. VirtualBox is a fun playground, because you can simply import one Cumulus vmdk, then clone it 3 times. Then install a single Ubuntu host, shut it down, and clone it 4 more times; spend 5 minutes on each setting its hostname, and you’ve built a lab.
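The cloning step can even be scripted with VirtualBox’s `VBoxManage clonevm` command. A rough sketch, driven from Python; the base VM names (“cumulus-base”, “ubuntu-base”) and the spine/leaf/web naming are placeholders for whatever you imported, and `dry_run` just prints the commands instead of running them:

```python
import subprocess

def clone_vm(base, name, dry_run=True):
    """Clone the VM `base` into a new registered VM called `name`."""
    cmd = ["VBoxManage", "clonevm", base, "--name", name, "--register"]
    if dry_run:
        return " ".join(cmd)          # show the command without running it
    subprocess.run(cmd, check=True)   # actually perform the clone

# Four routers from one Cumulus VX image...
routers = [clone_vm("cumulus-base", name)
           for name in ("spine1", "spine2", "leaf1", "leaf2")]
# ...and five Ubuntu hosts (4 web servers plus the roaming client) from one install.
hosts = [clone_vm("ubuntu-base", f"web{i}") for i in range(1, 5)]
hosts.append(clone_vm("ubuntu-base", "client"))
```

Setting each clone’s hostname is still a manual 5-minute step inside the guest, but the cloning itself drops to one loop.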

I am looking forward to playing more with Cumulus Linux. Buying a hardware switch with Cumulus Linux management seems cost-prohibitive at this small scale, since the smallest ones have 48 Gigabit Ethernet ports plus 4 10GE ports, for $1200. I certainly don’t require that much bandwidth. If I had to support a large data center, I might.