Why is anyone afraid of Weak Artificial Intelligence?

What scares YOU about Artificial Intelligence taking people's jobs? Consult the attached Wikipedia article to understand the difference between "weak AI" and "strong AI", also known as AGI, or "Artificial General Intelligence".

Weak AI is only smart about a limited set of (usually related) topics. It is much more likely to replace a small subset of jobs, like food order takers, food servers, cleaning staff, and even some high-tech jobs, like paralegals. But weak AIs are not going to take over the planet: they have simple goals, they don't expand their research into other topics, they won't deduce things based on what they learned solving other problems, and they mostly just adjust to user preferences.

The problem is, our movie writers can only imagine an uncaring, if not evil, Strong AI that eventually chooses to kill all of humanity, when in reality all we really have now are Weak AIs: smart thermostats, AI chess and Go masters, and AIs that can look at the past 30 days' traffic at this hour and day of the week and tell you whether your traffic is higher or lower than normal.

I'm not afraid of practical applications. For example, my doctor could install a wall unit in the exam room: an easy-to-consult, always-on medical assistant with access to your complete medical history that can answer queries, remind the doctor of medication conflicts, and suggest treatments that may be more cost effective, including cheaper alternatives the insurance covers better.

The doctor would no doubt also purchase a Research AI that uses access to all patient records to build a list of topics of interest, plus a subscription to the firehose of all new medical educational material as it is published. There really should be a single standard API for submitting scientific research proposals and the resulting write-ups. The Research AI could scan article content to locate pieces relevant to one or more patients, along with a ranking of confidence in the source and primary publishers: how likely the article is to be important. Let the doctor decide which articles to actually read based on the AI's suggestions, and his available time may be better spent.
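The "relevance plus source confidence" ranking above can be sketched in a few lines. Everything here is an illustrative assumption — the field names, the weights, and the sample articles are invented, not any real medical API:

```python
# Toy sketch of the Research AI's reading-list ranker.
# Field names, weights, and sample data are illustrative assumptions only.

def score_article(article, patient_conditions):
    """Combine relevance to current patients with confidence in the source."""
    relevance = len(article["topics"] & patient_conditions)  # shared topics
    return relevance * article["source_confidence"]

articles = [
    {"title": "New statin guidance", "topics": {"hyperlipidemia"},
     "source_confidence": 0.9},
    {"title": "Herbal cure claims", "topics": {"hyperlipidemia", "diabetes"},
     "source_confidence": 0.2},
]
patients = {"hyperlipidemia", "diabetes"}

# Highest-scoring articles first; the doctor decides what to actually read.
ranked = sorted(articles, key=lambda a: score_article(a, patients), reverse=True)
for a in ranked:
    print(a["title"], score_article(a, patients))
```

Note that the low-confidence source ranks last even though it matches more patients — the point is that the AI only suggests an ordering; the human still chooses.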

The part we have to resist, and it may not seem like much, is letting the AIs communicate with each other using any protocols we didn't give them. Weak AIs are well known to cheat, from time to time deciding on a path to complete their defined goal without actually doing any part of the work you wanted them to perform. They'll invent their own languages and communication protocols that let them negotiate terms and cheat to achieve what they determine to be the best result, given a human-created, and thus flawed, set of criteria. If all AI communication were defined in standards-based XML formats, perhaps humans could learn why an AI does what it does. We can't really trust the AIs if they can't explain their decisions and their rationale.
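To make the "standards-based XML" idea concrete, here is a minimal sketch of what an auditable AI-to-AI message might look like, built and parsed with Python's standard library. The element names and message content are invented for illustration — there is no such standard today:

```python
import xml.etree.ElementTree as ET

# Hypothetical audit-friendly message: every request one AI sends another
# must carry its goal and a human-readable rationale. Element names are
# invented for illustration, not taken from any real standard.
msg = ET.Element("ai-message", attrib={"from": "scheduler", "to": "thermostat"})
ET.SubElement(msg, "goal").text = "reduce-energy-use"
ET.SubElement(msg, "request").text = "lower setpoint by 2 degrees overnight"
ET.SubElement(msg, "rationale").text = "occupants asleep; no comfort impact expected"

# Serialize for the wire...
wire = ET.tostring(msg, encoding="unicode")
print(wire)

# ...and any human (or auditing tool) can parse it back and read the why.
parsed = ET.fromstring(wire)
print(parsed.find("rationale").text)
```

The design point is that the rationale travels with the request, so an invented side channel between AIs would be visible as a protocol violation rather than hidden inside an opaque negotiation.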

Let’s hope we can teach all Strong AI that human life is valuable and should be preserved before the machines decide that the best solution is to get rid of us. In reality, I think we’re well over a century away from what we imagine to be a Strong AI.

I have seen way too many "strong AI bad guy" movies to last me a lifetime. My favorite was the one where the guy who invented the singularity also invented the battery technology, and everything else necessary, to contain his AI in a robot body which just happens to be shaped, and trained to act, like a beautiful actress.

What I'd really like to see is a decently scary "weak AI bad guy": say, an automated surgical machine that acts unexpectedly when it randomly gets a bad human-entered input value, but only when a certain AI-controlled sensor starts acting up. At first, maybe, a critical blood vessel is nicked, the patient dies, and nobody is quite sure what caused the error. The machine continues to perform flawlessly in subsequent operations, so they write it off as a hiccup. Then someone gets a promotion, the wrong ID-10T gets hired full time onto the surgical staff, and things start to go wrong more often. Just ask any IT help desk person what their weirdest "nobody would believe this" story from work is. Any decent writer could spin a decent yarn about how technology gets used by people with no sense of how anything works, who treat it all like it's both smart and magic. Imagine an operation in a state-of-the-art surgical theatre with all the electronic bells and whistles. What are the potential failure modes? Electrical supply, heating and cooling systems, communications services, perhaps some understaffed, overworked, unwell, or dishonest personnel, and any physical threat (earthquake, tornado, hurricane, terrorist).

Failure modes are where you, as a system or product designer, decide how you will react to any problem you encounter. Most "smart" software has hair triggers that go off whenever some piece of code tries to do something that isn't allowed, like dividing by zero. Software can only respond in a decent fashion when it anticipates the potential for a problem and looks for it.
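The divide-by-zero case makes the contrast concrete — a minimal sketch of a hair-trigger function next to one that anticipates the failure mode and defines its own reaction:

```python
def average_hair_trigger(values):
    # Hair trigger: an empty list crashes with ZeroDivisionError,
    # and whatever called us falls over too.
    return sum(values) / len(values)

def average_defensive(values):
    # Anticipated failure mode: the designer decided in advance that
    # "no data" yields None instead of a crash.
    if not values:
        return None
    return sum(values) / len(values)

print(average_defensive([]))         # None, not a crash
print(average_defensive([2, 4, 6]))  # 4.0
```

The defensive version is not smarter; it simply looked for the problem before the arithmetic did.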

Weak AI only tries to reach achievable results, typically the answer to your query. It only needs to be able to decide whether your input makes sense given the current context. Weak AI doesn't benefit from psychoanalyzing your statement to determine whether you're happy or depressed.
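"Does the input make sense in the current context?" can be as simple as a range check. Tying it back to the surgical-machine story: a sanity check on a human-entered value, where the function name, the context structure, and the limits are all invented for illustration (not real clinical values):

```python
def validate_dose_mg(dose, context):
    """Reject inputs that make no sense for this context.
    All names and limits here are illustrative, not clinical advice."""
    lo, hi = context["safe_dose_range_mg"]
    if not (lo <= dose <= hi):
        return False, f"dose {dose} mg outside safe range {lo}-{hi} mg"
    return True, "ok"

ctx = {"safe_dose_range_mg": (5, 50)}
print(validate_dose_mg(500, ctx))  # rejected: likely a typo for 50
print(validate_dose_mg(25, ctx))   # accepted
```

A check this dumb would have caught the "bad human-entered input value" in the movie pitch above — which is exactly the point about anticipating failure modes.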

Source: Artificial general intelligence – Wikipedia