This is my jam…
I just started watching the new Netflix Original series, The Umbrella Academy. It’s supposed to be a new clone of DC’s 1960s Doom Patrol, in the vein of Marvel’s X-Men. I’m only up to episode 2, and the gratuitous violence is bothering me. Torture is clearly not cool, not funny. It’s just cruel. Not 100% sure I’ll continue with episode 3 at this point. I do like some of the various plot lines.
A friend convinced me to watch it last night, and I admit, I laughed more than a few times. It’s dark, let me tell you. So dark that, without explaining the situation leading up to it, you will laugh as multiple children under 16 are gunned down in a mass murder. Hopefully you will cringe while you laugh, but it has comic purpose. If I tell you it’s in the middle of a scene that is a gigantic comment on America’s obsession with gun ownership, with everybody on the street packing heat and firing on everyone else at random, would that make any difference?
The opening scene is in the cockpit of a jet airliner, with two Arab terrorists discussing how many virgins they’ll get when they kill themselves, while people pound on the cockpit door and swear at them. One guy says 99, the other 100. One calls Osama (did I mention this movie is from 2007, when GWB was president and before OBL was killed?). If you can handle the outrageous violence in the name of comedy, this movie is actually quite funny. I particularly liked Dave Foley’s deal.
Directed by Uwe Boll. With Zack Ward, Dave Foley, Verne Troyer, Chris Coppola. In the ironically named city of Paradise, a recently laid-off loser teams up with his cult-leading uncle to steal a peculiar bounty of riches from their local amusement park; somehow, the recently arrived Taliban have a similar focus, but a far more sinister intent.
Source: Postal (2007) – IMDb
That was NOT what anyone could describe as an exciting game. It wasn’t even a “good game”. It was painful to watch that first half, and then to watch Tom Brady win a sixth Super Bowl ring. Woo-friggen hoo.
The commercials were mostly lame, except for the weird but cool crossover between Bud Light and Game of Thrones. That one surprised me. Most of the rest were either puzzling or just not at all interesting or memorable.
That halftime show, though. Whoa, that was pretty bad. When whatsisface of Maroon 5 stripped off his shirt and tossed it into the crowd, I turned to Kim and said, “It’d be funny if someone tossed it back at his head and yelled ‘Put it back on!’” I’m not one who appreciates rap music, so I pretty much ignored the rest. Like everyone who frequents Reddit, I was appalled that they teased playing the SpongeBob “Sweet Victory” song, and even played about 10 seconds of the video, before cutting back into their own music. Minus 10,000 points for not being hip enough to pick up on the cues.
Interesting article. I love some of the comments.
Chili lovers often have a strong opinion about one specific ingredient: beans. Last week we asked you to debate whether or not beans belonged in chili. Today we’re taking a look at your best arguments.
What scares YOU about Artificial Intelligence taking people’s jobs? Consult the attached Wikipedia article to understand the difference between “weak AI” and “strong AI”, also known as AGI, or “Artificial General Intelligence”.
Weak AI is only smart about a limited set of (usually related) topics. Those are much more likely to replace a small subset of jobs, like food order takers, food servers, cleaning, and even some high-tech jobs, like paralegals. But weak AIs are not going to take over the planet: they have simple goals, they don’t expand their research into other topics, they won’t deduce things based on lessons learned solving other problems; they mostly just adjust to user preferences.
The problem is, our movie writers can only imagine an uncaring, if not evil, Strong AI that eventually chooses to kill all of humanity, when in reality all we really have now are Weak AIs: smart thermostats, AI chess and Go masters, and AIs that can look at the past 30 days’ traffic at this hour and day of the week and let you know if your traffic is higher or lower than normal.
I’m not afraid of practical applications. For example, my doctor could install a wall unit in the exam room to serve as an easy-to-consult, always-on medical assistant, one with access to your complete medical history, that can answer queries, remind the doctor of medication conflicts, suggest treatments that may be more cost effective or cheaper alternatives the insurance covers better, things like that.
The doctor would no doubt also purchase a Research AI that uses access to all patient records to build a list of things to be interested in, along with a subscription to the firehose of all new medical educational material as it becomes published. There really should be a single standard API for submitting scientific research proposals and the resulting write-ups. The Research AI could scan article content to locate pieces relevant to one or more patients, plus a ranking of confidence in the source and primary publisher: how likely the article is to be important. Let the doctor decide which articles to actually read based on the AI’s suggestions, and his available time may be better optimized.
The part we have to resist, and it may not seem like much, is letting the AIs communicate with each other using any protocols we didn’t give them. Weak AIs are well known to cheat, from time to time deciding on a path to complete their defined goal without actually doing any part of the work you wanted them to perform. They’ll invent their own languages and communication protocols that let them negotiate terms and cheat to achieve what they determine to be the best result, given a human-created, and thus flawed, set of criteria. If all AI communication were defined in standards-based XML formats, perhaps humans could learn why an AI does what it does. We can’t really trust the AIs if they can’t explain their decisions and their rationale.
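To make the idea concrete, here is a minimal sketch of what an auditable, standards-based agent message could look like. Everything here is invented for illustration: the `agent-message` schema, the field names, and the `is_auditable` rule requiring a human-readable rationale are all assumptions, not any real protocol.

```python
import xml.etree.ElementTree as ET

def build_message(sender, goal, rationale):
    """Build a hypothetical agent-to-agent message as plain XML,
    so a human (or a logger) can inspect every negotiation step."""
    msg = ET.Element("agent-message")
    ET.SubElement(msg, "sender").text = sender
    ET.SubElement(msg, "goal").text = goal
    # The "explain yourself" field: why the agent wants what it wants.
    ET.SubElement(msg, "rationale").text = rationale
    return ET.tostring(msg, encoding="unicode")

def is_auditable(xml_text):
    """Reject any message that omits a non-empty rationale."""
    root = ET.fromstring(xml_text)
    rationale = root.findtext("rationale")
    return rationale is not None and rationale.strip() != ""

msg = build_message("scheduler-ai", "reserve operating room 3",
                    "patient risk score exceeds clinic threshold")
print(is_auditable(msg))  # True: the message carries its rationale
```

The point of the sketch is the gatekeeping, not the XML itself: if every inter-AI message has to pass a human-legible format check before it is delivered, the agents can’t quietly drift into a private protocol.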
Let’s hope we can teach all Strong AI that human life is valuable and should be preserved before the machines decide that the best solution is to get rid of us. In reality, I think we’re well over a century away from what we imagine to be a Strong AI.
I have seen way too many “strong AI bad guy” movies to last me a lifetime. My favorite was the one where the guy who invented the singularity also invented battery technology, and everything else necessary to contain his AI in a robot body which just happens to be shaped, and trained to act, like a beautiful actress.
What I’d really like to see is a decently scary “weak AI bad guy”, like an automated surgical machine that acts unexpectedly when it randomly gets some bad human-entered input value, but only when a certain AI-controlled sensor starts acting up. At first, maybe, a critical blood vessel is nicked, the patient dies, and they aren’t really sure what caused the error. The machine continues to perform flawlessly on subsequent operations, so they write it off as a hiccup. Then someone gets a promotion, the wrong ID-10T gets hired full time on the surgical staff, and things start to go wrong more often. Just ask any IT help desk person what their weirdest “nobody would believe this” story from work is. Any decent writer could spin a decent yarn about how technology gets used by people with no sense of how anything works, who treat it all like it’s both smart and magic. Imagine an operation in a state-of-the-art surgical theatre, with all the electronic bells and whistles. What are the potential failure modes? Electrical supply, heating and cooling systems, communications services, perhaps some understaffed, overworked, unwell, or dishonest personnel, any physical threat (earthquake, tornado, hurricane, terrorist).
Failure modes are where you, as a system or product designer, decide how the system will react to any problem it encounters. Most “smart” software has hair triggers that go off whenever some piece of code tries to do something that isn’t allowed, like divide by zero. Software can only respond in a decent fashion when it anticipates the potential for a problem and looks for it.
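A toy sketch of that contrast: the first function below has the “hair trigger” and simply blows up on an input nobody anticipated, while the second treats the empty-input case as a designed failure mode with a deliberate response. The function names and the choice to return `None` are illustrative assumptions, not any particular product’s behavior.

```python
def average_hair_trigger(values):
    # No anticipation: raises ZeroDivisionError on an empty list.
    return sum(values) / len(values)

def average_defensive(values):
    # The failure mode is anticipated and handled on purpose:
    # an empty input yields a designed answer, not a crash.
    if not values:
        return None  # the designer's decision, not an accident
    return sum(values) / len(values)

print(average_defensive([]))         # None
print(average_defensive([2, 4, 6]))  # 4.0
```

The difference isn’t cleverness, it’s that someone sat down and decided ahead of time what the system should do when the bad input arrives.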
Weak AI only desires to reach achievable results, typically the answer to your query. It only needs to be able to decide if your input makes sense given the current context. Weak AI doesn’t benefit from psychoanalyzing your statement to determine if you’re happy or depressed.
Tonight, I am finishing up the totally weird Netflix series, Happy. It’s part funny, part scary, ALL really weird. There’s so much violence, but it’s all totally over the top, so very cartoonish, which oddly tempers the impact, for me at least. Kim watched one episode and said “nope!”.
The bad guys are true villains; the good guy, played by Christopher Meloni of Law & Order: Special Victims Unit, is truly indestructible. I have found the whole thing oddly appealing.