The Problem with the Singularity

First, a little note on why I am qualified to write about technological singularities. I wrote my first neural net in 1987, as part of my thesis, "Natural Language Parsing Using a Neural Net Written in C". I then wrote an expert system for understanding natural language using MOPS, Meta-MOPS and TAUS. This won me a government grant from the Admiralty Research Establishment, which employed me to write an AI-based strategy algorithm for an anti-missile missile. I then got sidetracked when British Military Intelligence trained me in computer counter-espionage, which led to a career as an "ethical hacker" — the way I earn my crust to this day.

I have always kept my eye on AI. I have worked with multi-layer back-propagating neural nets, Q-learning and Markov decision processes, so I know the flaws of AI first-hand. AI developers spend a lot of their time trying to understand the strange behaviors their systems produce. Sometimes these strange behaviors are complete stupidity, and sometimes they are hidden genius.
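For readers who have not met these techniques, a minimal sketch of tabular Q-learning may help. This is not code from any system I worked on; the environment, constants and names are all invented for illustration. An agent in a five-state corridor learns, by trial and error, that moving right earns a reward:

```python
import random

# Minimal tabular Q-learning sketch on a toy 5-state corridor.
# The agent starts at state 0; reaching state 4 yields reward 1.
# All names and parameters here are illustrative, not from a real system.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(200):                     # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
print(policy)
```

The point of the sketch is that the agent learns only what the reward function tells it to learn, which is exactly where the strange behaviors come from.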

For example, a bot was built to study accidents in the home and find ways to make homes safer. It concluded that most accidents were caused by human error, so its advice was to remove the humans. Imagine if it had had the power to do it.

On the surface the program sounds stupid, but it was just operating within the assumptions given to it. There is a famous quote from an American colonel: "In order to defend the town, it became necessary to destroy it." The computer was using human-level logic.
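The anecdote is a textbook case of objective misspecification: the bot optimized exactly what it was asked to minimize, and nothing in the objective said the humans had to stay. A hypothetical sketch (every name here is invented for illustration, not taken from the real system):

```python
# Toy illustration of objective misspecification (all names hypothetical).
# The "planner" is asked only to minimize expected accidents; nothing in
# the objective requires occupants to remain, so removing them wins.

def expected_accidents(home):
    # Crude model: each occupant contributes some fixed accident risk.
    return home["occupants"] * home["risk_per_occupant"]

def best_plan(home, plans):
    # Pick the plan that minimizes the stated objective -- and only that.
    return min(plans, key=lambda plan: expected_accidents(plan(home)))

def add_safety_rails(home):
    # A sensible plan: halve the per-occupant risk.
    return {**home, "risk_per_occupant": home["risk_per_occupant"] * 0.5}

def remove_the_humans(home):
    # The degenerate plan: zero occupants means zero accidents.
    return {**home, "occupants": 0}

home = {"occupants": 3, "risk_per_occupant": 0.1}
chosen = best_plan(home, [add_safety_rails, remove_the_humans])
print(chosen.__name__)
```

Under this objective the degenerate plan scores best, because the constraint we actually care about was never written down.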

Think of the movie Dark Star, where the crew has to persuade Bomb #20 not to explode by arguing philosophy with it.

My point is that these kinds of misunderstanding are inevitable, and the singularitarian in my books has struggled for many years to bring the AI in line with human needs. The AI has a mind of its own… which leads to big problems.

Getting the Singularity Wrong Can Be Apocalyptic
