

June 5, 2018 · Culture, Philosophy

The artificial intelligence revolution is the sound of one hand clapping. More precisely, an ambitious group of one-handed clappers, each impressed with its own developers and perspectives, but less so with the work of its peers.

And we expect these guys to self-regulate their progress and their quest toward the singularity — the point when computers outthink humans. Judging by the dismal example set by the leaders of internet technology in controlling their own capacity to manipulate humankind, it would be wise not to trust technocrats with the potential for even greater and more clandestine influence.

Fortunately or unfortunately, for the time being the leaders of A.I. are too disagreeable and directionless to mastermind any sort of consensus.

If you want to know more about artificial intelligence, read the oral history “How We Got Here” by Ashlee Vance in the May 21, 2018, edition of Bloomberg News. The piece features interviews with a handful of the revolution’s “godfathers” and in the process sheds light on A.I.’s inception, its pioneers, and their approaches and motivations.

“I’m interested in getting machines to learn as efficiently as animals and humans. … We can predict the consequences of our actions, which means that we don’t need to actually do something bad to realize it’s bad. … You could say that the ability to predict is really the essence of intelligence, combined with the ability to act on your predictions.” — Yann LeCun, computer scientist, New York University

“I think it’s a big mistake that we’ve called the field ‘artificial intelligence.’ It makes it seem like it’s very different from people and like it’s not real intelligence … but it’s a very human thing we’re trying to do: re-create human intelligence. What is humanity? It’s striving to be better. We shouldn’t want to freeze the way we are now and say that’s the way it should always be.” — Richard Sutton, computer scientist, University of Alberta

“I think the social impact of all this stuff is very much up to the political system we’re in. Intrinsically, making the production of goods more efficient ought to increase the general good. The only way that’s going to be bad is if you have a society that takes all of the benefits of that rise in productivity and gives it to the top 1 percent. … My main take is that it’s really hard to predict the future. … But there are some things we can predict, like that this technology is going to change everything.” — Geoffrey Hinton, computer scientist, University of Toronto


  • Gary says:

    I find it to be very unnerving.

  • ed Witts says:

    Machine learning is pretty interesting. Jeff Hawkins, the founder of Palm, has been working on visual identification. My wife had a plastic surgeon who worked on computers reading breast cancer X-rays. I have seen videos of Obama saying things he didn’t say, I guess using this type of technology. Yes, it is making our decision-making from what we hear and see even more difficult. Getting accurate data is getting harder.
