Note: This is Part 2 of a two-part series on AI. Part 1 is here.

PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)


We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. — Nick Bostrom

Welcome to Part 2 of the “Wait how is this possibly what I’m reading I don’t get why everyone isn’t talking about this” series.

Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it’s all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we’ve seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this.



