The Globe and Mail

Former Jeopardy! champion contestants during the final day of sparring sessions against Watson, IBM TJ Watson Research Center, Yorktown Heights, NY. Jeopardy! panel - NOVA "Smartest Machine on Earth"


AI’s long, expensive road to Jeopardy!

Globe and Mail Update

Rapid developments in artificial intelligence are redefining the limits of what computers can do.

But the birth of this ultra-modern technological movement actually occurred at an academic conference at Dartmouth College in the mid-1950s. Organized by American computer scientist John McCarthy, the summer-long event brought pioneers of the field together to lay the groundwork for what would from then on be known as artificial intelligence.

Computer science experts say AI has the potential to help doctors make faster medical diagnoses, improve national security, cut the prevalence of financial fraud and make numerous other advances.

“Every few years AI turns heads by doing something that previously could only be done by humans,” said Cory Butz, computer science professor at the University of Regina.

The path that has led to these possibilities, however, has been filled with relentless frustrations and seemingly insurmountable obstacles. In fact, the challenges that lie ahead for AI have some members of the field convinced the goal of creating machines that can behave and adapt like humans will never be realized.

Although computers have been around for less than a century, references to intelligent machines date back thousands of years, to ancient scholars who speculated that inanimate objects could possess intelligence.

But creating intelligent machines did not become a realistic prospect for humans until the mid-20th century, when the first computers were developed.

Researchers began discussing the idea of intelligent machines in earnest in the 1940s, and AI was established as a research discipline at the famed Dartmouth conference of 1956.

What AI looked like in the first few decades following that seminal conference is starkly different from the direction it is heading today.

Back then, computer scientists believed the key to developing machines with human-like intelligence centred on reasoning. The basic notion was that computers could accomplish tasks or reach goals by using deduction and reason to move each step of the way. In a chess match, for instance, a computer would analyze the outcome of each possible move in order to choose the one that would help it beat its competitor.

Programs used algorithms, step-by-step sets of instructions for solving a problem, to let computers deduce which decisions to make.
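The exhaustive, deduction-driven search described above is often illustrated with the classic minimax procedure: the computer scores every possible line of play to its conclusion and picks the move with the best guaranteed outcome. The toy game tree below is a hypothetical illustration, not an example from the article.

```python
# A minimal sketch of reason-based game play: minimax search over a
# toy game tree. Inner lists are choice points; integers are final
# outcomes (+1 = computer wins, 0 = draw, -1 = computer loses).

def minimax(node, maximizing):
    """Return the best score guaranteed from `node` by exhaustive deduction."""
    if isinstance(node, int):      # leaf: a final game outcome
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Hypothetical two-move game: the computer moves first (maximizing),
# then the opponent picks the reply worst for the computer (minimizing).
tree = [[-1, 1], [0, [1, -1]]]
best = minimax(tree, maximizing=True)  # best outcome the computer can force
```

Even this tiny example shows the approach's cost: every branch must be spelled out and evaluated, which is why chess-scale search strained the computers of the era.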

For many years, this reason-based approach to AI dominated the field, with many experts convinced it would be only a matter of years before they created a machine that could meet or surpass human intelligence. Their optimism was met with a major infusion of government funding.

But, as it turns out, things were not that simple.

One of the main problems – which would prove to be a severely limiting factor – was that reason-based approaches to AI required complex programs built on vast amounts of data that had to be entered by hand. Not only was this labour-intensive and slow, but it meant programs could only be as good as the humans who wrote them. Under this approach, there was no way computers could actually learn, or make inferences and deductions, beyond what they were specifically programmed to accomplish.

“It didn’t work, I think, because a lot of the knowledge, the way we make decisions, isn’t something we can explain in full details,” said Yoshua Bengio, professor in the department of computer science and operations research and Canada Research Chair in statistical learning algorithms at the University of Montreal.

Prof. Bengio noted that under reason-based approaches to AI, machines couldn't comprehend natural language because it is so complex a faculty that humans have been unable to write programs that properly explain it to computers.

These challenges resulted in serious setbacks for AI, causing funding to dry up and research priorities to shift elsewhere.