This content is from: Portfolio

Teaching Computers to Program Themselves

British mathematician Stephen Wolfram is creating a computational knowledge engine that can answer questions posed in plain English. The implications for finance are huge.

A wall of code at least a generation thick stands between many analysts and the computational tools redefining their industry, especially when those tools are driven by big data. The head of any titanic Wall Street firm might be able to lure a dozen college-age programmers with several cases of Red Bull and a few boxes of pizza to see them through a long night’s hackathon in their hoodies, but he’d still never understand the labyrinthine techniques by which those programmers tapped into an ever more rapidly evolving stream of open source data and software tools. Yet it is precisely those techniques by which his firm will increasingly live or die.

But what if someone came along and made it possible to build complex applications using only natural human language — that is, to do anything a programmer could do without having to know languages and frameworks with such arcane names as Python, C++ and Ruby on Rails? What if you could speak English to a computer and have it build exactly the software application you need, even one that could weave together complex streams of information drawn from every corner of human knowledge?

An emerging project by British mathematician Stephen Wolfram — the recipient of a Ph.D. in theoretical physics at the age of 20 and the mind behind the controversial book A New Kind of Science, as well as many, many TED Talks — purports to do just that. Wolfram is a leader in the field of complex-systems research (he founded the Center for Complex Systems Research at the University of Illinois at Urbana-Champaign). He is also the creator of Wolfram|Alpha, a computational knowledge engine that aims to provide accurately calculated answers to factual questions posed in plain English. Whereas search engines like Google index existing websites (and can increasingly derive answers to English-language questions from the data they scrape), computational knowledge engines — including the iPhone’s uncanny robo-concierge Siri (which is partly powered by Wolfram|Alpha) — attempt to “understand” each element of a sentence and calculate an appropriate answer by mapping the semantic components of a query onto sets of known facts.
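The difference between the two approaches can be made concrete with a toy sketch. Everything below is invented for illustration — the fact table, the query patterns and the `answer` function bear no relation to Wolfram|Alpha’s actual implementation — but it shows the basic idea: instead of ranking documents, the engine decomposes a question into semantic components (an entity and an attribute) and looks the answer up in structured facts.

```python
import re

# A tiny curated fact base: (entity, attribute) -> value.
# Real engines hold billions of such facts; these are invented examples.
FACTS = {
    ("france", "capital"): "Paris",
    ("france", "population"): "68 million",
    ("jupiter", "mass"): "1.898e27 kg",
}

# Map surface question patterns to (entity, attribute) semantic components.
PATTERNS = [
    (re.compile(r"what is the (\w+) of (\w+)\??", re.I),
     lambda m: (m.group(2), m.group(1))),
    (re.compile(r"how (?:big|massive) is (\w+)\??", re.I),
     lambda m: (m.group(1), "mass")),
]

def answer(question: str):
    """Parse a question into (entity, attribute), then look up the value."""
    for pattern, extract in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            entity, attribute = extract(match)
            return FACTS.get((entity.lower(), attribute.lower()))
    return None  # no pattern matched: the engine is "stumped"

print(answer("What is the capital of France?"))  # Paris
print(answer("How massive is Jupiter?"))         # 1.898e27 kg
```

A question outside the hand-written patterns or fact base returns nothing at all — which is roughly why, as noted below, a young engine can still be stumped by queries a document-ranking search engine handles easily.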

Wolfram|Alpha’s computational knowledge engine is still at a very early stage and a work in progress; as of this writing, the engine was stumped by natural-language questions as simple as “How tall is the American president?” (Siri, too, is stumped by the question.) Impressively, though, the question elicits enough understanding that the engine displays a figure showing the distribution of human height — just not President Obama’s. In this respect, Google’s older method of web scraping for answers — calibrating responses by the popularity of the sources a crowd tends to click through after a typed query — is currently more powerful than a computational knowledge engine; Google gets the answer right (Obama is 6-foot-1) when one simply types the same query into its search box. But Wolfram hopes that in the long run computational knowledge engines will outpace more traditional search-engine-derived results, such as Google’s.

Applications to financial information abound. What if a computer could answer, with complete precision, natural-language queries more complicated than the president’s height, such as “How fast is Yelp growing by comparison to all other Web 2.0 recommendation services in the world?” or “What is the relative health of the Mexican economy?” or “How do hurricanes, such as the one making landfall in Florida tomorrow, typically affect oil prices two days later?”

It does not require any expertise to see the structural change computational knowledge engines — leveraging cloud and grid computing — could wreak on the financial industry and the labor force it employs.

Yet still more ambitious would be sentient software: technology that allows natural-language statements to instruct a computer to write new software, not just retrieve information. Open source building blocks (such as natural-language software, news-parsing paradigms and the distributed map of facts and their relations manually assembled by Wikipedia users) might eventually be combined using spoken commands.

For example, imagine moving from the still impressive ability to ask Google or a computational knowledge engine “Which teams are playing in the World Cup final?” and receive a factually correct answer to the ability to command a computer to spontaneously generate a new piece of software by combining open source building blocks: “HAL, design a piece of software that tracks every team in the World Cup final and that parses news headlines to identify when a goal has been scored by one of them, and create a visualization that ranks the teams with the most goals and updates that list in real time.” Such technology is currently the realm of science fiction, but this new paradigm for the relationship between man and computer is the crux of what Wolfram hopes to see emerge.
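To see what the hypothetical “HAL” command is actually asking for, here is a deliberately simplified sketch of the headline-parsing half of that task. The team names, sample headlines and the `goal_tally` function are all invented for illustration; a real system would consume a live news feed and need far more robust language understanding than a keyword match.

```python
import re
from collections import Counter

# Hypothetical teams and headlines; a real system would stream these in live.
TEAMS = ["Germany", "Argentina"]
HEADLINES = [
    "GOAL! Germany's Goetze strikes in extra time",
    "Argentina presses forward after the break",
    "Germany lifts the trophy after 1-0 win",
]

# Crude proxy for "a goal has been scored": a goal word in the headline.
GOAL_WORDS = re.compile(r"\b(goal|scores|strikes)\b", re.I)

def goal_tally(headlines, teams):
    """Credit a goal to a team when a goal word and the team's name
    co-occur in a headline; return teams ranked by goals scored."""
    tally = Counter({team: 0 for team in teams})
    for headline in headlines:
        if GOAL_WORDS.search(headline):
            for team in teams:
                if team.lower() in headline.lower():
                    tally[team] += 1
    return tally.most_common()

print(goal_tally(HEADLINES, TEAMS))  # [('Germany', 1), ('Argentina', 0)]
```

Even this crude version took patterns, data structures and ranking logic that a human had to assemble by hand; what Wolfram envisions is a machine assembling such pieces itself from a spoken sentence.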

According to Wolfram in a blog post last November: “Computational knowledge. Symbolic programming. Algorithm automation. Dynamic interactivity. Natural language. Computable documents. The cloud. Connected devices. Symbolic ontology. Algorithm discovery. ... We’ve figured out how to take all these threads, and all the technology we’ve built, to create something at a whole different level. The power of what is emerging continues to surprise me. But already I think it’s clear that it’s going to be profoundly important in the technological world, and beyond.”

Wolfram thus foresees the emergence of true natural-language programming, in which all elements of code and data are treated as symbolic elements — that is, as something very like words. Computers might even someday be able to listen passively to our conversations and program themselves intelligently — as well as speak intelligently to one another (a possibility grappled with in Spike Jonze’s Oscar-nominated Her).

It is far too early to comment on when what Wolfram calls “sentient code” might emerge without straying into speculation. The consequences of even the possibility for more terrestrial machines — our human brains — are far easier to forecast, especially insofar as each of those brains tries to position itself within a modern and increasingly shrinking techno-labor force.

In a previous column, I remarked on the brain drain from finance to technology companies. Graduates of prestigious universities are ditching finance for technology, but it almost begins to seem as if, in a decade or two, technology will need them less and less. Taken to its logical extreme, a world in which Wolfram’s vision becomes reality would displace the hordes of young programmers now emerging from Stanford, MIT, Caltech and other centers of elite technology training. Sentient code might mean that as we head toward computers that think for themselves, those dozen programmers and their Red Bulls — hailed for now as the wunderkinds of a new, technologically driven economy — are working toward making themselves obsolete.

The circle of life.
