Kris Carlson

Just another weblog

My favorite passage by Dave Waltz

Dave Waltz was a friend and a mentor. He passed away in March 2012 from a glioblastoma. There was a wonderful symposium in his honor held over the weekend at Brandeis University, ably organized by Prof. Jordan Pollack. Last week I sent the following excerpt from Dave’s writings to a few friends who knew him.

…I dispute the heuristic search metaphor, the relationship between physical symbol systems and human cognition, and the nature and “granularity” of the units of thought. The physical symbol system hypothesis, also long shared by AI researchers, is that a vocabulary close to natural language (English, for example, perhaps supplemented by previously unnamed categories and concepts) would be sufficient to express all concepts that ever need to be expressed. My belief is that natural-language-like terms are, for some concepts, hopelessly coarse and vague, and that much finer, “subsymbolic” distinctions must be made, especially for encoding sensory inputs. At the same time, some mental units (for example, whole situations or events—often remembered as mental images) seem to be important carriers of meaning that may not be reducible to tractable structures of words or wordlike entities. Even worse, I believe that words are not in any case carriers of complete meanings but are instead more like index terms or cues that a speaker uses to induce the listener to extract shared memories and knowledge. The degree of detail and number of units needed to express the speaker’s knowledge and intent and the hearer’s understanding are vastly greater than the number of words used to communicate. In this sense language may be like the game of charades: the speaker transmits relatively little, and the listener generates understanding through the synthesis of the memory items evoked by the speaker’s clues. Similarly, I believe that the words that seem widely characteristic of human streams of consciousness do not themselves constitute thought; rather, they represent a projection of our thoughts onto our speech-production faculties. Thus, for example, we may feel happy or embarrassed without ever forming those words, or we may solve a problem by imagining a diagram without words or with far too few words to specify the diagram.

Waltz, David L., “The Prospects for Building Truly Intelligent Machines,” Daedalus (Proceedings of the American Academy of Arts and Sciences), Vol. 117, No. 1 (Winter 1988), p. 197.

September 24, 2012 | Artificial Intelligence, Neuroscience

Wolfram on the generative Rule for our physical universe

The quest for a Rule that would generate our entire universe is a modern, information-theoretic version of unified field theory or a general, compact theory of the physical universe. Ed Fredkin came up with the idea that there could be a cellular automaton (CA) rule for the physical universe (or as I would put it, the currently-known physical microcosm). But he and Wolfram, and others such as Toffoli and Margolus, and Berkovich, have failed to find it. The early promise of, e.g., the Game of Life, was misleading, as often happens in science.
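For readers who have not played with the Game of Life, here is a minimal sketch of the kind of CA rule being discussed. The function names and the toroidal (wrapped) grid are my own choices for brevity; the birth/survival rule itself is Conway’s.

```python
def life_step(grid):
    """Apply one Game of Life update to a 2-D grid of 0/1 cells (toroidal)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping at the edges.
            n = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Birth on exactly 3 neighbors; survival on 2 or 3.
            nxt[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return nxt

# A "blinker" -- a vertical bar of three cells -- oscillates with period 2.
blinker = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
assert life_step(life_step(blinker)) == blinker
```

A handful of lines of local rule, yet the global behavior famously includes gliders and even universal computation — which is exactly what made the Fredkin/Wolfram conjecture so tempting.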

Wolfram did a more thorough exploration of the CA rule universe than anyone, and organized the field. He then generalized the generative CA Rule idea into mathematical rewrite systems, but made little cohesive progress toward simulating the physical microcosm there, either. Part of what makes these domains interesting is that they are chaotic, in the sense that neighboring parameter settings yield widely divergent results.
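That sensitivity is easy to see in Wolfram’s own elementary cellular automata. Rules 30 and 32 sit next to each other in his rule numbering, yet one generates an endlessly irregular pattern while the other dies out after a single step. A sketch (my own illustration; the helper names are assumptions):

```python
def eca_step(row, rule):
    """One update of an elementary CA on a cyclic row.
    `rule` is the 0-255 Wolfram rule number; each cell's new value is the
    bit of `rule` indexed by its three-cell neighborhood read as binary."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(rule, width=31, steps=15):
    """Run `rule` from a single live center cell; return all rows."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = eca_step(row, rule)
        history.append(row)
    return history

# Rule 30 keeps producing irregular structure; rule 32 goes extinct at once.
live_30 = sum(map(sum, evolve(30)))
live_32 = sum(map(sum, evolve(32)))
assert live_30 > live_32
```

Nearby points in rule space, wildly different dynamics — which is why brute-force search of this space is both necessary and frustrating.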

Anyway, I wonder what Wolfram means here and how it relates to the conjecture about parallel universes as an interpretation of quantum theory. I think, though, that one general problem for all unified physical theories is that they underemphasize (or ignore) the historical trend toward the expansion of our horizons. In other words, history implies that the physical microcosm and macrocosm are far larger than we currently comprehend.

A second excerpt:

“Still, I think it’s quite possible that we’ll be lucky—and be able to find our universe out in the computational universe. And it’ll be an exciting moment—being able to sort of hold in our hand a little program that is our universe. Of course, then we start wondering why it’s this program, and not another one. And getting concerned about Copernican kinds of issues.

Actually, I have a sneaking suspicion that the final story will be more bizarre than all of that. That there is some generalization of the Principle of Computational Equivalence that will somehow actually mean that, with appropriate interpretation, sort of all conceivable universes are, in precise detail, our actual universe, and its complete history.”

“The idea of Wolfram|Alpha was to see just how far we can get today with the goal of making the world’s knowledge computable. How much of the world’s data can we curate? How many of the methods and models from science and other areas can we encode? Can we let people access all this using their own free-form human language? And can we show them the results in a way that they can readily understand? Well, I wasn’t sure how difficult it would be. Or whether in the first decade of the 21st century it’d be possible at all. But I’m happy to say that it worked out much better than I’d expected.”

“And by using ideas from NKS—and a lot of hard work—we’ve been able to get seriously started on the problem of understanding the free-form language that we humans walk up to a computer and type in. It’s a different problem than the usual natural-language processing problem. Where one has to understand large chunks of complete text, say on the web. Here we have to take small utterances—sloppily written questions—and see whether one can map them onto the precise symbolic forms that represent the computable knowledge we know.”

Here Wolfram gives an example of the ‘new kind of science’ approach at work:

“In fact, increasingly in Mathematica we are using algorithms that were not constructed step-by-step by humans, but are instead just found by searching the computational universe. And that’s also been a key methodology in developing Wolfram|Alpha. But going forward, it’s something I think we’ll be able to use on the fly.”
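To make the “found by searching” idea concrete, here is a toy version of what such a search might look like: enumerate all 256 elementary CA rules and keep the ones whose center-column bit stream mixes 0s and 1s roughly evenly, a crude stand-in for the statistical randomness tests a real search would apply. This is my own illustration, not Wolfram’s actual method, and all the names and thresholds are assumptions.

```python
def eca_step(row, rule):
    """One update of an elementary CA on a cyclic row (Wolfram rule number)."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def center_column(rule, width=129, steps=64):
    """Bit stream read down the center cell, starting from one live cell."""
    row = [0] * width
    row[width // 2] = 1
    bits = []
    for _ in range(steps):
        row = eca_step(row, rule)
        bits.append(row[width // 2])
    return bits

def balance_score(bits):
    """Fraction of 1s: 0.5 is an even mix; 0.0 or 1.0 is a constant stream."""
    return sum(bits) / len(bits)

# "Search the computational universe": keep rules with a near-even bit mix.
candidates = [r for r in range(256)
              if 0.4 < balance_score(center_column(r)) < 0.6]
```

A real search would apply far stronger tests than bit balance, but the shape is the same: generate simple programs mechanically, measure their behavior, and harvest the ones that happen to do something useful — the path that led to rule 30 being used for random sequence generation.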

I recommend you read the entire address. There is much more beyond what I excerpted here.

June 21, 2010 | Artificial Intelligence, Complexity, Culture, History of Science, Mathematics

Wolfram on Wolfram|Alpha

Here are some excerpts from Wolfram’s description of the history of Alpha that I find interesting. The talk is here:

“For years and years we’d been pouring all those algorithms, and all that formal knowledge, into Mathematica. And extending its language to be able to represent the concepts that were involved. Well, while I’d been working on the NKS book, I’d kept on thinking: what will be the first killer app of this new kind of science?

When one goes out into the computational universe, one finds all these little programs that do these amazing things. And it’s a little like doing technology with materials: where one goes out into the physical world and finds materials, and then realizes they’re useful for different things. Well, it’s the same with those programs out in the computational universe. There’s a program there that’s great for random sequence generation. Another one for compression. Another one for representing Boolean algebra. Another one for evaluating some kind of mathematical function.

And actually, over the years, more and more of the algorithms we add to Mathematica were actually not engineering step by step… but were instead found by searching the computational universe.

One day I expect that methodology will be the dominant one in engineering.”

“We’d obviously achieved a lot in making formal knowledge computable with Mathematica.

But I wondered about all the other knowledge. Systematic knowledge. But knowledge about all these messy details of the world. Well, I got to thinking: if we believe the paradigm and the discoveries of NKS, then all this complicated knowledge should somehow have simple rules associated with it. It should somehow be possible to do a finite project that can capture it. That can make all that systematic knowledge computable.”

“And we actually at first built what we call “data paclets” for Mathematica. You see, in Mathematica you can compute the values of all sorts of mathematical functions and so on. But we wanted to make it so there’d be a function that, say, computes the GDP of a country—by using our curated collection of data. Well, we did lots of development of this, and in 2007, when we released our “reinvention” of Mathematica, it included lots of data paclets covering a variety of areas.

Well, that was a great experience. And in doing it, we were really ramping up our data curation system. Where we take in data from all sorts of sources, sometimes in real time, and clean it to the point where it’s reliably computable. I know there are Library School people here today, so I’ll say: yes, good source identification really is absolutely crucial.

These days we have a giant network of data source providers that we interact with. And actually almost none of our data now, for example, “comes from the web”. It’s from primary sources. But once we have the raw data, then what we’ve found is that we’ve only done about 5% of the work.

What comes next is organizing it. Figuring out all its conventions and units and definitions. Figuring out how it connects to other data. Figuring out what algorithms and methods can be based on it.

And another thing we’ve found is that to get the right answer, there always has to be a domain expert involved. Fortunately at our company we have experts in a remarkably wide range of areas. And through Mathematica—and particularly its incredibly widespread use in front-line R&D—we have access to world experts in almost anything.”

June 21, 2010 | Artificial Intelligence, Complexity, Culture, Mathematics
