Kris Carlson

Just another WordPress.com weblog

Wolfram on the generative Rule for our physical universe

The quest for a Rule that would generate our entire universe is a modern, information-theoretic version of unified field theory: a general, compact theory of the physical universe. Ed Fredkin came up with the idea that there could be a cellular automaton (CA) rule for the physical universe (or, as I would put it, the currently known physical microcosm). But he, Wolfram, and others such as Toffoli, Margolus, and Berkovich have not found it. The early promise of, e.g., the Game of Life was misleading, as often happens in science.

Wolfram did a more thorough exploration of the CA rule universe than anyone, and organized the field. He then generalized the generative CA Rule idea into mathematical rewrite systems, but did not make much cohesive progress toward simulating the physical microcosm there, either. Part of what makes these domains interesting is that they are chaotic, in the sense that neighboring parameter settings yield widely divergent results.
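That chaotic sensitivity is easy to see concretely in Wolfram's elementary cellular automata, the very rule space he cataloged. Here is a minimal sketch in Python (the post itself has no code; this is my own illustration): two numerically adjacent rule numbers can behave utterly differently, rule 110 famously being Turing-complete while many neighboring rules settle into trivial patterns.

```python
# Elementary cellular automata: 1D, two states, nearest-neighbor rules 0-255.
def step(cells, rule):
    """Apply an elementary CA rule number (0-255) to one row, with wraparound."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(rule, width=31, steps=15):
    """Evolve from a single black cell; return the list of rows."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

for rule in (108, 110):  # adjacent rule numbers, very different behavior
    picture = "\n".join("".join(".#"[c] for c in row) for row in evolve(rule))
    print(f"Rule {rule}:\n{picture}\n")
```

Printing the two histories side by side shows one pattern stabilizing almost immediately while the other keeps producing irregular structure.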

Anyway, I wonder what Wolfram means here and how it relates to the conjecture about parallel universes as an interpretation of quantum theory. I think, though, that one general problem for all unified physical theories is that they underemphasize, or outright ignore, the historical trend toward the expansion of our horizons. In other words, history implies that the physical microcosm and macrocosm are far larger than we currently comprehend.

A second

“Still, I think it’s quite possible that we’ll be lucky—and be able to find our universe out in the computational universe. And it’ll be an exciting moment—being able to sort of hold in our hand a little program that is our universe. Of course, then we start wondering why it’s this program, and not another one. And getting concerned about Copernican kinds of issues.

Actually, I have a sneaking suspicion that the final story will be more bizarre than all of that. That there is some generalization of the Principle of Computational Equivalence that will somehow actually mean that with appropriate interpretation, sort of all conceivable universes are in precise detail, our actual universe, and its complete history.”

“The idea of Wolfram|Alpha was to see just how far we can get today with the goal of making the world’s knowledge computable. How much of the world’s data can we curate? How many of the methods and models from science and other areas can we encode? Can we let people access all this using their own free-form human language? And can we show them the results in a way that they can readily understand? Well, I wasn’t sure how difficult it would be. Or whether in the first decade of the 21st century it’d be possible at all. But I’m happy to say that it worked out much better than I’d expected.”

“And by using ideas from NKS—and a lot of hard work—we’ve been able to get seriously started on the problem of understanding the free-form language that we humans walk up to a computer and type in. It’s a different problem than the usual natural-language processing problem. Where one has to understand large chunks of complete text, say on the web. Here we have to take small utterances—sloppily written questions—and see whether one can map them onto the precise symbolic forms that represent the computable knowledge we know.”
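To make that "small sloppy utterance to precise symbolic form" distinction concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the patterns, the symbolic heads like GDP and UnitConvert, all of it. It is emphatically not how Wolfram|Alpha's linguistics actually works; it only shows the shape of the mapping problem.

```python
# Toy mapping from sloppily written questions to precise symbolic forms.
# All pattern names and symbolic heads here are invented illustrations.
import re

# A tiny "grammar": each pattern maps an utterance shape to a symbolic expression.
PATTERNS = [
    (re.compile(r"(?:what'?s|what is)?\s*the?\s*gdp of (\w+)", re.I),
     lambda m: ("GDP", m.group(1).title())),
    (re.compile(r"(\d+)\s*(?:mi|miles?) (?:in|to) km", re.I),
     lambda m: ("UnitConvert", (int(m.group(1)), "Miles"), "Kilometers")),
]

def parse(utterance):
    """Return a symbolic expression (a nested tuple) or None if unrecognized."""
    for pattern, build in PATTERNS:
        m = pattern.search(utterance.strip())
        if m:
            return build(m)
    return None

print(parse("whats the gdp of france"))  # ('GDP', 'France')
print(parse("5 mi in km"))               # ('UnitConvert', (5, 'Miles'), 'Kilometers')
```

The real problem is of course vastly harder: thousands of domains, ambiguity between them, and ranking of competing interpretations. But the target of the computation, a precise symbolic form rather than parsed prose, is the point Wolfram is making.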

Here Wolfram gives an example of the ‘new kind of science’ approach at work:

“In fact, increasingly in Mathematica we are using algorithms that were not constructed step-by-step by humans, but are instead just found by searching the computational universe. And that’s also been a key methodology in developing Wolfram|Alpha. But going forward, it’s something I think we’ll be able to use on the fly.”

I recommend you read the entire address. There is much more beyond what I excerpted here.

June 21, 2010 Posted by | Artificial Intelligence, Complexity, Culture, History of Science, Mathematics

Wolfram on the history and future of computable knowledge

This is absolutely fascinating. The history is superb, and what he leads up to is his hope and intention that Alpha will be the culmination of it all. Of course, Google, IBM, and others are vying for that privilege, too.

http://blog.wolframalpha.com/2009/06/29/stephen-wolfram-on-the-quest-for-computable-knowledge/

Mathematica, imho, is one of the great achievements of our age. Further, imagine creating a powerful scientific instrument to explore the universe, and pioneering its use yourself. See A New Kind of Science.

“In Mathematica, for example, my goal has been to create a framework for doing every possible form of formal computation. Mathematica is in a sense a generalization of the usual idea of a computer language. In a sense, what Mathematica tries to do is to imagine all possible computations that people might want to do. And then to try to identify repeated structures—repeated lumps of computational work—that exist across all those computations. And then the role of the Mathematica language is to give names to those structures—those lumps of computational work. And to implement them as the built-in functions of the system.

I wanted Mathematica to be a very general system. Not a system that could just handle things like numbers, or strings, or even formulas. But a system that could handle any structure that one might want to build. So to do that I in effect went back to thinking about the foundations of computation. And ended up defining what one can call unified symbolic programming. One starts by representing absolutely everything in a single unified way: as a symbolic expression. And then one introduces primitives that represent in a unified way what can be done with those expressions.

In building Mathematica over the past 23 years one of the big ideas has been to include in it as much—in a sense formal—knowledge as possible. The methods, the algorithms, the structures that have emerged throughout the fields of mathematics and computation.

Well, one of the reasons I wanted to build Mathematica in the first place was that I wanted to use it myself. To explore just what the broad implications are of the fundamental idea of computation. You see, while computation has been of great practical importance—even in science—there’s a lot more to explore about its implications for the foundations of science and other things. If we’re going to be able to do science—or in general to make knowledge systematic—we kind of have to imagine that there are ultimately theories for how things work. But the question is: what are the primitives, what’s the raw material, for those theories?”
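The "unified symbolic programming" idea quoted above, where absolutely everything is one kind of object, a head applied to arguments, can be sketched outside Mathematica too. The toy Python below is my own rendering of the concept (compare Mathematica's FullForm, e.g. Plus[1, Times[2, x]]); it is an illustration of the idea, not Mathematica's implementation.

```python
# A minimal sketch of unified symbolic expressions: everything is a head
# plus arguments, and one uniform evaluator rewrites what it can.
class Expr:
    def __init__(self, head, *args):
        self.head, self.args = head, args

    def __repr__(self):
        if not self.args:
            return str(self.head)
        return f"{self.head}[{', '.join(map(repr, self.args))}]"

    def __eq__(self, other):
        return isinstance(other, Expr) and (self.head, self.args) == (other.head, other.args)

def evaluate(e):
    """Recursively evaluate arguments, then apply rules keyed on the head."""
    if not isinstance(e, Expr):
        return e                        # atoms (numbers) evaluate to themselves
    args = [evaluate(a) for a in e.args]
    if e.head == "Plus" and all(isinstance(a, (int, float)) for a in args):
        return sum(args)
    if e.head == "Times" and all(isinstance(a, (int, float)) for a in args):
        out = 1
        for a in args:
            out *= a
        return out
    return Expr(e.head, *args)          # anything else stays symbolic

print(evaluate(Expr("Plus", 1, Expr("Times", 2, 3))))        # 7
print(evaluate(Expr("Plus", 1, Expr("Times", 2, Expr("x")))))  # stays symbolic
```

Notice that an expression containing an unknown symbol simply stays symbolic rather than raising an error: that uniformity, numbers, formulas, and unevaluated structure all living in one representation, is the design point Wolfram is describing.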

“Ultimately it’s not that you can’t build complexity from mathematical primitives and so on. But what’s happened is that the exact sciences have tended to just define themselves to be about cases where that doesn’t happen. We haven’t studied the full computational universe of possibilities, only a thin set that we’ve historically found to be tractable.

Well, this has many implications. It gives us a “new kind of science”—as I pointed out in the title of the big book I wrote about all this. A kind of science that in a sense generalizes what we’ve had before. That uses a much broader set of primitives to describe the world.”

June 21, 2010 Posted by | Uncategorized

Wolfram on Wolfram Alpha

Here are some excerpts from Wolfram’s description of the history of Alpha that I find interesting. The talk is here:

http://www.stephenwolfram.com/publications/recent/50yearspc/

“For years and years we’d been pouring all those algorithms, and all that formal knowledge, into Mathematica. And extending its language to be able to represent the concepts that were involved. Well, while I’d been working on the NKS book, I’d kept on thinking: what will be the first killer app of this new kind of science?

When one goes out into the computational universe, one finds all these little programs that do these amazing things. And it’s a little like doing technology with materials: where one goes out into the physical world and finds materials, and then realizes they’re useful for different things. Well, it’s the same with those programs out in the computational universe. There’s a program there that’s great for random sequence generation. Another one for compression. Another one for representing Boolean algebra. Another one for evaluating some kind of mathematical function.

And actually, over the years, more and more of the algorithms we add to Mathematica were actually not engineered step by step… but were instead found by searching the computational universe.

One day I expect that methodology will be the dominant one in engineering.”
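The "mining the computational universe" methodology Wolfram describes can be sketched at small scale: enumerate all 256 elementary CA rules and screen each one's center column as a candidate bit-sequence generator. The screen below (roughly balanced 0s and 1s, no short repeating period) is my own deliberately naive stand-in, not Wolfram's actual criteria; but rule 30, which really is used for random sequence generation in Mathematica, is the kind of program such a search turns up.

```python
# Search all 256 elementary CA rules for candidate random-bit generators.
# The screening criteria here are naive illustrations, not Wolfram's.
def center_column(rule, width=201, steps=100):
    """Bits of the center cell, starting from a single black cell."""
    row = [0] * width
    row[width // 2] = 1
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])
        row = [(rule >> (row[i - 1] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
               for i in range(width)]
    return bits

def looks_random(bits):
    ones = sum(bits)
    balanced = 0.4 < ones / len(bits) < 0.6        # roughly equal 0s and 1s
    # reject sequences that repeat with a short period
    aperiodic = all(bits[p:] != bits[:-p] for p in range(1, 9))
    return balanced and aperiodic

candidates = [r for r in range(256) if looks_random(center_column(r))]
print(candidates)  # rule 30, Wolfram's generator, is expected among the survivors
```

Trivial rules are rejected immediately: rule 0 kills everything (unbalanced), rule 204 is the identity (all ones), rule 51 just blinks with period two. The survivors are exactly the sort of "found material" the quote compares to prospecting for physical materials.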

“We’d obviously achieved a lot in making formal knowledge computable with Mathematica.

But I wondered about all the other knowledge. Systematic knowledge, but knowledge about all these messy details of the world. Well, I got to thinking: if we believe the paradigm and the discoveries of NKS, then all this complicated knowledge should somehow have simple rules associated with it. It should somehow be possible to do a finite project that can capture it. That can make all that systematic knowledge computable.”

“And we actually at first built what we call “data paclets” for Mathematica. You see, in Mathematica you can compute the values of all sorts of mathematical functions and so on. But we wanted to make it so there’d be a function that, say, computes the GDP of a country—by using our curated collection of data. Well, we did lots of development of this, and in 2007, when we released our “reinvention” of Mathematica, it included lots of data paclets covering a variety of areas.

Well, that was great experience. And in doing it, we were really ramping up our data curation system. Where we take in data from all sorts of sources, sometimes in real time, and clean it to the point where it’s reliably computable. I know there are Library School people here today, so I’ll say: yes, good source identification really is absolutely crucial.

These days we have a giant network of data source providers that we interact with. And actually almost none of our data now for example “comes from the web”. It’s from primary sources. But once we have the raw data, then what we’ve found is that we’ve only done about 5% of the work.

What comes next is organizing it. Figuring out all its conventions and units and definitions. Figuring out how it connects to other data. Figuring out what algorithms and methods can be based on it.

And another thing we’ve found is that to get the right answer, there always has to be a domain expert involved. Fortunately at our company we have experts in a remarkably wide range of areas. And through Mathematica—and particularly its incredibly widespread use in front-line R&D—we have access to world experts in almost anything.”
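The "organizing" stage Wolfram describes, figuring out conventions, units, and definitions before data is computable, is worth a concrete toy. In the sketch below every field name, record, and conversion factor is invented for illustration; it only shows the shape of the normalization step, not Wolfram|Alpha's curation pipeline.

```python
# Toy data curation: raw records arrive with mixed conventions and units,
# and are normalized to one canonical, computable form.
RAW = [
    {"country": "france",  "gdp": "2.6 trillion USD"},
    {"country": "Germany", "gdp": "3,400 billion USD"},
]

SCALE = {"billion": 1e9, "trillion": 1e12}  # unit words -> multipliers

def normalize(record):
    """Canonical form: title-cased name, GDP as a plain float in USD."""
    amount, scale, _currency = record["gdp"].split()
    value = float(amount.replace(",", "")) * SCALE[scale]
    return {"country": record["country"].title(), "gdp_usd": value}

curated = [normalize(r) for r in RAW]
print(curated)
```

Even this trivial example hits the problems the quote lists: inconsistent capitalization, thousands separators, magnitude words standing in for units. At real scale, deciding what "GDP" even means (nominal? PPP? which year?) is where the domain experts come in, which is Wolfram's closing point.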

June 21, 2010 Posted by | Artificial Intelligence, Complexity, Culture, Mathematics

   
