Kris Carlson

Just another WordPress.com weblog

Wolfram on the generative Rule for our physical universe

The quest for a Rule that would generate our entire universe is a modern, information-theoretic version of unified field theory, or of a general, compact theory of the physical universe. Ed Fredkin came up with the idea that there could be a cellular automaton (CA) rule for the physical universe (or, as I would put it, the currently known physical microcosm). But he, Wolfram, and others such as Toffoli, Margolus, and Berkovich have so far failed to find it. The early promise of, e.g., the Game of Life was misleading, as often happens in science.

Wolfram did a more thorough exploration of the CA rule universe than anyone, and organized the field. He then generalized the generative CA Rule idea into mathematical rewrite systems, but did not make much cohesive progress toward simulating the physical microcosm there, either. Part of what makes these domains interesting is that they are chaotic, in the sense that neighboring parameter settings yield widely divergent results.
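
As a concrete picture of what "a Rule" means here, consider Wolfram's elementary cellular automata: 256 rules, numbered 0 to 255, each encoding how a cell's next state depends on its three-cell neighborhood. A minimal Python sketch (mine, not Wolfram's code) shows both the encoding and the chaotic character of the rule space: the neighboring rule numbers 30 and 32 behave completely differently.

    def step(cells, rule):
        """One step of an elementary CA: bit k of `rule` gives the next state
        for the neighborhood whose three cells spell out k in binary."""
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    def run(rule, width=31, steps=8):
        """Start from a single black cell and iterate the rule."""
        row = [0] * width
        row[width // 2] = 1
        history = [row]
        for _ in range(steps):
            row = step(row, rule)
            history.append(row)
        return history

    # Rule 30 erupts into apparent randomness; Rule 32 dies out at once.
    for rule in (30, 32):
        print(f"Rule {rule}:")
        for row in run(rule):
            print("".join("#" if c else "." for c in row))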

Anyway, I wonder what Wolfram means here and how it relates to the conjecture about parallel universes as an interpretation of quantum theory. I think, though, that one general problem for all unified physical theories is that they underemphasize (or ignore outright) the historical trend toward the expansion of our horizons. In other words, history implies that the physical microcosm and macrocosm are far larger than we currently comprehend.

A second

“Still, I think it’s quite possible that we’ll be lucky—and be able to find our universe out in the computational universe. And it’ll be an exciting moment—being able to sort of hold in our hand a little program that is our universe. Of course, then we start wondering why it’s this program, and not another one. And getting concerned about Copernican kinds of issues.

Actually, I have a sneaking suspicion that the final story will be more bizarre than all of that. That there is some generalization of the Principle of Computational Equivalence that will somehow actually mean that with appropriate interpretation, sort of all conceivable universes are in precise detail, our actual universe, and its complete history.”

“The idea of Wolfram|Alpha was to see just how far we can get today with the goal of making the world’s knowledge computable. How much of the world’s data can we curate? How many of the methods and models from science and other areas can we encode? Can we let people access all this using their own free-form human language? And can we show them the results in a way that they can readily understand? Well, I wasn’t sure how difficult it would be. Or whether in the first decade of the 21st century it’d be possible at all. But I’m happy to say that it worked out much better than I’d expected.”

“And by using ideas from NKS—and a lot of hard work—we’ve been able to get seriously started on the problem of understanding the free-form language that we humans walk up to a computer and type in. It’s a different problem than the usual natural-language processing problem. Where one has to understand large chunks of complete text, say on the web. Here we have to take small utterances—sloppily written questions—and see whether one can map them onto the precise symbolic forms that represent the computable knowledge we know.”
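
To make the contrast concrete, here is a deliberately toy Python sketch of the utterance-to-symbolic-form idea. It is not Wolfram|Alpha's actual pipeline; the patterns and the (property, entity) output form are invented purely for illustration.

    import re

    # Hypothetical patterns mapping sloppy questions to symbolic forms.
    PATTERNS = [
        (re.compile(r"gdp of ([a-z][a-z ]*)", re.I),
         lambda m: ("GDP", m.group(1).strip().title())),
        (re.compile(r"how many people (?:live )?in ([a-z][a-z ]*)", re.I),
         lambda m: ("Population", m.group(1).strip().title())),
    ]

    def parse(utterance):
        """Map a free-form utterance to a (property, entity) pair, or None."""
        for pattern, build in PATTERNS:
            match = pattern.search(utterance)
            if match:
                return build(match)
        return None

    print(parse("whats the gdp of france"))        # ('GDP', 'France')
    print(parse("how many people live in japan"))  # ('Population', 'Japan')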

Here Wolfram gives an example of the ‘new kind of science’ approach at work:

“In fact, increasingly in Mathematica we are using algorithms that were not constructed step-by-step by humans, but are instead just found by searching the computational universe. And that’s also been a key methodology in developing Wolfram|Alpha. But going forward, it’s something I think we’ll be able to use on the fly.”
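
Searching for algorithms rather than constructing them can itself be sketched in a few lines. The following Python sketch is a crude stand-in for what Wolfram describes, with a made-up randomness test: it enumerates all 256 elementary CA rules and keeps those whose center column looks statistically random. Rule 30, which Mathematica has in fact used for random number generation, should be among the survivors.

    def step(cells, rule):
        """One step of an elementary CA (same encoding as the sketch above)."""
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    def center_column(rule, width=101, steps=400):
        """Bits read off the center cell, starting from a single black cell."""
        row = [0] * width
        row[width // 2] = 1
        bits = []
        for _ in range(steps):
            row = step(row, rule)
            bits.append(row[width // 2])
        return bits

    def looks_random(bits):
        """Crude filter: roughly balanced bits and no short period."""
        balanced = 0.4 < sum(bits) / len(bits) < 0.6
        aperiodic = all(bits[p:] != bits[:-p] for p in range(1, 17))
        return balanced and aperiodic

    # "Search the computational universe": try every rule, keep the survivors.
    survivors = [r for r in range(256) if looks_random(center_column(r))]
    print(survivors)  # rule 30 should survive, with a handful of other chaotic rules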

I recommend you read the entire address. There is much more beyond what I excerpted here.

June 21, 2010 | Artificial Intelligence, Complexity, Culture, History of Science, Mathematics

Wolfram on Wolfram|Alpha

Here are some excerpts from Wolfram’s description of the history of Alpha that I find interesting. The talk is here:

http://www.stephenwolfram.com/publications/recent/50yearspc/

“For years and years we’d been pouring all those algorithms, and all that formal knowledge, into Mathematica. And extending its language to be able to represent the concepts that were involved. Well, while I’d been working on the NKS book, I’d kept on thinking: what will be the first killer app of this new kind of science?

When one goes out into the computational universe, one finds all these little programs that do these amazing things. And it’s a little like doing technology with materials: where one goes out into the physical world and finds materials, and then realizes they’re useful for different things. Well, it’s the same with those programs out in the computational universe. There’s a program there that’s great for random sequence generation. Another one for compression. Another one for representing Boolean algebra. Another one for evaluating some kind of mathematical function.

And actually, over the years, more and more of the algorithms we add to Mathematica were actually not engineered step by step… but were instead found by searching the computational universe.

One day I expect that methodology will be the dominant one in engineering.”

“We’d obviously achieved a lot in making formal knowledge computable with Mathematica.

But I wondered about all the other knowledge. Systematic knowledge, but knowledge about all these messy details of the world. Well, I got to thinking: if we believe the paradigm and the discoveries of NKS, then all this complicated knowledge should somehow have simple rules associated with it. It should somehow be possible to do a finite project that can capture it. That can make all that systematic knowledge computable.”

“And we actually at first built what we call “data paclets” for Mathematica. You see, in Mathematica you can compute the values of all sorts of mathematical functions and so on. But we wanted to make it so there’d be a function that, say, computes the GDP of a country—by using our curated collection of data. Well, we did lots of development of this, and in 2007, when we released our “reinvention” of Mathematica, it included lots of data paclets covering a variety of areas.

Well, that was a great experience. And in doing it, we were really ramping up our data curation system. Where we take in data from all sorts of sources, sometimes in real time, and clean it to the point where it’s reliably computable. I know there are Library School people here today, so I’ll say: yes, good source identification really is absolutely crucial.

These days we have a giant network of data source providers that we interact with. And actually almost none of our data now, for example, “comes from the web”. It’s from primary sources. But once we have the raw data, then what we’ve found is that we’ve only done about 5% of the work.

What comes next is organizing it. Figuring out all its conventions and units and definitions. Figuring out how it connects to other data. Figuring out what algorithms and methods can be based on it.

And another thing we’ve found is that to get the right answer, there always has to be a domain expert involved. Fortunately at our company we have experts in a remarkably wide range of areas. And through Mathematica—and particularly its incredibly widespread use in front-line R&D—we have access to world experts in almost anything.”
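
As a cartoon of what curation-plus-computability amounts to in practice, here is a hypothetical Python sketch (not the actual paclet mechanism): raw figures arrive from different sources with inconsistent spellings and scales, a curation step normalizes them, and a single function then makes the result computable. Every name and number below is invented.

    # Hypothetical raw feeds: inconsistent country names, scales spelled out.
    RAW_FEEDS = [
        {"country": "france",        "gdp": "2.78 trillion USD"},
        {"country": "United States", "gdp": "21000 billion USD"},
    ]

    MULTIPLIERS = {"trillion": 1e12, "billion": 1e9, "million": 1e6}

    def curate(feeds):
        """Normalize country names and convert every figure to plain USD."""
        curated = {}
        for record in feeds:
            amount, scale, _currency = record["gdp"].split()
            curated[record["country"].title()] = float(amount) * MULTIPLIERS[scale]
        return curated

    GDP_USD = curate(RAW_FEEDS)

    def country_gdp(name):
        """The computable endpoint a user-facing function would call."""
        return GDP_USD[name.title()]

    print(country_gdp("france"))         # roughly 2.78e12
    print(country_gdp("united states"))  # roughly 2.1e13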

June 21, 2010 | Artificial Intelligence, Complexity, Culture, Mathematics

Stephen Wolfram on the singularity; and on fundamental physics

Wolfram had a webcast today, mostly on Alpha, but he was asked about the singularity, a hypothesized point in history at which artificial intelligence will exceed human intelligence and solve problems such as extending our lives indefinitely. Wolfram opined that the computation done on Earth by biological organisms is probably just a tiny fraction of all the interesting computation that can be done, and in particular is constrained to incremental advances by the nature of evolution.

Another interesting thing he said is that he’d like to work on a model of fundamental physics again as a reward for launching Alpha, and that he estimates it will require something over a hundred thousand lines of code, but less than the code currently in Alpha, which is about 7 million lines.

September 17, 2009 | Complexity

   
