The Future of Science
This is the transcript of a short talk I gave a few months ago, which contains a (fairly rudimentary) presentation of some ideas about the future of science that I've been mulling over for a while. I'm really hoping to develop them much further, since I think this is a particularly important and neglected area of inquiry. Cross-posted from Less Wrong; thanks to Jacob Lagerros and David Lambert for editing the transcript, and to various other people for asking thought-provoking questions.
Today I'll be talking about the future of science. Even though this is an important topic (because science is very important), it hasn't received the attention I think it deserves. One reason is that people tend to think, “Well, we're going to build an AGI, and the AGI is going to do the science.” But this doesn't really offer us much insight into what the future of science actually looks like.
It seems correct to assume that AGI is going to figure a lot of things out. I am interested in what these things are. What is the space of all the things we don’t currently understand? What knowledge is possible? These are ambitious questions. But I’ll try to come up with some framings that I think are interesting.
One way of framing the history of science is through individuals making observations and coming up with general principles to explain them. So in physics, you observe how things move and how they interact with each other. In biology, you observe living organisms, and so on. I'm going to call this “descriptive science”. More recently, however, we have developed a different type of science, which I'm going to call “generative science”. This involves studying the general principles behind things that don’t exist yet.
This is harder than descriptive science, because you don't actually have anything to study; you need to bootstrap your way into it. A good example of this is electric circuits. We can come up with fairly general principles for describing how they work. And eventually this led us to computer science, which is again very general. We have a very principled understanding of many aspects of computer science, which is a science of things that didn't exist before we started studying them. I would also contrast this with most types of engineering, such as aerospace engineering, which I don't think is principled or general enough to be put in the same class as physics or biology and so on.
So what would it look like if we took all the existing sciences and made them more generative? For example, in biology, instead of saying, "Here are a bunch of living organisms, how do they work?" you would say, "What are all the different possible ways that you might build living organisms, or what is the space of possible organisms and why did we end up in this particular part of the space on Earth?"
Even just from the perspective of understanding how organisms work, this seems really helpful. You understand things in contrast to other things. I don't think we're really going to fully understand how the organisms around us work until we understand why evolution didn't go down all these different paths. And for doing that it's very useful to build those other organisms.
You could do the same thing with physics. Rather than asking how our universe works, you could ask how an infinite number of other possible universes would work. It seems safe to assume that this would keep people busy for quite a long time.
Another direction you could go in is asking how this would carry over to things we don't currently think of as science. Take sociology, for example. Sociology is not very scientific right now; generally speaking, it's not very good. But why? And how might it become more scientific in the future? One aspect of this is just that societies are very complicated, and they're composed of minds, which are also very complicated. There are also a lot of emergent effects of those minds interacting with each other, which makes it a total mess.
So one way of solving this is by having more intelligent scientists. Maybe humans just aren't very good at understanding systems where the base-level components are as intelligent as humans. Maybe you need to have a more intelligent agent studying the system in order to figure out the underlying principles by which it works.
But another aspect of sociology that makes it really hard to study, and less scientific, is that you can't generate societies to study. You have a hypothesis, but you can't generate a new society to test it. I think this is going to change over the coming decades. You are going to be able to generate systems of agents intelligent enough that they can do things like cultural evolution. And you will be able to study these generated societies as they form. So even a human-level scientist might be able to make a science out of sociology by generating lots of different environments and model societies.
The examples of this we've seen so far are super simple but actually quite interesting, like Axelrod's Prisoner's Dilemma tournament or Laland's Social Learning Tournament. There are a couple of things like that which led to really interesting conclusions despite using really, really basic agents. So I'm excited to see what much more advanced work of this type could look like.
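To make this concrete, here is a minimal sketch of an Axelrod-style tournament: a round-robin iterated prisoner's dilemma between a few simple strategies. The payoff values, strategy set, and round count are illustrative choices, not the exact entrants or rules from Axelrod's original tournament.

```python
# Minimal round-robin iterated prisoner's dilemma, in the spirit of
# Axelrod's tournament. A strategy maps the opponent's move history
# to "C" (cooperate) or "D" (defect).
from itertools import combinations

# Standard PD payoffs for the row player (illustrative values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play_match(strat_a, strat_b, rounds=200):
    """Play one iterated game and return total scores for each side."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)  # each strategy sees the *opponent's* history
        move_b = strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat,
              "always_defect": always_defect,
              "always_cooperate": always_cooperate}

# Round-robin without self-play (a simplification of Axelrod's setup).
totals = {name: 0 for name in strategies}
for (name_a, a), (name_b, b) in combinations(strategies.items(), 2):
    score_a, score_b = play_match(a, b)
    totals[name_a] += score_a
    totals[name_b] += score_b

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)
```

Even at this scale, the generative move is available: swap in a new strategy and re-run the tournament, and you are probing the space of possible agents rather than just the ones you happened to observe.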
Q&A
Ben: Thank you very much, Richard. That was fascinating. So you made this contrast between generative and more descriptive versions of science.
How much of that split was just a matter of whether or not feedback loops existed in these other spaces? Once we came up with microprocessors, we were suddenly able to build, research, and explore quite a lot of new, more advanced things using science.
And similarly with the sociology example, you mentioned something along the lines of "We'll potentially get to a place where we can actually just test a lot of these things and then a science will form around this measurement tool." In your opinion, is this a key element in being able to explore new sciences?
Richard: Yes. I think feedback loops are pretty useful. I'd say there's probably just a larger space of things in generative sciences. We have these computer architectures, right? So we can study them. But how do we know that the computer architectures couldn't have been totally different? This is not really a question that traditional sciences focus on that much.
Biologists aren't really spending much of their time asking, "But what if animals had been totally different? What are all the possible ways that you could design a circulatory system, and mitochondria, and things like that?” I think some interesting work is being done that does ask these questions, but it seems like, broadly speaking, there's just a much richer space to explore.
David: So when you started talking about generative versus descriptive, my initial thought was Schelling's “Micromotives and Macrobehavior”, where basically the idea was, “Hey, if you start with even these pretty basic things, you can figure out how discrimination happens even if people have very slight preferences.” There are a lot of things he did with that, but what strikes me about it is that it was done with very simple individual agents. Beyond that (unless you go all the way to purely rational actor agents, and even then you need lots and lots of caveats and assumptions), you don't get much in terms of how economics works.
Even if you can simulate everybody, it doesn't give you much insight. Is that a problem for your idea of how science develops?
Richard: So you're saying that if we can simulate everyone given really simple models of them, it still doesn't give us much insight?
David: Even when we have complex models of them, we can observe their behavior but we can't do much with it. We can't tell you much that's useful as a result of even pretty good models.
Richard: I would just say that our models are not very good, right? Broadly speaking, often in economics it feels something like “we're going to reduce all human preferences to a single dimension but still try to study all the different ways that humans interact: all their friendships and various types of psychological workings and goals and so on”.
You can collapse all of these things in different ways and then study them, but I don't think we've had models that are anywhere near the complexity of the phenomena that are actually relevant to people's behavior.
David: But even when they are predictive, even when you can actually replicate what it is that you see with humans, it doesn't seem like you get very much insight into the dynamics... other than saying, "Hey, look, this happens." And sometimes, your assumptions are actually wrong, yet you still recover correct behavior. So overall it didn't tell us very much other than, yes, you successfully replicated what happened.
Richard: Right. What it seems like to me is that there are lots of interesting phenomena that happen when you have systems of interacting agents in the world. People do a bunch of interesting things. So I think that if you have the ability to recreate that, then you'd have the ability to play around with it and see in which cases a phenomenon arises and in which cases it doesn't.
Maybe the way I'd characterize it is something like: our current models are sometimes good enough to recreate something that vaguely looks like the phenomenon, but if you modify them you don't get other interesting phenomena; they just break. So what would be interesting is having the ability to model agents that are sophisticated enough that, when you change the inputs away from recreating the behavior we have observed in humans, you still get some other interesting behavior. Maybe the tit-for-tat agents are a good example of this: the set-up is pretty simple, but even then you can come up with something that's fairly novel.
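To make David's earlier Schelling reference concrete, here is a minimal sketch of the segregation model: agents with only a mild preference for similar neighbors relocate until satisfied, and strong segregation emerges anyway. The grid size, empty fraction, 30% similarity threshold, and random-relocation rule are illustrative assumptions, not Schelling's exact setup.

```python
# Minimal Schelling segregation model on a wrap-around grid: agents of
# two types move to random empty cells whenever fewer than THRESHOLD
# of their occupied neighbors share their type.
import random

SIZE, EMPTY_FRAC, THRESHOLD = 20, 0.1, 0.3  # illustrative parameters

def make_grid():
    cells = [None] * int(SIZE * SIZE * EMPTY_FRAC)  # empty cells
    rest = SIZE * SIZE - len(cells)
    cells += ["A"] * (rest // 2) + ["B"] * (rest - rest // 2)
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]

def unhappy(grid, r, c):
    me = grid[r][c]
    occ = [n for n in neighbors(grid, r, c) if n is not None]
    return occ and sum(n == me for n in occ) / len(occ) < THRESHOLD

def step(grid):
    """Move every unhappy agent to a random empty cell; return move count."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    for r, c in movers:
        empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                   if grid[i][j] is None]
        i, j = random.choice(empties)
        grid[i][j], grid[r][c] = grid[r][c], None
    return len(movers)

def mean_similarity(grid):
    """Average fraction of same-type neighbors across all agents."""
    scores = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            occ = [n for n in neighbors(grid, r, c) if n is not None]
            if occ:
                scores.append(sum(n == grid[r][c] for n in occ) / len(occ))
    return sum(scores) / len(scores)

grid = make_grid()
print("initial similarity:", round(mean_similarity(grid), 2))
for _ in range(50):
    if step(grid) == 0:  # stop once everyone is satisfied
        break
print("final similarity:  ", round(mean_similarity(grid), 2))
```

The striking part, as David notes, is how little machinery is needed: one preference parameter per agent, yet the aggregate neighborhood similarity typically climbs well above what any individual agent demands.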
Owain: I think your talk was based on a really interesting premise: namely, that if we do have AGI in the next 50 years, it's plausible that development will be fairly continuous, meaning that on the road to AGI we'll have very powerful narrow AI that is going to be transformative for science. And I think now is a really good time to think, in advance, about how science could be transformed by this technology.
Maybe it is an opportunity similar to big science coming out of World War II, or the mathematization of lots of scientific fields in the 20th century that were informal before.
You brought up one plausible aspect of that: much better ability to run simulations. In particular, simulations of intelligent agents, which are very difficult to run at the moment.
But you could look at all the aspects of what we do in science and ask, “How much will narrow AI (that's still much more advanced than today's AI) actually help with that?” I think that even with simulations there are going to be hard limits. Some things are just computationally intractable to simulate, and AI is not going to change that; there are NP-hard problems even in simulating very simple physical systems.
And when you're doing economics or sociology, the agents are humans. You can get better at simulating them. But humans interact with the physical world, right? We create technologies. We suffer natural disasters. We suffer from pandemics. And so the intractability is going to bite when you're trying to simulate, say, human history or the future of a group of humans. Does that make sense? I'm curious about your response.
Richard: I guess I don't have strong opinions about which bits will be intractable in particular. I think there's probably a lot of space for high-level concepts that we don't currently have. Maybe one way of thinking about this is game theory. Game theory is a pretty limited model in a lot of ways, but it still gives us many valuable concepts, like “defecting in a prisoner's dilemma”, that inform the way we view complex systems, even though we don't know exactly what hypothesis we're evaluating. Even just having that type of thing brought to our attention is enough to reframe the way we see a lot of things.
So I guess the thing I'm most excited about is this expansion of concepts. This doesn't feel super intractable because it doesn't feel like you need to simulate anything in its full complexity in order to get the concepts that are going to be really useful going forward.