Monday, March 17th, 2014

9:00 am – 11:00 am


A Quantitative Model of Innovation in Evolution

Leslie Valiant, Harvard University

I will describe a computational learning model of evolution in which all the following notions have definitions: the function being computed, the representation of the function, the evolutionary mechanism that produces variation in the representation, and the fitness of a representation. It would appear that all these notions need definitions if we are to account quantitatively for how complex mechanisms that apparently require millions of bits of information just to describe could have evolved within the numbers of generations and individuals available. Biology suggests that evolution fundamentally takes place at the level of the expression functions of individual proteins in terms of each other, as the genome undergoes mutations. A quantitative explanation of how evolution at this level could have achieved what it apparently has would seem to be an achievable scientific goal.

On the Power and the Limits of Evolvability

Vitaly Feldman, IBM Almaden

In this talk I'll describe a technique for converting learning algorithms in the statistical query framework into evolution algorithms in L. Valiant's model. This technique implies a characterization of the power of the model, and I'll overview the following results based on this characterization: (a) evolvability of most learnable classes of functions for fixed input distributions; (b) hardness of evolvability without distributional assumptions; (c) robustness of the model to many natural modifications of the definition.

Attribute-efficient Evolution

Varun Kanade, UC Berkeley

L. Valiant introduced a computational framework to study evolvability and Feldman showed that certain types of learning algorithms could be converted to evolutionary mechanisms in Valiant's framework. I will focus on the complexity of representations of evolutionary mechanisms. Most previously studied algorithms may result in fairly complex representations. Biological constraints suggest that the representations have low complexity, such as short cascades (low-depth) and sparsity. I will present a couple of mechanisms for the evolution of sparse linear functions under a large class of smooth distributions. These evolutionary mechanisms are attribute-efficient in the sense that the sparsity of the representations and the number of generations depend on the sparsity of the target function and the accuracy parameter, but not the total number of variables. Based on joint work with E. Angelino.

What Functions Do Gene Expression Levels Represent?

Elaine Angelino, Harvard University

I will give an introduction to mechanisms that regulate gene expression levels and describe an algebraic framework for modeling these systems. Gene expression level can be viewed as a function of a physical system described by a continuous time Markov chain over system states. Transitions between these states depend on physical properties, e.g., the rates of chemical reactions. Markov chain theory indicates that gene expression level is a rational function in terms of these transition rates. In the course of this talk, I will attempt to bridge two complementary points of view: chemical reaction network theory and statistical physics.
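The rationality claim can be illustrated with a minimal, hypothetical example (the rate names and values below are invented, not from the talk): for a gene whose promoter switches between OFF and ON states as a two-state continuous-time Markov chain, the stationary ON-occupancy, and hence the mean expression level, is a rational function of the transition rates.

```python
from fractions import Fraction

def expression_level(k_on, k_off, rate_on, rate_off=0):
    # Two-state promoter: the gene switches OFF -> ON at rate k_on and
    # ON -> OFF at rate k_off.  The stationary occupancy of the ON state
    # is k_on / (k_on + k_off) -- a rational function of the rates.
    pi_on = Fraction(k_on, k_on + k_off)
    # Mean expression is the occupancy-weighted production rate.
    return pi_on * rate_on + (1 - pi_on) * rate_off

# ON 2/5 of the time, producing at rate 10 while ON: mean level 4.
print(expression_level(2, 3, 10))  # 4
```

Exact rational arithmetic (`Fraction`) makes the point visible directly: the output is a ratio of polynomials in the rates, as Markov chain theory predicts for stationary quantities.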

1:45 pm – 2:30 pm

A key step in the emergence of early life was the formation of a set of reactions that is (i) self-sustaining (all reactions involve reactants that are either produced by other reactions or are available in the environment) and (ii) autocatalytic (each reaction is catalyzed by some molecule produced by the system). Mathematical, algorithmic and stochastic techniques can help define, analyse, classify and search for these self-sustaining autocatalytic systems in large chemical reaction networks. It is also possible to compute the probability of such systems forming in simple polymer models, as a function of the catalysis rate, and to determine their size when they first arise. In this talk, I will provide an overview of some of our earlier and more recent results in this area.
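As a concrete sketch of what searching for such systems can mean algorithmically, the toy routine below prunes a reaction network down to its largest subset in which every reaction is both food-supported and catalyzed by an available molecule. This is a simplified, hypothetical rendering of the standard pruning idea, with invented molecule names, not the authors' actual algorithm.

```python
def max_autocatalytic(reactions, food):
    # Each reaction is (reactants, products, catalysts), all sets of
    # molecule names.  Iteratively discard reactions that lack an
    # available reactant or an available catalyst, until stable.
    R = list(reactions)
    while True:
        # Closure: molecules reachable from the food set via current reactions.
        avail = set(food)
        changed = True
        while changed:
            changed = False
            for reactants, products, _ in R:
                if reactants <= avail and not products <= avail:
                    avail |= products
                    changed = True
        kept = [r for r in R if r[0] <= avail and r[2] & avail]
        if len(kept) == len(R):
            return kept
        R = kept

# Toy network: AB is formed from food molecules A and B, and AB
# catalyzes its own formation -- a minimal autocatalytic set.
rxns = [({'A', 'B'}, {'AB'}, {'AB'}),
        ({'C'}, {'D'}, {'E'})]  # dead branch: C is not food and E is never made
print(len(max_autocatalytic(rxns, {'A', 'B'})))  # 1
```

The pruning loop runs in polynomial time in the network size, which is the property that makes searching large reaction networks feasible at all.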

Joint work with Wim Hordijk, Elchanan Mossel and others.

2:30 pm – 3:15 pm

The phrase "evolution of evolvability," coined by Dawkins (1988), was adopted by several researchers to better summarize a theoretical subject that originated with Riedl (1975) and Conrad (1977). Here "evolvability" means the upper tail of the distribution of fitness effects of genetic variation. While many assert that the evolution of evolvability is "controversial," the evolution of the upper tail of fitness distributions actually goes back to Fisher (1930). "Evolvability," when understood as the upper tail of offspring fitness distributions, is clearly not a "population-level feature" requiring group or lineage selection to evolve, but is a property of individuals. The upper tail will evolve when it co-varies with individual fitnesses, as described by the Price (1970) equation. In the usual model of a population evolving toward an adaptive peak, as in Fisher's geometric model, the covariance will be negative, and the upper fitness tail will shrink. More challenging is to identify evolutionary mechanisms that systematically cause the upper fitness tail to be maintained or to expand.

My contribution is to theoretically tie the upper tail of fitness distributions together with several disparate phenomena: the modularity and complexity of the genotype-phenotype map, gene origin dynamics, the relationship of gene duplication effects to allelic variation, and the consequences of iterative gene duplication over macroevolutionary time. These relationships are still not widely understood, so they will be the focus of my talk.

Tuesday, March 18th, 2014

10:30 am – 11:15 am

It is extraordinary that natural selection has evolved complex adaptations, encoded by compact genomes. The classical approach to understanding limits to selection has been through the concept of a "load" - the difference in mean fitness between a population evolving under natural selection, relative to some ideal process. However, such arguments generally require either asexual reproduction, or additive inheritance; results for arbitrary "fitness landscapes" are hard to obtain. Ideas from computer science may allow a better understanding of how interacting genes evolve in sexual populations.

Wednesday, March 19th, 2014

1:30 pm – 2:15 pm

I will give a broad overview of the research program that Prof. Eors Szathmary (Parmenides Foundation, Munich) and I have been carrying out since 2008 on Evolutionary Neurodynamics. Since 2013 this has been an FP7 FET OPEN Project in collaboration with Luc Steels (VUB), Dario Floreano (EPFL), and Phil Husbands (Sussex). The hypothesis we explore is that some kind of natural selection algorithm is implemented in the brain, with entities that undergo multiplication, with variation and heredity. We have identified several candidate units of evolution in the brain, e.g. patterns of connectivity, patterns of activity, and evolvable pathways of activation through a network. We have identified algorithmic advantages of using populations of solutions (e.g. as in particle filters) rather than single points (e.g. as in MCMC). We have reason to believe that language learning is an evolutionary process occurring during development, in which populations of constructions compete for communicative success. We have reason to believe that during human problem solving, multiple solutions are entertained, recombined, and mutated in the brain. We have reason to believe that evolutionary methods provide a powerful ensemble approach to combine populations of decomposed and segmented predictive models of the world, policies, and value functions. We have reason to believe that causal inference can play an important role in copying patterns of connectivity between one part of the brain and another, by one part of the brain observing the spikes from another part, and that the same mechanism can be used to infer causal relations in perceived inputs. In short, multiple constraints at many levels make the idea of evolutionary neurodynamics not as crazy as it would seem from any one perspective. Please see the following papers for an introduction to our research.

Thursday, March 20th, 2014

1:30 pm – 2:15 pm

One of the goals of evolutionary theory must be to obtain quantitative estimates of the rate at which evolution progresses, so that we may confront the theory with the empirical facts we have about the history of life on earth.

But to speak of a rate of progress implies that progress is somehow proportional to time. In this talk I will discuss two models that call into question the extent to which this implication is justified. The results reported are joint work with Adi Livnat.

3:30 pm – 5:00 pm


The Adaptive Immune System as a Learning Algorithm

Shishi Luo, Los Alamos National Laboratory

Our adaptive immune system has the remarkable ability to learn and remember how to neutralize most foreign pathogens that invade our body. One pathogen for which this system seems to fail is human immunodeficiency virus (HIV). Suppose we formulate the adaptive immune response to a pathogen as a search process on some space of molecular shapes: what properties might it have, and what is it about HIV that makes this process less effective?

Download Video [229.6MB .mp4]


Satisfiability and Evolution

Andrew Wan, Tsinghua University

How does a beneficial trait, one that is determined by the interaction between many genes, become fixed in a population? We consider the interaction between n genes and their evolving genotype frequencies, with reproduction through recombination, as a process on a Boolean function: when the Boolean function is satisfied by an assignment (i.e., a genotype), a small evolutionary advantage is conferred. Our main result shows that for any Boolean function and populations of polynomial size (polynomial in the number of genes and the inverse probability of initial satisfaction), with very high probability every individual in the population will be satisfying after polynomially many generations.
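A minimal Wright-Fisher-style simulation makes the setup concrete (a sketch with invented parameter values, not the paper's construction or proof): genotypes are bit strings, satisfying assignments of the Boolean function receive a small multiplicative fitness advantage, and offspring are produced by free recombination between two selected parents.

```python
import random

def evolve(f, n, pop_size=200, s=0.1, generations=100, seed=0):
    # Toy evolution of n-gene genotypes under recombination: satisfying
    # the Boolean function f confers a small advantage s.
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional choice of parents.
        weights = [1 + s if f(g) else 1.0 for g in pop]
        parents = rng.choices(pop, weights=weights, k=2 * pop_size)
        # Free recombination: each locus is inherited from a random parent.
        pop = [tuple(rng.choice((p[i], q[i])) for i in range(n))
               for p, q in zip(parents[::2], parents[1::2])]
    # Fraction of the final population that satisfies f.
    return sum(1 for g in pop if f(g)) / pop_size

# f is satisfied iff the first two genes are both 1 (an AND of two genes).
frac = evolve(lambda g: g[0] == 1 and g[1] == 1, n=5)
print(frac)
```

The theorem in the abstract is much stronger than what this toy run shows: it guarantees, for any Boolean function and polynomial population sizes, that with very high probability the whole population is satisfying after polynomially many generations.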

Download Video [213.6MB .mp4]


The Dynamics of Complex Adaptation

Daniel Weissman, IST Austria

When and how can populations acquire complex adaptations that need multiple mutations to provide a fitness benefit? At least for toy population models, we can find simple, universal approximations for how the dynamics of the process depend on both the genetics of the adaptation (the number of necessary mutations, their individual fitness costs and rates of occurrence, and the recombination rate among them) and the size and structure of the population. Qualitatively, the potential complexity of adaptations is generally highest in large populations undergoing an intermediate amount of recombination.

Download Video [148.7MB .mp4]

Friday, March 21st, 2014

11:15 am – 12:00 pm

Current evolutionary theory describes a Darwinian machine – i.e., heritable variation in reproductive success that assumes fixed mechanisms of variation and selection operating on a fixed reproductive unit. But, in fact, none of these mechanisms is fixed in nature. For example, the distribution of phenotypic variation changes over evolutionary time as a result of the evolution of development, the selective pressures on traits change as a result of the evolution of ecological interactions, and even the identity of the evolutionary unit changes as a result of the evolution of new reproductive strategies and new mechanisms of inheritance. The circular causality implied by an evolutionary process that alters its own mechanisms results in conceptual difficulties and controversies in many areas of evolutionary biology. However, in computer science, the idea that an algorithmic process can improve over time as a function of past experience, including its own past behaviour, has been thoroughly studied in the field of machine learning. Although many techniques in machine learning may seem unbiological at first, many of them are based on incrementally improving the fit of a model to data. Since incremental improvement is something natural selection can do, it only remains to be shown that evolution has access to a suitable model space.

Our work has shown that the selective pressures acting on regulatory interactions in a gene network are equivalent to (supervised) associative learning (Hebbian learning), well studied in the context of neural network research (1). Accordingly, a gene network can evolve a distributed associative memory of past selective environments, and the principles of associative generalisation then help us understand how this internalised model of the past can modify the distribution of phenotypic variation in a manner that makes subsequent evolution faster and better (2,3). We can then use textbook machine learning results to understand the affordances and limitations of these phenomena, including the tendency of evolution to 'over-fit' to past selective environments, failing to generalise to novel future environments, and the conditions that alleviate this and hence improve evolvability.
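The analogy can be made concrete with a textbook Hopfield-style associative memory, the neural-network setting invoked in (1) (the code is an illustrative stand-in, not the gene-network model of the papers cited): a Hebbian rule stores a pattern in the connection weights, and the network dynamics then recall it from a corrupted input.

```python
def hebbian_store(patterns):
    # Hebbian rule: strengthen the coupling between units that are
    # active together across the stored +/-1 patterns; no self-coupling.
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=10):
    # Synchronous dynamics: each unit aligns with its weighted input,
    # driving the state into a stored attractor.
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, -1, 1, -1, 1, -1]
W = hebbian_store([stored])
print(recall(W, [1, -1, 1, -1, 1, 1]))  # last unit corrupted; recovers stored
```

The claim in the abstract is that selection on regulatory interactions implements the analogue of `hebbian_store`, so that developmental dynamics come to play the role of `recall` over past selective environments.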

We have also shown that the selective pressures acting on ecological interactions in an ecosystem are equivalent to (unsupervised) associative learning, meaning that ecosystems can (also) form attractors that represent a generalised memory of past ecological states/environmental conditions (4). This shows that the organisation of ecological networks is not necessarily arbitrary but can be conditioned by past experience in a systematic manner. Beyond this, the fact that unsupervised learning can be used to 'prime' good performance at supervised learning tasks then suggests a systematic mechanism by which evolution before a transition in individuality can facilitate evolution after a transition in individuality (5,6). That is, individual selection in a system that is not yet an evolutionary unit can systematically prime for effective higher-level selection. We show that these principles can solve combinatorial optimisation problems that cannot be solved by any one level of selection (6,7).

  1. Watson, R. A., Wagner, G. P., Pavlicev, M., Weinreich, D. M., & Mills, R. (2014). The Evolution of Phenotypic Correlations and “Developmental Memory”. Evolution. (to appear)
  2. Watson, R. A., Buckley, C. L., & Mills, R. (2011). Optimization in “self‐modeling” complex adaptive systems. Complexity, 16(5), 17-26.
  3. Watson, R. A., Mills, R., & Buckley, C. L. (2011). Global adaptation in networks of selfish components: Emergent associative memory at the system scale. Artificial Life, 17(3), 147-166.
  4. Watson, R.A., Power, D.A., Szathmáry, E., Mills, R., Powers, S.T., Czapp, B. (in prep) The evolution of ecological relationships and principles of distributed learning in ecosystems
  5. Watson, R.A., Mills, R., Buckley, C.L., Kouvaris, K., Jackson, A., Powers, S.T., Cox, C., Tudge, S., Ryan, P., & Davies, A. (2014b). Evolutionary Connectionism: How the principles of associative learning operate in the evolution of developmental interactions, the evolution of ecological relationships and the evolution of new levels of individuality. Technical Report, ECS, University of Southampton, Southampton, U.K.
  6. Watson, R. A., Palmius, N., Mills, R., Powers, S. T., & Penn, A. (2011). Can selfish symbioses effect higher-level selection?. In Advances in Artificial Life. Darwin Meets von Neumann (pp. 27-36). Springer Berlin Heidelberg.
  7. Watson, R. A., Mills, R., & Buckley, C. L. (2011). Transformations in the scale of behavior and the global optimization of constraints in adaptive networks. Adaptive Behavior, 19(4), 227-249.

Joint work with Eors Szathmary, Daniel Power, Gunter Wagner, Kostas Kouvaris, Louis Kounios, Rob Mills, Chris Cox, and Adam Jackson.

1:30 pm – 5:00 pm


A Few Comments (& Questions!) from a Condensed Matter Physicist

Daniel Fisher, Stanford University

Download Video [109.6MB .mp4]


Non-Adaptive Evolvability

Jeff Clune, University of Wyoming

I summarize two separate papers that document cases where evolution failed to evolve evolvability, even though it would have benefited from doing so. Instead, in both cases, evolvability arises because of an accidental constraint of physics. The first documents that evolution fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes [1]. The second documents that evolution does not evolve modularity in networks (e.g. genetic or neural) unless there is a cost for network connections, even though such modularity improves performance and evolvability [2]. Together, these papers raise the question of how many instances of evolvability in nature arise as a by-product of some other force. The papers also offer guidance to engineers who wish to harness evolution to produce artificial intelligence on how to increase the evolvability of evolutionary algorithms.

[1] Clune J, Misevic D, Ofria C, Lenski RE, Elena SF, and Sanjuán R (2008)
Natural selection fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes. PLoS Computational Biology 4(9): e1000187.

[2] Clune J, Mouret J-B, Lipson H (2013)
The evolutionary origins of modularity. Proceedings of the Royal Society B. 280: 20122863.

Download Video [238.1MB .mp4]


Complexity of Evolutionary Equilibria in Static Fitness Landscapes

Artem Kaznatcheev, McGill University

A fitness landscape is a genetic space—with two genotypes adjacent if they differ in a single locus—and a fitness function. Evolutionary dynamics produce a flow on this landscape from lower fitness to higher, reaching equilibrium only if a local fitness peak is found. I use computational complexity to question the common assumption that evolution on static fitness landscapes can quickly reach a local fitness peak. I do this by showing that the popular NK model of rugged fitness landscapes is PLS-complete for K >= 2; there are NK fitness landscapes where every adaptive path from some vertices is of exponential length. Alternatively—under the assumption that there are problems in PLS not solvable in polynomial time—this means that there are no evolutionary dynamics (known, or to be discovered, and not necessarily following adaptive paths) that can converge to a local fitness peak on all NK landscapes with K = 2. Further, there exist single-peaked landscapes with no reciprocal sign epistasis where the expected length of an adaptive path under strong-selection weak-mutation dynamics is exp(O(n^1/3)), even though an adaptive path to the optimum of length less than n is available from every vertex. Finally, I discuss the effects of approximate equilibria and a connection to the long-term E. coli evolution experiment.
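For readers unfamiliar with the NK model, the sketch below builds a small instance and runs an adaptive walk on it (a toy implementation with invented parameters, not the hardness construction from the talk): each locus's fitness contribution depends on itself and its K cyclic neighbours, and the walk follows strictly fitness-increasing single-locus flips until it reaches a local peak.

```python
import random

def nk_fitness(n, k, seed=0):
    # Kauffman-style NK landscape: locus i's contribution depends on
    # loci i, i+1, ..., i+k (cyclically); contributions are random
    # values in [0, 1), generated lazily and cached.
    rng = random.Random(seed)
    table = {}
    def fitness(g):
        total = 0.0
        for i in range(n):
            key = (i, tuple(g[(i + j) % n] for j in range(k + 1)))
            if key not in table:
                table[key] = rng.random()
            total += table[key]
        return total / n
    return fitness

def adaptive_walk(fitness, g):
    # Greedy adaptive path: keep flipping the single locus that most
    # improves fitness; stop at a local peak.
    steps = 0
    while True:
        flips = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]
        best = max(flips, key=fitness)
        if fitness(best) <= fitness(g):
            return g, steps
        g, steps = best, steps + 1

f = nk_fitness(n=10, k=2)
peak, steps = adaptive_walk(f, (0,) * 10)
```

The PLS-completeness result says that on some instances with K >= 2, no such walk (nor any other polynomial-time dynamics, under standard assumptions) reaches a local peak quickly; small instances like this one, by contrast, terminate in a handful of steps.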


Download Video [150.9MB .mp4]


Everything not Forbidden is Allowed

Aviv Bergman, Albert Einstein College of Medicine


Explaining Adaptation in Genetic Algorithms with Uniform Crossover

Keki Burjorjee, Zite Inc.

Hyperclimbing is an intuitive, general-purpose, global optimization heuristic applicable to discrete product spaces with rugged or stochastic cost functions. The strength of this heuristic lies in its insusceptibility to local optima when the cost function is deterministic, and its tolerance for noise when the cost function is stochastic. Hyperclimbing works by decimating a search space, i.e., by iteratively fixing the values of small numbers of variables. The hyperclimbing hypothesis posits that genetic algorithms with uniform crossover (UGAs) perform optimization by implementing efficient hyperclimbing. Proof of concept for the hyperclimbing hypothesis comes from the use of an analytic technique that exploits algorithmic symmetry. By way of validation, we will present experimental results showing that a simple tweak inspired by the hyperclimbing hypothesis dramatically improves the performance of a UGA on large, random instances of MAX-3SAT and the Sherrington-Kirkpatrick spin glass problem. An exciting corollary of the hyperclimbing hypothesis is that a form of implicit parallelism more powerful than the kind described by Holland underlies optimization in UGAs. The implications of the hyperclimbing hypothesis for Evolutionary Computation and Artificial Intelligence will be discussed.
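For context, a minimal UGA looks like the sketch below (a generic genetic algorithm with uniform crossover on the OneMax toy problem, with invented parameters; it is not Burjorjee's experimental setup): tournament selection, per-bit uniform crossover, and bit-flip mutation.

```python
import random

def uga(fitness, n, pop_size=100, generations=50, mut=0.01, seed=0):
    # Minimal genetic algorithm with uniform crossover (UGA).
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # size-2 tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            # Uniform crossover: each bit comes from either parent with
            # probability 1/2, then bit-flip mutation at rate mut.
            child = [rng.choice((x, y)) for x, y in zip(p, q)]
            nxt.append([b ^ (rng.random() < mut) for b in child])
        pop = nxt
    return max(pop, key=fitness)

best = uga(sum, n=20)  # OneMax: fitness is the number of ones
print(sum(best))
```

Under the hyperclimbing hypothesis, what a run like this is "really" doing is iteratively fixing small sets of loci whose joint setting raises average fitness, i.e., decimating the search space rather than hill-climbing point by point.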

Download Video [147MB .mp4]