

The class of enumeration problems having a polynomial delay algorithm is often regarded as the class of "easy" enumeration problems. From the definition of polynomial delay, it seems that the most interesting property of such algorithms is the guarantee that one can get a new solution by waiting only a polynomial amount of time after the last output. In many scenarios, however, it is enough to have a guarantee that, for every t, the time needed to output t solutions is linear in t and polynomial in the input size. More formally, we are interested in enumeration algorithms having the following property: on input of size n, for every t, after time t*poly(n), the algorithm has output at least t solutions. We say that such algorithms have an incremental linear delay.
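The two notions are related by a standard amortization argument: an incremental-linear-delay enumerator can be turned into one whose outputs are evenly spaced by buffering its solutions and releasing one per poly(n) units of work. The sketch below illustrates this idea only; the event encoding (a 'step' marker per unit of work, a ('sol', x) pair per solution found) is an assumption made for the example, not part of the abstract.

```python
from collections import deque

def bounded_delay(events, p):
    """Regularize an incremental-linear-delay enumerator.

    `events` is an iterable yielding either the string 'step'
    (one unit of computation by the underlying algorithm) or a
    pair ('sol', x) when it finds solution x.  We buffer solutions
    and emit one every p units of work; the incremental linear
    delay guarantee (>= t solutions after t*p steps) ensures the
    buffer is nonempty whenever an emission is due.
    """
    buf = deque()
    steps = 0
    for ev in events:
        if ev == 'step':
            steps += 1
            if steps % p == 0 and buf:
                yield buf.popleft()
        else:
            buf.append(ev[1])
    # flush any solutions still buffered at the end of the run
    while buf:
        yield buf.popleft()
```

For example, with p = 2, a burst of early solutions is smoothed out so that one solution appears per two steps of work.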
When analyzing the complexity of answering queries over databases, the traditional approach asks for the total time required to compute all answers. As the number of answers may be much larger than the size of the input database, there has been a focus on analyzing this task from an enumeration point of view, i.e. bounding the delay between successive answers when they are returned one by one. The motivation here is that we can get value from seeing some of the answers before all of them are produced. A more recent shift takes this approach a step forward. Depending on the final goal of the query, we can save a significant amount of computation by solving enumeration-related problems that do not produce the entire solution set. As an example, simulating random access to a sorted list of query answers can be used to efficiently compute quantiles of the query answers. This talk will inspect enumeration-related problems in databases from a fine-grained complexity point of view and discuss their connection to enumeration.
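The quantile example can be sketched as follows: given only an access routine to the i-th answer in sorted order and the total number of answers, a q-quantile needs a single access rather than the full answer set. The `access` callback and its cost model are hypothetical here, standing in for whatever random-access structure the database system provides.

```python
def quantile(access, n, q):
    """Return the q-quantile (0 <= q <= 1) of n sorted query answers,
    using only random access `access(i)` to the i-th answer
    (0-indexed, sorted order) -- the full list is never materialized."""
    assert 0 <= q <= 1 and n > 0
    i = min(n - 1, int(q * n))
    return access(i)

# usage: a plain sorted list stands in for the query answers
answers = [3, 7, 9, 14, 20]
median = quantile(lambda i: answers[i], len(answers), 0.5)
```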
Abstract: In this talk I present a general and versatile algorithmic framework for exhaustively generating a large variety of different combinatorial objects, based on encoding them as permutations. This framework provides a unified view on several known Gray codes and yields many new ones, and it exploits and establishes several deep connections between reconfiguration graphs, posets and polytopes. The talk gives an overview of the main concepts and applications of our framework, and it is based on joint work with Jean Cardinal (UL Bruxelles), Liz Hartung (MCLA), Hung P. Hoang (ETH Zurich), Arturo Merino (TU Berlin), and Aaron Williams (Williams College).
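A classic instance of a permutation Gray code, which such frameworks generalize, is the Steinhaus–Johnson–Trotter algorithm: it lists all permutations of {1,...,n} so that consecutive permutations differ by a single adjacent transposition. The sketch below shows this textbook algorithm, not the framework from the talk.

```python
def permutations_sjt(n):
    """Steinhaus-Johnson-Trotter: generate all permutations of
    1..n, consecutive ones differing by one adjacent swap."""
    perm = list(range(1, n + 1))
    dirs = [-1] * n  # every element initially points left
    yield tuple(perm)
    while True:
        # find the largest "mobile" element: one pointing at a
        # smaller neighbor in its current direction
        mobile, mi = -1, -1
        for i in range(n):
            j = i + dirs[i]
            if 0 <= j < n and perm[i] > perm[j] and perm[i] > mobile:
                mobile, mi = perm[i], i
        if mi == -1:
            return  # no mobile element: all permutations generated
        # swap the mobile element with the neighbor it points at
        j = mi + dirs[mi]
        perm[mi], perm[j] = perm[j], perm[mi]
        dirs[mi], dirs[j] = dirs[j], dirs[mi]
        # reverse the direction of every element larger than it
        for i in range(n):
            if perm[i] > mobile:
                dirs[i] = -dirs[i]
        yield tuple(perm)
```

For n = 3 this produces the listing 123, 132, 312, 321, 231, 213, in which each step is an adjacent transposition.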
Abstracts of the papers are available for registered participants on this page.