After a brief hiatus in which we had to focus on other matters, we resume our posts on these topics.

An interesting article on graph isomorphism. On the theory side there is Babai's recent result. And there is also a result, published yesterday, proving that one method for solving this problem (obviously not Babai's, but apparently one of the most widely used in practice) is exponential in the worst case.

**An exponential lower bound for Individualization-Refinement algorithms for Graph Isomorphism**

Daniel Neuen and Pascal Schweitzer, RWTH Aachen University, {neuen,schweitzer}@informatik.rwth-aachen.de

May 10, 2017

**Abstract.** *The individualization-refinement paradigm provides a strong toolbox for testing isomorphism of two graphs and indeed, the currently fastest implementations of isomorphism solvers all follow this approach. While these solvers are fast in practice, from a theoretical point of view, no general lower bounds concerning the worst case complexity of these tools are known. In fact, it is an open question whether individualization-refinement algorithms can achieve upper bounds on the running time similar to the more theoretical techniques based on a group theoretic approach. In this work we give a negative answer to this question and construct a family of graphs on which algorithms based on the individualization-refinement paradigm require exponential time. Contrary to a previous construction of Miyazaki, that only applies to a specific implementation within the individualization-refinement framework, our construction is immune to changing the cell selector, or adding various heuristic invariants to the algorithm. Furthermore, our graphs also provide exponential lower bounds in the case when the k-dimensional Weisfeiler-Leman algorithm is used to replace the standard color refinement operator and the arguments even work when the entire automorphism group of the inputs is initially provided to the algorithm.*

**Excerpt.** *There are several highly efficient isomorphism software packages implementing the paradigm. Among them are nauty/traces [15], bliss [11], conauto [13] and saucy [6]. While they all follow the basic individualization-refinement paradigm, these algorithms differ drastically in design principles and algorithmic realization. In particular, they differ in the way the search tree is traversed, they use different low level subroutines, have diverse ways to perform tasks such as automorphism detection, and they use different cell selection strategies as well as vertex invariants and refinement operators.*

*With Babai’s [2] recent quasi-polynomial time algorithm for the graph isomorphism problem, the theoretical worst case complexity of algorithms for the graph isomorphism problem was drastically improved from a previous best e^{O(√(n log n))} (see [3]) to O(n^{log^c n}) for some constant c ∈ N. As an open question, Babai asks [2] for the worst case complexity of algorithms based on individualization-refinement techniques. About this worst case complexity, very little had been known. In 1995 Miyazaki [16] constructed a family of graphs on which the then current implementation of nauty has exponential running time. For this purpose these graphs are designed to specifically fool the cell selection process into exponential behavior. However, as Miyazaki also argues, with a different cell selection strategy the examples can be solved in polynomial time within the individualization-refinement paradigm. In this paper we provide general lower bounds for individualization-refinement algorithms with arbitrary combinations of cell selection, refinement operators, invariants and even given perfect automorphism pruning. More precisely, the graphs we provide yield an exponential size search tree (i.e., 2^{Ω(n)} nodes) for any combination of refinement operator, invariants, and the cell selector which are not stronger than the k-dimensional Weisfeiler-Leman algorithm for some fixed dimension k. The natural class of algorithms for which we thus obtain lower bounds encompasses all software packages mentioned above even with various combinations of switches that can be turned on and off in the execution of the algorithm to tune the algorithms towards specific input graphs. Our graphs are asymmetric, i.e., have no non-trivial automorphisms, and thus no strategy for automorphism detection can help the algorithm to circumvent the exponential lower bound.*
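As background for the refinement operators discussed in the excerpt, here is a minimal sketch of color refinement, the 1-dimensional Weisfeiler-Leman algorithm that standard solvers use as their refinement step. The function names and the adjacency-dict representation are our own illustration, not taken from the paper:

```python
from collections import defaultdict  # not strictly needed; shown for context


def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman (color refinement): iteratively
    recolor each vertex by its own color plus the multiset of its
    neighbors' colors, until the color partition stabilizes."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    while True:
        # signature: (own color, sorted multiset of neighbor colors)
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # canonically renumber the distinct signatures
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new_colors = {v: palette[sigs[v]] for v in adj}
        if new_colors == colors:
            return colors  # stable partition reached
        colors = new_colors


# A path on 3 vertices vs. a triangle: the stable color histograms differ,
# so the two graphs cannot be isomorphic.
path = {0: [1], 1: [0, 2], 2: [1]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(sorted(color_refinement(path).values()))      # endpoints vs. middle
print(sorted(color_refinement(triangle).values()))  # all vertices alike
```

The point of the lower-bound construction above is precisely that graphs exist on which this refinement (and even its k-dimensional generalization) fails to split the color classes, forcing the solver to branch exponentially.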

The second paper reports on the state of the art in practice after this result.

**Benchmark Graphs for Practical Graph Isomorphism**

Daniel Neuen and Pascal Schweitzer

May 11, 2017

**Abstract.** *The state-of-the-art solvers for the graph isomorphism problem can readily solve generic instances with tens of thousands of vertices. Indeed, experiments show that on inputs without particular combinatorial structure the algorithms scale almost linearly. In fact, it is non-trivial to create challenging instances for such solvers and the number of difficult benchmark graphs available is quite limited. We describe a construction to efficiently generate small instances for the graph isomorphism problem that are difficult or even infeasible for said solvers. Up to this point the only other available instances posing challenges for isomorphism solvers were certain incidence structures of combinatorial objects (such as projective planes, Hadamard matrices, Latin squares, etc.). Experiments show that starting from 1500 vertices our new instances are several orders of magnitude more difficult on comparable input sizes. More importantly, our method is generic and efficient in the sense that one can quickly create many isomorphism instances on a desired number of vertices. In contrast to this, said combinatorial objects are rare and difficult to generate and with the new construction it is possible to generate an abundance of instances of arbitrary size. Our construction hinges on the multipedes of Gurevich and Shelah and the Cai-Fürer-Immerman gadgets that realize a certain abelian automorphism group and have repeatedly played a role in the context of graph isomorphism. Exploring limits of such constructions, we also explain that there are group theoretic obstructions to generalizing the construction with non-abelian gadgets.*
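To see why the pruning performed by these solvers matters at all, contrast them with the naive baseline: trying every vertex bijection. The sketch below (our own illustration, not code from the paper; real solvers never enumerate all n! bijections) is correct but exponential on every input:

```python
from itertools import permutations


def naive_isomorphic(adj1, adj2):
    """Brute-force isomorphism test: try every vertex bijection.
    Runs in n! time even on trivially easy inputs -- the search-tree
    pruning of individualization-refinement solvers exists to avoid
    exactly this. Graphs are dicts mapping a vertex to its neighbors,
    with each undirected edge stored in both directions."""
    if len(adj1) != len(adj2):
        return False
    edges2 = {(a, b) for a in adj2 for b in adj2[a]}
    # quick necessary condition: equal number of (directed) edge entries
    if sum(len(ns) for ns in adj1.values()) != len(edges2):
        return False
    vs1 = sorted(adj1)
    for perm in permutations(sorted(adj2)):
        m = dict(zip(vs1, perm))  # candidate bijection V1 -> V2
        if all((m[u], m[w]) in edges2 for u in adj1 for w in adj1[u]):
            return True
    return False


path = {0: [1], 1: [0, 2], 2: [1]}
relabeled = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(naive_isomorphic(path, relabeled))  # True
```

The benchmark graphs of the paper are engineered so that even the clever pruning of practical solvers degenerates toward this kind of exhaustive branching.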

**Excerpts.**

*The practical algorithms underlying these solvers differ from the ones employed to obtain theoretical results. Indeed, there is a big disconnect between theory and practice [2]. One could interpret Babai’s recent breakthrough, the quasipolynomial time algorithm [1], as a first step of convergence. The result implies that if graph isomorphism is NP-complete then all problems in NP have quasi-polynomial time algorithms, which may lead one to also theoretically believe that graph isomorphism is not NP-complete.*

P.S. Another paper, unrelated to the previous ones, that we found interesting.

**A Survey of Shortest-Path Algorithms**

Amgad Madkour, Walid G. Aref (Purdue University, West Lafayette, USA); Faizan Ur Rehman, Mohamed Abdur Rahman, Saleh Basalamah (Umm Al-Qura University, Makkah, KSA)

May 8, 2017

**Abstract.** *A shortest-path algorithm finds a path containing the minimal cost between two vertices in a graph. A plethora of shortest-path algorithms is studied in the literature that span across multiple disciplines. This paper presents a survey of shortest-path algorithms based on a taxonomy that is introduced in the paper. One dimension of this taxonomy is the various flavors of the shortest-path problem. There is no one general algorithm that is capable of solving all variants of the shortest-path problem due to the space and time complexities associated with each algorithm. Other important dimensions of the taxonomy include whether the shortest-path algorithm operates over a static or a dynamic graph, whether the shortest-path algorithm produces exact or approximate answers, and whether the objective of the shortest-path algorithm is to achieve time-dependence or is to only be goal directed. This survey studies and classifies shortest-path algorithms according to the proposed taxonomy. The survey also presents the challenges and proposed solutions associated with each category in the taxonomy.*
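As a concrete anchor for the taxonomy, the classic exact method for static graphs with non-negative weights is Dijkstra's algorithm. A minimal single-pair sketch (our own illustration; the survey covers many more variants):

```python
import heapq


def dijkstra(graph, source, target):
    """Single-pair shortest path via Dijkstra's algorithm: an exact,
    static-graph method requiring non-negative edge weights.
    `graph` maps each vertex to a list of (neighbor, weight) pairs.
    Returns the cost of a cheapest path, or infinity if unreachable."""
    dist = {source: 0}
    heap = [(0, source)]  # priority queue of (tentative cost, vertex)
    while heap:
        d, v = heapq.heappop(heap)
        if v == target:
            return d
        if d > dist.get(v, float('inf')):
            continue  # stale queue entry; a shorter path was found
        for u, w in graph.get(v, []):
            nd = d + w
            if nd < dist.get(u, float('inf')):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return float('inf')  # target unreachable from source


g = {'A': [('B', 1), ('C', 4)],
     'B': [('C', 2), ('D', 6)],
     'C': [('D', 3)]}
print(dijkstra(g, 'A', 'D'))  # 1 + 2 + 3 = 6
```

Negative weights, dynamic graphs, or approximate answers each move the problem into a different cell of the survey's taxonomy, with different algorithms (Bellman-Ford, dynamic shortest-path trees, goal-directed A* and so on).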
