Assorted links on assorted topics. I highlight the two items on datacenters, a kind of supercomputer or HPC system with very specific needs, particularly regarding its interconnection network, which is one of its two fundamental components.
1. Future trends in supercomputing.
This is how the CTO of an HPC company sees the future. The takeaway for me is that, at the next step up, InfiniBand and Gigabit Ethernet (GigE) networks are going to be replaced by proprietary interconnects.
CPR = Checkpoint Restart.
RAS = Reliability, Availability, and Serviceability.
Related to 1. Book (2011): High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities.
Written by Dennis Abts and John Kim.
A data center is a pool of computing resources clustered together using communication networks to host applications and store data. Major information and communication technology components of the data center are the following: (a) servers and (b) network infrastructure. Conventional data centers are modeled as a multilayer hierarchical network with thousands of low-cost commodity servers as the network nodes.
Data centers are experiencing exponential growth in the number of hosted servers. Google, Yahoo, and Microsoft already host hundreds of thousands of servers in their respective data centers [1, 2]. Google had more than 450,000 servers in 2006 [3, 4], and the number of servers at Microsoft's data centers doubles every 14 months.
The increased number of servers demands a fault-tolerant, cost-effective, and scalable network architecture with maximum inter-node communication bandwidth. Another important design aspect of the data center is the use of low-cost commodity equipment.
The server portion of data centers has experienced enormous commoditization, and low-cost commodity servers are used in data centers instead of high-end enterprise servers. However, the network portion of the data center has not seen much commoditization and still uses enterprise-class networking equipment.
The increased number of servers demands high end-to-end bisection bandwidth. Enterprise-class network equipment is expensive, power hungry, and not designed to accommodate Internet-scale services in data centers. Therefore, the use of enterprise-class equipment leads to limited end-to-end network capacity, non-agility, and the creation of fragmented server pools.
The data center network (DCN) is typically based on the three-tier architecture, a hierarchical tree-based structure comprising three layers of switching and routing elements, with enterprise-class high-end equipment in the higher layers of the hierarchy. An example of the three-tier DCN architecture is shown in Figure 1 [7, 8]. Unfortunately, deploying even the highest-end enterprise-class equipment may provide only 50% of the end-to-end aggregate bandwidth. To accommodate the growing demands of data center communication, new DCN architectures need to be designed. Most Internet communication in the future is expected to take place within data centers.
Many applications hosted by data centers are communication intensive; for example, a simple web search request may touch more than 1,000 servers. The communication pattern in a data center may be one-to-one, all-to-all, or one-to-all. The major challenges in DCN design include the following: (a) scalability, (b) agility, (c) fault tolerance, (d) end-to-end bisection bandwidth, (e) robustness against a single point of failure, (f) automated naming and address allocation, and (g) backward compatibility. The network architecture is a major part of data center design, acting as a communication backbone, and requires careful consideration. Numerous DCN architectures have been proposed in recent years [9, 10, 12–18]. This paper provides a comparative study and analysis of the major recently proposed DCN architectures by implementing: (a) the proposed network architectures, (b) customized addressing schemes, (c) customized routing schemes, and (d) different network traffic patterns.
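To make the three-tier structure concrete, here is a minimal sketch of my own (the tier names follow the abstract above; the fan-out parameters are invented for illustration, not taken from the paper). It builds the core/aggregation/access tree as an edge list and shows why bisection bandwidth suffers: many servers funnel through a single access-switch uplink.

```python
# Toy three-tier DCN topology: core -> aggregation -> access -> servers.
# Parameters are illustrative only; real deployments differ.

def three_tier(cores=2, aggs_per_core=2, access_per_agg=2, servers_per_access=4):
    """Return (edge list, server count) of a simple three-tier tree."""
    edges = []
    servers = 0
    for c in range(cores):
        core = f"core{c}"
        for a in range(aggs_per_core):
            agg = f"agg{c}.{a}"
            edges.append((core, agg))
            for t in range(access_per_agg):
                acc = f"acc{c}.{a}.{t}"
                edges.append((agg, acc))
                for _ in range(servers_per_access):
                    edges.append((acc, f"srv{servers}"))
                    servers += 1
    return edges, servers

edges, n_servers = three_tier()
print(n_servers)   # 2 * 2 * 2 * 4 = 32 servers
print(len(edges))  # 4 core-agg + 8 agg-acc + 32 server links = 44
# Each access switch funnels 4 servers through 1 uplink: 4:1
# oversubscription at the very first hop, before the higher tiers.
```

With uniform link capacities, the oversubscription ratio multiplies at each tier, which is why the abstract notes that even high-end equipment may deliver only a fraction of the aggregate bandwidth.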
3. Expanders and datacenters.
Expander graphs (aka expanders) have been the object of extensive study in theoretical computer science and mathematics (see e.g. the survey of ). Originally introduced in the context of building robust, high-performance communication networks, expanders are both very natural from a purely mathematical perspective and play a key role in a host of other applications (from complexity theory to coding). While d-regular random graphs are, in fact, very good expanders [4, 5], many applications require explicit, deterministic constructions of expanders.
Consequently, a rich body of literature in graph theory deals with deterministic constructions of expanders, of which the best known examples are Margulis's construction (with Gabber and Galil's analysis), algebraic constructions involving Cayley graphs such as that of Lubotzky, Phillips, and Sarnak, constructions that utilize the zig-zag product, and constructions that rely on the concept of 2-lifts [3, 9].
All of these constructions generate an infinite family of d-regular expanders. However, for important applications of expanders that arise in computer networking, this is not enough. Our primary motivating example is datacenters, which network an unprecedented number of computational nodes and are the subject of much recent attention in the networking research community.
Consider a datacenter network represented as a graph, in which each vertex represents a rack of servers, and edges represent communication links between these racks (or, more accurately, between the so-called "top-of-rack switches"). Expanders are natural candidates for datacenter network topologies as they fare well with respect to crucial objectives such as fault-tolerance and throughput [2, 7].
However, the number of racks n in a datacenter grows regularly as new equipment is purchased and old equipment is upgraded, calling for an expander construction that can grow gracefully (see discussion of industry experience in , and references therein).
Our construction technique is to first deterministically construct an infinite sequence of "extremely good" expanders by starting from the complete graph K_{d/2+1} and repeatedly "2-lifting" the graph. This standard and well-studied approach to explicitly constructing an infinite sequence of expanders was introduced in the seminal work of Bilu and Linial. However, as every 2-lift doubles the size of the graph, this construction can only generate expanders on n vertices where n = 2^i (d/2 + 1) for some i ≥ 1. We show how to "interpolate" between these graphs. Intuitively, rather than doubling the number of vertices all at once, we insert new vertices one at a time until reaching the next Bilu-Linial expander in the sequence. Our construction and proof crucially utilize the properties of 2-lifts, as well as the flexibility afforded to us by using multigraphs.
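As a toy illustration of the 2-lift operation the abstract relies on (my own sketch; the Bilu-Linial construction chooses edge signings via a spectral argument, whereas here they are random), the following shows the mechanics: every vertex is duplicated, +1 edges are copied "in parallel" and -1 edges are copied "crossed", so the lift doubles the vertex count while preserving d-regularity.

```python
from itertools import combinations
import random

def two_lift(edges, signs):
    """One 2-lift: duplicate every vertex; +1 edges copied in parallel,
    -1 edges copied crossed. Doubles vertices, keeps regularity."""
    lifted = []
    for (u, v), s in zip(edges, signs):
        if s == +1:
            lifted += [((u, 0), (v, 0)), ((u, 1), (v, 1))]
        else:
            lifted += [((u, 0), (v, 1)), ((u, 1), (v, 0))]
    return lifted

def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

base = list(combinations(range(4), 2))            # K_4: 3-regular, 4 vertices
random.seed(0)
signs = [random.choice([+1, -1]) for _ in base]   # random signing, for illustration
lifted = two_lift(base, signs)

assert len(lifted) == 2 * len(base)               # 12 edges
assert set(degrees(lifted).values()) == {3}       # still 3-regular, now on 8 vertices
```

Iterating this from a fixed base graph is exactly why the sizes reachable are n = 2^i (d/2 + 1): each step doubles n, which is the gap the paper's interpolation fills.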
Vertex-transitive graphs and their arc-types
Marston Conder, Tomaž Pisanski, and Arjana Žitnik, 6 May 2015.
Let X be a finite vertex-transitive graph of valency d, and let A be the full automorphism group of X. Then the arc-type of X is defined in terms of the sizes of the orbits of the action of the stabiliser A_v of a given vertex v on the set of arcs incident with v. Specifically, the arc-type is the partition of d as the sum n_1 + n_2 + ... + n_t + (m_1 + m_1) + (m_2 + m_2) + ... + (m_s + m_s), where n_1, n_2, ..., n_t are the sizes of the self-paired orbits, and m_1, m_1, m_2, m_2, ..., m_s, m_s are the sizes of the non-self-paired orbits, in descending order. In this paper, we find the arc-types of several families of graphs. Also we show that the arc-type of a Cartesian product of two 'relatively prime' graphs is the natural sum of their arc-types. Then using these observations, we show that with the exception of 1 + 1 and (1 + 1), every partition as defined above is realisable, in the sense that there exists at least one graph with the given partition as its arc-type.
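A rough illustration of the orbit computation in the definition above (my own toy code, not from the paper; it computes only orbit sizes of the stabiliser on incident arcs, without classifying orbits as self-paired or not). For the 4-cycle C_4, the stabiliser of a vertex swaps its two neighbours, giving a single orbit of size 2, i.e. arc-type "2".

```python
from itertools import permutations

def automorphisms(n, edges):
    """Brute-force all automorphisms of a small graph on vertices 0..n-1."""
    eset = {frozenset(e) for e in edges}
    for p in permutations(range(n)):
        if all(frozenset((p[u], p[v])) in eset for u, v in edges):
            yield p

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # C_4
stab = [p for p in automorphisms(4, edges) if p[0] == 0]   # stabiliser of vertex 0
neighbours = [1, 3]                          # arcs at vertex 0 point to these

orbits, seen = [], set()
for w in neighbours:
    if w in seen:
        continue
    orbit = {p[w] for p in stab}             # orbit of the arc (0, w) under stab
    seen |= orbit
    orbits.append(len(orbit))

print(sorted(orbits, reverse=True))          # [2]: one orbit of size 2
```

The orbit sizes sum to the valency d = 2, as the partition in the definition requires.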
This is the database used for a competition on automatic image classification that has been organized since 2010.
7. HPC and computational complexity. Three links.
8. ETH. Nondeterministic extensions of the Strong Exponential Time Hypothesis and consequences for non-reducibility.
We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences.
In particular we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH
implies non-reducibility results, i.e. the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence we conclude that unless this hypothesis fails, problems such as 3-Sum, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.
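For context (my own sketch, not from the paper): SETH concerns the trivial exhaustive-search baseline for CNF-SAT, which tries all 2^n assignments; the hypothesis asserts that no algorithm improves the base of that exponent to 2 - δ for every clause width.

```python
from itertools import product

# A literal is a nonzero int: +i means variable i true, -i means false.

def brute_force_sat(n_vars, clauses):
    """Try all 2^n assignments -- the running time SETH says is near-optimal."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3): satisfiable
print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))   # True
# x1 and not x1: unsatisfiable
print(brute_force_sat(1, [[1], [-1]]))                   # False
```

Fine-grained reductions as in the abstract transfer this conjectured 2^n barrier to problems like 3-Sum and APSP; NSETH is the obstacle to doing so deterministically.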
Aviad Rubinstein, October 19, 2015.
We prove that, assuming the exponential time hypothesis, finding an ε-approximately optimal symmetric signaling scheme in a two-player zero-sum game requires quasi-polynomial time, n^{Ω̃(lg n)}. This is tight by  and resolves an open question of Dughmi. We also prove that finding a multiplicative approximation is NP-hard.
9. An Automatic Inequality Prover and Instance Optimal Identity Testing.
Gregory Valiant (Stanford University) and Paul Valiant (Brown University), January 2, 2015.
I don't remember why I collected this article.
We consider the problem of verifying the identity of a distribution: given the description of a distribution over a discrete support p = (p_1, p_2, ..., p_n), how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p = q from the case that the total variation distance (L_1 distance) satisfies ||p − q||_1 ≥ ε? We resolve this question, up to constant factors, on an instance-by-instance basis: there exist universal constants c, c' and a function f(p, ε) on distributions and error parameters, such that our tester distinguishes p = q from ||p − q||_1 ≥ ε using f(p, ε) samples with success probability > 2/3, but no tester can distinguish p = q from ||p − q||_1 ≥ c · ε when given c' · f(p, ε) samples. The function f(p, ε) is upper-bounded by a multiple of ||p||_{2/3}/ε², but is more complicated, and is significantly smaller in some cases when p has many small domain elements, or a single large one. This result significantly generalizes and tightens previous results: since distributions of support at most n have L_{2/3} norm bounded by √n, this result immediately shows that for such distributions, O(√n/ε²) samples suffice, tightening the previous bound of O(√n · polylog(n)/ε⁴) for this class of distributions, and matching the (tight) known results for the case that p is the uniform distribution over support n. The analysis of our very simple testing algorithm involves several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities, generalizing Cauchy-Schwarz, Hölder's inequality, and the monotonicity of L_p norms. Specifically, we characterize the set of sequences of triples (a, b, c)_i = (a_1, b_1, c_1), ..., (a_r, b_r, c_r) for which it holds that for all finite sequences of positive numbers (x)_j = x_1, ... and (y)_j = y_1, ...,

∏_{i=1}^{r} ( Σ_j x_j^{a_i} y_j^{b_i} )^{c_i} ≥ 1.
For example, the standard Cauchy-Schwarz inequality corresponds to the triples (a, b, c)_i = (1, 0, 1/2), (0, 1, 1/2), (1/2, 1/2, −1). Our characterization is of a non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought through trial and error, by hand. We do not believe such a characterization has appeared in the literature, and hope its computational nature will be useful to others, and facilitate analyses like the one here.
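A quick numeric sanity check (my own code, not from the paper) of the product form above: evaluate ∏_i (Σ_j x_j^{a_i} y_j^{b_i})^{c_i} for the Cauchy-Schwarz triples on random positive sequences and confirm it never drops below 1.

```python
import random

def triple_product(triples, xs, ys):
    """Evaluate prod_i ( sum_j x_j**a * y_j**b ) ** c for triples (a, b, c)."""
    out = 1.0
    for a, b, c in triples:
        out *= sum(x**a * y**b for x, y in zip(xs, ys)) ** c
    return out

# (1, 0, 1/2), (0, 1, 1/2), (1/2, 1/2, -1): unwinding the product gives
# (sum x)^(1/2) (sum y)^(1/2) / sum sqrt(x y) >= 1, i.e. Cauchy-Schwarz.
cauchy_schwarz = [(1, 0, 0.5), (0, 1, 0.5), (0.5, 0.5, -1)]

random.seed(1)
for _ in range(1000):
    xs = [random.uniform(0.01, 10) for _ in range(5)]
    ys = [random.uniform(0.01, 10) for _ in range(5)]
    assert triple_product(cauchy_schwarz, xs, ys) >= 1 - 1e-9
print("ok")
```

The paper's contribution is of course the converse direction: deciding, via linear programming, whether a given sequence of triples admits such a guarantee at all.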
I wasn't familiar with this one.
Satisfiability of random Boolean expressions built from many clauses with K variables per clause (random K-satisfiability) is a fundamental problem in combinatorial discrete optimization. Here we study random K-satisfiability for K = 3 and K = 4 using the Backtracking Survey Propagation algorithm. This algorithm is able to find, in a time linear in the problem size, solutions within a region never reached before, very close to the SAT-UNSAT threshold, and even beyond the freezing threshold. For K = 3 the algorithmic threshold practically coincides with the SAT-UNSAT threshold. We also study the whitening process on all the solutions found by the Backtracking Survey Propagation algorithm: none contains frozen variables, and the whitening procedure is able to remove all variables, following a two-step process, in a time that diverges approaching the algorithmic threshold.
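A minimal sketch of the random-instance model studied above (assumptions mine: the standard model drawing each clause over k distinct variables with independent random signs). The control parameter is the clause density α = M/N; for K = 3 the SAT-UNSAT threshold sits near α ≈ 4.267, the regime the abstract refers to.

```python
import random

def random_ksat(n_vars, alpha, k=3, seed=0):
    """Random K-SAT formula with m = int(alpha * n_vars) clauses.
    Literals are nonzero ints: +i for variable i, -i for its negation."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(int(alpha * n_vars)):
        vs = rng.sample(range(1, n_vars + 1), k)   # k distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

f = random_ksat(100, 4.2)
print(len(f))   # 420 clauses: density 4.2, just below the K = 3 threshold
```

Instances generated this way at densities just below the threshold are the hard benchmarks on which an algorithm like Backtracking Survey Propagation is evaluated.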
Ankur Moitra, © draft date March 30, 2014.