HPC. HiPINEB 2015 : IEEE International Workshop on High-Performance Interconnection Networks Towards the Exascale and Big-Data Era.

While preparing this post, I have been looking for the most up-to-date information on interconnection network technologies, and I came across this link to a recent workshop, from September 2015.

Networks for compute and networks for memory, that is, HPC and Big Data, are clearly converging.

The event's web page, with access to the slides and the papers that were finally presented.


By the year 2018, High-Performance Computing (HPC) systems are expected to break the Exaflop performance barrier (10^18 FLOPS) while keeping power consumption at current levels, or increasing it only marginally: the Exascale challenge. In addition, ever more storage capacity and data-access speed are demanded of HPC clusters and datacenters to manage and store the huge amounts of data produced by software applications: the Big-Data challenge. Consequently, the Exascale and Big-Data challenges are driving the technological revolution of this decade, motivating significant research and development efforts from both industry and academia. In this context, the interconnection network plays an essential role in the architecture of HPC systems and datacenters, as the number of processing or storage nodes that must be interconnected is very likely to grow significantly to meet the higher computing and storage demands. The interconnection network must therefore provide high communication bandwidth and low latency, otherwise it becomes the bottleneck of the entire system. In that regard, many design aspects should be considered to improve interconnection network performance: topology, routing algorithms, power consumption, reliability and fault tolerance, congestion control, programming models, control software, etc.
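A quick back-of-envelope sketch helps put these numbers in perspective. The per-node performance and communication intensity below are my own illustrative assumptions, not figures from the call:

```python
# Back-of-envelope sketch (illustrative numbers, not from the CFP):
# how many nodes an Exaflop machine needs at a given per-node performance,
# and the injection bandwidth each node then asks of the interconnect.

EXAFLOP = 1e18  # target system performance, flop/s

def exascale_estimate(node_flops, bytes_per_flop):
    """Estimate node count and per-node injection bandwidth.

    node_flops     -- sustained flop/s per node (assumption)
    bytes_per_flop -- bytes communicated per flop computed (assumption)
    """
    nodes = EXAFLOP / node_flops
    bw_per_node = node_flops * bytes_per_flop  # bytes/s injected per node
    return int(nodes), bw_per_node

# Example: hypothetical 10 Tflop/s nodes, 0.01 byte moved per flop
nodes, bw = exascale_estimate(10e12, 0.01)
print(nodes)        # number of nodes the network must interconnect
print(bw / 1e9)     # injection bandwidth per node, in GB/s
```

With these assumptions the network has to interconnect on the order of 100,000 nodes at roughly 100 GB/s each, which is why topology, routing and congestion control dominate the list of topics below.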

The main goal of this workshop is to gather and discuss, in a full-day event, the latest groundbreaking advances in the design, development and configuration of scalable high-performance interconnection networks, especially those oriented towards meeting the Exascale challenge and Big-Data demands.

All researchers and professionals, both from industry and academia, working in the area of interconnection networks for scalable HPC systems and Datacenters are encouraged to submit an original paper to the workshop and to attend this event.


The list of topics covered by this workshop includes, but is not limited to, the following:

* High-speed, low-latency interconnect architectures and technologies
* Scalable network topologies, suitable for interconnecting a very large number of nodes
* Power saving policies and techniques in interconnect components and network infrastructure, at both the software and hardware levels
* Innovative configuration of the network control software
* High-performance frameworks for distributed applications: MPI, RDMA, Hadoop, etc.
* APIs and support for programming models
* Routing algorithms
* Quality of Service (QoS)
* Reliability and Fault tolerance
* Load balancing and traffic scheduling
* Network Virtualization
* Congestion Management
* Applications and Traffic characterization
* Modeling and simulation tools
* Performance Evaluation

Note, however, that papers focused on topics that are too far from the design, development and configuration of high-performance interconnects for HPC systems and Datacenters (e.g., mobile networks, intrusion detection, peer-to-peer networks or grid/cloud computing) will be automatically considered as out of scope and rejected without review.

Although it will be held in Chicago, the organizers are thoroughly Spanish:


* Pedro Javier Garcia, University of Castilla-La Mancha, Spain
* Jesus Escudero-Sahuquillo, Technical University of Valencia, Spain

And participation is completely international.

Several presentations deal with the Dragonfly topology, which was introduced in this paper.
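For context, the Dragonfly (presumably the linked paper is Kim, Dally, Scott and Abts, ISCA 2008) is a two-level hierarchy: p terminals per router, a routers per fully connected group, h global links per router, with the balanced configuration a = 2p = 2h. A minimal sketch of the maximum system size this yields, under that balance rule:

```python
def dragonfly_size(p):
    """Max terminal count of a balanced dragonfly.

    p -- terminals per router; the balance rule a = 2p, h = p
    follows Kim, Dally, Scott & Abts (ISCA 2008).
    """
    a, h = 2 * p, p        # routers per group, global links per router
    groups = a * h + 1     # one global link between every pair of groups
    return a * p * groups  # terminals = routers/group * terminals/router * groups

# Example: p = 4 gives 8-router groups, 33 groups, 1056 terminals
print(dragonfly_size(4))
```

The point of the design is that the terminal count grows with the fourth power of the router radix while keeping the hop count low, which is exactly the scalability question the panel raises below.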

Also of interest:

  How can we dramatically increase network scalability?

In their talks, the panelists will address the following questions:

  • In order to reach Exascale performance, what are the necessary changes we need to introduce in the interconnection network?
  • Using photonics to reduce signal attenuation seems to be mandatory, but at what levels? Just for node-to-node interconnects? Within the motherboard? Within the processor chip?
  • Most of the latency is not in the interconnect hardware. How should communication protocols be modified? How will those changes affect the programming model for massively parallel applications?
  • The interconnect is consuming an increasing fraction of the system's power. In addition to using photonics to reduce losses, what other techniques should be implemented to reduce power consumption? How will they affect network congestion and message latency (average latency and jitter)?



