While preparing this response, I looked for the most up-to-date information on interconnection network technologies, and I found this link to a recent workshop, from September 2015.
Networks for processing and for memory, that is, HPC and Big Data, are clearly converging.
By the year 2018, High-Performance Computing (HPC) systems are expected to break the Exaflop performance barrier (10^18 flops) while keeping their power consumption at current levels (or increasing it only marginally); this is known as the Exascale challenge. In addition, more storage capacity and higher data-access speed are demanded of HPC clusters and datacenters to manage and store the huge amounts of data produced by software applications; this is known as the Big-Data challenge. Consequently, the Exascale and Big-Data challenges are driving the technological revolution of this decade, motivating significant research and development efforts from both industry and academia.

In this context, the interconnection network plays an essential role in the architecture of HPC systems and datacenters, as the number of processing or storage nodes that must be interconnected in these systems is very likely to grow significantly to meet the higher computing and storage demands. Therefore, the interconnection network must provide high communication bandwidth and low latency; otherwise, the network will become the bottleneck of the entire system. In that regard, many design aspects should be considered for improving interconnection network performance, such as topology, routing algorithm, power consumption, reliability and fault tolerance, congestion control, programming models, control software, etc.
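To get a feel for what "Exaflop performance at current power levels" implies, here is a back-of-envelope sketch. The ~20 MW power envelope is a commonly cited community target, not a figure from this announcement:

```python
# Back-of-envelope: energy efficiency an Exascale system would need.
# The 20 MW budget is an assumed (commonly cited) target, not from the text.
exaflop = 1e18          # floating-point operations per second
power_budget_w = 20e6   # assumed power envelope: 20 MW

# Required efficiency in GFLOPS per watt
gflops_per_watt = exaflop / power_budget_w / 1e9
print(gflops_per_watt)  # 50.0
```

That is, roughly 50 GFLOPS/W across the whole machine, interconnect included, which is why the network's share of the power budget comes up repeatedly below.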
The main goal of this workshop is to gather and discuss, in a full-day event, the latest and groundbreaking advances in the design, development and configuration of scalable high-performance interconnection networks, especially those oriented towards meeting the Exascale challenge and Big-data demands.
All researchers and professionals, both from industry and academia, working in the area of interconnection networks for scalable HPC systems and Datacenters are encouraged to submit an original paper to the workshop and to attend this event.
TOPICS OF INTEREST
The list of topics covered by this workshop includes, but is not limited to, the following:
* High-speed, low-latency interconnect architectures and technologies
* Scalable network topologies, suitable for interconnecting a very large number of nodes
* Power saving policies and techniques in interconnect components and network infrastructure, at both the software and hardware levels
* Innovative configuration of the network control software
* High-performance frameworks for distributed applications: MPI, RDMA, Hadoop, etc.
* APIs and support for programming models
* Routing algorithms
* Quality of Service (QoS)
* Reliability and Fault tolerance
* Load balancing and traffic scheduling
* Network Virtualization
* Congestion Management
* Applications and Traffic characterization
* Modeling and simulation tools
* Performance Evaluation
Note, however, that papers focused on topics that are too far from the design, development and configuration of high-performance interconnects for HPC systems and Datacenters (e.g., mobile networks, intrusion detection, peer-to-peer networks or grid/cloud computing) will be automatically considered as out of scope and rejected without review.
Although it will take place in Chicago, the organizers are Spanish through and through:
* Pedro Javier Garcia, University of Castilla-La Mancha, Spain
* Jesus Escudero-Sahuquillo, Technical University of Valencia, Spain
And the participation is completely international.
Several of the presentations deal with the Dragonfly topology, which was introduced in this article.
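For readers unfamiliar with Dragonfly, a short sketch of how a balanced configuration is sized may help. The parameter names (p terminals per router, a routers per group, h global links per router) and the balancing rule a = 2p = 2h follow the original Dragonfly paper by Kim et al. (ISCA 2008); the code itself is only an illustrative calculation:

```python
def dragonfly_size(p, a, h):
    """Return (groups, routers, terminals) for a fully connected Dragonfly.

    p: terminals per router
    a: routers per group
    h: global links per router
    """
    g = a * h + 1        # each group has one global link to every other group
    routers = a * g
    terminals = p * routers
    return g, routers, terminals

# Balanced configuration: a = 2p = 2h keeps local/global channel loads even.
p = h = 4
a = 2 * p
print(dragonfly_size(p, a, h))  # (33, 264, 1056)
```

Even with tiny radix-12 routers (p + a - 1 + h ports), this reaches over a thousand terminals in only two network hops between groups, which is why the topology keeps appearing in Exascale proposals.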
Also of interest:
How can we dramatically increase network scalability?
In their talks, the panelists will address the following questions:
- In order to reach Exascale performance, what are the necessary changes we need to introduce in the interconnection network?
- Using photonics to reduce signal attenuation seems to be mandatory, but at what levels? Just for node-to-node interconnects? Within the motherboard? Within the processor chip?
- Most of the latency is not in the interconnect hardware. How should communication protocols be modified? How will those changes affect the programming model for massively parallel applications?
- The interconnect is consuming an increasing fraction of the system's total power. In addition to using photonics to reduce losses, what other techniques should be implemented to reduce power consumption? How will they affect network congestion and message latency (average latency and jitter)?