InfiniBand



A high-speed interface used to connect storage networks and computer clusters, introduced in 1999. Using switched, point-to-point channels similar to those of mainframes and to PCI Express (the switched version of PCI), InfiniBand is designed for fabric architectures that interconnect devices in local networks.

A Technology Merger
Originally known as "System I/O," InfiniBand combines Intel's NGIO (Next Generation I/O) with Future I/O from IBM, HP and Compaq. It supports both copper wire and optical fiber. For more information, visit the InfiniBand Trade Association at www.infinibandta.org. See RDMA, PCI Express, NGIO and Future I/O.
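
Because RDMA is central to how applications use InfiniBand, a minimal sketch may help. The following C program uses the Linux libibverbs API (from the rdma-core package) simply to enumerate the RDMA-capable devices on a host; the file name and the program itself are illustrative assumptions, not part of this entry.

 /* A minimal sketch of RDMA device discovery with the Linux libibverbs
    API (rdma-core). Compile with: gcc list_ib_devices.c -libverbs */
 #include <stdio.h>
 #include <infiniband/verbs.h>

 int main(void)
 {
     int num = 0;
     /* ibv_get_device_list() returns a NULL-terminated array of devices. */
     struct ibv_device **devices = ibv_get_device_list(&num);
     if (!devices) {
         perror("ibv_get_device_list");
         return 1;
     }
     for (int i = 0; i < num; i++)
         printf("device %d: %s\n", i, ibv_get_device_name(devices[i]));
     ibv_free_device_list(devices);
     return 0;
 }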

 INFINIBAND SIGNALING SPEEDS IN EACH DIRECTION (Gigabits per Second)
   Data transfer rate is 80% of signaling speed.

 Type               Number of Bonded Channels
 (DR = data rate)     1x     4x     8x    12x

 Single (SDR)         2.5    10     20     30
 Double (DDR)         5      20     40     60
 Quad (QDR)          10      40     80    120
 14 (FDR)            14      56    112    168
 Enhanced (EDR)      26     104    208    312
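
The table's arithmetic is straightforward: multiply the per-lane signaling rate by the number of bonded lanes, then take 80% of that for the data rate. The short C sketch below reproduces the figures; note that the 80% figure reflects the 8b/10b encoding used through QDR, while FDR and EDR actually use lighter 64b/66b encoding.

 /* A small sketch reproducing the table's arithmetic: per-lane signaling
    rate x bonded lanes gives the signaling speed; the table's note takes
    80% of that as the data rate (8b/10b encoding overhead). */
 #include <stdio.h>

 int main(void)
 {
     const char  *types[]     = { "Single (SDR)", "Double (DDR)",
                                  "Quad (QDR)", "14 (FDR)", "Enhanced (EDR)" };
     const double lane_gbps[] = { 2.5, 5.0, 10.0, 14.0, 26.0 };
     const int    lanes[]     = { 1, 4, 8, 12 };

     for (int t = 0; t < 5; t++)
         for (int l = 0; l < 4; l++) {
             double signaling = lane_gbps[t] * lanes[l];
             printf("%-15s %2dx: %5.1f Gb/s signaling, %5.1f Gb/s data\n",
                    types[t], lanes[l], signaling, signaling * 0.8);
         }
     return 0;
 }
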
References in periodicals archive
The joint effort with NVIDIA, along with testing performed in Mellanox's performance labs, used the Mellanox HDR InfiniBand Quantum switch to connect four system hosts, each with eight NVIDIA V100 Tensor Core GPUs with NVLink interconnect technology and a single ConnectX-6 HDR adapter per host. The setup achieved an effective reduction bandwidth of 19.6GB/s by integrating SHARP's native streaming aggregation capability with NVIDIA's latest NCCL 2.4 library, which now takes full advantage of the bi-directional bandwidth available from the Mellanox interconnect.
"HDR InfiniBand delivers the best performance and scalability for HPC and AI applications, providing our users with the capabilities to enhance research, discoveries and product development," said Gilad Shainer, vice president of marketing at Mellanox Technologies.
The ConnectX family of Virtual Protocol Interconnect® adapters supports both InfiniBand and Ethernet, offers unmatched RDMA (Remote Direct Memory Access) features and capabilities, and future-proofs data center investments by supporting speeds of 10, 25, 40, 50, and 100Gb/s.
Mellanox's end-to-end FDR 56Gb/s InfiniBand solution provides the high-bandwidth, low-latency processing framework needed for the internal data network to link with the two private clouds, the company added.
In addition, Mellanox has expanded its line of EDR 100Gb/s InfiniBand switch systems.
QLogic offers a comprehensive, end-to-end portfolio of InfiniBand networking products for HPC, including quad data rate (QDR) host channel adapters, QDR directors, edge switches, pass-thru modules and intuitive tools to install, operate, and maintain high-performance fabrics.
"By qualifying all of its adapter families with SUSE Linux Enterprise 11, QLogic is well-positioned to fully exploit the capabilities of this platform in Fibre Channel, iSCSI, InfiniBand and FCoE environments."
InfiniGreen is also projected to typically consume only 0.9 watts per termination, about one-third of that consumed by current InfiniBand cables.
The single InfiniBand fabric enables connectivity from every server to every resource and eliminates the need for additional infrastructure.
* Improved cable signal integrity versus standard 4-channel InfiniBand cables.
Signal reliability is enhanced over the InfiniBand QDR data rate of 10 Gbps/channel.