News Feed
LinuxHPC.org has an RSS/RDF feed if you wish to include it on your website.
Linux Cluster RFQ Form
Reach multiple vendors with one Linux Cluster RFQ form. Save time and effort: let LinuxHPC.org do all the leg work for you, free of charge. Request A Quote...
LinuxHPC.org is Looking for Interns - If you have experience with Linux clusters and/or cluster applications and you're interested in helping out with LinuxHPC.org, let me know. I can promise the experience will be rewarding and educational, and it will make good resume fodder. You'll work with vendors and cluster users from around the world. We're also looking for writers and people to do hardware and software reviews. Contact Ken Farmer.
Mellanox Powers World-Class Sockets Performance Over 20Gb/s InfiniBand Links
Monday September 26 2005 @ 07:50AM EDT
SANTA CLARA, CA -- 09/23/2005 -- Mellanox Technologies Ltd., the leader in business and technical computing interconnects, announced that Zero-Copy (ZCopy) Sockets Direct Protocol (SDP) running over InfiniBand fabrics can generate double the data throughput between clustered server nodes, while reducing overall CPU utilization by up to a factor of ten compared to other solutions. Mellanox InfiniBand fabric products are the only low-latency, high-performance 20Gb/s interconnect solutions that offload transport processing in hardware. The wide range of mainstream sockets-based applications that run over InfiniBand-connected server clusters using ZCopy SDP obtain industry-leading performance due to optimal CPU utilization, maximum interconnect throughput, and minimal latency. Benchmarks show that a 20Gb/s server-to-server InfiniBand link supports over 1360MB/s of data throughput utilizing ZCopy SDP -- more than double the performance of older sockets-based implementations limited by CPU processing bottlenecks.
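The point of SDP is that it presents the standard BSD sockets API, so existing applications need no code changes to run over InfiniBand. As a rough illustration (not Mellanox's implementation), the sketch below moves a payload over an ordinary TCP socket on loopback; on an SDP-enabled stack the same AF_INET sockets code could be redirected over InfiniBand transparently, for example via a preloaded SDP library on OFED-era systems.

```python
import socket
import threading

def loopback_transfer(payload: bytes) -> int:
    """Send `payload` over a plain TCP socket on loopback and return the
    number of bytes the receiver got. Nothing here is InfiniBand-specific:
    SDP's transparency means this unmodified sockets code is what would
    be accelerated when an SDP shim is interposed (an assumption about
    deployment, not part of this sketch)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # ephemeral port on loopback
    srv.listen(1)
    received = bytearray()

    def recv_side():
        conn, _ = srv.accept()
        while True:
            chunk = conn.recv(65536)
            if not chunk:        # peer closed the connection
                break
            received.extend(chunk)
        conn.close()

    t = threading.Thread(target=recv_side)
    t.start()
    cli = socket.create_connection(srv.getsockname())
    cli.sendall(payload)
    cli.close()
    t.join()
    srv.close()
    return len(received)
```

Because the application only ever sees the sockets API, the zero-copy and transport-offload work described in the release happens entirely below this layer.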
"With 20Gb/s InfiniBand server nodes in the market today and 40Gb/s capabilities around the corner, an efficient communication protocol that scales with fabric speeds, rather than CPU horsepower, is vital to improve clustered computing performance and efficiency," said Dror Goldenberg, Senior Member of Mellanox Technologies' Architecture Team. "ZCopy SDP enables the plethora of mainstream sockets-based applications to efficiently utilize the zero-copy and transport-offload capabilities of the InfiniBand fabric for generations to come."
ZCopy SDP implementation and benchmark results will be discussed at the Remote Direct Memory Access (RDMA): Applications, Implementations, and Technologies (RAIT 2005) Workshop to be held on September 26th, 2005, at the Burlington Marriott near Boston, Massachusetts. Dror Goldenberg from Mellanox will present a paper entitled "Transparently Achieving Superior Socket Performance Using Zero Copy SDP over 20Gb/s InfiniBand Links." The research paper highlights how pipelining communication over multiple connections via ZCopy SDP ultimately drives the highest performance and efficiency of server-to-server sockets traffic using 20Gb/s InfiniBand links. ZCopy SDP open-source drivers will be widely available in 4Q2005.
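The paper's key idea, pipelining communication over multiple connections, can be sketched generically: keep several transfers in flight at once so no single connection's round-trip stalls the aggregate. The toy below (my own illustration, not the ZCopy SDP code) stripes a payload across several parallel TCP connections on loopback and reassembles the byte count at the receiver.

```python
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

def striped_transfer(payload: bytes, nconn: int = 4) -> int:
    """Stripe `payload` across `nconn` parallel TCP connections and
    return the total bytes received. Illustrates the pipelining pattern
    only; real SDP pipelining operates on RDMA transfers, not TCP."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(nconn)
    addr = srv.getsockname()
    total = 0
    lock = threading.Lock()

    def recv_side():
        nonlocal total
        conn, _ = srv.accept()
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            with lock:            # receivers run concurrently
                total += len(chunk)
        conn.close()

    receivers = [threading.Thread(target=recv_side) for _ in range(nconn)]
    for t in receivers:
        t.start()

    step = (len(payload) + nconn - 1) // nconn  # slice size per connection

    def send_slice(i: int) -> None:
        cli = socket.create_connection(addr)
        cli.sendall(payload[i * step:(i + 1) * step])
        cli.close()

    with ThreadPoolExecutor(nconn) as ex:
        list(ex.map(send_slice, range(nconn)))
    for t in receivers:
        t.join()
    srv.close()
    return total
```

On loopback the striping buys nothing, but over a high-bandwidth link the same pattern is what lets aggregate throughput approach the fabric limit rather than a single connection's limit.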
Visit the Mellanox Booth at the Cluster 2005 Conference