Two NICs, only one's throughput!

Associate
Joined
8 Nov 2005
Posts
1,161
I'm trying to do this without NIC bonding for the moment, as I didn't
have much luck with it and recompiling the kernel only seemed to add to
my problems.

I have Computer A and Computer B
Each computer has 2 NICs in it.
They are connected to each other via two Crossover cables.
One pair of NICs has a 10.0.0.x subnet and one pair has a 192.168.1.x
subnet.
Computer A has the IPs, 10.0.0.1 and 192.168.1.1
Computer B has the IPs, 10.0.0.2 and 192.168.1.2
Using NFS I've set up one shared folder per pair of NICs, each mounted
via a different IP address, so that Computer A mounts one folder from
10.0.0.2 and another folder from 192.168.1.2 (which are obviously both
Computer B).
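
Roughly how it's set up, for reference (eth0/eth1 and the folder names here are just examples, not necessarily what things are actually called):

# Computer A - one address per card
ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up
ifconfig eth1 192.168.1.1 netmask 255.255.255.0 up
# Computer B
ifconfig eth0 10.0.0.2 netmask 255.255.255.0 up
ifconfig eth1 192.168.1.2 netmask 255.255.255.0 up
# Then on Computer A, one NFS mount per link
mount -t nfs 10.0.0.2:/folder1 /mnt/folder1
mount -t nfs 192.168.1.2:/folder2 /mnt/folder2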

The purpose of this is that when I copy a file from each folder at the
same time, I should get the full bandwidth of both cards (i.e.
200Mbit/s total), correct?

This isn't what I'm seeing.
When I start one transfer it goes at around 11-11.6Mbytes/sec; then
when I start the second, it drops down to about 5.5Mbytes/sec, with the
other transfer going at an almost identical speed.
Why do they seem to be sharing the same bandwidth, which as far as I
can see shouldn't be possible?

Any ideas on how to solve this would be MUCH appreciated!

Thanks in advance!
 
Man of Honour
Joined
4 Nov 2002
Posts
15,508
Location
West Berkshire
Have you allowed for how fast the disks can actually read/write the data?
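
A quick way to check is to time the disks on their own, e.g. (device name and file path are just examples, point them at your actual drive and a filesystem with a few hundred MB free):

hdparm -t /dev/hda                                # raw sequential read speed of the source drive
dd if=/dev/zero of=/tmp/ddtest bs=1M count=512    # rough sequential write speed (a bit optimistic due to caching)
rm /tmp/ddtest

If either of those comes out anywhere near 11Mbytes/sec, the network isn't your bottleneck.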

Besides, your explanation is muchos confusing. I don't see how it can possibly work the way you expected it to (in fact, I'm surprised it works at all).
 
Associate
OP
Joined
8 Nov 2005
Posts
1,161
Aye, I'm sorry I couldn't explain it better. I've drawn it out in Visio, so maybe that'll convey it better.

The main idea itself:
[image: networkinfrastructure1bs.jpg]

The first transfer, going at a sensible 10Mbytes/sec:
[image: transfer18cm.jpg]

Then after I've started the second transfer on the second NIC, which bizarrely causes the first transfer to drop to half its original speed, with the second transfer running at the same speed:
[image: transfer26ks.jpg]
 
Associate
OP
Joined
8 Nov 2005
Posts
1,161
Caged: This is a small scale model of what I want to do with gigabit cards, and why exactly won't it work? I can't see a logical reason.

Deano: Reading off some bog-standard Maxtor SATA drives to a bog-standard Maxtor IDE drive, but even so it should be able to manage more than 11Mbytes/sec quite happily.
And it's so EXACTLY 100Mbit/s that it seems crazy to me that it's purely coincidence.
 
Associate
Joined
18 Oct 2002
Posts
2,261
Location
Kidderminster
Hi

You are aware that a 100Mbit network doesn't mean you get 100Mbytes/sec? Your maximum, as you've already found, is 11ish Mbytes/sec, and normally you'll find 8Mbytes/sec is more typical, so it's running very well at the moment :)

The speed will drop once the second transfer starts because you probably have an end-user motherboard rather than a server motherboard, and its I/O chips aren't designed to handle the throughput you're trying to achieve. Basically you're saturating your I/O chip with the transfers to and from the hard disk plus the traffic to the network cards. Your NICs probably have a very low amount of memory as well (16 rather than the 96 you'd find on server boards). For a great Opteron board have a look at this link http://www.supermicro.com/Aplus/motherboard/Opteron/HT1000/H8SSL-i.cfm
(don't think it's against the rules to post that link). The board uses the ServerWorks HT1000 chipset, which is very good.

Cheers
deano
 
Associate
OP
Joined
8 Nov 2005
Posts
1,161
You know... it's not the hard disk/I/O interface.
I've just tried copying a file from the hard disk of Computer A to another location on the same disk, and I'm getting about 12-15Mbytes/sec, and it fluctuates wildly, whereas the network transfer was rock steady.
And reading and writing from the same disk has got to be slower than just writing to it.
Oh, and I tried it on Computer B; that's about 20Mbytes/sec!
 

v0n

Soldato
Joined
18 Oct 2002
Posts
8,130
Location
The Great Lines Of Defence
Overheads aside - a gigabit network is theoretically capable of roughly 119 MB/sec between interfaces each way. A 32-bit 33MHz PCI bus is capable of about 122 MB/sec transfer. However, the bus is shared - in real life 40-50 MB/sec short-burst transfer via a gigabit network is considered healthy. Next bottleneck - hdparm -t on the UDMA100 /dev/hda of my Linux box shows only 37.98 MB/sec read speed. There is swap on hda2, other processes write to it, etc. Now add overheads - encryption, protocol calls, error checking, driver issues, bugs of all sorts - and 20-30 MB/s via scp between two identical machines is not unreasonable. Or 10-15MB/sec under Windows, where pretty animation and progress checking spawn a few extra calls and make the process even busier. Damn. And it all looked so blazing fast when measured with netio - full gigabit transfers interface to interface, handled in the cards' memory...
 
Associate
OP
Joined
8 Nov 2005
Posts
1,161
Yes, I'm well aware of the other bottlenecks in the equation; this is why I'm trying it with 10/100 cards at the moment.
When I do use gigabit cards it will be to 10k SCSI drives with Intel Server PCI-X gigabit cards, which should handle it quite nicely.

And anyway, even in Windows, even with bog-standard PCI, I've managed 30-40 megabytes a second FTPing data :)
 

v0n

Soldato
Joined
18 Oct 2002
Posts
8,130
Location
The Great Lines Of Defence
The fact that the speed of your transfer drops by half as soon as you start the second transfer points to a bottleneck. Either the hdd can't be read fast enough or the data can't be transferred fast enough to the cards in their slots. hdparm -Tt will tell you if the drives can be read fast enough; tweaking the kernel to match the hardware (as in, instead of generic IDE options, have modules for the actual hardware in the box, etc.) can make all the difference to the bus transfers. Make sure both cards are in full duplex mode and the routing tables aren't screwed up. I presume you use ftp from the shell, not a graphical client?
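
Something like this on both boxes will tell you (eth0/eth1 and /dev/hda are just example names, substitute your own):

hdparm -Tt /dev/hda     # cached vs buffered read speed of the drive
mii-tool eth0 eth1      # should report 100baseTx-FD (full duplex) on both cards
route -n                # 10.0.0.0/24 should go out one card, 192.168.1.0/24 the other

If route -n shows both subnets going out the same interface, that alone would explain the halving.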
 
Associate
Joined
12 Oct 2005
Posts
7
Post your mount info from the computer for the different folders.

For this to work I expect you should have something like the following.

mount -t nfs 10.0.0.2:/folder1 /mnt/folder1
mount -t nfs 192.168.1.2:/folder2 /mnt/folder2

Also post your route and ifconfig output; it will give people a better idea of how things are configured, and whether something is causing all packets to be routed over the wrong NIC.

Finally, perform two transfers simultaneously while running ethereal captures on the different network devices. That should show you whether each NIC is actually being used.
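
If you don't have ethereal handy, tcpdump on each interface does the job for a quick look (eth0/eth1 are just example names):

tcpdump -n -i eth0 host 10.0.0.2 &
tcpdump -n -i eth1 host 192.168.1.2 &

Start both copies and you should see NFS traffic scrolling on both interfaces; if one stays quiet, everything is going over a single link.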


tbh, I would recommend looking at channel bonding; the Linux kernel supports making multiple network connections appear as one. That way you leave it up to the kernels on either side to determine which NIC to use.
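
A rough example of setting it up by hand on a 2.6 kernel, assuming the bonding module is built and the cards are eth0/eth1 (the address is just an example; do the equivalent on both machines with their own addresses):

modprobe bonding mode=balance-rr miimon=100   # round-robin mode, link monitoring every 100ms
ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1                     # attach both real cards to the bond

balance-rr (mode 0) is worth trying specifically: it round-robins packets across the slaves, so it's the mode that can push a single transfer past one card's speed, whereas most of the other modes only balance separate connections.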
 
Associate
Joined
20 Oct 2002
Posts
147
I would go back to the ethernet bonding. It works great and you get your 200Mbit (20MB/s) even over cheap nasty (Realtek) NICs.


c0w
 
Associate
OP
Joined
8 Nov 2005
Posts
1,161
electrofelix: That is how I had it set up, yeah; good idea on the ethereal captures.
Also, I just tried channel bonding on them again - same thing, i.e. I try a transfer from one computer to another using the single bonded IP address and get 11MB/sec again.

I'll see what happens when I try this with proper server hardware and report back :)
 