23-06-2009, 18:27   #9
popper
Re: Gigabit Network Cards

Quote:
Originally Posted by tweetiepooh View Post
I was reading some tests on Gb ethernet and they concluded that you are likely to be held back by disk speed (even RAID), cable and cards don't really help much, they are what they say and Cat5 is fine.

Article here
http://www.tomshardware.com/reviews/...l#xtor=RSS-182
Before I've even read the second page, it's clear the writer, Don Woligroski, is no tech, and I wouldn't even say it's good enough as a basic outline for a newb, as he's not giving accurate, consistent info to build on later.


As one commenter, flinxsl (06/22/2009, 9:37 AM), or is it MartenKL (06/22/2009, 9:34 AM)? That comments page draws a line under the commenter's name, making it look like the next comment wrote it, but no matter.

He says in the comments: "Gbit is actually 10^9 bits per second, ie about 119 MB/s"

The VERY first thing you need to remember IS that networking always talks bits, in Mb (Megabits) per second.

HOWEVER, hard drive transfer speeds are measured in MB (MegaBytes), so as a generic rule an HD's byte figure represents 8 times as much data as the same number in networking bits.

You can't mix and match 1000 Mbit and 1000 (binary, ×1024) MByte and treat them as the same without converting one or the other up or down first.
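To make the arithmetic concrete, here's a minimal sketch of the conversions involved (function names are mine; the second conversion uses binary MiB, which is where the commenter's 119 MB/s figure for gigabit comes from):

```python
# Convert a network link speed quoted in Mbit/s into the byte-based
# figures used for hard drives. Networking counts decimal bits;
# disk tools often report binary MiB (1024 * 1024 bytes).

def mbit_to_mb_per_s(mbit_per_s):
    """Decimal megabits/s -> decimal megabytes/s (divide by 8)."""
    return mbit_per_s / 8.0

def mbit_to_mib_per_s(mbit_per_s):
    """Decimal megabits/s -> binary mebibytes/s (MiB/s)."""
    return mbit_per_s * 1000000 / 8.0 / (1024 * 1024)

print(mbit_to_mb_per_s(1000))             # 125.0 MB/s on paper for gigabit
print(round(mbit_to_mib_per_s(1000), 1))  # 119.2 MiB/s, the commenter's figure
```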

He should have made that clear from the outset, and then he wouldn't have made this most basic error.

Of course, mixing these up is nothing new; people do the same for generic PCI33, which runs at 133 MegaBytes a second, not Megabits.

What does this really mean? Simple: even a single generic HD today that's rated above 133 MBytes/s, on a current motherboard (not some old junk AT board, although most of those too can reach near-full 1-gigabit speeds over PCI33 if the CPU doing the network stack's software processing is good enough), can keep up with generic 1-gigabit throughput.

It's tweaking the OS network stack that improves throughput; using the generic, average OS defaults is what keeps your cards from reaching their near-full potential.
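As a hedged illustration of the kind of stack tweaking meant here (example values only, not anyone's actual settings), the usual Linux starting point is raising the kernel's socket buffer ceilings in /etc/sysctl.conf:

```conf
# /etc/sysctl.conf -- example values only, tune for your own hardware
# Raise the maximum socket receive/send buffer sizes (bytes)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# TCP autotuning limits: min, default, max (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply with `sysctl -p`; the defaults on OSes of that era were sized for 100 Mbit links, which is exactly why untuned gigabit cards fall short.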

In this case, of course, Broadbandings will know all this and do his Linux tweaking stuff; perhaps he's gone for the more expensive HW-assisted offloading cards for long-term use.

Along with an easy dual- or quad-port multicard PCI-e Linux Ethernet bonding option "end to end" to assist his 'up to' 10GbE home network, rather than the way-over-the-top-priced commercial-grade £750 2x10Gbit cards, times two, since you need one (dual/quad) Ethernet card at either end of even a simple crossover Ethernet connection, remember.
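A sketch of what that Linux bonding setup could look like, as a Debian-style /etc/network/interfaces fragment (the interface names, address, and mode are my assumptions for illustration, not anyone's actual config):

```conf
# /etc/network/interfaces -- illustrative only
# Bond two gigabit NICs into one logical link "end to end"
# (needs the ifenslave package and the kernel bonding module).
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-rr      # round-robin: stripes frames across both ports
    bond-miimon 100           # link check interval in ms
```

With the same config mirrored on the box at the other end of the crossover cables, both links carry traffic for a single transfer, which is the cheap alternative to those 10Gbit cards.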

---------- Post added at 18:27 ---------- Previous post was at 18:01 ----------

Quote:
Originally Posted by altis View Post
But a router is hardly going to be thrashing stuff to disk, is it?



BTW, my 15Krpm RAID10 array keeps up a solid 100MB/s according to HDtune so it's nearly as fast as the 111MB/s offered by many gigabit cards to RAM.
You sure you're getting ONLY 100 MegaBytes a second throughput on a RAID10 using striping and mirroring?

How many active drives (as in not the non-operational, failover, standby ones) are in there on the chain? You should be getting AT LEAST 350 MegaBytes a second, if not more, on a reasonable HW RAID card today, slightly less on write for the cheaper cards. Hell, even basic software RAID with only two drives in striped mode should be giving far more than 100 MB/s.
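A back-of-the-envelope sketch of why 100 MB/s looks low for a RAID10 (the drive count and per-drive speed are assumed figures for illustration, not altis's actual setup):

```python
# Idealised RAID10 throughput: sequential reads can be striped across
# every active drive, so read speed scales with drive count; writes go
# to both mirror halves, so only half the drives count toward writes.

def raid10_read_estimate(active_drives, per_drive_mb_s):
    """Best-case sequential read: all active drives stream in parallel."""
    return active_drives * per_drive_mb_s

def raid10_write_estimate(active_drives, per_drive_mb_s):
    """Writes hit both mirrors, so effective drives are halved."""
    return (active_drives // 2) * per_drive_mb_s

# Assume a 4-drive array of 15Krpm disks at roughly 90 MB/s each:
print(raid10_read_estimate(4, 90))   # 360 MB/s -- in line with "at least 350"
print(raid10_write_estimate(4, 90))  # 180 MB/s
```

Real arrays land below these ideal numbers, but nowhere near as low as a single drive's 100 MB/s, which is why that HDtune figure looks suspicious.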