Adventures in 10 Gigabit.
by TheeMahn on Jul.05, 2019, under Ultimate Edition
I have been off the grid for a minute.
http://forumubuntusoftware.info/viewtopic.php?f=140&t=12556
This has been an expensive project that I have undertaken with my own capital. My goal was to sit on 10 Gigabit networking; the end result is that I will sit on 20 Gigabit (that was an accident).
Let’s take a step back. My main rig’s motherboard currently carries a Ryzen 1700X (YES, I will upgrade to a 3950X with PCIe 4.0 in September), and neither it nor my two servers in the basement came with 10 Gigabit. The servers have dual Gigabit onboard; I have disabled both ports in the BIOS (Basic Input Output System).
I initially purchased 3 PCIe 2.0 x8 dual-port 10 Gigabit network cards off eBay, each very capable of 20 Gigabit. I purchased a 4-port 10 Gigabit router / switch and a fiber optic cable from Amazon, and ran the fiber down through the register to the basement for my main rig upstairs, along with a 3-foot copper 10 Gigabit cable and 2 transceivers (Cisco). Only one of the PCIe cards worked in one of the servers, so I bought 2 more cards and 2 more transceivers (MikroTik-compatible this time, and I paid out the ass): one card for the other server and one for my main rig. I also bought a second 3-foot copper 10 Gigabit cable for the other server. I was only getting laser light through one of the dual fibers, thought I had bent the fiber optic cable running it to the basement, and purchased another 50-meter fiber optic cable, believing the first was broken.
I can assure you, knowing what I know now: I will soon be on 20 Gigabit.
This will blow your mind. What does it take to see 20 Gigabit? I have since purchased an 8-port SATA III PCIe x8 card; it will be here tomorrow. I will initially jack 6 × 8TB drives in RAID 0 into the 32-core server, alongside the dual copper 10 Gigabit. I bought a 4-port NVMe card from the United Kingdom (it was not available in the USA yet) and 4 NVMe drives that can pull 3.4 GB a second apiece, and put them in RAID. I do not have the second set of transceivers yet; I ordered them from China, and they are in the USA now, but not at my door.
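For a rough sense of what that NVMe stack can do, here is a back-of-the-envelope sketch in Python. It assumes ideal RAID 0 scaling (per-drive throughput simply adding up, which real arrays only approach) and uses the 3.4 GB/s per-drive figure from above:

```python
# Back-of-the-envelope: 4 NVMe drives in RAID 0 versus a 20 Gigabit link.
# Assumes ideal RAID 0 scaling; real arrays land somewhat below this.
NVME_DRIVES = 4
NVME_GB_PER_SEC = 3.4                         # per-drive figure from the post

raid0_gb_per_sec = NVME_DRIVES * NVME_GB_PER_SEC
print(f"RAID 0 aggregate: {raid0_gb_per_sec:.1f} GB/s")   # 13.6 GB/s

# 20 Gigabit of network, ignoring protocol overhead: 20 / 8 = 2.5 GB/s.
link_gb_per_sec = 20 / 8
print(f"20 Gigabit link:  {link_gb_per_sec:.1f} GB/s")    # 2.5 GB/s

print(f"Headroom: {raid0_gb_per_sec / link_gb_per_sec:.1f}x")  # ~5.4x
```

In other words, the NVMe array is never the bottleneck here; the wire is.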
I have dropped posts on the internet pulling over 400 MB/s (that is less than half of 10 Gigabit). Remember, our servers are SATA II, and this is a RAID of 3 × 8TB drives across the network as the source. That will change tomorrow. I am still only on 10 Gigabit, at least until those transceivers arrive. The router, by the way, supports merging / bonding ports: 20 out, 20 in. That is roughly 20,000 megabits per second each way. I have been looking at PXE: booting from a network device as fast as an NVMe drive. Do you feel me?
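To sanity-check that PXE claim, here is the same arithmetic on raw line rates (real payload over TCP plus NFS or iSCSI lands a bit lower than these ceilings):

```python
# Can a bonded 20 Gigabit link really feel like local storage?
# Raw line rates only; protocol overhead trims these numbers in practice.
def gbit_to_mb_per_sec(gbit):
    """Convert a line rate in gigabits/s to megabytes/s (decimal units)."""
    return gbit * 1000 / 8

for name, gbit in [("single 10 Gigabit", 10), ("bonded 20 Gigabit", 20)]:
    print(f"{name}: {gbit_to_mb_per_sec(gbit):.0f} MB/s")

# single 10 Gigabit: 1250 MB/s -> already past any SATA SSD (~550 MB/s)
# bonded 20 Gigabit: 2500 MB/s -> within striking distance of one NVMe drive
```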
At SATA II, a 24TB source pulling roughly 800 MB/s is faster than an SSD. Now let’s open that up. Do they make an 8TB SATA II drive? No; 8TB drives are SATA III and much faster. Let’s say double, and I can promise you much faster. The card cost me $13.00 and the cable $4.00; the network side was the expensive part.
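That ~800 MB/s squares with the interface math, if you assume near-ideal striping across the three drives (SATA II signals at 3 Gbit/s, and 8b/10b encoding means roughly 300 MB/s of real data per link):

```python
# Why ~800 MB/s from a 3 x 8TB SATA II source is plausible.
# SATA II: 3 Gbit/s on the wire, 8b/10b encoded -> 10 bits per data byte.
SATA2_LINK_MB_PER_SEC = 3_000 / 10     # ~300 MB/s of data per drive
DRIVES = 3

ideal_striped = DRIVES * SATA2_LINK_MB_PER_SEC
print(f"Ideal 3-drive stripe on SATA II: {ideal_striped:.0f} MB/s")  # 900
# ~800 MB/s observed is ~89% of that ceiling -- about right once
# controller and filesystem overhead are counted.
print(f"Observed 800 MB/s = {800 / ideal_striped:.0%} of the ceiling")
```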
Main O/S: I build an O/S. Guess.
Mainboard: ASUS Hero VI (AM4)
CPU: AMD 1700X water cooled (Deepcool Captain Genome Cooling tower)
Ram: 16 GB GSkill Trident RGB Series Dual Channel DDR4 3200
Video: MSI RX470 8GB Gaming card.
Hard Disks: MASSIVE, on the network (10 Gigabit, 48-port, multiple servers)
Monitors: 4K Samsung 28″, HannsG HH281, Various others
PSU: 750 Watt modular (Rosewill)
Audio: 1100 Watt amp & 4 × 600 Watt speakers
Servers in the basement.
July 5th, 2019 on 2:48 am
If I was going to worry about anything, it is that I might actually saturate the PCIe 2.0 bus. That is fucked up, is it not? I hear 4.0 is all we should be hearing about.
July 5th, 2019 on 2:53 am
I will jack 8 hard drives into one PCIe 2.0 bus, not 4.0.
July 5th, 2019 on 2:55 am
Devs are pretty good about thinking ahead: deciding we must have 2.5 Gigabit internet, making the decisions and allocating your bandwidth for you.
July 5th, 2019 on 3:01 am
600 MB a second apiece; they call it 6 gigabit per second per drive. Servers were devised for that. 8 off one slot? Nothing.
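That 6-gigabit / 600-megabyte pairing is just SATA III's 8b/10b encoding at work; nothing controller-specific, just the standard numbers:

```python
# SATA III: 6 Gbit/s on the wire, 8b/10b encoded -> 10 bits per data byte.
line_rate_gbit = 6
data_mb_per_sec = line_rate_gbit * 1000 / 10     # 600 MB/s per drive
print(f"SATA III usable data rate: {data_mb_per_sec:.0f} MB/s per drive")

# Eight drives off one controller, at the interface ceiling:
print(f"8 drives: {8 * data_mb_per_sec / 1000:.1f} GB/s aggregate")  # 4.8
```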
July 5th, 2019 on 3:14 am
Even at 48 Gigabit, what is the capacity of PCIe 2.0? Wait until you see 4.0; I could drop in 100 NVMe drives and not fill the void.
Let’s do the math: 600 megabytes a second × 4. That is pretty nasty on PCIe 2.0. I will saturate the bus.
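Whether the bus actually saturates depends on the drive count against the slot width. Using the standard PCIe 2.0 figure of ~500 MB/s usable per lane (5 GT/s with 8b/10b encoding) and the x8 slot mentioned above:

```python
# Will SATA III drives saturate a PCIe 2.0 x8 slot?
PCIE2_MB_PER_LANE = 500                # usable data per lane, per direction
LANES = 8
slot_mb = PCIE2_MB_PER_LANE * LANES    # 4000 MB/s for the x8 slot

DRIVE_MB = 600                         # SATA III ceiling per drive
for drives in (4, 8):
    demand = drives * DRIVE_MB
    verdict = "saturated" if demand > slot_mb else "fits"
    print(f"{drives} drives: {demand} MB/s vs {slot_mb} MB/s slot -> {verdict}")
# 4 drives: 2400 MB/s -> fits; 8 drives: 4800 MB/s -> saturated.
```

With all 8 drives running at the SATA III ceiling, the x8 slot does run out.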
I can almost tell you what they will do next: let’s provide you 100 Gigabit internet, or 10, or multiple 5 Gigabit, that kind of shit. I have seen boards with multiple 5 Gigabit and 2.5 Gigabit ports. The bandwidth is there.
July 5th, 2019 on 3:22 am
Once you hit 4.0, it is impressive.
July 5th, 2019 on 3:26 am
It would be hard to saturate 4.0.
July 5th, 2019 on 3:41 am
Try and find an X570 motherboard that does not waste lanes providing you 5 Gigabit, 2.5, or plain Gigabit. If they wanted to, they could put 100 Gigabit on there.
July 5th, 2019 on 3:50 am
That too is screwed up: almost all X570 motherboards have multiple ports. Want 10 Gigabit? Want 5? Want 2.5? Want Gigabit? Do you see it? How much did we soak up? Speaking as a developer, I would roughly say 18 gigabit of bandwidth. A drop in the bucket, is what they think.
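If that 18 gigabit is read as one port each at 10, 5, 2.5, and plain Gigabit (my guess at the mix; the comment does not itemize it), the lane cost on PCIe 4.0 really is a drop in the bucket:

```python
# How much bandwidth would a full spread of onboard NICs soak up?
# Assumed port mix: one each of 10, 5, 2.5 and 1 Gigabit.
ports_gbit = [10, 5, 2.5, 1]
total_gbit = sum(ports_gbit)
print(f"NIC total: {total_gbit} Gbit/s")              # 18.5 Gbit/s

# One PCIe 4.0 lane moves roughly 2 GB/s of data, i.e. ~16 Gbit/s,
# so the whole NIC stack costs on the order of one to two lanes.
pcie4_lane_gbit = 2 * 8
print(f"Lanes needed: ~{total_gbit / pcie4_lane_gbit:.1f}")  # ~1.2
```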
July 5th, 2019 on 4:34 am
I am sorry, I missed that. Yes, 20,000 megabits per second.
July 5th, 2019 on 7:08 am
My god, the controller I have inbound is an 8-port caching controller. It has RAM on it. Things are about to come off the chain. Let’s start at SATA III, 8TB × 6 drives. Yes, I can shove in 8.
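At the interface ceilings quoted in this thread, six SATA III drives comfortably out-feed even the bonded 20 Gigabit link. Treat this as the optimistic bound: spinning 8TB drives sustain well under the 600 MB/s interface limit, which is exactly where the controller's onboard RAM cache earns its keep:

```python
# Can 6 x 8TB SATA III drives in RAID 0 feed a bonded 20 Gigabit link?
# Optimistic bound: every drive running at the SATA III interface ceiling.
DRIVES = 6
DRIVE_MB = 600                   # interface ceiling; platters sustain less

array_mb = DRIVES * DRIVE_MB     # 3600 MB/s ideal stripe
link_mb = 20 * 1000 / 8          # 2500 MB/s raw on bonded 20 Gigabit

print(f"Array ceiling: {array_mb} MB/s")
print(f"Link ceiling:  {link_mb:.0f} MB/s")
print("Array out-feeds the link" if array_mb > link_mb else "Link wins")
```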