    NVMe and RAID?

    IT Discussion
    • PhlipElder @scottalanmiller

      @scottalanmiller said in NVMe and RAID?:

      @Pete-S said in NVMe and RAID?:

      If you do a fileserver like this, skip the hypervisor completely and run it on bare metal. You'll lose a ton of performance otherwise.

      Agreed. This is one of those rare exceptions.

      I'm not sure about this claim. Maybe ten years ago.

      The above solution I mentioned has the workloads virtualized. We've had no issues saturating a setup with IOPS or throughput by utilizing virtual machines.

      It's all in the system configuration, OS tuning, and fabric putting it all together. Much like setting up a 6.2L boosted application, there are a lot of pieces to the puzzle.

      EDIT: As a qualifier, we're an all Microsoft house. No VMware here.

      • 1337 @PhlipElder

        @PhlipElder said in NVMe and RAID?:

        The above solution I mentioned has the workloads virtualized. We've had no issues saturating a setup with IOPS or throughput by utilizing virtual machines.

        It's all in the system configuration, OS tuning, and fabric putting it all together.

        We're not talking about any fabric because we are talking about local NVMe storage. Data goes straight from the drive over the PCIe bus directly to the CPU.

        For high performance I/O workloads the difference between virtualized and bare metal has increased, not decreased, because the amount of I/O you can generate has increased.

        When everyone was running spinners and SAS, you couldn't generate enough I/O for the small overhead that virtualizing added to matter. A few percent at most.

        As NVMe drives become faster and faster and CPUs get more and more PCIe lanes, it's not difficult to generate a massive amount of I/O. Then every bit of added overhead for each I/O operation becomes more and more noticeable, because the overhead becomes a larger percentage of the total as the I/O operation itself gets shorter.

        That's why the bare metal cloud market has seen massive growth over the last three years or so. There is simply no way to compete with bare metal performance.

        Typical bare metal server instances, like the ones Oracle offers, run on all-NVMe local flash storage. They put 9 NVMe drives in each server. With high-performance NVMe drives, that's almost 20 gigabytes of data per second.
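
        To put rough numbers on the overhead argument above, here is a small Python sketch. All figures are illustrative assumptions (a fixed per-I/O virtualization cost, typical spinner and NVMe latencies, roughly 2.2 GB/s per NVMe drive), not measurements:

```python
# Illustrative only: assumed fixed per-I/O overhead and device latencies.
def overhead_share(device_latency_us: float, overhead_us: float = 10.0) -> float:
    """Fraction of each I/O spent in the virtualization layer."""
    return overhead_us / (device_latency_us + overhead_us)

for name, latency_us in [("SAS spinner", 5000.0), ("NVMe SSD", 80.0)]:
    print(f"{name}: {overhead_share(latency_us):.1%} overhead per I/O")

# The "almost 20 GB/s" figure: 9 drives at an assumed ~2.2 GB/s each.
print(f"9 x 2.2 GB/s = {9 * 2.2:.1f} GB/s aggregate")
```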

        • dbeato

          One of the first Dell servers with hot-swap NVMe was the R7415, so yeah:
          https://www.dell.com/en-us/work/shop/povw/poweredge-r7415

          Not sure what others have seen.

          • 1337 @dbeato

            @dbeato said in NVMe and RAID?:

            One of the first Dell servers with hot-swap NVMe was the R7415, so yeah:
            https://www.dell.com/en-us/work/shop/povw/poweredge-r7415

            Not sure what others have seen.

            The newer ones have a 5 in the model number, so R7515, R6515 etc. Those are the ones you want to buy: AMD EPYC 2 (Rome) CPUs.

            Dual socket models are the R7525, R6525 etc.

            And to make this complete: 6 is 1U and 7 is 2U, so the R6515 is 1U and the R7515 is 2U.
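
            As a side note, the model numbers above follow a pattern that can be sketched in a few lines. The digit meanings below are my reading of this post (form factor, generation, socket count, CPU vendor) and should be treated as an assumption, not official Dell documentation:

```python
# Hypothetical decoder for 4-digit PowerEdge rack models, based on the post above.
FORM = {"6": "1U", "7": "2U"}
SOCKETS = {"1": "single socket", "2": "dual socket"}
CPU = {"5": "AMD EPYC"}  # assumption: a trailing 5 marks the EPYC variants

def decode(model: str) -> str:
    # e.g. "R7525" -> rack, 2U, 15th gen, dual socket, AMD EPYC
    u, gen, sock, cpu = model[1], model[2], model[3], model[4]
    return (f"{model}: rack, {FORM.get(u, '?')}, 1{gen}th gen, "
            f"{SOCKETS.get(sock, '?')}, {CPU.get(cpu, '?')}")

for m in ("R7415", "R6515", "R7515", "R6525", "R7525"):
    print(decode(m))
```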

            • JaredBusch @1337

              @Pete-S said in NVMe and RAID?:

              The newer ones have a 5 in the model number, so R7515, R6515 etc. Those are the ones you want to buy: AMD EPYC 2 (Rome) CPUs.

              Too helpful must downvote.

              • biggen

                Chatting with Dell, they don't offer any of their EPYC servers with any 40GbE options. They only go up to dual 25GbE. They offer HDR100 InfiniBand and Fibre Channel, but those are pretty foreign to me and I don't even know if they can be used.

                • scottalanmiller @biggen

                  @biggen said in NVMe and RAID?:

                  They offer HDR100 InfiniBand and Fibre Channel, but those are pretty foreign to me and I don't even know if they can be used.

                  They replace Ethernet. FC is the standard SAN connection. InfiniBand can be used anywhere Ethernet can.

                  • 1337 @biggen

                    @biggen said in NVMe and RAID?:

                    Chatting with Dell, they don't offer any of their EPYC servers with any 40GbE options. They only go up to dual 25GbE. They offer HDR100 InfiniBand and Fibre Channel, but those are pretty foreign to me and I don't even know if they can be used.

                    It's totally random what Dell offers and what they don't.

                    They have the Intel X710-T2L, which is dual-port 10 GbE, but not the XL710-QDA2, which is dual-port 10/40 GbE. It's the same driver and everything.

                    You could of course buy the network card anywhere you'd like and plug it in.


                    • 1337

                      40 GbE is actually 4x10 GbE internally inside the interface. That's why 10 GbE switches have 40 GbE uplink ports.

                      The interface is called QSFP+, as in Quad SFP+ (SFP+ being 10 GbE, SFP being 1 GbE).

                      And 25 GbE is the upgrade path from 10 GbE. The interface is called SFP28 and has the same physical dimensions as SFP+.
                      25 GbE switches have 100 GbE uplinks, because 100 GbE is 4x25 GbE internally, and that interface is called QSFP28.
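
                      The lane math behind those interface names can be written out as a quick sketch (a simplified model, assuming the Q variants simply gang four lanes of the corresponding SFP-class rate):

```python
# Simplified view of the port families described above.
LANE_GBE = {"SFP": 1, "SFP+": 10, "SFP28": 25}

def quad(lane: str) -> int:
    """Aggregate speed of the quad (Q) version of a lane class."""
    return 4 * LANE_GBE[lane]

print(f"QSFP+  = 4 x SFP+  -> {quad('SFP+')} GbE")   # 40 GbE uplinks on 10 GbE switches
print(f"QSFP28 = 4 x SFP28 -> {quad('SFP28')} GbE")  # 100 GbE uplinks on 25 GbE switches
```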

                      • taurex @1337

                        @Pete-S I'd stay away from the 7xx Intel NICs; I've heard lots of bad things on different IT forums about how they play up. The Mellanox NICs would be my first choice for anything with RDMA support.

                        • 1337 @taurex

                          @taurex said in NVMe and RAID?:

                          @Pete-S I'd stay away from the 7xx Intel NICs; I've heard lots of bad things on different IT forums about how they play up. The Mellanox NICs would be my first choice for anything with RDMA support.

                          I just picked it because that is what Dell sells. It's a simple card, no RDMA, but I don't think RDMA is needed in a fileserver application like this with huge files.

                          I'm surprised to hear that people have problems with it, because it's been around for 5-6 years or so now and Intel has newer cards as well. You would kind of assume they've worked out the kinks by now.

                          Anyway, it's more of a proof of concept at this point. You've got to have some numbers to play with to see if it's economically feasible for the customer. What you end up with will depend on the budget and what the needs actually are. Switches are also a big cost when it comes to 10GbE and faster.

                          And yes, Mellanox is good stuff.

                          • biggen

                            I appreciate all the help, guys. Yeah, I'm compiling a price list but it ain't cheap. The server alone would be about $7k, and that's on the low end with smaller NVMe drives (1.6TB). Then I still have to purchase the switch and the 10GbE NICs for the workstations themselves.

                            It's a large investment that I bet never sees the light of day. It will turn into "I have $2k, what can you build with that?"

                            • PhlipElder @biggen

                              @biggen said in NVMe and RAID?:

                              It's a large investment that I bet never sees the light of day. It will turn into "I have $2k, what can you build with that?"

                              FleaBay is your best friend. 😉

                              10GbE pNIC: Intel X540, $100 to $125 each.

                              For a 10GbE switch, go for the NETGEAR XS712T, XS716T, or XS728T depending on the port density needed. The 12-port is $1K.

                              As far as the server goes, is this a proof of concept driven project?

                              • ASRock Rack board with dual 10GbE on board (designated by -2T)
                              • Intel Xeon Scalable or AMD EPYC Rome
                              • Crucial/Samsung ECC memory
                              • Power supply

                              The board should have at least one SlimSAS x8, or preferably two. Each of those ports gives you two NVMe drives. An SFF-8654 Y-cable to connect to a two-drive enclosure would be needed. I suggest ICYDOCK.

                              The build will cost a fraction of a Tier 1 box.

                              Once the PoC has been run and the kinks worked out, then go for the Tier 1 box tailored to your needs.
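
                              For context on the SlimSAS x8 suggestion, here is a rough lane-count sketch. The per-lane throughput figures are approximate assumptions (usable rate after encoding overhead); real drives will land lower:

```python
# SFF-8654 x8 (SlimSAS 8i) carries 8 PCIe lanes; NVMe drives use x4 each.
GB_PER_LANE = {3: 0.985, 4: 1.969}  # approx. usable GB/s per PCIe lane

drives_per_port = 8 // 4
for gen, rate in GB_PER_LANE.items():
    print(f"PCIe {gen}.0: {drives_per_port} drives per port, "
          f"~{4 * rate:.1f} GB/s ceiling per drive")
```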

                              • scottalanmiller @PhlipElder

                                @PhlipElder said in NVMe and RAID?:

                                FleaBay is your best friend.

                                Is that where people trade their pets?

                                • marcinozga @PhlipElder

                                  @PhlipElder said in NVMe and RAID?:

                                  • ASRock Rack board with dual 10GbE on board (designated by -2T)
                                  • Intel Xeon Scalable or AMD EPYC Rome
                                  • Crucial/Samsung ECC memory
                                  • Power supply

                                  The build will cost a fraction of a Tier 1 box.

                                  I love ASRock Rack products, and their support is great if they can actually fix the damn issues; if not, you're SOL. My next server refresh will have this board: https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications

                                  • scottalanmiller @marcinozga

                                    @marcinozga said in NVMe and RAID?:

                                    I love ASRock Rack products

                                    I'm on an ASRock right now. Got another sitting beside me.

                                    • PhlipElder @marcinozga

                                      @marcinozga said in NVMe and RAID?:

                                      I love ASRock Rack products, and their support is great if they can actually fix the damn issues; if not, you're SOL. My next server refresh will have this board: https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications

                                      We just received two ROMED6U-2L2T boards:
                                      https://www.asrockrack.com/general/productdetail.asp?Model=ROMED6U-2L2T#Specifications

                                      They're perfect boards for our cluster storage nodes, with two built-in 10GbE ports. Add an AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC memory, four NVMe drives via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity, and we have a winner.

                                      FYI: We only use EPYC Rome processors with a TDP of 155 watts or higher. Cost-wise there's very little increase, while the performance benefits are there.

                                      EDIT: Missed the Slimline x8 beside the MiniSAS HD ports. That's six NVMe drives if we go that route.
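
                                      A quick bottleneck check for a node like this, using assumed throughput figures (roughly 3 GB/s per NVMe drive and 550 MB/s per SATA SSD; not measured numbers):

```python
# Where the ceiling sits for the storage node described above (assumed figures).
nic = 2 * 10 / 8        # dual 10GbE in GB/s (~2.5 GB/s wire rate)
nvme = 4 * 3.0          # four NVMe drives at ~3 GB/s each
sata = 12 * 0.55        # twelve SATA SSDs at ~550 MB/s each

print(f"Network : ~{nic:.1f} GB/s")
print(f"NVMe    : ~{nvme:.1f} GB/s")
print(f"SATA    : ~{sata:.1f} GB/s")
# The dual 10GbE links saturate long before the drives do, which is why
# faster NICs and fabrics keep coming up in this thread.
```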

                                      • marcinozga @PhlipElder

                                        @PhlipElder said in NVMe and RAID?:

                                        Add an AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC memory, four NVMe drives via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity, and we have a winner.

                                        FYI: We only use EPYC Rome processors with a TDP of 155 watts or higher. Cost-wise there's very little increase, while the performance benefits are there.

                                        You're probably overpaying for that CPU. Here's a deal not many know about: an EPYC 7302P for $713:
                                        https://www.provantage.com/hpe-p16667-b21~7CMPTCR7.htm

                                        • PhlipElder @marcinozga

                                          @marcinozga said in NVMe and RAID?:

                                          You're probably overpaying for that CPU. Here's a deal not many know about: an EPYC 7302P for $713:
                                          https://www.provantage.com/hpe-p16667-b21~7CMPTCR7.htm

                                          We're in Canada. We overpay for everything up here. :S

                                          • scottalanmiller @PhlipElder

                                            @PhlipElder said in NVMe and RAID?:

                                            We're in Canada. We overpay for everything up here. :S

                                            And even when you pay a lot, you often can't get things. We tried to order stuff from Insight Canada for our Montreal office and after a week of not being able to ship, they eventually just told us that they couldn't realistically service Canada.
