Configuration for Open Source Operating Systems with the SAM-SD Approach

scottalanmiller

With a 100TB array, even RAID 6 is getting to be a bit risky, since a rebuild could take an extremely long time. RAID 6 is really the riskiest thing that I would consider for something that is pretty important, and RAID 10 would remain the choice for anything ultra mission critical.

scottalanmiller @GotiTServicesInc

        @GotiTServicesInc said:

So for a large setup like that (50 drives), would you want to do a RAID 6 per 10 drives and software RAID 0 them together, to allow for a quicker rebuild time with more drives being able to fail simultaneously?

You never mix hardware and software RAID, not in the real world. That's purely a theoretical thing. Of course you can, but no enterprise would do this. Use all of one or all of the other.

scottalanmiller @GotiTServicesInc

          @GotiTServicesInc said:

So for a large setup like that (50 drives), would you want to do a RAID 6 per 10 drives and software RAID 0 them together, to allow for a quicker rebuild time with more drives being able to fail simultaneously?

You would either accept the risks of RAID 6, go for full RAID 10, or blend the two with RAID 60. At 25 drives you are just large enough to consider RAID 60 in a practical sense.

          http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count/

scottalanmiller

            Choices with 25 drives per node with RAID 60 would be...

            5 sets of 5 drives. That's it. Only one configuration.

So RAID 60 would give you 15 usable drives against 10 parity drives. Not a good mix at all. You lose a ton of capacity as well as performance for only moderate additional protection.
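
A quick sketch of that arithmetic in Python (the function name is purely illustrative, not from any tool):

```python
# RAID 60 layout arithmetic: each RAID 6 set spends two drives on parity.
def raid60_layout(total_drives: int, set_size: int) -> tuple[int, int]:
    """Return (usable, parity) drive counts for RAID 60 built from
    equal-sized RAID 6 sets."""
    assert set_size >= 4 and total_drives % set_size == 0
    sets = total_drives // set_size
    parity = sets * 2
    return total_drives - parity, parity

print(raid60_layout(25, 5))  # (15, 10): 15 usable, 10 parity
```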

scottalanmiller

Of course, to get to our target capacity of 100TB, here is what we would need, assuming 4TB drives....

RAID 0: 25 drives
RAID 6: 27 drives
RAID 60 (5 spans): 35 drives
RAID 10: 50 drives
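
The counts fall out of simple arithmetic; a minimal sketch, assuming the five-span RAID 60 above:

```python
import math

# Drives needed for 100TB usable with 4TB drives.
TARGET_TB, DRIVE_TB = 100, 4
data = math.ceil(TARGET_TB / DRIVE_TB)  # 25 drives of raw capacity

print("RAID 0: ", data)          # 25: no redundancy overhead
print("RAID 6: ", data + 2)      # 27: two parity drives
print("RAID 60:", data + 2 * 5)  # 35: two parity drives in each of five spans
print("RAID 10:", data * 2)      # 50: every drive mirrored
```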

GotiTServicesInc

                So really the only correct solution is a RAID 10 setup with mirroring over FC or iSCSI? I still feel like you'd be stuck with a whole lot of rebuild time if a drive failed in one of the arrays, although the rebuild time should be faster as long as you don't have a failure in both JBOD arrays on a single server?

Dashrender

But in a 20 drive array, you have 10 RAID 1 pairs with RAID 0 striped across them. So a single drive failure only requires the re-mirroring of a single drive.

dafyre

If I understand all this right, I think that RAID 10 would have the fastest rebuild time, since it only has to rebuild a single disk from its partner in the same server. It doesn't have to fight network congestion or anything like that nearly as much.

scottalanmiller @GotiTServicesInc

                      @GotiTServicesInc said:

                      So really the only correct solution is a RAID 10 setup with mirroring over FC or iSCSI?

You can't really mirror over a SAN block protocol like that. In theory you can, but it is incredibly impractical and nothing really leverages those technologies for that. You mirror either at the RAID level or via a replication protocol like DRBD or HAST.

scottalanmiller @GotiTServicesInc

                        @GotiTServicesInc said:

                        ....although the rebuild time should be faster as long as you don't have a failure in both JBOD arrays on a single server?

                        There is no JBOD. You wouldn't have that in any situation.

dafyre

What is it that folks like HP, NetApp, EMC, et al. do? Do they do it at the block level or via some other method like DRBD?

scottalanmiller @GotiTServicesInc

                            @GotiTServicesInc said:

                            So really the only correct solution is a RAID 10 ....? I still feel like you'd be stuck with a whole lot of rebuild time if a drive failed in one of the arrays

A drive failure on RAID 10 is always the same no matter how big the array is. A drive resilver on RAID 10 is always, without exception, just a RAID 1 pair doing a block-by-block copy from one drive to its partner. That's it. So if you were using 4TB drives as in our example, the rebuild time is the time that it takes to copy one drive to another directly. That's all. It's the smallest rebuild time possible for any RAID system; you really can't make it faster than that.
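
To put a rough number on that, a back-of-the-envelope sketch; the 150MB/s sustained copy rate is an assumption, and real rates depend on the drives and on competing load:

```python
# Time to mirror one 4TB drive to its partner at an assumed sustained rate.
DRIVE_TB = 4
COPY_MB_S = 150  # assumed sustained rate for a nearline SATA drive

seconds = DRIVE_TB * 1_000_000 / COPY_MB_S  # decimal TB -> MB
print(f"~{seconds / 3600:.1f} hours")       # roughly 7.4 hours for an idle copy
```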

scottalanmiller @dafyre

                              @dafyre said:

What is it that folks like HP, NetApp, EMC, et al. do? Do they do it at the block level or via some other method like DRBD?

Always something like DRBD. Normally it is proprietary, but not always. NetApp does not have a stable, working replication product, last I heard. Some vendors actually just use DRBD (Synology, for example). Some make their own replication products. But the general theory is the same: write to both at the same time, read from the local copy.

scottalanmiller @Dashrender

                                @Dashrender said:

But in a 20 drive array, you have 10 RAID 1 pairs with RAID 0 striped across them. So a single drive failure only requires the re-mirroring of a single drive.

                                Correct. Which doesn't mean that recovery is seconds or anything like that. But we measure the recovery time in hours with minimal impact rather than days or weeks with massive impact.

scottalanmiller

If you move a RAID 10 array from 4TB drives to 2TB drives, you essentially cut drive recovery time in half, too. So you can balance things depending on your needs.

scottalanmiller

                                    Handy thing to think about.....

                                    Parity RAID: Drive resilver time is determined by a combination of drive size and array size.
                                    Mirrored RAID: Drive resilver time is determined only by drive size.

GotiTServicesInc

Thank you for all the information so far, and I hope I'm not sucking you dry (I've read all the links you've posted already).

So can't DRBD use FC for the replication, or does it have to use the LAN? And if we're forced to use the LAN, we should be able to trunk some ports together to get higher throughput, no?

scottalanmiller @GotiTServicesInc

                                        @GotiTServicesInc said:

So can't DRBD use FC for the replication, or does it have to use the LAN?

                                        Well you can build a LAN on FC if you want 😉

But DRBD can't talk SCSI, so you can't use FC the way you are thinking. DRBD isn't something that leverages other storage protocols; it is its own protocol that talks natively to DRBD on the other side. You don't add more protocols to it; it just talks over the network to itself.

                                        DRBD is NOT a SAN, it's just replication.
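
For a concrete picture, here is a minimal sketch of what a DRBD (8.x style) resource definition looks like; the hostnames, devices, and addresses are hypothetical:

```
# /etc/drbd.d/r0.res -- illustrative sketch only
resource r0 {
    protocol C;                 # synchronous: a write completes on both nodes
    device    /dev/drbd0;
    disk      /dev/sdb1;        # the local array underneath
    meta-disk internal;

    on node-a {
        address 10.0.0.1:7789;  # DRBD's own TCP connection
    }
    on node-b {
        address 10.0.0.2:7789;
    }
}
```

Notice there is no FC or iSCSI anywhere in it; the two DRBD peers just talk TCP to each other.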

scottalanmiller @GotiTServicesInc

                                          @GotiTServicesInc said:

And if we're forced to use the LAN, we should be able to trunk some ports together to get higher throughput, no?

Yes, and you would likely use 10GigE connections, since no switch is required for the direct replication link.
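
As a sanity check on link speeds, a quick sketch; the ~90% usable-line-rate figure is an assumption, and in practice the drives often bottleneck before a 10GigE link does:

```python
# Time to push one 4TB drive's worth of data across the replication link.
DRIVE_TB = 4
for label, gbit in (("2x1GigE trunk", 2), ("10GigE", 10)):
    mb_s = gbit * 1000 / 8 * 0.9           # assume ~90% of line rate usable
    hours = DRIVE_TB * 1_000_000 / mb_s / 3600
    print(f"{label}: ~{hours:.1f} hours")  # ~4.9h vs ~1.0h
```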
