
    ZFS Based Storage for Medium VMWare Workload

    SAM-SD
    zfs storage virtualization filesystems raid
• scottalanmiller @dafyre

      @dafyre said:

It seems I remember @donaldlandru mentioning making one big 5-host cluster. If he were to use something such as XenServer, he would get the big cluster and still be able to separate the workloads out between the dev servers and the ops servers and still have "Local" storage, right?

Even if the answer to the "Local" storage (I say that because XenServer can do its own shared storage now, right?) is a resounding "No", he can still leverage replication to replicate the Dev hosts into the Ops environment and vice versa for maintenance and emergencies, right?

Correct. This would actually make you question the term "cluster," as the boxes would not be associated with each other except that they are all managed from the same interface. Is that a cluster? Not to most people. Does it look like a single entity to someone managing it? Yes.

      He could replicate things into other environments, yes.

• dafyre @scottalanmiller

        @scottalanmiller I was thinking in terms of XenServer doing its own shared storage amongst the 5 servers that make up the cluster.

• donaldlandru

          So to define cluster a little better in our environment.

For the ops servers, cluster is likely the proper term. We have two nodes, ensure that each node has available resources to run the entire workload of both servers if needed, and use VMware's HA to manage this.

For the dev servers, it is simply a single pane of glass, which is really all the Essentials kit provides you, plus access to the backup APIs.

          The politics are likely to be harder to play as we just renewed our SnS for both Essentials and Essentials plus in January for three years.

Coupled with this, our offshore datacenter also has a 3-node development "cluster" (three panes so far, if you are keeping count), which is also based on an Essentials kit and pushes us even further from truly having a single pane of glass.

Another important piece of information with the local storage is that everything is based on 2.5" disks -- and all but two servers only have two bays each, so getting any real amount of local storage without going to external direct-attached (non-shared) storage is going to be a challenge.

• coliver @dafyre

            @dafyre said:

            @scottalanmiller I was thinking in terms of XenServer doing its own shared storage amongst the 5 servers that make up the cluster.

            I don't think XenServer has anything like VMWare's vSAN. I think you could probably do something like this inside of dom0 and make a RAIN or something.

• Dashrender @scottalanmiller

              @scottalanmiller said:

By dropping VMware vSphere Essentials you are looking at roughly $1200 in savings right away. Both Hyper-V and XenServer will do what you need absolutely free.

Did the price of Essentials double? I thought it was $600 for three nodes for Essentials, and something like $5000 for Essentials Plus?

• dafyre @coliver

@coliver That's what I was thinking. Somebody somewhere (Seems like I remember Scott mentioning this) has said that XenServer uses DRBD under the hood for this.

• Dashrender

                  What DAS chassis would someone recommend for this setup?

@donaldlandru mentioned that he needed at least 9 TB for the dev trio of servers, but not how much was needed for the two ops servers.

• coliver @dafyre

                    @dafyre said:

@coliver That's what I was thinking. Somebody somewhere (Seems like I remember Scott mentioning this) has said that XenServer uses DRBD under the hood for this.

You could use DRBD for this. But that would be network RAID-1; not sure if you can do other methods with DRBD.
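(For anyone unfamiliar with the term, here is a toy sketch of what "network RAID-1" means. This is not DRBD itself and the file names are made up; it just illustrates the idea that a write is only considered complete once both the local copy and the remote copy have it, roughly DRBD's synchronous "protocol C" behaviour.)

    # Toy illustration of "network RAID-1": every write goes to a local
    # replica and a remote replica, and is only acknowledged once both
    # copies are written. File paths stand in for the two hosts.
    from pathlib import Path

    class MirroredVolume:
        """Mirror every write onto two backing files (stand-ins for two hosts)."""

        def __init__(self, local_path: str, remote_path: str) -> None:
            self.replicas = [Path(local_path), Path(remote_path)]
            for replica in self.replicas:
                replica.touch()

        def write(self, data: bytes) -> None:
            # The write only "succeeds" after *both* replicas are updated,
            # which is why a synchronous network mirror behaves like RAID-1
            # stretched across two hosts.
            for replica in self.replicas:
                with replica.open("ab") as handle:
                    handle.write(data)
                    handle.flush()

    if __name__ == "__main__":
        vol = MirroredVolume("local_replica.img", "remote_replica.img")
        vol.write(b"example block\n")  # acknowledged once both copies hold it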

• scottalanmiller @dafyre

                      @dafyre said:

                      If that's the case, then @donaldlandru could just build one big 5-host cluster (assuming he can get the Politics taken care of and the CPUs are compatible -- if that is even an issue) on XenServer and be happy... Upgrade to 4 or 6TB drives per host (RAID 10) and also be happy.

                      Yes. Getting big drives and RAID 10 are critical to getting necessary speed. Using WD RE SAS drives is probably best to get up to the kinds of IOPS that he needs. Best to get some kind of caching going on to really make sure enough performance is there.

With RAID 10 WD RE SAS we are probably looking at around 500-700 read IOPS per machine, which is tons better than what was stated as needed, but without the shared IOPS you want extra overhead on a node-by-node basis to be "safe" in performance terms.

                      The additional capacity will be a huge win. With 3TB drives he would have 6TB usable PER NODE rather than 9TB usable for all five machines. That's huge. Leaping from 9TB total to 30TB total. Go to 4TB, 5TB or 6TB drives and those numbers skyrocket to as high as 60TB total available space!
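(A quick back-of-the-envelope check on those numbers. The drive count per node and the per-drive IOPS figure below are my own assumptions for illustration -- four drives per node in RAID 10 and roughly 150 read IOPS per 7,200 RPM nearline SAS drive -- not specs from this thread.)

    # Rough arithmetic behind the RAID 10 capacity and IOPS figures above.
    # Assumptions (for illustration only): 4 drives per node in RAID 10,
    # ~150 read IOPS per 7,200 RPM nearline SAS drive, 5 nodes total.
    DRIVES_PER_NODE = 4
    NODES = 5
    READ_IOPS_PER_DRIVE = 150

    def raid10_usable_tb(drive_tb: float, drives: int = DRIVES_PER_NODE) -> float:
        """RAID 10 keeps half the raw capacity (mirrored pairs, striped)."""
        return drive_tb * drives / 2

    def raid10_read_iops(drives: int = DRIVES_PER_NODE) -> int:
        """Reads can be serviced by every member drive of a RAID 10 set."""
        return drives * READ_IOPS_PER_DRIVE

    for drive_tb in (3, 4, 6):
        per_node = raid10_usable_tb(drive_tb)
        print(f"{drive_tb} TB drives: {per_node:.0f} TB usable per node, "
              f"{per_node * NODES:.0f} TB across {NODES} nodes, "
              f"~{raid10_read_iops()} read IOPS per node")
    # 3 TB drives -> 6 TB per node and 30 TB total; 6 TB drives -> 60 TB total,
    # matching the figures quoted above.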

• scottalanmiller @dafyre

                        @dafyre said:

@coliver That's what I was thinking. Somebody somewhere (Seems like I remember Scott mentioning this) has said that XenServer uses DRBD under the hood for this.

                        Sure does.

• scottalanmiller @Dashrender

                          @Dashrender said:

                          @scottalanmiller said:

By dropping VMware vSphere Essentials you are looking at roughly $1200 in savings right away. Both Hyper-V and XenServer will do what you need absolutely free.

Did the price of Essentials double? I thought it was $600 for three nodes for Essentials, and something like $5000 for Essentials Plus?

                          Those are the rough numbers. He has five nodes so that means either buying all licenses twice (so $1200 and $10,000) or being disqualified from Essentials pricing altogether and needing to move to Standard licensing options.
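(Spelling that math out, using the rough prices quoted here rather than current list prices: an Essentials-class kit covers three hosts, so five hosts means buying two kits.)

    # Rough licensing arithmetic for five hosts, using the ballpark prices
    # quoted in this thread (not current VMware list prices).
    ESSENTIALS_KIT = 600         # ~$600 per kit, covers up to 3 hosts
    ESSENTIALS_PLUS_KIT = 5000   # ~$5,000 per kit, covers up to 3 hosts
    HOSTS = 5
    HOSTS_PER_KIT = 3

    kits_needed = -(-HOSTS // HOSTS_PER_KIT)  # ceiling division: 5 hosts -> 2 kits
    print(f"Essentials for {HOSTS} hosts:      ${kits_needed * ESSENTIALS_KIT:,}")
    print(f"Essentials Plus for {HOSTS} hosts: ${kits_needed * ESSENTIALS_PLUS_KIT:,}")
    # ~$1,200 and ~$10,000 -- the "buying all licenses twice" numbers above.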

• donaldlandru @scottalanmiller

                            @scottalanmiller said:

                            Currently you have an inverted pyramid of doom, not the best design as you know.

This is true, in all scenarios we are playing out we are left with this giant SPOF. This is why I really like the alternative shared solutions (ZFS, OpenIndiana, etc.), because they would allow me to build a second storage node and do replication for failover.

The business is also screaming for reliability and 110% uptime, but falls short when it comes time to write the check for what they want.

Do the dev environments need to be highly available? IMO no, but the business sees that as its bread and butter; they are aware that we still have not fulfilled this requirement.

• donaldlandru @scottalanmiller

                              @scottalanmiller said:

                              @Dashrender said:

                              @scottalanmiller said:

By dropping VMware vSphere Essentials you are looking at roughly $1200 in savings right away. Both Hyper-V and XenServer will do what you need absolutely free.

Did the price of Essentials double? I thought it was $600 for three nodes for Essentials, and something like $5000 for Essentials Plus?

                              Those are the rough numbers. He has five nodes so that means either buying all licenses twice (so $1200 and $10,000) or being disqualified from Essentials pricing altogether and needing to move to Standard licensing options.

                              This is very much the case here.

To get this all into a single cluster (and hopefully using something like VSAN) would require us to upgrade to Standard or higher; we would be able to use acceleration kits to get us there, but it is no small investment.

• dafyre @donaldlandru

                                @donaldlandru said:

                                The politics are likely to be harder to play as we just renewed our SnS for both Essentials and Essentials plus in January for three years.
                                <snip>
Another important piece of information with the local storage is that everything is based on 2.5" disks -- and all but two servers only have two bays each, so getting any real amount of local storage without going to external direct-attached (non-shared) storage is going to be a challenge.

                                He brings a good point about the 2 bays and 2.5" drives... Do they even make 4 / 6 TB drives in 2.5" form yet?

                                If not, would it be worth getting an external DAS shelf for each of the servers?

• Dashrender @scottalanmiller

                                  @scottalanmiller said:

                                  @Dashrender said:

                                  @scottalanmiller said:

By dropping VMware vSphere Essentials you are looking at roughly $1200 in savings right away. Both Hyper-V and XenServer will do what you need absolutely free.

Did the price of Essentials double? I thought it was $600 for three nodes for Essentials, and something like $5000 for Essentials Plus?

                                  Those are the rough numbers. He has five nodes so that means either buying all licenses twice (so $1200 and $10,000) or being disqualified from Essentials pricing altogether and needing to move to Standard licensing options.

                                  Aww, gotcha - you were doubling them up.... His current spend was $5600.

• Dashrender @donaldlandru

                                    @donaldlandru said:

                                    @scottalanmiller said:

                                    Currently you have an inverted pyramid of doom, not the best design as you know.

This is true, in all scenarios we are playing out we are left with this giant SPOF. This is why I really like the alternative shared solutions (ZFS, OpenIndiana, etc.), because they would allow me to build a second storage node and do replication for failover.

The business is also screaming for reliability and 110% uptime, but falls short when it comes time to write the check for what they want.

Do the dev environments need to be highly available? IMO no, but the business sees that as its bread and butter; they are aware that we still have not fulfilled this requirement.

The question is - do they lose more money when the operations systems are down or when the dev environment is down?

• scottalanmiller @donaldlandru

                                      @donaldlandru said:

                                      This is true, in all scenarios we are playing out we are left with this giant SPOF.

It's important to recognize that it is a SPOF. But being a SPOF is not the core issue, believe it or not, just the one that causes the biggest emotional reaction. If you were to buy a super high-end active/active EMC or HDS device for this (mainframe-class storage, starting around $50K for the smallest possible units), the fact that it was a SPOF would be heavily mitigated. The whole mainframe concept is built around making a SPOF that is unlikely to fail.

                                      But your issues are bigger. Here are the big issues that you are left with in both of your scenarios:

                                      • Single point of failure on which everything rests (the thing most likely to fail causes EVERYTHING to fail.)
• No risk mitigation for the other layers in the dependency chain. This isn't a 3-2-1 as traditionally described but actually a (1/1/1-1), meaning ANY server failure results in unmitigated (literally) failure AND any storage failure results in total failure. You have a dramatic increase in failure risk with this design, not just a small or moderate increase like most people see (because most people are confused and heavily mitigate risk at one or two but not all three layers). So it is very important to realize that this is at least one full order of magnitude more risky than a traditional inverted pyramid of doom (a rough sketch of this failure math follows the list below).
• The single point of failure that you have is actually a pretty fragile one. Probably more fragile than the servers themselves. So not only is the risk of failure doubled by having two completely separate places for things to fail, but the single point of failure that impacts everything is the most fragile piece of all.
                                      • This has the highest cost both today AND going into the future.
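(A minimal sketch of the failure math referenced in the list above. The availability numbers are invented placeholders, not measurements of this environment; the point is just that a chain of unmitigated layers is always less available than its weakest link.)

    # Toy availability math for a dependency chain: when every layer is an
    # unmitigated single point of failure, the stack only works while ALL
    # layers work, so their availabilities multiply. Figures are placeholders.
    def chain_availability(*layers: float) -> float:
        """Availability of components in series (any failure downs the stack)."""
        total = 1.0
        for availability in layers:
            total *= availability
        return total

    host = 0.999     # assumed availability of a single server
    switch = 0.999   # assumed availability of the storage network
    storage = 0.995  # assumed availability of a single shared storage box

    print(f"standalone host:         {host:.4f}")
    print(f"host + switch + storage: {chain_availability(host, switch, storage):.4f}")
    # The chained (1/1/1-1) design ends up less available than any single
    # layer on its own, which is the increased risk described above.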
• Dashrender @donaldlandru

                                        @donaldlandru said:

                                        @scottalanmiller said:

                                        @Dashrender said:

                                        @scottalanmiller said:

By dropping VMware vSphere Essentials you are looking at roughly $1200 in savings right away. Both Hyper-V and XenServer will do what you need absolutely free.

Did the price of Essentials double? I thought it was $600 for three nodes for Essentials, and something like $5000 for Essentials Plus?

                                        Those are the rough numbers. He has five nodes so that means either buying all licenses twice (so $1200 and $10,000) or being disqualified from Essentials pricing altogether and needing to move to Standard licensing options.

                                        This is very much the case here.

To get this all into a single cluster (and hopefully using something like VSAN) would require us to upgrade to Standard or higher; we would be able to use acceleration kits to get us there, but it is no small investment.

But that is completely unnecessary if you move to Xen (or is it XenServer - still confused) or Hyper-V.

• Dashrender @dafyre

                                          @dafyre said:

                                          @donaldlandru said:

                                          The politics are likely to be harder to play as we just renewed our SnS for both Essentials and Essentials plus in January for three years.
                                          <snip>
Another important piece of information with the local storage is that everything is based on 2.5" disks -- and all but two servers only have two bays each, so getting any real amount of local storage without going to external direct-attached (non-shared) storage is going to be a challenge.

                                          He brings a good point about the 2 bays and 2.5" drives... Do they even make 4 / 6 TB drives in 2.5" form yet?

                                          If not, would it be worth getting an external DAS shelf for each of the servers?

It's been 15 years, but I've seen DAS shelves that can be split between two hosts. Assuming those are still made, and there are enough disk slots, that would save a small amount.

• scottalanmiller @donaldlandru

                                            @donaldlandru said:

To get this all into a single cluster (and hopefully using something like VSAN) would require us to upgrade to Standard or higher; we would be able to use acceleration kits to get us there, but it is no small investment.

                                            Going to VSAN, Starwind, DRBD, etc. would be an "orders of magnitude leap" that is not warranted. It just can't make sense. What you have today and what you are talking about moving to are insanely "low availability." Crazy low. And no one had any worries or concerns about that, right?

                                            What I am proposing is that you make the single order of magnitude leap from "acceptably low" reliability to "standard reliability which is good enough for any normal SMB" while dropping your cost dramatically. It's a massive win. Saving a fortune AND leaping far beyond your reliability needs.

Going to something like VSAN just can't make sense. You didn't need something like this before, so why would you suddenly need to leapfrog from "super low availability" right over the top of normal all the way to "they don't even need this on most of Wall St" super high availability, at massively high cost, requiring that you upgrade your compute nodes and license high-cost storage replication technologies? Not only would it require that, but it would require bigger or more nodes in order to handle those needs. It's a little like someone who has been riding a bicycle for years (but paying a fortune for it) finding out that they can get a Chevy Cruze for half the price, but, having seen what cars are like, deciding that they should buy a Ferrari for their first car when a bicycle was fine all along.
