
    XenServer, local storage, and redundancy/backups

    IT Discussion
    Tags: xenserver, backup, redundancy
    40 Posts • 7 Posters • 9.1k Views
    • Kelly @DustinB3403

      @DustinB3403 said:

      Can you tear down the OpenStack Cloud while you work on rebuilding these systems into a reasonable configuration?

      I'm going to blow it away and reinstall completely fresh. The OpenStack version is pretty old and it is running on Ubuntu 12.04.

      • DustinB3403

        Since you have more than 2 hosts, you could use the built-in tools from Xen to create a highly reliable pool.

        I believe @halizard might even work with more than 2 hosts, but their bread and butter, so to speak, is a 2-host setup from everything I've read.

        • Kelly

          I thought @halizard was for more than two hosts and that you needed HA-iSCSI for just two nodes?

          • DustinB3403

            No, @halizard makes it so you can work with just 2 hosts.

            • DustinB3403

              Possibly more, but I haven't dug into that.

              • scottalanmiller

                Can you break down the hardware better? I'm unclear whether you have an OpenStack compute structure AND a CEPH one, or if that is all on the same hardware.

                If the former, why not keep CEPH and only move the top layer from OpenStack to XS?

                Also, what is driving the move away from OpenStack? Just a desire for simplicity?

                • scottalanmiller @DustinB3403

                  @DustinB3403 said:

                  Since you have more than 2 hosts, you could use the built-in tools from Xen to create a highly reliable pool.

                  I believe @halizard might even work with more than 2 hosts, but their bread and butter, so to speak, is a 2-host setup from everything I've read.

                  It is built on DRBD, which is really just two hosts.

                  • DustinB3403 @scottalanmiller

                    @scottalanmiller Thank you for the clarification.

                    • Kelly @scottalanmiller

                      @scottalanmiller said:

                      Can you break down the hardware better? I'm unclear whether you have an OpenStack compute structure AND a CEPH one, or if that is all on the same hardware.

                      If the former, why not keep CEPH and only move the top layer from OpenStack to XS?

                      Also, what is driving the move away from OpenStack? Just a desire for simplicity?

                      I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.

                      • Kelly

                        As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi-crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner rather than later that can still be a longish-term solution. We don't really need a private cloud. It is mostly a convenience, and at this point it appears not to be worth the overhead to set up and maintain.

                        • travisdh1 @Kelly

                          @Kelly said:

                          @scottalanmiller said:

                          Can you break down the hardware better? I'm unclear whether you have an OpenStack compute structure AND a CEPH one, or if that is all on the same hardware.

                          If the former, why not keep CEPH and only move the top layer from OpenStack to XS?

                          Also, what is driving the move away from OpenStack? Just a desire for simplicity?

                          I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated.

                          Why do you feel the need to separate the storage and compute? What business reason exists to justify the added cost and management headache?

                          I get that you currently have a management headache, and I do like the idea of moving to something more reliable. XenServer with halizard and XenOrchestra would be a great drop-in replacement; it's what I'm migrating to, at least.

                          • scottalanmiller @Kelly

                            @Kelly said:

                            @scottalanmiller said:

                            Can you break down the hardware better? I'm unclear whether you have an OpenStack compute structure AND a CEPH one, or if that is all on the same hardware.

                            If the former, why not keep CEPH and only move the top layer from OpenStack to XS?

                            Also, what is driving the move away from OpenStack? Just a desire for simplicity?

                            I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.

                            Gotcha, okay. I was hoping that the CEPH infrastructure could remain, but I guess not.

                            • scottalanmiller @Kelly

                              @Kelly said:

                              As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner than later, but be a longish term solution. We don't really need a private cloud. It is a convenience mostly, and at this point it appears to not be worth the overhead to setup and maintain.

                              I totally get that a private cloud isn't likely to make sense with just four nodes; that's pretty crazy 🙂

                              Was just thinking of what might be easy going forward.

                              • scottalanmiller

                                Do you really need HA? HA adds complication. Although there is an option here: with four nodes you could do TWO HA-Lizard clusters and, I think, put it all under XO for a single pane of glass. Not as nice as a single four-node cluster, but free, and it works with what you have, more or less.

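For anyone following along, the two-pool idea boils down to standard XenServer pool operations. A rough sketch with the `xe` CLI (all addresses, hostnames, and passwords here are made-up placeholders, not details from this thread):

```shell
# Sketch only: build two 2-host pools, then point Xen Orchestra at both.
# Addresses and credentials below are placeholders.

# On host2: join host1's pool (host1 becomes the pool master)
xe pool-join master-address=192.0.2.11 master-username=root master-password=changeme

# On host4: join host3's pool (host3 becomes the pool master)
xe pool-join master-address=192.0.2.13 master-username=root master-password=changeme

# On each pool master: name the pool so it is easy to tell apart in XO
xe pool-param-set uuid=$(xe pool-list --minimal) name-label=halizard-pool-1
```

XO then connects to each pool master and shows both pools in one view.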
                                • scottalanmiller

                                  The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.

                                  • Kelly @scottalanmiller

                                    @scottalanmiller said:

                                    The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.

                                    It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not for HA per se; it is more for survivability in the case of a single drive failing if I don't run any kind of RAID (so as not to lose storage capacity).

                                    • scottalanmiller @Kelly

                                      @Kelly said:

                                      It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not for HA per se; it is more for survivability in the case of a single drive failing if I don't run any kind of RAID (so as not to lose storage capacity).

                                      You are going to lose storage capacity to either RAID or RAIN; you can't do any sort of failover without losing capacity. The simplest thing, if you are okay with it, would be to do RAID 6 or RAID 10 (depending on the capacity that you are willing to lose) using MD software RAID, and not do HA but just run each machine individually. Use XO to manage them all as a pool.

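To put rough numbers on the RAID 6 versus RAID 10 trade-off, here is a quick back-of-the-envelope sketch. The drive sizes are hypothetical, and it uses the usual rule that an array treats every member as the size of its smallest drive, which is why mismatched drives hurt:

```python
def usable_capacity_tb(drive_sizes_tb, level):
    """Approximate usable capacity of an MD array built from the given drives.

    Arrays size every member down to the smallest drive, so mismatched
    drives waste the difference.
    """
    n = len(drive_sizes_tb)
    smallest = min(drive_sizes_tb)
    if level == "raid6":       # all members minus two drives' worth of parity
        return smallest * (n - 2)
    if level == "raid10":      # mirrored pairs: half the members hold copies
        return smallest * (n // 2)
    raise ValueError("unsupported level: " + level)

# Hypothetical mismatched set: the 4 TB drives only contribute 2 TB each
drives = [2.0, 2.0, 2.0, 4.0, 4.0, 4.0]
print(usable_capacity_tb(drives, "raid6"))   # 8.0 TB usable
print(usable_capacity_tb(drives, "raid10"))  # 6.0 TB usable
```

With six members RAID 6 keeps more capacity than RAID 10; with exactly four members the two come out the same, which is why the choice hinges on how much capacity you are willing to lose versus rebuild behavior.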
                                      • Reid Cooper @Kelly

                                        @Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.

                                        • Kelly @Reid-Cooper

                                          @Reid-Cooper said:

                                          @Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.

                                          There is an onboard controller, but it isn't running any RAID configuration that I can tell.

                                          • scottalanmiller @Kelly

                                            @Kelly said:

                                            There is an onboard controller, but it isn't running any RAID configuration that I can tell.

                                            It would not be for CEPH. CEPH is a RAIN system, so there would be no RAID. But what it was doing isn't an issue; what we care about going forward is what we can do. The SAS controller has the drives attached, and that's all we would care about when looking at software RAID from MD. The SAS controller isn't what provides the RAID; it is just what attaches the drives.

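Since the point above is that MD provides the RAID and the SAS controller just attaches the drives, setting this up is a plain `mdadm` exercise. A minimal sketch (device names are placeholders, not from this thread; check yours with `lsblk` before running anything):

```shell
# Sketch only: /dev/sdb..sde are placeholders.
# List the drives attached via the SAS controller
lsblk -d -o NAME,SIZE,MODEL

# Create a 4-member RAID 6 array with MD software RAID
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial resync and inspect the array state
cat /proc/mdstat
mdadm --detail /dev/md0

# Record the array so it assembles on boot
mdadm --detail --scan >> /etc/mdadm.conf
```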