    XenServer hyperconverged

    IT Discussion
    Tags: xenserver, xenserver 7, xen orchestra, hyperconvergence, hyperconverged
    • Dashrender @olivier

      @olivier said in XenServer hyperconverged:

      @Dashrender Can you be more specific on the total number of nodes from the start?

      Let's take your picture: 6 total nodes, but only 2 copies of the data. Suppose node 1 in orange is shut down, then 1 hour later node 2 in orange is shut down. What happens?

      How about a 2-node setup? i.e. no other witnesses in the configuration.

      • olivier @DustinB3403

        @dustinb3403 said in XenServer hyperconverged:

        @olivier I think this is a 2 node setup that @Dashrender is discussing.

        Can the system scale to more than 2 nodes?

        This IS meant to scale (up to the max pool size, 16 hosts, which is more a XenServer limit than ours 😛 )

        @Dashrender Maybe the picture is not clear: each "disk" icon is not a disk but a XenServer host. So you have 6 hosts there. When you lose "enough" hosts (in the 6-host setup, the 2 replicated hosts in the same "mirror"), the volume goes read-only. As soon as one of those 2 is back online, R/W is back.

        On a 2-node setup, there is an arbiter VM that acts as the witness. If you lose the host with the 2 VMs (one arbiter and one "normal"), you'll go read-only. No split brain possible.
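        The failure rule just described (replica-2: the volume stays writable as long as every mirror pair keeps at least one host online) can be sketched in a few lines. The host names are illustrative, not anything XOSAN itself uses:

        ```python
        def volume_writable(mirrors, down):
            """mirrors: list of (host_a, host_b) replica pairs; down: set of failed hosts."""
            # Read-write requires at least one surviving host in EVERY mirror;
            # losing both hosts of any one pair drops the whole volume to read-only.
            return all(any(h not in down for h in pair) for pair in mirrors)

        # The 6-host picture: 3 mirror pairs (orange, yellow, pink)
        mirrors = [("orange1", "orange2"), ("yellow1", "yellow2"), ("pink1", "pink2")]

        print(volume_writable(mirrors, {"orange1"}))             # True: the orange mirror survives
        print(volume_writable(mirrors, {"orange1", "orange2"}))  # False: whole mirror lost -> read-only
        ```

        This matches the scenario asked about above: losing node 1 in orange alone is fine; losing node 2 in orange an hour later takes the volume read-only until one of the pair returns.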

        • Dashrender

          In this picture
          https://i.imgur.com/VKOVnkU.png

          how many systems have the data on them? Only 2?

          • DustinB3403 @olivier

            @olivier said in XenServer hyperconverged:

            @dustinb3403 said in XenServer hyperconverged:

            @olivier I think this is a 2 node setup that @Dashrender is discussing.

            Can the system scale to more than 2 nodes?

            This IS meant to scale (up to the max pool size, 16 hosts, which is more a XenServer limit than ours 😛 )

            @Dashrender Maybe the picture is not clear: each "disk" icon is not a disk but a XenServer host. So you have 6 hosts there. When you lose "enough" hosts (in the 6-host setup, the 2 replicated hosts in the same "mirror"), the volume goes read-only. As soon as one of those 2 is back online, R/W is back.

            On a 2-node setup, there is an arbiter VM that acts as the witness. If you lose the host with the 2 VMs (one arbiter and one "normal"), you'll go read-only. No split brain possible.

            The question is (I think): if you lost all hosts in the orange group, would XOSAN and the running VMs in the entire pool still be functional read/write until those servers are brought back online?

            • R3dPand4 @olivier

              @olivier So you just can't make any writes during this period of node failure/recovery? Are the writes cached? If so, how much can be cached, and for how long?

              • olivier @Dashrender

                @dashrender said in XenServer hyperconverged:

                In this picture
                https://i.imgur.com/VKOVnkU.png

                how many systems have the data on them? Only 2?

                This is part of a distributed-replicated setup. In this "branch", you have 2 XS hosts with 100 GiB of data on each, "RAID1"-like.

                The other branches (not in the picture you displayed) act like a RAID0 on top.
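                As a back-of-the-envelope check of that layout (RAID1-like mirror pairs, striped together RAID0-style), usable space is per-host capacity times the number of mirrors. The numbers here are a sketch based on the figures in the thread, not XOSAN output:

                ```python
                def usable_capacity_gib(n_hosts, replica, per_host_gib):
                    # Distributed-replicated: hosts are grouped into mirrors of `replica`
                    # (the RAID1-like branches), and the mirrors are striped together like
                    # a RAID0 on top, so only one copy per mirror counts as usable space.
                    assert n_hosts % replica == 0, "hosts must fill whole mirrors"
                    return (n_hosts // replica) * per_host_gib

                print(usable_capacity_gib(6, 2, 100))  # 6 hosts in 3 mirror pairs -> 300
                ```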

                • olivier

                  The question is (I think): if you lost all hosts in the orange group, would XOSAN and the running VMs in the entire pool still be functional read/write until those servers are brought back online?

                  Because data is spread across all subvolumes (think RAID10), the whole thing will be read-only.

                  You can avoid that if you decide NOT to stripe files across all subvolumes (which is the default behavior in Gluster, by the way), but it's NOT a good thing for VMs (heal time would be horrible, and the subvolumes wouldn't be balanced).

                  • DustinB3403 @olivier

                    So @olivier, on each host in an XOSAN pool, is there a dedicated witness VM?

                    If so, that witness acts as the arbitrator for that host, meaning that if the VM goes offline, the available storage, RAM and CPU for that host is unavailable.

                    It doesn't mean that an individual VM running on that host wouldn't be able to move to either of the other 2 servers in the 3-server pool.

                    Am I correct in thinking that the Orange, Yellow and Pink boxes are individual XS servers, presenting 100 GB each to the pool?

                    • olivier @DustinB3403

                      @dustinb3403 I suppose you mean the 2-XS-host setup. It's only in this case that you need an arbiter, and it will be on one of the 2 hosts (to avoid split brain).

                      If only your arbiter VM is down, quorum is still met (2 of 3). If you lose more than 1/3 of the nodes (arbiter or not), you go read-only (protecting against split brain).

                      And no, you are not correct: each disk picture is a XenServer host. There is a XOSAN controller VM on each host (and an extra arbiter ONLY in the 2-host scenario; if you grow the XOSAN from 2 to 4 hosts, there's no arbiter anymore, it's automatically removed).
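                      The quorum rule stated here (read-only once more than 1/3 of the nodes are down) can be written as a quick check. This is an illustration of the stated rule, not XOSAN code:

                      ```python
                      def quorum_met(total_nodes, down_nodes):
                          # Writable only while no more than 1/3 of the voting nodes
                          # (arbiter included) are down, i.e. at least 2/3 are still up.
                          return down_nodes * 3 <= total_nodes

                      # 2-host setup: 2 hosts + 1 arbiter VM = 3 voting nodes
                      print(quorum_met(3, 1))  # True: arbiter alone down, 2 of 3 still up
                      print(quorum_met(3, 2))  # False: read-only, no split brain possible
                      ```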

                      • olivier

                        I asked for the picture to be modified to draw a host with the disk inside it, to avoid confusion 🙂 Thanks for the feedback on that, guys 😉

                        • FATeknollogee

                          I haven't read the docs, so this might be a stupid question...

                          Orange, Yellow & Pink each contain 2 hosts, correct?

                          Do you always need to add 2 hosts every time you scale up/out?

                          • DustinB3403

                            @olivier said in XenServer hyperconverged:

                            And no, you are not correct: each disk picture is a XenServer host. There is a XOSAN controller VM on each host (and an extra arbiter ONLY in the 2-host scenario; if you grow the XOSAN from 2 to 4 hosts, there's no arbiter anymore, it's automatically removed).

                            OK, that is what I assumed in a private chat: that each disk image inside each orange, yellow and pink box was its own server.

                            I also mentioned in a PM that each server in the pool would have its own witness.

                            The picture is a bit confusing, though. (Glad you asked for a clearer depiction.)

                            http://i.imgur.com/zVwcGs7.png

                            • DustinB3403 @FATeknollogee

                              @fateknollogee said in XenServer hyperconverged:

                              Do you always need to add 2 hosts every time you scale up/out?

                              Good question.

                              • olivier @FATeknollogee

                                @fateknollogee If you are in replicated-2, you can add:

                                • 2 hosts, to create a new extra RAID1 (at the bottom of the RAID0), giving a distributed-replicated (2x2)
                                • 1 host, to go from replicated-2 to replicated-3 (data is copied on three hosts, so it's a 1x3)
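                                A sketch of what the two options do to usable space, assuming 100 GiB per host (an illustrative figure, not from the thread):

                                ```python
                                PER_HOST_GIB = 100  # illustrative per-host capacity

                                # Start: replicated-2 on 2 hosts -> 1 mirror -> 100 GiB usable, tolerates 1 failure.

                                # Option A: add 2 hosts -> distributed-replicated 2x2 -> 2 mirrors striped together.
                                option_a_usable = 2 * PER_HOST_GIB  # 200 GiB usable

                                # Option B: add 1 host -> replicated-3 (1x3) -> still one group, now of 3 copies.
                                option_b_usable = 1 * PER_HOST_GIB  # 100 GiB usable, but any 2 hosts can fail

                                print(option_a_usable, option_b_usable)  # 200 100
                                ```

                                So adding pairs buys space, while adding a single host to go replica-3 buys redundancy.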
                                • DustinB3403 @olivier

                                  @olivier said in XenServer hyperconverged:

                                  @fateknollogee If you are in replicated-2, you can add:

                                  • 2 hosts, to create a new extra RAID1 (at the bottom of the RAID0), giving a distributed-replicated (2x2)
                                  • 1 host, to go from replicated-2 to replicated-3 (data is copied on three hosts, so it's a 1x3)

                                  So you can scale in any fashion you want.

                                  What are the benefits of scaling with 2 hosts at a time?

                                  What are the benefits of scaling with only a single host at a time?

                                  • olivier @DustinB3403

                                    @dustinb3403 Scaling by one host: going "triplicate" (same data on every node; you get 1/3 of the total disk capacity, but you can lose 2 hosts and still have data access).

                                    Scaling by adding 2 hosts at the same time: that adds a subvolume, so more space.

                                    • FATeknollogee

                                      Let's say I go from 6 hosts (like your pic above shows: Brown/Yellow/Pink) to 8 hosts...

                                      Does the existing data get rebalanced/redistributed across all 8 hosts?

                                      • olivier @FATeknollogee

                                        @fateknollogee yes

                                        • Dashrender @olivier

                                          @olivier said in XenServer hyperconverged:

                                          @fateknollogee yes

                                          And how many copies of the data exist? i.e. how many nodes can be shut down before you lose access to the data?

                                          • olivier @Dashrender

                                            @dashrender As long as you don't lose a complete mirror (the 2 hosts of the same pair), no problem. So on 8 nodes, that means up to 4 hosts can be down, as long as it is only one per mirror.
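                                            That claim can be brute-force checked for 8 hosts arranged as 4 mirror pairs (host names are made up for the sketch):

                                            ```python
                                            from itertools import combinations

                                            # 8 hosts as 4 replica-2 mirror pairs (illustrative names)
                                            mirrors = [("h1", "h2"), ("h3", "h4"), ("h5", "h6"), ("h7", "h8")]
                                            hosts = [h for pair in mirrors for h in pair]

                                            def writable(down):
                                                # Writable as long as every mirror keeps at least one host online.
                                                return all(any(h not in down for h in pair) for pair in mirrors)

                                            # Best case: the largest failure set that stays writable (one host per mirror).
                                            best = max(len(c) for k in range(len(hosts) + 1)
                                                       for c in combinations(hosts, k) if writable(set(c)))
                                            # Worst case: the smallest failure set that forces read-only (one full mirror).
                                            worst = min(len(c) for k in range(1, len(hosts) + 1)
                                                        for c in combinations(hosts, k) if not writable(set(c)))
                                            print(best, worst)  # 4 2
                                            ```

                                            So with 8 nodes you can survive up to 4 failures in the best case (one per mirror), but as few as 2 unlucky failures (both halves of one mirror) take the volume read-only.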
