    Starwind AMA Ask Me Anything April 26 10am - 12pm EST

    102 Posts 15 Posters 24.6k Views
    • TheDeepStorage Vendor @ABykovskyi

      @ABykovskyi said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

      @travisdh1 said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

      Apparently I'm going to be starting with the unusual questions. In your V2V Converter, you have the option to convert to Microsoft VHD, why the Microsoft VHD and not just VHD?

      Thanks for a great question. It should actually work with Xen as well.

      Looks like we'll have to double-check this and then make some changes to the V2V Converter to reflect that it supports both.

      • TheDeepStorage Vendor @dafyre

        @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        @coliver said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        @DustinB3403 said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        Is the Two-Node setup for Starwind an Active/Active setup or active/passive?

        StarWind's two-node setup is an active-active scenario.

        With this setup, is there any risk of a split-brain scenario occurring? If so, what protections are built in to ensure that the system normalizes itself?

        Instead of forcing people to use a third host as a witness, we use a heartbeat channel (preferably set up on a separate physical NIC to avoid a SPOF) over which the nodes constantly ping each other. This way, if the synchronization channel fails, the HA device that is second by priority is disabled to avoid split-brain.
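The decision rule described above can be sketched as a toy function. This is purely illustrative (node names and the priority encoding are assumptions, not StarWind's actual implementation):

```python
# Toy model of heartbeat-based split-brain avoidance: if the sync channel
# fails but the heartbeat still shows the partner alive, only the node that
# is first by priority keeps its HA device active. Illustrative only --
# not StarWind's actual code.

def node_keeps_serving(node, sync_up, heartbeat_up, priority):
    """Decide whether `node` keeps its HA device active.

    priority maps node name -> rank, where 1 means first by priority.
    """
    if sync_up:
        return True  # normal active-active operation, both nodes serve I/O
    if heartbeat_up:
        # Sync is broken but the partner is alive: the node that is second
        # by priority disables its HA device so clients never see two
        # diverging copies of the data.
        return priority[node] == min(priority.values())
    # No heartbeat either: the partner is presumed down; survivor serves alone.
    return True
```

The key point the answer makes is the middle branch: the heartbeat distinguishes "partner is dead" from "only the sync link is dead", which is what makes a third witness host unnecessary.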

        What if it fails due to the primary host's failure, such as a sudden power outage? In a two-node setup, would both nodes then go down?

        Both hosts are constantly in sync, and each VM has access to its virtual disk on both hosts. If one of the nodes fails, the cluster just continues working in the case of VMware FT, or the VMs fail over and everything continues working in the case of Hyper-V or VMware HA.

        How would we configure the iSCSI connection in Hyper-V for this? Using MPIO?

        Exactly. We actually have a lot of guidance on configuring MPIO properly on our website. Here is a Knowledge Base article with the details: https://knowledgebase.starwindsoftware.com/guidance/how-to-configure-mpio-with-starwind-correctly/
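As a rough illustration of what the MPIO round-robin policy does with the two iSCSI sessions (one per StarWind node), here is a toy path selector with failover. It is a sketch of the concept only; the class and path names are invented, and Windows MPIO itself is configured through the OS, not application code:

```python
class MpioPaths:
    """Toy round-robin path selector with failover, illustrating the MPIO
    behavior discussed above: I/O alternates across healthy paths, and a
    failed path is skipped until it is restored. Illustrative only."""

    def __init__(self, paths):
        self.paths = list(paths)      # e.g. one iSCSI session per node
        self.healthy = set(paths)
        self._i = 0                   # round-robin cursor

    def fail(self, path):
        self.healthy.discard(path)

    def restore(self, path):
        self.healthy.add(path)

    def next_path(self):
        """Pick the next healthy path round-robin; raise if none remain."""
        for _ in range(len(self.paths)):
            p = self.paths[self._i % len(self.paths)]
            self._i += 1
            if p in self.healthy:
                return p
        raise RuntimeError("all paths down")
```

This is why MPIO matters in the two-node setup: with one session to each StarWind node, losing a node degrades to single-path I/O instead of an outage.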

        • Reid Cooper

          So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

          • TheDeepStorage Vendor @Reid Cooper

            @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

            So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

            With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
            The short version would be: from our side, the hypervisor is the limit.

            • dafyre

              Are there plans to continue building the PowerShell library, or will you be moving towards a cross-platform API?

              • ABykovskyi @dafyre

                @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                Are there plans to continue building the PowerShell library, or will you be moving towards a cross-platform API?

                We expect to release regular updates to the PowerShell library.

                • scottalanmiller @TheDeepStorage

                  @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                  @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                  So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

                  With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
                  The short version would be: from our side, the hypervisor is the limit.

                  That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.

                  • dafyre

                    For the StarWind Appliances, how will those work for scaling out / adding more storage?

                    Will we just be able to add another appliance or will it be more involved than that?

                    • dafyre @scottalanmiller

                      @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                      @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                      @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                      So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

                      With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
                      The short version would be: from our side, the hypervisor is the limit.

                      That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.

                      But imagine the poor sysadmin who has to configure 64 CSVs... shudder

                      • scottalanmiller @dafyre

                        @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                        @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                        @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                        @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                        So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

                        With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
                        The short version would be: from our side, the hypervisor is the limit.

                        That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.

                        But imagine the poor sysadmin who has to configure 64 CSVs... shudder

                        Better than the system admin that loses one SAN and has to explain losing 64 hosts 🙂

                        • TheDeepStorage Vendor @scottalanmiller

                          @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                          @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                          @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                          So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

                          With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
                          The short version would be: from our side, the hypervisor is the limit.

                          That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.

                          Definitely, if you can manage this cluster, it will be a very resilient environment.
                          Ultimately, we might consider a promotion of providing Xanax to any admin that configures 64 CSVs free of charge.

                          • ABykovskyi @dafyre

                            @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                            For the StarWind Appliances, how will those work for scaling out / adding more storage?

                            Will we just be able to add another appliance or will it be more involved than that?

                            StarWind does support scale-out, and the procedure is quite straightforward: you simply add another node, increasing your storage capacity. Alternatively, you can take another route and add individual disks to each of the nodes to expand the storage.

                            • StrongBad

                              How does @StarWind_Software work with NVMe?

                              • Reid Cooper

                                How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?

                                • LaMerk Vendor @StrongBad

                                  @StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                  How does @StarWind_Software work with NVMe?

                                  StarWind does indeed work with NVMe. We have just added a significant performance improvement, so write performance is now doubled.

                                  • Stuka @dafyre

                                    @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                    Are there plans to continue building the PowerShell library, or will you be moving towards a cross-platform API?

                                    It's actually going to be both, as Swordfish is being developed too.

                                    • StrongBad

                                      What about Storage Spaces Direct (S2D)? Can you talk about how Starwind is competing with that and what the differentiating factors are? Maybe when we would choose one or the other if we are on Hyper-V?

                                      • scottalanmiller

                                        I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...

                                        What are the benefits of the LSFS (Log-Structured File System), and when would we want to choose it?

                                        • LaMerk Vendor @scottalanmiller

                                          @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                          I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...

                                          What are the benefits of the LSFS (Log-Structured File System), and when would we want to choose it?

                                          The ideal scenario for LSFS is slow spindle drives in RAID 5/50 or RAID 6/60. The benefits are: eliminating the I/O blender effect, snapshots, a faster synchronization process, and better overall performance in non-write-intensive environments (no more write-heavy than a 60% read / 40% write mix).
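The core idea of a log-structured layout, and why it helps slow spindles against the I/O blender, can be shown with a toy append-only store. This is an illustration of the general technique, not StarWind's LSFS implementation:

```python
class TinyLog:
    """Minimal log-structured store: every write, including an overwrite,
    appends to the end of one log and updates an in-memory index. Random
    writes from many VMs (the "I/O blender") thus reach the disks as a
    single sequential stream, which spindles handle far better than seeks.
    Illustrative sketch only."""

    def __init__(self):
        self.log = []       # sequential, append-only record list
        self.index = {}     # logical block number -> position in the log

    def write(self, block, data):
        self.index[block] = len(self.log)
        self.log.append((block, data))   # always a sequential append

    def read(self, block):
        return self.log[self.index[block]][1]  # latest version wins
```

Note that the log only ever grows here; a real log-structured system also runs garbage collection to reclaim superseded records, which is part of why write-heavy workloads are a poorer fit.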

                                          • Stuka @StrongBad

                                            @StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                            How does @StarWind_Software work with NVMe?

                                            We're also bringing NVMe storage to a new level. With added iSER support, we have improved hybrid environments with NVMe tiers. For all-NVMe environments we're also adding NVMf (NVMe over Fabrics) support within a few weeks. Keep in mind that NVMe storage will need a 100 GbE interconnect.
