    VMware - bottleneck - Questions

    IT Discussion
    vmware esxi windows
    • Dashrender

      I'm moving a VM from one VM host to another while the VM is offline.

      The transfer seems to be slow, and my resources don't appear to be maxed out.

      I'm not sure what to look at for bottlenecks.

      • scottalanmiller

        What is the speed that you are getting? What speed were you expecting?

        • Dashrender

          This is the server receiving the VM.

          [Screenshots in1.JPG through in6.JPG: resource graphs from the receiving server]

          • scottalanmiller

            The reading disks, the writing disks, and the network itself all have bottleneck potential. So does the CPU if there is any kind of transformation going on. Is the VM being used while this happens?

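A rough way to frame that checklist: compare each observed rate against an assumed ceiling and see which resource sits closest to saturation. The sketch below is illustrative only; every ceiling and observation in it is an assumption or a placeholder, not a measurement taken from this thread.

# Rough bottleneck triage: compare observed transfer-time rates to assumed ceilings.
# All figures are placeholders -- substitute the values from your own graphs.

observed = {
    "source disk read (MB/s)": 20.0,
    "target disk write (MB/s)": 20.0,
    "network (MB/s)": 10.0,   # roughly 80 Mb/s
    "cpu (%)": 30.0,
}

ceilings = {
    "source disk read (MB/s)": 400.0,   # assumed: 8-spindle SAS RAID 10, sequential
    "target disk write (MB/s)": 150.0,  # assumed: 4 effective NL-SATA writers
    "network (MB/s)": 125.0,            # 1 GigE line rate
    "cpu (%)": 100.0,
}

# The resource running closest to its ceiling is the most likely bottleneck.
for name, value in observed.items():
    print(f"{name:28s} {value:6.1f} of {ceilings[name]:6.1f} -> {value / ceilings[name]:5.1%}")
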
            • Dashrender

              This is the server sending the VM out.

              [Screenshots out1.JPG through out6.JPG: resource graphs from the sending server]

              • Dashrender @scottalanmiller

                @scottalanmiller said:

                What is the speed that you are getting? What speed were you expecting?

                Considering I have dual 1 Gb NICs on each server, I was expecting at least something close to 1 Gb; instead, assuming I'm reading the graphs right, I'm getting something like 80 Mb.

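For reference, the unit conversion behind those numbers (transfer graphs usually show megabytes per second while NICs are rated in megabits per second) works out as follows; this is simple arithmetic, not data from the thread:

# Convert link speeds in megabits per second to payload rates in megabytes per second.

def mbps_to_mb_per_s(megabits_per_second: float) -> float:
    """Megabits/s -> megabytes/s (8 bits per byte, protocol overhead ignored)."""
    return megabits_per_second / 8.0

print(f"1 Gb/s line rate  -> {mbps_to_mb_per_s(1000):6.1f} MB/s theoretical ceiling")
print(f"~80 Mb/s observed -> {mbps_to_mb_per_s(80):6.1f} MB/s actually moving")
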
                • scottalanmiller

                  Your disks look like they are holding steady at 20 MB/s. Could you be disk bound?

                  • scottalanmiller @Dashrender

                    @Dashrender said:

                    @scottalanmiller said:

                    What is the speed that you are getting? What speed were you expecting?

                    Considering I have dual 1 Gb NICs on each server, I was expecting at least something close to 1 Gb; instead, assuming I'm reading the graphs right, I'm getting something like 80 Mb.

                    Why do you assume that the NIC would be the bottleneck? Many shops use iSCSI over GigE for storage because they rarely are able to push enough IOPS from their arrays to saturate a GigE connection. In a streaming scenario, you might be able to, but it really depends on what the disks are doing and what they are. Keeping GigE saturated isn't trivial unless you are on SSD.

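To put a number on "saturating GigE isn't trivial": the sketch below estimates how many random IOPS it takes to fill a 1 Gb link, using assumed figures for average I/O size and per-spindle performance (rules of thumb, not measurements from these servers).

# Back-of-the-envelope: IOPS required to keep a 1 GigE link full.
# I/O size and per-spindle IOPS are assumptions; adjust to the workload.

GIGE_MB_PER_S = 125          # 1 Gb/s expressed in MB/s
AVG_IO_KB = 64               # assumed average I/O size
IOPS_PER_7200_RPM_DISK = 80  # rough random-I/O figure for one 7.2k spindle

iops_needed = GIGE_MB_PER_S * 1024 / AVG_IO_KB
spindles_needed = iops_needed / IOPS_PER_7200_RPM_DISK

print(f"IOPS to saturate GigE at {AVG_IO_KB} KB I/O : {iops_needed:.0f}")
print(f"7.2k spindles needed for that (random I/O): {spindles_needed:.0f}")
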
                    • Dashrender @scottalanmiller

                      @scottalanmiller said:

                      Your disks look like they are holding steady at 20 MB/s. Could you be disk bound?

                      Definitely. I don't know what I should expect.

                      Inbound server (IBM ML3650M2) has 8 NL SATA 500 GB drives in RAID 10
                      Outbound server (HP DL380p G8) has 8 SAS 300 GB drives in RAID 10

                      • Dashrender @scottalanmiller

                        @scottalanmiller said:

                        @Dashrender said:

                        @scottalanmiller said:

                        What is the speed that you are getting? What speed were you expecting?

                        Considering I have dual 1 Gb NICs on each server, I was expecting at least something close to 1 Gb; instead, assuming I'm reading the graphs right, I'm getting something like 80 Mb.

                        Why do you assume that the NIC would be the bottleneck? Many shops use iSCSI over GigE for storage because they rarely are able to push enough IOPS from their arrays to saturate a GigE connection. In a streaming scenario, you might be able to, but it really depends on what the disks are doing and what they are. Keeping GigE saturated isn't trivial unless you are on SSD.

                        Why do I think that? Because I can download files from my file server at over 600 Mbps. But you're right, I definitely need to consider the disk.

                        • scottalanmiller @Dashrender

                          @Dashrender said:

                          Inbound server (IBM ML3650M2) has 8 NL SATA 500 GB drives in RAID 10
                          Outbound server (HP DL380p G8) has 8 SAS 300 GB drives in RAID 10

                          Of the two, the writing SATA drives will be the bottleneck. RAID 10 cuts write performance in half, so that is 4x SATA write speed, whereas the reading side is 8x SAS speed. I don't know your spindle speeds, as those could vary by up to 100% on either side, but assuming 7200 RPM, some amount of random IO, and systems that are not completely idle, the drives might be the bottleneck here. Very hard to say, but quite possible.

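A minimal sketch of the arithmetic in that answer, with assumed per-drive sequential throughput figures (actual numbers depend on spindle speed, controller cache, and how random the I/O really is; heavy random I/O would cut these estimates drastically):

# RAID 10 throughput estimate for the two arrays described in the thread.
# Per-drive sequential figures are assumptions (7.2k NL-SATA vs. SAS).

def raid10_write_mb_per_s(drive_count: int, per_drive_mb_per_s: float) -> float:
    """RAID 10 mirrors every write, so only half the spindles add write throughput."""
    return (drive_count / 2) * per_drive_mb_per_s

def raid10_read_mb_per_s(drive_count: int, per_drive_mb_per_s: float) -> float:
    """Reads can be spread across all spindles."""
    return drive_count * per_drive_mb_per_s

# Receiving array: 8x NL-SATA in RAID 10, assumed ~100 MB/s sequential per drive.
print(f"Write side (8x NL-SATA): ~{raid10_write_mb_per_s(8, 100):.0f} MB/s best case")
# Sending array: 8x SAS in RAID 10, assumed ~150 MB/s sequential per drive.
print(f"Read side  (8x SAS)    : ~{raid10_read_mb_per_s(8, 150):.0f} MB/s best case")
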
                          • mlnews

                            How did it go? What is the current status?

                            • Dashrender

                              The copy finished overnight after I left. I came in this morning and everything started right up with no issues.
                              Currently I don't know of a way to see when it finished.

                              • Dashrender

                                OK, I found the logs. I started the transfer at 3:18 PM and it finished at 4:53 AM. Damn, it took nearly 14 hours to transfer 360 GB.

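Working backwards from those log timestamps gives the effective rate (just arithmetic; the calendar date below is assumed for illustration, and GB is treated as 10^9 bytes):

from datetime import datetime

# Effective transfer rate implied by the log timestamps quoted above.
start = datetime(2016, 1, 16, 15, 18)  # 3:18 PM (date assumed for illustration)
end = datetime(2016, 1, 17, 4, 53)     # 4:53 AM the following morning

hours = (end - start).total_seconds() / 3600
size_gb = 360                          # GB transferred, per the post above

mb_per_s = size_gb * 1000 / (hours * 3600)
mbit_per_s = mb_per_s * 8

print(f"Duration     : {hours:.1f} hours")
print(f"Average rate : {mb_per_s:.1f} MB/s (~{mbit_per_s:.0f} Mb/s)")
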
                                • scottalanmiller

                                  That's definitely very slow.
