    Issue with Elasticsearch

    Solved · IT Discussion
    Tags: elasticsearch, activecollab
    38 Posts · 4 Posters · 6.4k Views
    • scottalanmiller

      Odd, I don't see memory freeing up after the crash.

      • stacksofplates

        This doesn't fix the problem, but you could use something like monit or supervisord to start it up again after it quits.
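
        Not a drop-in recipe, but a minimal monit stanza for that (the pidfile path and start/stop commands here are assumptions; adjust them to your install) would look roughly like:

            # pidfile path and service commands are guesses; match them to your system
            check process elasticsearch with pidfile /var/run/elasticsearch/elasticsearch.pid
              start program = "/bin/systemctl start elasticsearch"
              stop program  = "/bin/systemctl stop elasticsearch"
              if failed host 127.0.0.1 port 9200 protocol http then restart
              if 5 restarts within 5 cycles then timeout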

        • Ambarishrh @stacksofplates

          @johnhooks monit looks promising. Is there a specific option I can use to monitor Elasticsearch for failures?

          • stacksofplates @Ambarishrh

            @Ambarishrh said:

            @johnhooks monit looks promising. Is there a specific option I can use to monitor Elasticsearch for failures?

            It's been a while since I used it, but I think it logs to /var/log/monit.log and will track errors there. If not, you can set the log location in the config file with the set logfile line.
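
            For reference, the relevant directive in the monit config is just this (the path below is the common default, not necessarily yours):

                # in /etc/monitrc (or /etc/monit/monitrc, depending on distro)
                set logfile /var/log/monit.log
                # or hand logging off to syslog instead:
                set logfile syslog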

            • Ambarishrh

              Seems like it's Java that's causing the issue. I have CloudLinux on the server, and that might also be part of the problem since it jails the user's resource usage. I just removed the elasticsearch user from CageFS to see if that solves it. Then I need to work out the right resource limits to allocate and bring it back into the caged setup. Waiting to see if it crashes again.
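
              In case it helps anyone later, the CageFS side of that is roughly this (flags are from memory, so verify against the CloudLinux docs for your version):

                  # exclude the elasticsearch user from CageFS
                  cagefsctl --disable elasticsearch
                  # put it back in the cage once the resource limits are sized correctly
                  cagefsctl --enable elasticsearch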

              • Ambarishrh

                After several tests, I decided to set up Elasticsearch on a separate server to make sure its resources are not shared with anything else. I set up a new server and installed Java and Elasticsearch. Now I need to give the ActiveCollab server access to Elasticsearch, but even with port 9200 opened, and even with the firewall disabled, I am not able to reach the server on port 9200. In one place I read that Elasticsearch won't be available over the internet. Is that so, or am I missing a config setting that would let ES grant access to the other server?

                • scottalanmiller

                  Use telnet from the remote machine to see if it is open properly.

                  Also verify that it is listening with netstat -tulpn
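
                  Roughly, with <es-server-ip> as a placeholder for the new Elasticsearch box:

                      # on the Elasticsearch host: confirm something is actually bound to 9200
                      netstat -tulpn | grep 9200

                      # from the ActiveCollab server: test reachability
                      telnet <es-server-ip> 9200
                      # or, if telnet isn't installed:
                      curl -v http://<es-server-ip>:9200/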

                  • Ambarishrh

                    Figured that out. In the elasticsearch.yml file, I needed to change network.host from localhost to an IP accessible from the other servers (public or private).
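
                    For anyone hitting the same thing, the change is a one-liner in elasticsearch.yml (typically /etc/elasticsearch/elasticsearch.yml on package installs; the address below is just an example):

                        # bind to an address the ActiveCollab server can reach instead of localhost
                        network.host: 10.0.0.5

                    Binding to 0.0.0.0 also works, but then anything that can reach the box can reach ES, so keep the firewall rules tight.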

                    • Ambarishrh @scottalanmiller

                      @scottalanmiller said:

                      Use telnet from the remote machine to see if it is open properly.

                      Also verify that it is listening with netstat -tulpn

                      Sorry, I didn't see that message. It was not the firewall; it was a config setting in Elasticsearch, and that's solved now. I need to watch it for a day or two to make sure it doesn't fail. 🙂

                      • Ambarishrh

                        So after 24+ hours of monitoring, Elasticsearch works fine and hasn't failed! 🙂 My conclusion: for Elasticsearch to function correctly, use a server with a minimum of 16GB RAM and keep it dedicated to ES.

                        Hardware recommendation from ES site:
                        A machine with 64 GB of RAM is the ideal sweet spot, but 32 GB and 16 GB machines are also common. Less than 8 GB tends to be counterproductive

                        https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html

                        Closing this thread and marking it as solved. Thanks guys

                        • JaredBusch @Ambarishrh

                          @Ambarishrh said:

                          So after 24+ hours of monitoring, Elasticsearch works fine and hasn't failed! 🙂 My conclusion: for Elasticsearch to function correctly, use a server with a minimum of 16GB RAM and keep it dedicated to ES.

                          Hardware recommendation from ES site:
                          A machine with 64 GB of RAM is the ideal sweet spot, but 32 GB and 16 GB machines are also common. Less than 8 GB tends to be counterproductive

                          https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html

                          Closing this thread and marking it as solved. Thanks guys

                          IMO, that is an insane amount of RAM to be required.

                          • scottalanmiller

                            It is a lot, but in-memory, large-scale databases often need similar. We had similar numbers with things like Cassandra.

                            • stacksofplates

                              Ha, my ELK server has 3GB, but it only covers a small number of VMs.

                              • scottalanmiller

                                We have run ELK on 2GB pretty well. But I think that our new one is going to be more like 8GB.

                                • JaredBusch @stacksofplates

                                  @johnhooks said:

                                  Ha, my ELK server has 3GB, but it only covers a small number of VMs.

                                  An ELK server is the reason I am concerned about this value. I don't have 16GB of RAM to just throw at a VM without a damned good reason.

                                  I really want to get an ELK server set up at a couple of clients, but none of their servers have that kind of RAM unallocated.

                                  • scottalanmiller @JaredBusch

                                    @JaredBusch said:

                                    I really want to get an ELK server set up at a couple of clients, but none of their servers have that kind of RAM unallocated.

                                    How many machines will they monitor? We've done ~20 normal servers to a 2GB ELK server and it worked fine. It might have been more responsive with more, but it was just fine.

                                    The 64GB recommendation is when using Elastic as a clustered NoSQL database for other purposes where you are dealing with datasets larger than 64GB. No need for numbers like that on a normal SMB ELK install at all. You might want to look for more than 2GB, but you can do pretty well without much.

                                    If you get to the point where the log set you are reporting on can't fit in memory, you'll feel the lag in the interface for sure. But most SMBs aren't looking at ten-year-old logs in real time, either.

                                    • scottalanmiller

                                      I think, unless you have some crazy log traffic, that if you can get 4GB for ELK in an SMB, you are nearly always good. I'd expect hundreds of servers to be able to log to that, as long as you have fast disks (it still has to get to disk fast enough no matter how much memory there is).

                                      We've had massive Splunk databases with 32GB - 64GB, but those are taking data from thousands and thousands of servers and doing so as a high availability failover cluster, so they have to ingest, index and replicate in real time.
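
                                      One related sizing note, hedged because it depends on the Elasticsearch version in use: the usual guidance is to give the JVM heap about half of the machine's RAM and leave the rest for the filesystem cache. On a 4GB ELK box that would look something like this (ES 1.x/2.x style; newer releases set -Xms/-Xmx in jvm.options instead):

                                          # /etc/sysconfig/elasticsearch (Debian/Ubuntu: /etc/default/elasticsearch)
                                          ES_HEAP_SIZE=2g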
