Logstash Unable to maintain connection to ES

I set up a stock ES node and a stock Logstash node on the same physical machine (myelastic). I've made 7 changes to the node config (node name, cluster name, host, log path, data path, memory locking, 31 GB of RAM). Kibana works with the cluster fine and I can access it through my browser. However, whenever I run Logstash with my config file it constantly connects and reconnects, never able to push any data to the cluster. The following pattern repeats endlessly. Why is this happening?

[2018-01-10T14:50:31,754][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}

After the above error occurs, this series repeats endlessly…

[2018-01-10T14:50:31,755][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
[2018-01-10T14:50:32,338][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://myelastic01:9200/, :path=>"/"}
[2018-01-10T14:50:32,346][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://myelastic01:9200/"}
[2018-01-10T14:50:38,086][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://myelastic01:9200/][Manticore::SocketException] Connection reset {:url=>http://myelastic01:9200/, :error_message=>"Elasticsearch Unreachable: [http://myelastic01:9200/][Manticore::SocketException] Connection reset", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}

Elasticsearch Log Output

[2018-01-10T13:48:56,087][INFO ][o.e.n.Node               ] [myelastic01-0m] started
[2018-01-10T13:48:56,093][INFO ][o.e.g.GatewayService     ] [myelastic01-0m] recovered [0] indices into cluster_state

Logstash Output Config

output {
        if ([type] == "cama-usage") {
                elasticsearch {
                        hosts => [
                        index => "cama-usage-%{+YYYY-MM-dd}"
                        document_id => "%{[UniqueID]}"
                        action => "update"
                        doc_as_upsert => true
                }
        }
}
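For comparison, a complete output block of this shape might look like the sketch below. The host URL here is a placeholder assumption and is not taken from the post — substitute the actual node address:

```
output {
        if ([type] == "cama-usage") {
                elasticsearch {
                        # Placeholder host; use the real node URL.
                        hosts => ["http://myelastic01:9200"]
                        index => "cama-usage-%{+YYYY-MM-dd}"
                        document_id => "%{[UniqueID]}"
                        action => "update"
                        doc_as_upsert => true
                }
        }
}
```

Note that `hosts` takes an array of URL strings, and every `{` opened by `output`, the conditional, and `elasticsearch` needs a matching `}`.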


Can you post the output section of your Logstash config? It looks like it just cannot connect to the node.

I’ve updated the post with that data

What output do you get if you curl http://myelastic01:9200?
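As a quick sketch of that check (the hostname is taken from the posted logs; adjust as needed), running this from the machine where Logstash runs should print the cluster's JSON banner if the node is reachable:

```shell
# Basic reachability check from the Logstash host.
# "myelastic01" is the hostname that appears in the posted logs.
curl -sS http://myelastic01:9200

# If that hangs or the connection resets, print just the HTTP status
# (with a timeout) to see whether anything is listening on the port:
curl -sS -o /dev/null -w '%{http_code}\n' --max-time 5 http://myelastic01:9200
```

A healthy node answers with cluster name, node name, and version info; a connection reset here would confirm the problem is between the hosts rather than inside the Logstash pipeline.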

@Soundarya I’ve updated the post


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.