Upon rebooting both my Graylog and Elasticsearch servers, suddenly Graylog could not connect to Elasticsearch. I checked that the config files hadn’t changed, that there was adequate disk space, that both servers could ping each other, etc. As someone new to both Graylog and Elasticsearch, it was definitely a head-scratcher. I ran

tail -f /var/log/elasticsearch/graylog2.log

to see what was up.

[2015-08-03 08:14:49,874][INFO ][node] [White Tiger] initialized
[2015-08-03 08:14:49,875][INFO ][node] [White Tiger] starting ...
[2015-08-03 08:14:49,972][INFO ][transport] [White Tiger] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/0.0.0.0:9300]}
[2015-08-03 08:14:49,984][INFO ][discovery] [White Tiger] graylog2/HP46Fyz-RV-Z4UHSzxJYMg
[2015-08-03 08:14:53,764][INFO ][cluster.service] [White Tiger] new_master [White Tiger][HP46Fyz-RV-Z4UHSzxJYMg][localhost][inet[/0.0.0.0:9300]], reason: zen-disco-join (elected_as_master)
[2015-08-03 08:14:53,970][INFO ][http] [White Tiger] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/0.0.0.0:9200]}
[2015-08-03 08:14:53,970][INFO ][node] [White Tiger] started
[2015-08-03 08:14:55,018][INFO ][gateway] [White Tiger] recovered [9] indices into cluster_state
[2015-08-03 08:15:08,864][INFO ][cluster.service] [White Tiger] added {[graylog2-server][ETxBUIRtReSC40zU3wWzEg][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false},}, reason: zen-disco-receive(join from node[[graylog2-server][ETxBUIRtReSC40zU3wWzEg][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false}])
[2015-08-03 08:15:09,190][INFO ][cluster.service] [White Tiger] removed {[graylog2-server][ETxBUIRtReSC40zU3wWzEg][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false},}, reason: zen-disco-node_left([graylog2-server][ETxBUIRtReSC40zU3wWzEg][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false})
[2015-08-03 08:15:38,971][INFO ][cluster.service] [White Tiger] added {[graylog2-server][Ovc47S_XT0OyaLKtrVdXfQ][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false},}, reason: zen-disco-receive(join from node[[graylog2-server][Ovc47S_XT0OyaLKtrVdXfQ][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false}])
[2015-08-03 08:15:39,107][INFO ][cluster.service] [White Tiger] removed {[graylog2-server][Ovc47S_XT0OyaLKtrVdXfQ][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false},}, reason: zen-disco-node_left([graylog2-server][Ovc47S_XT0OyaLKtrVdXfQ][graylog.corp.waters.com][inet[/0.0.0.0:9350]]{client=true, data=false, master=false})

**Note: I edited out my real IP addresses.**
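
As a side note for anyone hitting the same join/leave loop: the cluster health API would have pointed at the problem immediately (I only learned about it afterwards). A “red” status means at least one primary shard is unassigned.

# Overall cluster status; "red" means at least one primary shard is unassigned
curl 'localhost:9200/_cluster/health?pretty'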

After searching through 3 pages of Google search results, I found a post that said to check your indices by running:

curl 'localhost:9200/_cat/indices?v'

Lo and behold, two of them came back as “red” under the health column.

curl 'localhost:9200/_cat/indices?v'
health status index        pri rep docs.count docs.deleted store.size pri.store.size
yellow open   gpswae1.html   5   1          0            0       575b           575b
yellow open   webui          5   1          0            0       575b           575b
yellow open   phppath        5   1          0            0       575b           575b
green  open   graylog2_3     1   0   15736279            0       18gb           18gb
yellow open   perl           5   1          0            0       575b           575b
green  open   graylog2_2     1   0   20000243            0     22.7gb         22.7gb
red    open   graylog2_1     1   0   20000243            0     22.7gb         22.7gb
red    open   graylog2_0     1   0   20000243            0     22.7gb         22.7gb
yellow open   spipe          5   1          0            0       575b           575b
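
If you want to dig into why an index has gone red before deleting anything, the _cat/shards API gives a per-shard breakdown (the two red indices here should show unassigned primary shards). I didn't need that level of detail, but for reference:

# Per-shard view of the two problem indices; an unassigned or failed
# primary shard is what turns an index red
curl 'localhost:9200/_cat/shards/graylog2_0,graylog2_1?v'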

Since this server is currently a POC and not in full production, I deleted the two corrupt indices by running:

curl -XDELETE 'http://localhost:9200/graylog2_0/'
curl -XDELETE 'http://localhost:9200/graylog2_1/'
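
Before restarting anything, you can confirm the deletes took by re-listing the indices; the red rows should be gone:

# Re-check the index list; this grep should now come back empty
curl -s 'localhost:9200/_cat/indices?v' | grep red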

Once they were deleted, I was able to restart Elasticsearch, and Graylog connected successfully. I also decided to look into other ways to monitor Elasticsearch and found ElasticHQ.

It was a quick and easy install on the Elasticsearch server, and I can now monitor it with much better visibility.
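
For reference, the install is basically a one-liner. I won't swear this is character-for-character what I ran, but for the Elasticsearch 1.x plugin manager it looks roughly like this (adjust the path for your layout):

# Install the ElasticHQ site plugin (Elasticsearch 1.x plugin syntax;
# newer versions use a different plugin command)
sudo /usr/share/elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ
# The UI should then be reachable at http://<elasticsearch-host>:9200/_plugin/hq/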
