Elasticsearch 2.x

Sources:

  1. https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
  2. https://stackoverflow.com/questions/29810531/elasticsearch-kibana-errors-data-too-large-data-for-timestamp-would-be-la

Heap memory:

Set the heap size (the JVM memory) for Elasticsearch in /etc/default/elasticsearch:

  • ES_HEAP_SIZE
  • ES_JAVA_OPTS
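For example, a minimal /etc/default/elasticsearch (the 16g value is an illustration, not a recommendation; pick a size that fits your machine and stays below 32 GB):

```shell
# /etc/default/elasticsearch
# ES_HEAP_SIZE sets both -Xms and -Xmx to the same value,
# which avoids heap resizing pauses at runtime.
ES_HEAP_SIZE=16g

# ES_JAVA_OPTS is for any extra JVM flags, not the heap size itself.
# ES_JAVA_OPTS=""
```

Restart the service after changing the file so the new heap size takes effect.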

The documentation explains that the heap should be smaller than 32 GB. If it is bigger, Java can no longer use compressed "ordinary object pointers" (oops), so every pointer takes 8 bytes instead of 4. In practice, setting the heap to 32 GB or more will actually hurt ES performance.
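One way to check whether a given heap size still allows compressed oops (assuming a local java binary on the PATH, the same JVM that runs Elasticsearch) is to ask the JVM directly:

```shell
# Print the effective UseCompressedOops flag for a 31 GB heap;
# HotSpot silently disables compressed oops once -Xmx crosses ~32 GB.
java -Xmx31g -XX:+PrintFlagsFinal -version 2>/dev/null | grep UseCompressedOops
```

Elasticsearch 2.x also logs a line containing "compressed ordinary object pointers [true]" (or [false]) at startup, which you can check in /var/log/elasticsearch.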

Cleaning cache:

If you see an error like this in the log (/var/log/elasticsearch): "Data too large, data for [@timestamp] would be larger than limit", it means the fielddata cache has hit the circuit-breaker limit and you need to clear the ES memory cache. The command for it is:

curl -XPOST 'http://localhost:9200/_cache/clear?fielddata=true'

This command clears the fielddata cache for all indices. To target a single index, put its name in the path before _cache/clear.
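Clearing the cache only helps until fielddata fills up again. A longer-term option is to cap the fielddata cache in elasticsearch.yml so old entries are evicted instead of tripping the circuit breaker; the sketch below assumes ES 2.x, and 40% is an example value, not a tuned recommendation:

```yaml
# elasticsearch.yml
# Evict least-recently-used fielddata once the cache reaches this
# fraction of the heap, instead of failing with "Data too large".
indices.fielddata.cache.size: 40%
```

A restart is required for this setting to take effect.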