However, you may want to run ElasticSearch on a single host, alongside your Web server, database and everything else. The problem is that ES is quite a CPU and memory hog by default. Here’s what I found through trial and error and a lot of searching.

The idea is to give ES some power, but leave some for the rest of the services. Keep in mind that if you tell ES it can grab half of your memory and the OS needs some of it, ES will get killed, which isn’t nice.
My host was configured this way:
- ElasticSearch 0.19.9, official .deb package
- Ubuntu 12.04
- 1.5GB of RAM
- Dual-core 2.6GHz
- LEMP stack
After installing the official package:
- Allow user elasticsearch to lock memory
- Edit /etc/security/limits.conf and add:
elasticsearch hard memlock 100000
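For reference, the fragment in /etc/security/limits.conf looks like this (100000 KB, roughly 100MB; some setups also want a matching soft limit, though the hard limit alone worked for me):

```
# /etc/security/limits.conf
# allow the elasticsearch user to lock up to ~100MB in RAM
elasticsearch hard memlock 100000
```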
- Edit the init script: /etc/init.d/elasticsearch
- Change ES_HEAP_SIZE to 10-20% of your machine’s RAM; I used 128m
- Change MAX_OPEN_FILES to something sensible. The default is 65536; I used 15000
Update: I asked about this on the ElasticSearch group and it may be a bad idea that provides no advantage.
- Change MAX_LOCKED_MEMORY to 100000 (~100MB)
Be sure to set it to the same value as the memlock limit in limits.conf above
- Change JAVA_OPTS to "-server"
I don’t know exactly why, but if you check the logs, you will see Java telling you to do so.
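Put together, the edits to /etc/init.d/elasticsearch amount to something like this (the values are what worked on my 1.5GB box, adjust to taste):

```shell
# in /etc/init.d/elasticsearch
ES_HEAP_SIZE=128m         # 10-20% of total RAM
MAX_OPEN_FILES=15000      # down from 65536; see the update above, may be pointless
MAX_LOCKED_MEMORY=100000  # must match the memlock limit in limits.conf
JAVA_OPTS="-server"       # the JVM logs suggest this flag
```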
- Edit the config file: /etc/elasticsearch/elasticsearch.yml
- Disable replication capabilities
- index.number_of_shards: 1
- index.number_of_replicas: 0
- Reduce memory usage
- index.term_index_interval: 256
- index.term_index_divisor: 5
- Ensure ES is bound to localhost
- Enable blocking TCP because you are always on localhost
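As a sketch, the relevant part of my elasticsearch.yml looked roughly like this. The index.* lines come straight from the list above; the network.host and network.tcp.blocking keys are what I believe the 0.19-era setting names to be for the localhost binding and blocking-TCP points, so double-check them against your version’s docs:

```yaml
# /etc/elasticsearch/elasticsearch.yml — single-node, low-memory setup

# no replication on a single host
index.number_of_shards: 1
index.number_of_replicas: 0

# trade some lookup speed for a smaller term index in memory
index.term_index_interval: 256
index.term_index_divisor: 5

# only listen on localhost (setting name assumed for ES 0.19)
network.host: 127.0.0.1
# blocking TCP is fine when every client is local (setting name assumed)
network.tcp.blocking: true
```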
- Flush and restart the server
- curl localhost:9200/_flush
- /etc/init.d/elasticsearch restart