Elasticsearch Configuration File Explained

jopen · 8 years ago

The first file in Elasticsearch's config directory is elasticsearch.yml, the main configuration file. The annotated listing below walks through it section by section; a few usage sketches follow at the end.


[root@shnh-bak001 config]# cat elasticsearch.yml

##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html>.
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].

# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
#node.rack: ${RACK_ENV_VAR}

# For information on supported formats and syntax for the config file, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html>


################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
# (cluster.name identifies the cluster: nodes on the same network segment
#  automatically join the cluster whose cluster.name matches their own, so when
#  several clusters share a segment, make sure each cluster.name is unique.)
#cluster.name: elasticsearch


#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
# (Node name; generated automatically unless set here.)
#node.name: "Franz Kafka"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#node.master: true
#
# Allow this node to store data (enabled by default):
#node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#    (Data-only node: never elected master, only stores data.)
#node.master: false
#node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#    (Master-only node: stores no data and keeps its resources free.)
#node.master: true
#node.data: false
#
# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#    (Client node: neither master nor data; it fetches data from other nodes
#     and aggregates the search results.)
#node.master: false
#node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools
# such as <http://www.elasticsearch.org/overview/marvel/>,
# <http://github.com/karmi/elasticsearch-paramedic>,
# <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.
# (See the curl sketch after the listing.)
# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
#
#node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation location
# to disable it, set the following:
#node.max_local_storage_nodes: 1


#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>
# for more information.

# Set the number of shards (splits) of an index (5 by default):
#index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#index.number_of_replicas: 1

# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
#index.number_of_shards: 1
#index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more *shards* enhances the _indexing_ performance and allows to
#    _distribute_ a big index across machines.
#    (With enough servers, raise the shard count so data is spread evenly
#     across the cluster.)
# 2. Having more *replicas* enhances the _search_ performance and improves the
#    cluster _availability_.
#
# The "number_of_shards" is a one-time setting for an index.
# (The shard count is fixed when the index is created and cannot be changed
#  afterwards.)
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
# (The replica count can be changed at any time through the API; see the
#  sketch after the listing.)
#
# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.


#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
# (Multiple data locations can help I/O performance.)
#path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#path.work: /path/to/work

# Path to log files:
#path.logs: /path/to/logs

# Path to where plugins are installed:
#path.plugins: /path/to/plugins


#################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
#plugin.mandatory: mapper-attachments,lang-groovy


################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
#bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.


############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
# (HTTP port for client-facing traffic, 9200 by default.)
#http.port: 9200

# Set a custom allowed content length:
# (Maximum size of an HTTP request body, 100mb by default.)
#http.max_content_length: 100mb

# Disable HTTP completely:
# (HTTP is enabled by default; set this to false to disable it.)
#http.enabled: false


################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.

# There are several types of gateway implementations. For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html>.

# The default gateway type is the "local" gateway (recommended):
# (Gateway type; "local", i.e. the local file system, by default.)
#gateway.type: local

# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).
# Allow recovery process after N nodes in a cluster are up:
#gateway.recover_after_nodes: 1

# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#gateway.expected_nodes: 2


############################# Recovery Throttling #############################

# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
#    (Concurrent recovery threads during initial recovery, 4 by default.)
#cluster.routing.allocation.node_initial_primaries_recoveries: 4
#
# 2. During adding/removing nodes, rebalancing, etc
#    (Concurrent recovery threads when adding/removing nodes or rebalancing, 2 by default.)
#cluster.routing.allocation.node_concurrent_recoveries: 2

# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):
#indices.recovery.max_bytes_per_sec: 20mb

# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
# (Maximum number of concurrent streams opened when recovering a shard from a
#  peer node, 5 by default.)
#indices.recovery.concurrent_streams: 5


################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. This should be set to a quorum/majority of
# the master-eligible nodes in the cluster.
# (A node must see N other master-eligible nodes before it considers the
#  cluster operational; the default is 1. For larger clusters, set this to a
#  quorum of the master-eligible nodes, e.g. 2-4.)
#discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
# (Ping timeout for discovery, 3s by default. Raise it on slow or congested
#  networks to reduce discovery failures and the risk of split-brain.)
#discovery.zen.ping.timeout: 3s

# For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html>

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#    (When multicast is unavailable, or the cluster spans network segments,
#     use unicast discovery instead.)
#discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#    (Initial list of master nodes that new nodes, master or data, probe when
#     they start; see the sketch after the listing.)
#discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#
# For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html>
#
# See <http://elasticsearch.org/tutorials/elasticsearch-on-ec2/>
# for a step-by-step tutorial.
# GCE discovery allows to use Google Compute Engine API in order to perform discovery.
#
# You have to install the cloud-gce plugin for enabling the GCE discovery.
#
# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-gce>.

# Azure discovery allows to use Azure API in order to perform discovery.
#
# You have to install the cloud-azure plugin for enabling the Azure discovery.
#
# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-azure>.


################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 0ms

index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 0ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
#index.indexing.slowlog.threshold.index.trace: 500ms


################################## GC Logging ################################

#monitor.jvm.gc.young.warn: 1000ms
#monitor.jvm.gc.young.info: 700ms
#monitor.jvm.gc.young.debug: 400ms

#monitor.jvm.gc.old.warn: 10s
#monitor.jvm.gc.old.info: 5s
#monitor.jvm.gc.old.debug: 2s


################################## Security ################################

# Uncomment if you want to enable JSONP as a valid return transport on the
# http server. With this enabled, it may pose a security risk, so disabling
# it unless you need it is recommended (it is disabled by default).
#
#http.jsonp.enable: true
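
A few usage sketches follow, based on the settings above.

The Cluster Health API and Node Info API mentioned in the Node section can be queried with plain curl. A minimal sketch, assuming a node is listening on the default HTTP port 9200 on localhost:

# Cluster-level health: status (green/yellow/red), node count, shard counts
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-node information: roles, settings, JVM, OS, transport/HTTP addresses
curl -s 'http://localhost:9200/_nodes?pretty'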
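
As noted in the Index section, number_of_shards is fixed once an index is created, while number_of_replicas can be changed at any time through the Index Update Settings API. A sketch with a hypothetical index name my_index:

# Raise the replica count of an existing index to 2
curl -s -XPUT 'http://localhost:9200/my_index/_settings' -d '
{
  "index": {
    "number_of_replicas": 2
  }
}'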
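
The Memory section asks for equal ES_MIN_MEM and ES_MAX_MEM, bootstrap.mlockall: true, and a process that is allowed to lock memory. A sketch of the operating-system side, assuming a Linux host started with the stock bin/elasticsearch script; the 4g heap is an illustrative value:

# Allow the Elasticsearch process to lock unlimited memory (run in the shell
# that starts ES, or set the equivalent limit in /etc/security/limits.conf)
ulimit -l unlimited

# Give the JVM a fixed heap so that the minimum and maximum are equal
export ES_MIN_MEM=4g
export ES_MAX_MEM=4g

# After startup, check whether memory locking actually took effect
# (the process info should report "mlockall" : true)
curl -s 'http://localhost:9200/_nodes/process?pretty' | grep mlockall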
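
Putting several settings together, here is a sketch of what one node of a small three-node cluster with unicast discovery might look like in elasticsearch.yml. The cluster name, node name and addresses are made-up placeholders, and minimum_master_nodes follows the quorum advice from the Discovery section (2 out of 3 master-eligible nodes):

# --- node 1 of 3; the other two nodes differ only in node.name ---
cluster.name: logging-demo             # placeholder; must match on all nodes
node.name: "es-node-1"

bootstrap.mlockall: true               # see the Memory section above

network.host: 192.168.0.1              # sets bind_host and publish_host together

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.0.1", "192.168.0.2", "192.168.0.3"]
discovery.zen.minimum_master_nodes: 2  # quorum of the 3 master-eligible nodes

gateway.recover_after_nodes: 2         # wait for 2 nodes before starting recovery
gateway.expected_nodes: 3              # start immediately once all 3 are up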
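
The slow log thresholds above are set in the file for every index on this node (here only the fetch phase is enabled). Since they are per-index settings, they can generally also be adjusted at runtime through the same index settings API; a hedged sketch, again with the hypothetical index name my_index:

# Log query phases slower than 2s at WARN level, for one index only
curl -s -XPUT 'http://localhost:9200/my_index/_settings' -d '
{
  "index.search.slowlog.threshold.query.warn": "2s"
}'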







Source: http://my.oschina.net/davehe/blog/591364