name | value | description |
hadoop.tmp.dir | /tmp/hadoop-${user.name} | A base for other temporary directories. |
hadoop.native.lib | true | Whether native Hadoop libraries, if present, should be used. |
hadoop.http.filter.initializers | | A comma-separated list of class names. Each class in the list
must extend org.apache.hadoop.http.FilterInitializer. The corresponding
Filter will be initialized. Then, the Filter will be applied to all user
facing jsp and servlet web pages. The ordering of the list defines the
ordering of the filters. |
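For example, a sketch of wiring in the authentication filter that backs the hadoop.http.authentication.* properties further down this table (the initializer class named here is the one shipped for that purpose; verify it is present in your release):

    <!-- core-site.xml: register an HTTP filter initializer -->
    <property>
      <name>hadoop.http.filter.initializers</name>
      <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
    </property>
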
hadoop.security.group.mapping | org.apache.hadoop.security.ShellBasedUnixGroupsMapping | Class for user-to-group mapping (get groups for a given user). |
hadoop.security.authorization | false | Is service-level authorization enabled? |
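A minimal core-site.xml sketch for turning service-level authorization on; the per-protocol ACLs themselves are defined separately in hadoop-policy.xml:

    <!-- core-site.xml: enable service-level authorization checks -->
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>
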
hadoop.security.instrumentation.requires.admin | false |
Indicates if administrator ACLs are required to access
instrumentation servlets (JMX, METRICS, CONF, STACKS).
|
hadoop.security.authentication | simple | Possible values are simple (no authentication) and kerberos.
|
hadoop.security.token.service.use_ip | true | Controls whether tokens always use IP addresses. DNS changes
will not be detected if this option is enabled. Existing client connections
that break will always reconnect to the IP of the original host. New clients
will connect to the host's new IP but fail to locate a token. Disabling
this option will allow existing and new clients to detect an IP change and
continue to locate the new host's token.
|
hadoop.security.use-weak-http-crypto | true | If enabled, use KSSL to authenticate HTTP connections to the
NameNode. Due to a bug in JDK6, using KSSL requires one to configure
Kerberos tickets to use encryption types that are known to be
cryptographically weak. If disabled, SPNEGO will be used for HTTP
authentication, which supports stronger encryption types.
|
hadoop.workaround.non.threadsafe.getpwuid | false | Some operating systems or authentication modules are known to
have broken implementations of getpwuid_r and getpwgid_r, such that these
calls are not thread-safe. Symptoms of this problem include JVM crashes
with a stack trace inside these functions. If your system exhibits this
issue, enable this configuration parameter to include a lock around the
calls as a workaround.
An incomplete list of some systems known to have this issue is available
at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations
|
hadoop.kerberos.kinit.command | kinit | Used to periodically renew Kerberos credentials when provided
to Hadoop. The default setting assumes that kinit is in the PATH of users
running the Hadoop client. Change this to the absolute path to kinit if this
is not the case.
|
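For instance, when kinit is not on the PATH, point the property at an absolute location (the path below is an assumption; adjust it for your Kerberos installation):

    <!-- core-site.xml: absolute path to kinit (example path, not a default) -->
    <property>
      <name>hadoop.kerberos.kinit.command</name>
      <value>/usr/kerberos/bin/kinit</value>
    </property>
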
hadoop.logfile.size | 10000000 | The max size of each log file |
hadoop.logfile.count | 10 | The max number of log files |
io.file.buffer.size | 4096 | The size of the buffer for use in sequence files.
The size of this buffer should probably be a multiple of hardware
page size (4096 on Intel x86), and it determines how much data is
buffered during read and write operations. |
io.bytes.per.checksum | 512 | The number of bytes per checksum. Must not be larger than
io.file.buffer.size. |
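A sketch of tuning these two properties together, keeping the buffer a multiple of the 4096-byte page size and the checksum chunk no larger than the buffer (the 64 KB figure is illustrative, not a recommendation):

    <!-- core-site.xml: enlarge the I/O buffer; keep io.bytes.per.checksum <= io.file.buffer.size -->
    <property>
      <name>io.file.buffer.size</name>
      <value>65536</value>
    </property>
    <property>
      <name>io.bytes.per.checksum</name>
      <value>512</value>
    </property>
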
io.skip.checksum.errors | false | If true, when a checksum error is encountered while
reading a sequence file, entries are skipped, instead of throwing an
exception. |
io.compression.codecs | org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec | A list of the compression codec classes that can be used
for compression/decompression. |
io.serializations | org.apache.hadoop.io.serializer.WritableSerialization | A list of serialization classes that can be used for
obtaining serializers and deserializers. |
fs.default.name | file:/// | The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
URI's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The URI's authority is used to
determine the host, port, etc. for a filesystem. |
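A typical HDFS cluster overrides the file:/// default in core-site.xml so that unqualified paths resolve against the cluster (the host name and port below are placeholders):

    <!-- core-site.xml: make HDFS the default filesystem (example host/port) -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://namenode.example.com:8020</value>
    </property>
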
fs.trash.interval | 0 | Number of minutes between trash checkpoints.
If zero, the trash feature is disabled.
|
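For example, to enable trash with a one-day checkpoint interval (1440 minutes; the figure is an example, not a default):

    <!-- core-site.xml: enable trash, checkpointing once per day -->
    <property>
      <name>fs.trash.interval</name>
      <value>1440</value>
    </property>
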
fs.file.impl | org.apache.hadoop.fs.LocalFileSystem | The FileSystem for file: uris. |
fs.hdfs.impl | org.apache.hadoop.hdfs.DistributedFileSystem | The FileSystem for hdfs: uris. |
fs.s3.impl | org.apache.hadoop.fs.s3.S3FileSystem | The FileSystem for s3: uris. |
fs.s3n.impl | org.apache.hadoop.fs.s3native.NativeS3FileSystem | The FileSystem for s3n: (Native S3) uris. |
fs.kfs.impl | org.apache.hadoop.fs.kfs.KosmosFileSystem | The FileSystem for kfs: uris. |
fs.hftp.impl | org.apache.hadoop.hdfs.HftpFileSystem | The FileSystem for hftp: uris. |
fs.hsftp.impl | org.apache.hadoop.hdfs.HsftpFileSystem | The FileSystem for hsftp: uris. |
fs.webhdfs.impl | org.apache.hadoop.hdfs.web.WebHdfsFileSystem | The FileSystem for webhdfs: uris. |
fs.ftp.impl | org.apache.hadoop.fs.ftp.FTPFileSystem | The FileSystem for ftp: uris. |
fs.ramfs.impl | org.apache.hadoop.fs.InMemoryFileSystem | The FileSystem for ramfs: uris. |
fs.har.impl | org.apache.hadoop.fs.HarFileSystem | The filesystem for Hadoop archives. |
fs.har.impl.disable.cache | true | Don't cache 'har' filesystem instances. |
fs.checkpoint.dir | ${hadoop.tmp.dir}/dfs/namesecondary | Determines where on the local filesystem the DFS secondary
name node should store the temporary images to merge.
If this is a comma-delimited list of directories then the image is
replicated in all of the directories for redundancy.
|
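A sketch of the comma-delimited form, replicating the checkpoint image across two local disks (the mount points are hypothetical):

    <!-- core-site.xml: redundant secondary namenode checkpoint directories (example paths) -->
    <property>
      <name>fs.checkpoint.dir</name>
      <value>/data/1/dfs/namesecondary,/data/2/dfs/namesecondary</value>
    </property>
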
fs.checkpoint.edits.dir | ${fs.checkpoint.dir} | Determines where on the local filesystem the DFS secondary
name node should store the temporary edits to merge.
If this is a comma-delimited list of directories then the edits are
replicated in all of the directories for redundancy.
The default value is the same as fs.checkpoint.dir.
|
fs.checkpoint.period | 3600 | The number of seconds between two periodic checkpoints.
|
fs.checkpoint.size | 67108864 | The size of the current edit log (in bytes) that triggers
a periodic checkpoint even if the fs.checkpoint.period hasn't expired.
|
fs.s3.block.size | 67108864 | Block size to use when writing files to S3. |
fs.s3.buffer.dir | ${hadoop.tmp.dir}/s3 | Determines where on the local filesystem the S3 filesystem
should store files before sending them to S3
(or after retrieving them from S3).
|
fs.s3.maxRetries | 4 | The maximum number of retries for reading or writing files to S3,
before we signal failure to the application.
|
fs.s3.sleepTimeSeconds | 10 | The number of seconds to sleep between each S3 retry.
|
fs.automatic.close | true | By default, FileSystem instances are automatically closed at program
exit using a JVM shutdown hook. Setting this property to false disables this
behavior. This is an advanced option that should only be used by server applications
requiring a more carefully orchestrated shutdown sequence.
|
fs.s3n.block.size | 67108864 | Block size to use when reading files using the native S3
filesystem (s3n: URIs). |
io.seqfile.compress.blocksize | 1000000 | The minimum block size for compression in block compressed
SequenceFiles.
|
io.seqfile.lazydecompress | true | Whether values of block-compressed SequenceFiles should be decompressed
only when necessary.
|
io.seqfile.sorter.recordlimit | 1000000 | The limit on the number of records to be kept in memory in a spill
in SequenceFile.Sorter.
|
io.mapfile.bloom.size | 1048576 | The size of the BloomFilters used in BloomMapFile. Each time this many
keys are appended, the next BloomFilter is created (inside a DynamicBloomFilter).
Larger values minimize the number of filters, which slightly improves performance,
but may waste too much space if the total number of keys is usually much smaller
than this number.
|
io.mapfile.bloom.error.rate | 0.005 | The rate of false positives in the BloomFilters used in BloomMapFile.
As this value decreases, the size of the BloomFilters increases exponentially. This
value is the probability of encountering a false positive (the default is 0.5%).
|
hadoop.util.hash.type | murmur | The default implementation of Hash. Currently this can take one of the
two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash.
|
ipc.client.idlethreshold | 4000 | Defines the threshold number of connections after which
connections will be inspected for idleness.
|
ipc.client.kill.max | 10 | Defines the maximum number of clients to disconnect in one go.
|
ipc.client.connection.maxidletime | 10000 | The maximum time in milliseconds after which a client will bring down the
connection to the server.
|
ipc.client.connect.max.retries | 10 | Indicates the number of retries a client will make to establish
a server connection.
|
ipc.server.listen.queue.size | 128 | Indicates the length of the listen queue for servers accepting
client connections.
|
ipc.server.tcpnodelay | false | Turn on/off Nagle's algorithm for the TCP socket connection on
the server. Setting to true disables the algorithm and may decrease latency
at the cost of more, smaller packets.
|
ipc.client.tcpnodelay | false | Turn on/off Nagle's algorithm for the TCP socket connection on
the client. Setting to true disables the algorithm and may decrease latency
at the cost of more, smaller packets.
|
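To trade more, smaller packets for lower latency on both ends of an RPC connection, disable Nagle's algorithm on servers and clients alike (a sketch; measure before adopting):

    <!-- core-site.xml: disable Nagle's algorithm for IPC sockets -->
    <property>
      <name>ipc.server.tcpnodelay</name>
      <value>true</value>
    </property>
    <property>
      <name>ipc.client.tcpnodelay</name>
      <value>true</value>
    </property>
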
webinterface.private.actions | false | If set to true, the web interfaces of the JobTracker and NameNode may
contain actions, such as kill job, delete file, etc., that should not be
exposed to the public. Enable this option only if the interfaces are
reachable solely by those who have the right authorization.
|
hadoop.rpc.socket.factory.class.default | org.apache.hadoop.net.StandardSocketFactory | Default SocketFactory to use. This parameter is expected to be
formatted as "package.FactoryClassName".
|
hadoop.rpc.socket.factory.class.ClientProtocol | | SocketFactory to use to connect to a DFS. If null or empty, use
hadoop.rpc.socket.factory.class.default. This socket factory is also used by
DFSClient to create sockets to DataNodes.
|
hadoop.socks.server | | Address (host:port) of the SOCKS server to be used by the
SocksSocketFactory.
|
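A sketch that routes default RPC traffic through a SOCKS proxy by combining the two properties above; org.apache.hadoop.net.SocksSocketFactory ships with Hadoop, while the proxy address is a placeholder:

    <!-- core-site.xml: send RPC connections through a SOCKS proxy (example address) -->
    <property>
      <name>hadoop.rpc.socket.factory.class.default</name>
      <value>org.apache.hadoop.net.SocksSocketFactory</value>
    </property>
    <property>
      <name>hadoop.socks.server</name>
      <value>proxy.example.com:1080</value>
    </property>
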
topology.node.switch.mapping.impl | org.apache.hadoop.net.ScriptBasedMapping | The default implementation of the DNSToSwitchMapping. It
invokes a script specified in topology.script.file.name to resolve
node names. If the value for topology.script.file.name is not set, the
default value of DEFAULT_RACK is returned for all node names.
|
topology.script.file.name | | The script name that should be invoked to resolve DNS names to
NetworkTopology names. Example: the script would take host.foo.bar as an
argument, and return /rack1 as the output.
|
topology.script.number.args | 100 | The max number of args that the script configured with
topology.script.file.name should be run with. Each arg is an
IP address.
|
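A sketch of enabling script-based rack resolution (the script path is hypothetical; the script must accept up to topology.script.number.args addresses per invocation and print one rack name, such as /rack1, per address):

    <!-- core-site.xml: resolve racks via an admin-supplied script (example path) -->
    <property>
      <name>topology.script.file.name</name>
      <value>/etc/hadoop/topology.sh</value>
    </property>
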
net.topology.table.file.name | | The file name for a topology file, which is used when the
topology.node.switch.mapping.impl property is set to
org.apache.hadoop.net.TableMapping. The file format is a two-column text
file, with columns separated by whitespace. The first column is a DNS or
IP address and the second column specifies the rack to which the address maps.
If no entry corresponding to a host in the cluster is found, then
/default-rack is assumed.
|
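A sketch of switching to the static table mapping described above (the file path is a placeholder; the file holds whitespace-separated address/rack pairs, one per line):

    <!-- core-site.xml: use a static topology table instead of a script (example path) -->
    <property>
      <name>topology.node.switch.mapping.impl</name>
      <value>org.apache.hadoop.net.TableMapping</value>
    </property>
    <property>
      <name>net.topology.table.file.name</name>
      <value>/etc/hadoop/topology.table</value>
    </property>
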
hadoop.security.uid.cache.secs | 14400 | NativeIO maintains a cache from UID to UserName. This is
the timeout for an entry in that cache. |
hadoop.http.authentication.type | simple |
Defines the authentication used for the Hadoop HTTP web-consoles.
Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
|
hadoop.http.authentication.token.validity | 36000 |
Indicates how long (in seconds) an authentication token is valid before it has
to be renewed.
|
hadoop.http.authentication.signature.secret.file | ${user.home}/hadoop-http-auth-signature-secret |
The signature secret for signing the authentication tokens.
If not set, a random secret is generated at startup time.
The same secret should be used for JT/NN/DN/TT configurations.
|
hadoop.http.authentication.cookie.domain | |
The domain to use for the HTTP cookie that stores the authentication token.
For authentication to work correctly across the web-consoles of all Hadoop nodes,
the domain must be set correctly.
IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings.
For this setting to work properly, all nodes in the cluster must be configured
to generate URLs with hostname.domain names in them.
|
hadoop.http.authentication.simple.anonymous.allowed | true |
Indicates if anonymous requests are allowed when using 'simple' authentication.
|
hadoop.http.authentication.kerberos.principal | HTTP/_HOST@LOCALHOST |
Indicates the Kerberos principal to be used for HTTP endpoint.
The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.
|
hadoop.http.authentication.kerberos.keytab | ${user.home}/hadoop.keytab |
Location of the keytab file with the credentials for the principal.
|
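Putting the hadoop.http.authentication.* properties together, a sketch of a Kerberos-secured web-console setup (realm, file paths, and cookie domain are placeholders to adapt); this typically also requires the filter initializer shown near the top of this table:

    <!-- core-site.xml: SPNEGO-protected web UIs (all values are examples) -->
    <property>
      <name>hadoop.http.authentication.type</name>
      <value>kerberos</value>
    </property>
    <property>
      <name>hadoop.http.authentication.kerberos.principal</name>
      <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
      <name>hadoop.http.authentication.kerberos.keytab</name>
      <value>/etc/hadoop/conf/http.keytab</value>
    </property>
    <property>
      <name>hadoop.http.authentication.signature.secret.file</name>
      <value>/etc/hadoop/conf/http-signature-secret</value>
    </property>
    <property>
      <name>hadoop.http.authentication.cookie.domain</name>
      <value>example.com</value>
    </property>
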
hadoop.relaxed.worker.version.check | true |
This option changes the behavior of datanodes and tasktrackers to
only check for a version match (e.g., "0.20.2-cdh3u4") but ignore the
other build fields (revision, user, and source checksum) when
checking for compatibility with namenodes and jobtrackers. In
previous releases datanodes refused to connect to namenodes if
their build revision (svn revision) did not match, and
tasktrackers refused to connect to jobtrackers if their build
version (version, revision, user, and source checksum) did not
match. This behavior can be restored by disabling this option.
|