Hadoop Installation Steps

Contributed by hiCamp on 2017-01-11

Author: Administrator, created 2015-10-19 05:36:00; last modified by Administrator, 2015-10-21 05:11:05; 4220 characters


Hadoop 2.6 Installation Guide

1. JDK installation
   1. Unpack the JDK archive.
   2. Edit /etc/profile and add the following:
          export JAVA_HOME=/home/hadoop/jdk1.7.0_45
          export JRE_HOME=/home/hadoop/jdk1.7.0_45/jre
          export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
          export PATH=$JAVA_HOME/bin:$PATH
   3. Run java -version from the command line to verify the installation.

2. Hadoop installation
   1. Unpack the Hadoop archive: tar -xzvf hadoop-2.6.0.tar.gz
   2. Configure Hadoop under $HADOOP_HOME/etc/hadoop:
      1. In hadoop-env.sh, set JAVA_HOME to /home/hadoop/jdk1.7.0_45.
      2. In yarn-env.sh, set JAVA_HOME to /home/hadoop/jdk1.7.0_45.
      3. In the slaves file, list the IP addresses of the slave nodes, one per line.
      4. In core-site.xml, add (inside <configuration>):
             <property><name>fs.defaultFS</name><value>hdfs://10.15.100.180:9000</value></property>
             <property><name>hadoop.tmp.dir</name><value>/root/hadoop_tmp</value><description>A base for other temporary directories.</description></property>
      5. In hdfs-site.xml, add:
             <property><name>dfs.namenode.secondary.http-address</name><value>10.15.100.180:9001</value></property>
             <property><name>dfs.namenode.name.dir</name><value>/root/hadoop_tmp/dfs/name</value></property>
             <property><name>dfs.datanode.data.dir</name><value>/root/hadoop_tmp/dfs/data</value></property>
             <property><name>dfs.replication</name><value>3</value></property>
             <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
      6. In mapred-site.xml, add:
             <property><name>mapreduce.framework.name</name><value>yarn</value></property>
             <property><name>mapreduce.jobhistory.address</name><value>10.15.100.180:10020</value></property>
             <property><name>mapreduce.jobhistory.webapp.address</name><value>10.15.100.180:19888</value></property>
      7. In yarn-site.xml, add:
             <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
             <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
             <property><name>yarn.resourcemanager.address</name><value>10.15.100.180:8032</value></property>
             <property><name>yarn.resourcemanager.scheduler.address</name><value>10.15.100.180:8030</value></property>
             <property><name>yarn.resourcemanager.resource-tracker.address</name><value>10.15.100.180:8031</value></property>
             <property><name>yarn.resourcemanager.admin.address</name><value>10.15.100.180:8033</value></property>
             <property><name>yarn.resourcemanager.webapp.address</name><value>10.15.100.180:8088</value></property>
   3. Format the NameNode: in $HADOOP_HOME/bin, run ./hdfs namenode -format
   4. Start the cluster: in $HADOOP_HOME/sbin, run ./start-all.sh
   5. Check the cluster status: in $HADOOP_HOME/bin, run ./hdfs dfsadmin -report
   6. Check the web UIs: ResourceManager at 10.15.100.180:8088, HDFS at 10.15.100.180:50070

3. Hive installation
   1. Unpack the Hive archive: tar -zxvf apache-hive-1.2.1-bin.tar
   2. Edit /etc/profile and add the following:
          export HIVE_HOME=/root/hive-1.2.1
          export PATH=$HIVE_HOME/bin:$PATH
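The XML fragments above must be identical on every node. As a minimal sketch (assuming the same NameNode address 10.15.100.180 and the /root/hadoop_tmp path used in this document), core-site.xml can be generated from variables so each node gets a consistent copy:

```shell
#!/bin/sh
# Sketch: generate core-site.xml from variables so all nodes stay in sync.
# NN_HOST and TMP_DIR follow the values in this document; adjust for your
# own cluster. Output file defaults to ./core-site.xml.
NN_HOST=10.15.100.180
TMP_DIR=/root/hadoop_tmp
OUT=${1:-core-site.xml}

cat > "$OUT" <<EOF
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://$NN_HOST:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>$TMP_DIR</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
EOF
echo "wrote $OUT"
```

The same heredoc pattern extends naturally to hdfs-site.xml and yarn-site.xml, which repeat the NameNode address in several properties.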
   3. Configure $HIVE_HOME/conf/hive-site.xml (copy hive-default.xml.template and rename the copy to hive-site.xml), setting:
          <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://localhost:3306/hive</value></property>
          <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
          <property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
          <property><name>hive.hwi.listen.port</name><value>9999</value><description>This is the port the Hive Web Interface will listen on</description></property>
          <property><name>datanucleus.autoCreateSchema</name><value>true</value></property>
          <property><name>datanucleus.fixedDatastore</name><value>false</value></property>
          <property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value><description>Username to use against metastore database</description></property>
          <property><name>hive.exec.local.scratchdir</name><value>/home/hdpsrc/hive/iotmp</value><description>Local scratch space for Hive jobs</description></property>
          <property><name>hive.downloaded.resources.dir</name><value>/home/hdpsrc/hive/iotmp</value><description>Temporary local directory for added resources in the remote file system.</description></property>
          <property><name>hive.querylog.location</name><value>/home/hdpsrc/hive/iotmp</value><description>Location of Hive run time structured log file</description></property>
   4. Copy mysql-connector-java-5.1.18-bin.jar into Hive's lib directory.
   5. Copy Hive's jline-2.12.jar into Hadoop's lib directory at $HADOOP_HOME/share/hadoop/yarn/lib/
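Steps 3-5 above are easy to get half-done, so a quick pre-flight check helps before launching Hive. A minimal sketch, assuming the /root/hive-1.2.1 layout from this document and a hypothetical /home/hadoop/hadoop-2.6.0 Hadoop root (override both via the environment):

```shell
#!/bin/sh
# Sketch: pre-flight check for the Hive setup steps above. Default paths
# follow this document; HADOOP_HOME's default here is an assumption.
HIVE_HOME=${HIVE_HOME:-/root/hive-1.2.1}
HADOOP_HOME=${HADOOP_HOME:-/home/hadoop/hadoop-2.6.0}

check() {
  # Print OK if the path (or first glob match) exists, MISSING otherwise.
  if [ -e "$1" ]; then
    echo "OK      $1"
  else
    echo "MISSING $1"
  fi
}

check "$HIVE_HOME/conf/hive-site.xml"                   # step 3
check "$HIVE_HOME"/lib/mysql-connector-java-*.jar       # step 4
check "$HADOOP_HOME"/share/hadoop/yarn/lib/jline-*.jar  # step 5
```

If any line prints MISSING, repeat the corresponding step before running the hive CLI.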
