A Fully Distributed Hadoop + HBase Installation: A Field Record


Recorded by: 长风吹白 · Date: 2012-05-22 · Version: v1

I. Environment

1. Ubuntu: ubuntu-11.04 server
    astro@slave1:~$ cat /proc/version
    Linux version 2.6.38-8-generic-pae (buildd@vernadsky) (gcc version 4.5.2 (Ubuntu/Linaro 4.5.2-8ubuntu3)) #42-Ubuntu SMP Mon Apr 11 05:17:09 UTC 2011
2. JDK: jdk-6u31-linux-i586.bin
    java version "1.6.0_31"
    Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
    Java HotSpot(TM) Client VM (build 20.6-b01, mixed mode, sharing)
3. Hadoop: hadoop-0.20.205.0.tar.gz
4. SSH: OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010
5. Three servers:
    slave1: 222.197.221.34 (slave)
    slave2: 222.197.221.35 (slave)
    master: 222.197.221.36 (master)

II. Hostname configuration

1. Install the Ubuntu systems
   This step is omitted here; configure the IP addresses as listed above. Disable the firewall on every machine:
    sudo ufw disable
   Note: it is best to install all three machines with identical options.
2. Configure the hostnames
   1) On 222.197.221.36, run sudo vi /etc/hosts and edit it to read:
       127.0.0.1 localhost
       222.197.221.34 slave1
       222.197.221.35 slave2
       222.197.221.36 master
      Then run sudo vi /etc/hostname and edit it to read:
       master
   2) On 222.197.221.34, edit the hosts file the same way, then set /etc/hostname to:
       slave1
   3) On 222.197.221.35, edit the hosts file the same way, then set /etc/hostname to:
       slave2
   4) Test: a simple check can be run from any machine, for example on slave2:
       ping slave1
       hostname
   Note for this section: double-check the hosts file and hostname on each machine; do not mix them up.

III. Installing Java

1. Install the JDK
   Download "jdk-6u31-linux-i586.bin" and install it:
    sudo mv jdk-6u31-linux-i586.bin /usr/local
    cd /usr/local
    chmod +x jdk-6u31-linux-i586.bin
    sudo ./jdk-6u31-linux-i586.bin
   After installation the directory /usr/local/jdk1.6.0_31 is created automatically.
2. Configure Java
   Run sudo vim /etc/profile and append to the end of the file:
    #set java environment
    export JAVA_HOME=/usr/local/jdk1.6.0_31
    export PATH=$JAVA_HOME/bin:$PATH
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$CLASSPATH
3. Test
   The simplest test is to run, directly at the command line:
    java -version
    java
    javac

IV. Setting up a passwordless login environment

1. Create the hadoop user:
    sudo useradd hadoop
    sudo passwd hadoop
    sudo mkdir /home/hadoop
    sudo chown -R hadoop /home/hadoop
2. Add hadoop to the sudoers
   Under an administrative account:
    su root
    vim /etc/sudoers
   Append as the last line:
    hadoop ALL=(ALL) ALL
   Save and exit. The hadoop account can now use the sudo command.
3. Change the shell
   If the shell you land in after switching users feels awkward (no Tab completion, and so on):
   First, list the shells installed on the system:
    cat /etc/shells
   Then check hadoop's current shell:
    echo $SHELL
   Finally, change hadoop's shell (this must be done from an account with sudo rights):
    sudo usermod -s /bin/bash hadoop
4. Install OpenSSH
    sudo apt-get install openssh-server
   Check that the SSH port is open:
    netstat -nat | grep 22
   Note: the steps above must be carried out identically on all three machines.
   Now log in to master (222.197.221.36):
    su hadoop
    cd /home/hadoop
    ssh-keygen -t rsa    (press Enter at every prompt to accept the defaults)
   Once this completes, a .ssh directory is created under the hadoop home directory. Then:
    cd /home/hadoop/.ssh/
    cat id_rsa.pub >> authorized_keys
   The authorized_keys file must be copied into the .ssh folder on slave1 and slave2; if a machine has no .ssh folder, create it by hand. From master, the copy is done with:
    scp authorized_keys hadoop@222.197.221.34:/home/hadoop/.ssh/
    scp authorized_keys hadoop@222.197.221.35:/home/hadoop/.ssh/
   The last step is to test the passwordless login environment. On master (222.197.221.36):
    ssh 222.197.221.34
    ssh 222.197.221.35
   should log in without asking for a password. If the .ssh directory has no authorized_keys file, the system will prompt for a password; if that happens, check the permissions (sshd generally insists that ~/.ssh be mode 700 and authorized_keys mode 600) and make sure every step above was completed.

V. Installing Hadoop

1. Unpack Hadoop:
    su hadoop
    tar -zxvf hadoop-0.20.205.0.tar.gz
   Note: hadoop-0.20.205.0.tar.gz must first be downloaded into /home/hadoop/. Unpacking creates /home/hadoop/hadoop-0.20.205.0, which is the Hadoop home directory.
2. Edit /etc/profile and append at the end:
    export HADOOP_HOME=/home/hadoop/hadoop-0.20.205.0
    export PATH=$HADOOP_HOME/bin:$PATH
   Run the following to make the modified profile take effect:
    source /etc/profile
   (Note: step 2 can also be skipped, in which case you have to type the full path whenever you run the hadoop command.) A quick sanity check is sketched below.
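A minimal sanity check, my addition rather than part of the original record: after sourcing /etc/profile, the two environment variables and the hadoop launcher should all resolve.

    echo $JAVA_HOME       # expect /usr/local/jdk1.6.0_31
    echo $HADOOP_HOME     # expect /home/hadoop/hadoop-0.20.205.0
    hadoop version        # prints "Hadoop 0.20.205.0" if PATH is set correctly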
3. Configure conf/hadoop-env.sh
    cd /home/hadoop/hadoop-0.20.205.0
    vim conf/hadoop-env.sh
   Find the commented-out line shown below, uncomment it, and change it to:
    # The java implementation to use. Required.
    export JAVA_HOME=/usr/local/jdk1.6.0_31
4. Configure the XML configuration files
   conf/core-site.xml:
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
        <final>true</final>
      </property>
    </configuration>
   conf/hdfs-site.xml (the dfs.* properties):
    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/namedata</value>
      </property>
      <property>
        <name>dfs.permissions</name>
        <value>false</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>
    </configuration>
   conf/mapred-site.xml:
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
      </property>
    </configuration>
5. Configure conf/masters:
    master
6. Configure conf/slaves:
    slave1
    slave2
   (A sketch for pushing the configured tree out to the slaves follows.)
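The record does not spell this step out, but in a fully distributed setup every node needs the same Hadoop tree and configuration before the start scripts can launch the slave daemons. A minimal sketch, my addition, mirroring the scp used later for HBase and assuming passwordless ssh is already working:

    # push the whole configured Hadoop directory from master to each slave
    scp -r /home/hadoop/hadoop-0.20.205.0 hadoop@slave1:/home/hadoop/
    scp -r /home/hadoop/hadoop-0.20.205.0 hadoop@slave2:/home/hadoop/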
7. Test the installation
   1) Start Hadoop. First format the filesystem:
       bin/hadoop namenode -format
      Then start everything at once:
       bin/start-all.sh
      or start it in stages:
       bin/start-dfs.sh
       bin/start-mapred.sh
      The corresponding stop commands are:
       bin/stop-all.sh
       bin/stop-dfs.sh
       bin/stop-mapred.sh
      Once started, the web interfaces are reachable at:
       http://222.197.221.36:50030/  (JobTracker)
       http://222.197.221.36:50070/  (NameNode)
       http://222.197.221.36:50060/  (TaskTracker)
   2) Use jps to check that the expected processes started:
      jps on master should show NameNode, SecondaryNameNode and JobTracker;
      jps on slave1 and slave2 should each show DataNode and TaskTracker.
   3) Check the logs directory for exceptions:
       cd logs
       grep Exception ./*
   4) Open the management page:
       http://222.197.221.36:50030
      The number of nodes shown must match the number of machines; otherwise the configuration is not right.
   (Before running a real job, a quick datanode check is sketched below.)
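Beyond jps and the web pages, HDFS itself can report how many datanodes have registered. This check is my addition rather than part of the original record; dfsadmin -report is a standard command in Hadoop 0.20:

    bin/hadoop dfsadmin -report
    # near the top of the report, expect a line like:
    #   Datanodes available: 2 (2 total, 0 dead)
    # fewer than 2 live nodes means a slave failed to start or register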
8. Run a test job
   i.  Create a directory on HDFS with the hadoop command:
        bin/hadoop fs -mkdir /tmp/wordcount
   ii. Copy a test file (you can generate one yourself; see the Python script in section VIII) onto HDFS:
        bin/hadoop fs -copyFromLocal /home/hadoop/myscripts/result.txt /tmp/wordcount/word.txt
   iii. Run the example:
        bin/hadoop jar hadoop-examples-0.20.205.0.jar wordcount /tmp/wordcount/word.txt /tmp/wordcount/out
   iv. Output. Command-line output (the repetitive progress lines are abridged here; the job took roughly five minutes end to end):
    12/05/22 13:01:02 INFO input.FileInputFormat: Total input paths to process : 1
    12/05/22 13:01:02 INFO mapred.JobClient: Running job: job_201205221152_0003
    12/05/22 13:01:03 INFO mapred.JobClient:  map 0% reduce 0%
    12/05/22 13:01:24 INFO mapred.JobClient:  map 2% reduce 0%
    [... map progress climbs steadily ...]
    12/05/22 13:03:01 INFO mapred.JobClient:  map 100% reduce 0%
    12/05/22 13:03:38 INFO mapred.JobClient:  map 100% reduce 16%
    [... reduce progress climbs steadily ...]
    12/05/22 13:06:08 INFO mapred.JobClient:  map 100% reduce 100%
    12/05/22 13:06:13 INFO mapred.JobClient: Job complete: job_201205221152_0003
    12/05/22 13:06:14 INFO mapred.JobClient: Counters: 29
    12/05/22 13:06:14 INFO mapred.JobClient:   Job Counters
    12/05/22 13:06:14 INFO mapred.JobClient:     Launched reduce tasks=1
    12/05/22 13:06:14 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=794709
    12/05/22 13:06:14 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
    12/05/22 13:06:14 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
    12/05/22 13:06:14 INFO mapred.JobClient:     Launched map tasks=6
    12/05/22 13:06:14 INFO mapred.JobClient:     Data-local map tasks=6
    12/05/22 13:06:14 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=167122
    12/05/22 13:06:14 INFO mapred.JobClient:   File Output Format Counters
    12/05/22 13:06:14 INFO mapred.JobClient:     Bytes Written=352129837
    12/05/22 13:06:14 INFO mapred.JobClient:   FileSystemCounters
    12/05/22 13:06:14 INFO mapred.JobClient:     FILE_BYTES_READ=1529713665
    12/05/22 13:06:14 INFO mapred.JobClient:     HDFS_BYTES_READ=388564468
    12/05/22 13:06:14 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=2026703211
    12/05/22 13:06:14 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=352129837
    12/05/22 13:06:14 INFO mapred.JobClient:   File Input Format Counters
    12/05/22 13:06:14 INFO mapred.JobClient:     Bytes Read=388563832
    12/05/22 13:06:14 INFO mapred.JobClient:   Map-Reduce Framework
    12/05/22 13:06:14 INFO mapred.JobClient:     Map output materialized bytes=496839455
    12/05/22 13:06:14 INFO mapred.JobClient:     Map input records=7618497
    12/05/22 13:06:14 INFO mapred.JobClient:     Reduce shuffle bytes=411114557
    12/05/22 13:06:14 INFO mapred.JobClient:     Spilled Records=138122069
    12/05/22 13:06:14 INFO mapred.JobClient:     Map output bytes=612306738
    12/05/22 13:06:14 INFO mapred.JobClient:     CPU time spent (ms)=996810
    12/05/22 13:06:14 INFO mapred.JobClient:     Total committed heap usage (bytes)=980361216
    12/05/22 13:06:14 INFO mapred.JobClient:     Combine input records=98790825
    12/05/22 13:06:14 INFO mapred.JobClient:     SPLIT_RAW_BYTES=636
    12/05/22 13:06:14 INFO mapred.JobClient:     Reduce input records=32835756
    12/05/22 13:06:14 INFO mapred.JobClient:     Reduce input groups=30755129
    12/05/22 13:06:14 INFO mapred.JobClient:     Combine output records=72290456
    12/05/22 13:06:14 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1166610432
    12/05/22 13:06:14 INFO mapred.JobClient:     Reduce output records=30755129
    12/05/22 13:06:14 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2681049088
    12/05/22 13:06:14 INFO mapred.JobClient:     Map output records=59336125
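To look at the actual word counts, list the output directory and page through the reducer's output file. This step is my addition; part-r-00000 is the file name this example typically produces, so confirm it with the ls first:

    bin/hadoop fs -ls /tmp/wordcount/out
    bin/hadoop fs -cat /tmp/wordcount/out/part-r-00000 | head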
   The web interface shows the completed job as well.

VI. Installing HBase

Note: this section builds on everything configured above. The steps below are performed on the master machine.

1. Download HBase and unpack it:
    tar -zxvf hbase-0.92.1.tar.gz
2. Edit conf/hbase-env.sh and add:
    export JAVA_HOME=/usr/local/jdk1.6.0_31
    export HBASE_MANAGES_ZK=true
    export HBASE_HOME=/home/hadoop/hbase-0.92.1
    export HADOOP_HOME=/home/hadoop/hadoop-0.20.205.0
3. Edit conf/hbase-site.xml:
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.master</name>
        <value>master:60000</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/hadoop/tmp/zookeeper</value>
      </property>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave1,slave2</value>
      </property>
    </configuration>
   It is commonly reported online that the hdfs://master:9000/hbase value here must agree exactly with the hdfs://master:9000 configured in the Hadoop cluster's core-site.xml, and that an IP address is not recognized — use the hostname.
4. Edit conf/regionservers:
    slave1
    slave2
5. Replace the hadoop-core jar.
   Delete hadoop-core-x.y.z.jar from HBase's lib directory, then copy hadoop-core-0.20.205.0.jar from the Hadoop home directory into HBase's lib directory.
6. Copy the configuration to the other machines:
    scp -r /home/hadoop/hbase-0.92.1 slave1:/home/hadoop
    scp -r /home/hadoop/hbase-0.92.1 slave2:/home/hadoop
7. Start HBase:
    bin/start-hbase.sh
8. Test HBase
   1) On the master machine, jps should now also show HMaster and HQuorumPeer.
   2) On slave1 (and slave2), jps should show HRegionServer and HQuorumPeer.
9. Test through the web interface
   Master page: http://222.197.221.36:60010
   ZooKeeper:   http://222.197.221.36:60010/zk.jsp
   RegionServers:
    http://222.197.221.34:60030
    http://222.197.221.35:60030

VII. Other tools

With this many servers, good tools go a long way toward eliminating tedious repetition:
 - pssh
 - expect
 - Tcl

VIII. A Python script for generating test data

Thanks to http://www.kgblog.net/2009/07/20/make-text-files.html for this Python script that generates text files. I had meant to write my own, but having found this one, I simply used it:

    # coding=utf-8
    import random

    def getLine(diction, L, n):
        # Build one line of n random characters followed by a newline.
        chars = []
        for i in range(n):
            chars.append(diction[random.randint(0, L - 1)])
        chars.append('\n')
        return ''.join(chars)

    def getDiction():
        # Alphabet: ASCII characters from 'A' to 'z', padded with extra spaces.
        result = []
        for i in range(ord('A'), ord('z') + 1):
            result.append(chr(i))
            result.append(' ')  # add some extra spaces
        return ''.join(result)

    def main():
        fileSize = 1024 * 1024 * 1024  # target file size in bytes
        lineLen = 50                   # characters per line
        N = fileSize / (lineLen + 1)   # number of full lines
        left = fileSize % (lineLen + 1)
        di = getDiction()
        L = len(di)
        f = open('result.txt', 'w')    # output file name
        for i in xrange(N):
            f.write(getLine(di, L, lineLen))
            print 'line ' + str(i)
        if left > 0:
            f.write(getLine(di, L, left))
        f.close()

    if __name__ == '__main__':
        main()

IX. A small expect (Tcl) script for automatically deleting the HBase logs

    #!/usr/bin/expect
    set Hosts {slave1 slave2 master}
    for {set i 0} {$i < [llength $Hosts]} {incr i} {
        set loginInfo "Logging in to "
        append loginInfo [lindex $Hosts $i]
        puts "==============================="
        puts $loginInfo
        puts "==============================="
        set timeout 10
        spawn ssh [lindex $Hosts $i]
        send "cd hbase \r"
        send "cd logs \r"
        send "ls -l\r"
        send "rm -rf * \r"
        send "ls -l\r"
        send "exit\r"
        expect eof
    }

X. Summary

The configuration itself is simple; writing it down with every step spelled out clearly is less so. When something goes wrong, read the logs, and repeat, repeat, and repeat again. This record was made after a successful run and has already gone through several revisions. If you find mistakes, please excuse them and point them out by email: liuyingboo@gmail.com. Thanks in advance!