
Offline Installation of Cloudera Manager 5 and CDH 5 (Latest Version 5.1.3): Complete Tutorial (2)

Author: H5之家 | Source: H5之家 | 2016-07-07 15:00


Next comes the host inspection, where you may run into the following issue:

Cloudera recommends setting /proc/sys/vm/swappiness to 0. The current setting is 60. Use the sysctl command to change the setting at runtime, and edit /etc/sysctl.conf so the setting persists after a reboot. You may continue with the installation, but you may run into issues with Cloudera Manager reporting that your hosts are unhealthy because they are swapping. The following hosts are affected:

This can be fixed by running echo 0 > /proc/sys/vm/swappiness (note that this only changes the value for the running kernel and is lost on reboot).
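To follow the warning's full advice, the change should be made both at runtime and in /etc/sysctl.conf so it survives a reboot. A minimal sketch, assuming root access and that /etc/sysctl.conf is the active sysctl configuration file, run on every affected host:

```shell
# Apply immediately for the running kernel (equivalent to the echo above)
sysctl -w vm.swappiness=0

# Persist across reboots, as the Cloudera warning recommends;
# skip the append if a vm.swappiness entry already exists
grep -q '^vm.swappiness' /etc/sysctl.conf || echo 'vm.swappiness = 0' >> /etc/sysctl.conf

# Verify the current value
cat /proc/sys/vm/swappiness
```

After this, re-running the host inspection should no longer show the swappiness warning.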

Next, choose which services to install:

For the service configuration, the defaults are generally fine (Cloudera Manager configures each service automatically based on the machine's hardware; if you need special tuning, adjust the settings yourself):

Next comes the database setup; once the connection test passes, you can proceed to the next step:

Below is the review page for the cluster settings; I kept all the defaults here:

Finally we get to installing the individual services. Note that the Hive installation may fail at this point: we are using MySQL as Hive's metastore, but Hive does not ship with the MySQL driver by default. Copying one over with the following command fixes it:

cp /opt/cm-5.1.3/share/cmf/lib/mysql-connector-java-5.1.33-bin.jar /opt/cloudera/parcels/CDH-5.1.3-1.cdh5.1.3.p0.12/lib/hive/lib/
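Before retrying the failed step in the wizard, it is worth confirming that the driver jar actually landed on Hive's classpath. A quick check (a sketch; the paths follow this tutorial's CM 5.1.3 / CDH 5.1.3 layout and will differ for other releases):

```shell
# List the MySQL connector in Hive's lib directory; if nothing is printed,
# the copy did not succeed and the Hive setup step will keep failing
ls /opt/cloudera/parcels/CDH-5.1.3-1.cdh5.1.3.p0.12/lib/hive/lib/ | grep mysql-connector
```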

Installing the services takes roughly half an hour:

Once the installation completes, you can open the cluster page and check the cluster's current status.

At this point you may see the error "Unable to issue query: the request to the Service Monitor timed out". If all the components installed without problems, this is usually just because the servers are overloaded; wait a moment and refresh the page:
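If the timeout does not clear after refreshing, restarting the Cloudera Manager daemons is one thing worth trying. A sketch, assuming the init-script service names of the cm-5.1.3 packaging used earlier in this tutorial:

```shell
# On the Cloudera Manager server host
service cloudera-scm-server restart

# On each agent host
service cloudera-scm-agent restart
```

Give the Service Monitor a few minutes to come back up before refreshing the page again.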

Testing

Run the following Pi-estimation example program on one of the cluster machines:

sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
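Besides the web UI, the running job can also be watched from the command line. A sketch (sudo -u hdfs matches how the job was submitted above):

```shell
# List applications currently known to YARN, with their states and progress
sudo -u hdfs yarn application -list

# Basic HDFS smoke test: the job's staging data lives under /user
sudo -u hdfs hadoop fs -ls /user
```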

The job takes a while to run; you can also watch the MapReduce job's progress in the YARN web UI:

The terminal output during the MapReduce run looks like this:

Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
14/10/13 01:15:34 INFO client.RMProxy: Connecting to ResourceManager at n1/192.168.1.161:8032
14/10/13 01:15:36 INFO input.FileInputFormat: Total input paths to process : 10
14/10/13 01:15:37 INFO mapreduce.JobSubmitter: number of splits:10
14/10/13 01:15:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1413132307582_0001
14/10/13 01:15:40 INFO impl.YarnClientImpl: Submitted application application_1413132307582_0001
14/10/13 01:15:40 INFO mapreduce.Job: The url to track the job: :8088/proxy/application_1413132307582_0001/
14/10/13 01:15:40 INFO mapreduce.Job: Running job: job_1413132307582_0001
14/10/13 01:17:13 INFO mapreduce.Job: Job job_1413132307582_0001 running in uber mode : false
14/10/13 01:17:13 INFO mapreduce.Job:  map 0% reduce 0%
14/10/13 01:18:02 INFO mapreduce.Job:  map 10% reduce 0%
14/10/13 01:18:25 INFO mapreduce.Job:  map 20% reduce 0%
14/10/13 01:18:35 INFO mapreduce.Job:  map 30% reduce 0%
14/10/13 01:18:45 INFO mapreduce.Job:  map 40% reduce 0%
14/10/13 01:18:53 INFO mapreduce.Job:  map 50% reduce 0%
14/10/13 01:19:01 INFO mapreduce.Job:  map 60% reduce 0%
14/10/13 01:19:09 INFO mapreduce.Job:  map 70% reduce 0%
14/10/13 01:19:17 INFO mapreduce.Job:  map 80% reduce 0%
14/10/13 01:19:25 INFO mapreduce.Job:  map 90% reduce 0%
14/10/13 01:19:33 INFO mapreduce.Job:  map 100% reduce 0%
14/10/13 01:19:51 INFO mapreduce.Job:  map 100% reduce 100%
14/10/13 01:19:53 INFO mapreduce.Job: Job job_1413132307582_0001 completed successfully
14/10/13 01:19:56 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=91
		FILE: Number of bytes written=1027765
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=2560
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=43
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Job Counters
		Launched map tasks=10
		Launched reduce tasks=1
		Data-local map tasks=10
		Total time spent by all maps in occupied slots (ms)=118215
		Total time spent by all reduces in occupied slots (ms)=11894
		Total time spent by all map tasks (ms)=118215
		Total time spent by all reduce tasks (ms)=11894
		Total vcore-seconds taken by all map tasks=118215
		Total vcore-seconds taken by all reduce tasks=11894
		Total megabyte-seconds taken by all map tasks=121052160
		Total megabyte-seconds taken by all reduce tasks=12179456
	Map-Reduce Framework
		Map input records=10
		Map output records=20
		Map output bytes=180
		Map output materialized bytes=340
		Input split bytes=1380
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=340
		Reduce input records=20
		Reduce output records=0
		Spilled Records=40
		Shuffled Maps =10
		Failed Shuffles=0
		Merged Map outputs=10
		GC time elapsed (ms)=1269
		CPU time spent (ms)=9530
		Physical memory (bytes) snapshot=3792773120
		Virtual memory (bytes) snapshot=16157274112
		Total committed heap usage (bytes)=2856624128
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=1180
	File Output Format Counters
		Bytes Written=97
Job Finished in 262.659 seconds
Estimated value of Pi is 3.14800000000000000000

Checking Hue

On your first login to Hue, you will be asked to set an initial username and password. Once that is done and you log in to the backend, Hue runs a configuration check; if everything is fine, it tells you so:

At this point, our cluster is ready to use.

 
