Fixing "is running beyond virtual memory limits. Current usage: 35.5 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used"

Tags: jvm, hadoop, cluster, big-data platform development notes (hadoop|storm|spark)

1. I had just finished setting up a cluster at work and ran the wordcount example to check that it worked. It failed with the error below (the same Hadoop version runs fine on my own machine):

[root@S1PA124 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
14/08/20 09:51:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/20 09:51:35 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/08/20 09:51:36 INFO input.FileInputFormat: Total input paths to process : 1
14/08/20 09:51:36 INFO mapreduce.JobSubmitter: number of splits:1
14/08/20 09:51:36 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/08/20 09:51:36 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/08/20 09:51:36 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/08/20 09:51:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1408499127545_0001
14/08/20 09:51:37 INFO impl.YarnClientImpl: Submitted application application_1408499127545_0001 to ResourceManager at /0.0.0.0:8032
14/08/20 09:51:37 INFO mapreduce.Job: The url to track the job: http://S1PA124:8088/proxy/application_1408499127545_0001/
14/08/20 09:51:37 INFO mapreduce.Job: Running job: job_1408499127545_0001
14/08/20 09:51:44 INFO mapreduce.Job: Job job_1408499127545_0001 running in uber mode : false
14/08/20 09:51:44 INFO mapreduce.Job:  map 0% reduce 0%
14/08/20 09:51:49 INFO mapreduce.Job:  map 100% reduce 0%
14/08/20 09:51:54 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_0, Status : FAILED
Container [pid=26042,containerID=container_1408499127545_0001_01_000003] is running beyond virtual memory limits. Current usage: 35.5 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000003 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 26047 26042 26042 26042 (java) 36 3 17963216896 8801 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_0 3 
        |- 26042 25026 26042 26042 (bash) 0 0 65409024 276 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_0 3 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003/stderr  

Container killed on request. Exit code is 143

14/08/20 09:52:00 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_1, Status : FAILED
Container [pid=26111,containerID=container_1408499127545_0001_01_000004] is running beyond virtual memory limits. Current usage: 100.3 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000004 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 26116 26111 26111 26111 (java) 275 8 18016677888 25393 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_1 4 
        |- 26111 25026 26111 26111 (bash) 0 0 65409024 275 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_1 4 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004/stderr  

Container killed on request. Exit code is 143

14/08/20 09:52:06 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_2, Status : FAILED
Container [pid=26185,containerID=container_1408499127545_0001_01_000005] is running beyond virtual memory limits. Current usage: 100.4 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000005 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 26190 26185 26185 26185 (java) 271 7 18025807872 25414 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_2 5 
        |- 26185 25026 26185 26185 (bash) 0 0 65409024 276 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_2 5 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005/stderr  

Container killed on request. Exit code is 143

14/08/20 09:52:13 INFO mapreduce.Job:  map 100% reduce 100%
14/08/20 09:52:13 INFO mapreduce.Job: Job job_1408499127545_0001 failed with state FAILED due to: Task failed task_1408499127545_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

14/08/20 09:52:13 INFO mapreduce.Job: Counters: 32
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=80425
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=895
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=3
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Job Counters 
                Failed reduce tasks=4
                Launched map tasks=1
                Launched reduce tasks=4
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3082
                Total time spent by all reduces in occupied slots (ms)=11065
        Map-Reduce Framework
                Map input records=56
                Map output records=56
                Map output bytes=1023
                Map output materialized bytes=1141
                Input split bytes=96
                Combine input records=56
                Combine output records=56
                Spilled Records=56
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=25
                CPU time spent (ms)=680
                Physical memory (bytes) snapshot=253157376
                Virtual memory (bytes) snapshot=18103181312
                Total committed heap usage (bytes)=1011875840
        File Input Format Counters 
                Bytes Read=799
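Before looking at the configuration, it helps to see where the "2.1 GB" limit in the kill message comes from. By default, YARN allows each container a virtual-memory budget of `yarn.nodemanager.vmem-pmem-ratio` (default 2.1) times its physical-memory allocation, here the default 1 GB. A quick sanity check of that arithmetic, using the numbers from the process-tree dump above (the property name is standard YARN configuration; the script itself is just illustrative):

```python
# Reproduce the limits in the "Killing container" message above.
physical_mb = 1024                  # container physical memory (1 GB)
vmem_pmem_ratio = 2.1               # yarn.nodemanager.vmem-pmem-ratio (default)

vmem_limit_gb = physical_mb * vmem_pmem_ratio / 1024
print(f"virtual memory limit: {vmem_limit_gb:.1f} GB")         # 2.1 GB, as in the log

# Virtual memory actually used, from the process-tree dump (java + bash):
used_bytes = 17963216896 + 65409024
print(f"virtual memory used:  {used_bytes / 1024**3:.1f} GB")  # 16.8 GB, as in the log
```

So the container was killed not for exceeding its 1 GB of physical memory (it used only ~35 MB), but because the JVM reserved ~16.8 GB of virtual address space against a 2.1 GB virtual-memory cap.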
2. The mapred-site.xml configuration file was as follows:

<configuration>
         <property>
                <name>mapreduce.cluster.local.dir</name>
                <value>/root/install/hadoop/mapred/local</value>
        </property>
        <property>
                <name>mapreduce.cluster.system.dir</name>
                <value>/root/install/hadoop/mapred/system</value>
        </property>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>S1PA124:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>S1PA124:19888</value>
        </property>
<!--
        <property>
                 <name>mapred.child.java.opts</name>
                 <value>-Djava.awt.headless=true</value>
        </property>
        <property>
                 <name>yarn.app.mapreduce.am.command-opts</name>
                 <value>-Djava.awt.headless=true -Xmx1024m</value>
        </property>
        <property>
                 <name>yarn.app.mapreduce.am.admin-command-opts</name>
                 <value>-Djava.awt.headless=true</value>
         </property>

-->
</configuration>
3. Solution

I commented out the properties in mapred-site.xml related to JVM memory settings (the block shown above), restarted the cluster, and the job then ran successfully. I haven't had time to investigate the exact root cause yet; roughly, it has to do with how JVM memory is allocated on these machines.
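For reference, the fix more commonly suggested for this error (not the one used above) is to adjust the NodeManager's virtual-memory check in yarn-site.xml. Both property names below are standard YARN settings; the ratio value of 4 is only an example:

```xml
<!-- Option A: raise the allowed virtual-to-physical memory ratio (default 2.1) -->
<property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
</property>
<!-- Option B: disable the virtual-memory check entirely -->
<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>
```

Either change requires restarting the NodeManagers to take effect.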

Copyright notice: This is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.
Original link: https://blog.csdn.net/panguoyuan/article/details/38703111
