
Causes and fixes for sqoop export hanging at map 100% reduce 0%

 2021-01-07 16:56  Source: jb51.net (腳本之家)


This article covers the various causes of a sqoop export job hanging at map 100% reduce 0%, and how to fix each one. It should be a useful reference.

I call this kind of bug a classic "Hamlet" bug: the error output looks identical for everyone, yet the internet offers a dizzying assortment of fixes, and you have no idea which one addresses your actual problem.

First, the export command:

[root@host25 ~]#
sqoop export \
  --connect "jdbc:mysql://172.16.xxx.xxx:3306/dbname?useUnicode=true&characterEncoding=utf-8" \
  --username=root --password=xxxxx --table rule_tag --update-key rule_code \
  --update-mode allowinsert \
  --export-dir /user/hive/warehouse/lmj_test.db/rule_tag --input-fields-terminated-by '\t' \
  --input-null-string '\\N' --input-null-non-string '\\N' -m 1

Syntactically, this export command is perfectly fine.

And the error output:

# excerpt
19/06/11 09:39:57 INFO mapreduce.Job: The url to track the job: http://dthost25:8088/proxy/application_1554176896418_0537/
19/06/11 09:39:57 INFO mapreduce.Job: Running job: job_1554176896418_0537
19/06/11 09:40:05 INFO mapreduce.Job: Job job_1554176896418_0537 running in uber mode : false
19/06/11 09:40:05 INFO mapreduce.Job: map 0% reduce 0%
19/06/11 09:40:19 INFO mapreduce.Job: map 100% reduce 0%
19/06/11 09:45:34 INFO mapreduce.Job: Task Id : attempt_1554176896418_0537_m_000000_0, Status : FAILED
AttemptID:attempt_1554176896418_0537_m_000000_0 Timed out after 300 secs
19/06/11 09:45:36 INFO mapreduce.Job: map 0% reduce 0%
19/06/11 09:45:48 INFO mapreduce.Job: map 100% reduce 0%
19/06/11 09:51:04 INFO mapreduce.Job: Task Id : attempt_1554176896418_0537_m_000000_1, Status : FAILED
AttemptID:attempt_1554176896418_0537_m_000000_1 Timed out after 300 secs
19/06/11 09:51:05 INFO mapreduce.Job: map 0% reduce 0%
19/06/11 09:51:17 INFO mapreduce.Job: map 100% reduce 0%
19/06/11 09:56:34 INFO mapreduce.Job: Task Id : attempt_1554176896418_0537_m_000000_2, Status : FAILED
AttemptID:attempt_1554176896418_0537_m_000000_2 Timed out after 300 secs
19/06/11 09:56:35 INFO mapreduce.Job: map 0% reduce 0%
19/06/11 09:56:48 INFO mapreduce.Job: map 100% reduce 0%
19/06/11 10:02:05 INFO mapreduce.Job: Job job_1554176896418_0537 failed with state FAILED due to: Task failed task_1554176896418_0537_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
19/06/11 10:02:05 INFO mapreduce.Job: Counters: 9
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=2624852
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=1312426
Total vcore-seconds taken by all map tasks=1312426
Total megabyte-seconds taken by all map tasks=2687848448
19/06/11 10:02:05 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
19/06/11 10:02:05 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 1,333.3153 seconds (0 bytes/sec)
19/06/11 10:02:05 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
19/06/11 10:02:05 INFO mapreduce.ExportJobBase: Exported 0 records.
19/06/11 10:02:05 ERROR tool.ExportTool: Error during export: Export job failed!
Time taken: 1340 s
task IDE_TASK_ADE56470-B5A3-4303-EA75-44312FF8AA0C_20190611093945147 is complete.

As you can see, the job stalls at INFO mapreduce.Job: map 100% reduce 0%, sits there for 5 minutes, the task attempt is automatically retried, stalls for another 5 minutes, and the job finally fails with a timeout error.

Clearly, the immediate cause of the failure is the timeout, but the timeout happens because the export's MapReduce task hangs. So why does it hang? The error log never says, and that is the most frustrating part of tracking down the cause.
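The job-level output never explains why the map task hung, but the task attempt logs usually do. A hedged first step for diagnosis (the application ID comes from the log above; the commands assume a standard YARN setup with log aggregation enabled):

```shell
# Pull the aggregated logs for the failed application and look for the
# last thing the map task did before the 300-second timeout fired:
yarn logs -applicationId application_1554176896418_0537 | less

# The 300-second limit is mapreduce.task.timeout (milliseconds). Raising it,
# e.g. sqoop export -D mapreduce.task.timeout=1200000 ..., only buys
# diagnosis time; it does not fix the underlying hang.
```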

Cutting to the chase: after a long investigation, the cause turned out to be rows whose data exceeded the column length defined in MySQL. In other words, the task blocked while trying to insert a very, very long string into a varchar(50) column.

Here is a roundup of the causes reported online, so you can check them one by one.

Possible causes of hanging at map 100% reduce 0% (taking export to MySQL as the example):

1. Length overflow: the exported data exceeds the column length defined in the MySQL table.

Fix: widen the column.
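A hypothetical pre-flight check, run before sqoop export: scan the tab-separated export file for values longer than the varchar(50) limit, so the job is not left to hang on them. The file name, column index, and sample data below are illustrative.

```shell
# Write a small sample file: row 2's second field is 100 characters long.
printf 'ok\tshort value\nbad\t%0100d\n' 0 > /tmp/rule_tag_len.txt

# Report any row whose second column exceeds the 50-character column limit:
awk -F'\t' 'length($2) > 50 {print "row " NR ": " length($2) " chars"}' /tmp/rule_tag_len.txt
# prints: row 2: 100 chars

# The actual fix is widening the column in MySQL, e.g. (column name hypothetical):
#   ALTER TABLE rule_tag MODIFY COLUMN tag_name VARCHAR(255);
```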

2. Encoding mismatch: the exported data contains characters outside the MySQL table's character set.

Fix: in MySQL, the character set that actually covers all of UTF-8 is utf8mb4, not utf8. So if your data contains emoji or rare CJK characters, the insert fails and the job blocks. Check two things:

(1) the connection string includes useUnicode=true&characterEncoding=utf-8, so the data is sent as UTF-8;

(2) the MySQL CREATE TABLE statement specifies ENGINE=InnoDB DEFAULT CHARSET=utf8mb4.
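Why utf8mb4 matters: MySQL's legacy "utf8" stores at most 3 bytes per character, while emoji are 4-byte UTF-8 sequences, so emoji inserts fail under utf8. A quick local byte-count check:

```shell
# Count the UTF-8 bytes in a single emoji character:
printf '😀' | wc -c
# prints 4 — a 4-byte character, which only utf8mb4 can store.

# Converting an existing table (collation choice is illustrative):
#   ALTER TABLE rule_tag CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```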

3. Out of memory: the data volume may be too large, or the task was allocated too little memory.

Fix: either export in batches, or allocate more memory to the task.
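A hedged sketch of the "more memory" option, passing standard MapReduce properties through sqoop's generic `-D` arguments (which must come before the tool-specific options; the values are illustrative, and the connection details mirror the command above):

```shell
# Give each map task 4 GB of container memory and a matching JVM heap:
sqoop export \
  -D mapreduce.map.memory.mb=4096 \
  -D mapreduce.map.java.opts=-Xmx3276m \
  --connect "jdbc:mysql://172.16.xxx.xxx:3306/dbname?useUnicode=true&characterEncoding=utf-8" \
  --username=root --password=xxxxx --table rule_tag \
  --export-dir /user/hive/warehouse/lmj_test.db/rule_tag \
  --input-fields-terminated-by '\t' -m 1
```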

4. Hostname misconfiguration.

Fix: this reportedly comes down to hostname resolution configuration.
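A hedged sanity check: every hostname the job touches (e.g. dthost25 from the tracking URL, and the MySQL host) should resolve consistently from each worker node. "localhost" below is only a stand-in showing the command shape:

```shell
# Verify a hostname resolves via the system's configured sources
# (/etc/hosts, DNS, ...); empty output means it does not resolve:
getent hosts localhost
```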

5. Duplicate primary keys.

Fix: the exported data contains duplicate primary-key values; clean up the data accordingly.
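A hedged way to spot duplicate key values in the staged data before exporting; the key is assumed to be column 1 of the tab-separated file (rule_code in the command above), and the sample file is illustrative:

```shell
# Sample data: key 1001 appears twice.
printf '1001\ta\n1002\tb\n1001\tc\n' > /tmp/rule_tag_keys.txt

# Print every key seen more than once:
awk -F'\t' 'seen[$1]++ {print "duplicate key: " $1}' /tmp/rule_tag_keys.txt
# prints: duplicate key: 1001
```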

Addendum: fixing MapReduce hanging while sqoop pulls data from a database into HDFS

While pulling data out of the database with sqoop, the MapReduce job hung.

After some searching, the answer seems to be setting YARN's memory and virtual-memory configuration. I had never set these before and jobs still ran fine, but this job was apparently larger. The likely cause is that each Docker container was allocated too little memory and CPU to meet the default resource requirements of Hadoop and Hive.

The fix is as follows.

Add the following to yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>20480</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>

Restart YARN and the job runs fine.
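A hedged sketch of the restart on a stock Hadoop install; script locations vary by distribution, and managed clusters (CDH, Ambari, etc.) restart YARN through their own tooling instead:

```shell
# Stop and restart the ResourceManager and NodeManagers so the new
# yarn-site.xml settings take effect:
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh
```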

The above is my personal experience; I hope it serves as a useful reference. If anything is wrong or incomplete, corrections are welcome.


Link: https://www.jb51.net/article/203322.htm
