hive> select product_id, track_time from trackinfo limit 5;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.io.IOException: The number of tasks for this job 156028 exceeds the configured limit 5000
    at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
    at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Caused by: java.io.IOException: The number of tasks for this job 156028 exceeds the configured limit 5000
    at org.apache.hadoop.mapred.JobInProgress.checkTaskLimits(JobInProgress.java:509)
    at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:485)
    at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3941)
    ... 10 more

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at org.apache.hadoop.mapred.$Proxy11.submitJob(Unknown Source)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:921)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:447)
    at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:138)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1336)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1122)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:935)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Job Submission failed with exception 'org.apache.hadoop.ipc.RemoteException(java.io.IOException: java.io.IOException: The number of tasks for this job 156028 exceeds the configured limit 5000
    at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
    at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
Caused by: java.io.IOException: The number of tasks for this job 156028 exceeds the configured limit 5000
    at org.apache.hadoop.mapred.JobInProgress.checkTaskLimits(JobInProgress.java:509)
    at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:485)
    at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3941)
    ... 10 more
)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
Cause of the error:

The trackinfo table is extremely large, and the SQL statement

select product_id, track_time from trackinfo limit 5

has no filter, so it scans the whole table. Such a query is very expensive: Hive schedules far more map tasks than the job is allowed to have, so submission fails against the configured limit of 5000.
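The map count is driven by the amount of input data and the split size, while the 5000 cap itself is enforced on the JobTracker side (commonly via mapred.jobtracker.maxtasks.per.job in the JobTracker's mapred-site.xml), so it cannot simply be overridden from the Hive session. As a hedged sketch not taken from the original post (exact property names depend on your Hive/Hadoop versions), one way to lower the map count without changing the query is to ask Hive to combine input into larger splits:

set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
-- target roughly 1 GB of input per map task; the value is an illustrative assumption
set mapred.max.split.size=1073741824;
set mapred.min.split.size.per.node=1073741824;
set mapred.min.split.size.per.rack=1073741824;

Even with larger splits, a full scan of a table this size is wasteful, so the better fix is the one shown below: restrict the input with a WHERE condition.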
Let's check how much data the trackinfo table actually holds in HDFS:
-bash-3.2$ hadoop fs -dus /data/share/trackinfo
Warning: $HADOOP_HOME is deprecated.

hdfs://yhd-hadoop06.int.yihaodian.com:9000/data/share/trackinfo    19387740988708
As you can see, trackinfo holds 19,387,740,988,708 bytes, roughly 19 TB of data.
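As a rough sanity check (the ~128 MB split size below is an assumption; the post does not state the cluster's block or split size), dividing the directory size by a typical split size lands in the same ballpark as the 156028 tasks reported in the error:

-bash-3.2$ echo $((19387740988708 / (128 * 1024 * 1024)))
144449

So a full scan really does need on the order of 150,000 map tasks, roughly 30 times the 5000-task limit.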
After rewriting the SQL statement to specify a WHERE condition, performance improves dramatically:
select product_id, track_time from trackinfo where ds='2014-5-13' limit 5
and the job-submission error is resolved at the same time.
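This works because ds is presumably a partition column, so the WHERE clause lets Hive prune the input down to a single day's partition instead of all 19 TB. A minimal sketch of that assumption (the original post does not show the table's DDL, so the columns and location here are hypothetical):

-- hypothetical DDL sketch for the assumed layout (not from the original post)
create external table trackinfo (
  product_id string,
  track_time string
  -- ... other columns ...
)
partitioned by (ds string)
location '/data/share/trackinfo';

-- list the available date partitions before filtering on one
show partitions trackinfo;

-- with the partition filter, only the ds=2014-5-13 partition is read
select product_id, track_time from trackinfo where ds='2014-5-13' limit 5;

If ds were not a partition column, the WHERE clause would still be applied, but only after scanning the entire table, and the task-limit error would remain.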