I have Hadoop set up in fully distributed mode with one master and 3 slaves. I am trying to execute a jar file named Tasks.jar, which takes arg[0] as the input directory and arg[1] as the output directory.
In my Hadoop environment, the input files are in the /input directory and there is no /output directory; I verified this with the hadoop fs -ls / command.
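For reference, the same check can also be done programmatically. This is only a minimal sketch using the standard org.apache.hadoop.fs.FileSystem API (HdfsCheck is just an illustrative class name, not part of my job):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        // Programmatic equivalent of `hadoop fs -ls /`
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }

        // Confirm the state the job expects: /input present, /output absent
        System.out.println("/input exists:  " + fs.exists(new Path("/input")));
        System.out.println("/output exists: " + fs.exists(new Path("/output")));
    }
}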
Now, when I try to execute my jar file using the command below:
hadoop jar Tasks.jar ProgrammingAssignment/Tasks /input /output
I get the following exception:
ubuntu@ip-172-31-5-213:~$ hadoop jar Tasks.jar ProgrammingAssignment/Tasks /input /output
16/10/14 02:26:23 INFO client.RMProxy: Connecting to ResourceManager at ec2-52-55-2-64.compute-1.amazonaws.com/172.31.5.213:8032
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://ec2-52-55-2-64.compute-1.amazonaws.com:9000/input already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at ProgrammingAssignment.Tasks.main(Tasks.java:96)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Source Code:
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job wordCount = new Job(conf, "Word Count");
    wordCount.setJarByClass(Tasks.class);
    FileInputFormat.addInputPath(wordCount, new Path(args[0]));   // input1
    FileOutputFormat.setOutputPath(wordCount, new Path(args[1])); // output1 & input2
    //FileInputFormat.addInputPath(wordCount, new Path("/input"));
    //FileOutputFormat.setOutputPath(wordCount, new Path("/output"));
    wordCount.setMapperClass(totalOccurenceMapper.class);
    wordCount.setReducerClass(totalOccurenceReducer.class);
    wordCount.setMapOutputKeyClass(Text.class);
    wordCount.setMapOutputValueClass(Text.class);
    wordCount.setOutputKeyClass(Text.class);
    wordCount.setOutputValueClass(Text.class);
    // wordCount.waitForCompletion(true);
    System.exit(wordCount.waitForCompletion(true) ? 0 : 1);
}
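For what it's worth, the usual guard against FileAlreadyExistsException is to delete the output path before submitting the job. The following is only a sketch (assuming org.apache.hadoop.fs.FileSystem; it is not in my current Tasks.java), and in my case the error names /input rather than /output:

// Sketch only: remove a pre-existing output directory before submission
// so that checkOutputSpecs() does not fail.
FileSystem fs = FileSystem.get(conf);
Path outputPath = new Path(args[1]);
if (fs.exists(outputPath)) {
    fs.delete(outputPath, true); // true = recursive delete
}
FileOutputFormat.setOutputPath(wordCount, outputPath);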
If I instead hardcode the paths (the commented-out lines in the code above), I get the following output:
ubuntu@ip-172-31-5-213:~$ hadoop jar Tasks.jar ProgrammingAssignment/Tasks
16/10/14 15:51:19 INFO client.RMProxy: Connecting to ResourceManager at ec2-52-55-2-64.compute-1.amazonaws.com/172.31.5.213:8032
16/10/14 15:51:20 INFO ipc.Client: Retrying connect to server: ec2-52-55-2-64.compute-1.amazonaws.com/172.31.5.213:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/10/14 15:51:21 INFO ipc.Client: Retrying connect to server: ec2-52-55-2-64.compute-1.amazonaws.com/172.31.5.213:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/10/14 15:51:22 INFO ipc.Client: Retrying connect to server: ec2-52-55-2-64.compute-1.amazonaws.com/172.31.5.213:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/10/14 15:51:23 INFO ipc.Client: Retrying connect to server: ec2-52-55-2-64.compute-1.amazonaws.com/172.31.5.213:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)