Calling a MapReduce job from a simple Java program


I have been trying to call a MapReduce job from a simple Java program in the same package. I tried to reference the MapReduce jar file in my Java program and invoke it with the runJar(String args[]) method, passing the input and output paths of the MapReduce job. But the program does not work.


How do I run such a program, where I just pass the input, output and jar paths to its main method? Is it possible to run a MapReduce job (jar) that way? I want to do this because I want to run several MapReduce jobs one after another, where my Java program will call each such job by referencing its jar file. If this is possible, I might as well use a simple servlet to do the calling and reference its output files for graphing purposes.


/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */

/**
 *
 * @author root
 */
import org.apache.hadoop.util.RunJar;
import java.util.*;

public class callOther {

    public static void main(String args[])throws Throwable
    {

        // collect the arguments exactly as they would appear on the command
        // line: the jar path first, then the job's input and output paths
        ArrayList<String> arg = new ArrayList<String>();

        String output = "/root/Desktop/output";

        arg.add("/root/NetBeansProjects/wordTool/dist/wordTool.jar");

        arg.add("/root/Desktop/input");
        arg.add(output);

        RunJar.main(arg.toArray(new String[0]));

    }
}

Answer 1

Oh please, don't do it with runJar; the Java API is very good.

Learn how to start a job from normal code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// create a configuration
Configuration conf = new Configuration();
// create a new job based on the configuration
Job job = new Job(conf);
// here you have to put your mapper class
job.setMapperClass(Mapper.class);
// here you have to put your reducer class
job.setReducerClass(Reducer.class);
// here you have to set the jar which is containing your 
// map/reduce class, so you can use the mapper class
job.setJarByClass(Mapper.class);
// key/value of your reducer output
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
// this is setting the format of your input, can be TextInputFormat
job.setInputFormatClass(SequenceFileInputFormat.class);
// same with output
job.setOutputFormatClass(TextOutputFormat.class);
// here you can set the path of your input
SequenceFileInputFormat.addInputPath(job, new Path("files/toMap/"));
// this deletes possible output paths to prevent job failures
FileSystem fs = FileSystem.get(conf);
Path out = new Path("files/out/processed/");
fs.delete(out, true);
// finally set the empty out path
TextOutputFormat.setOutputPath(job, out);

// this waits until the job completes and prints debug out to STDOUT or whatever
// has been configured in your log4j properties.
job.waitForCompletion(true);
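
Since the question is about running several jobs one after another, note that waitForCompletion(true) blocks until the job finishes and returns whether it succeeded, so jobs can simply be chained in sequence. A minimal sketch, assuming a hypothetical buildJob() helper that performs the setup shown above for the given input and output paths:

// Sketch: run two jobs back to back, feeding the first job's output
// into the second. buildJob(...) is a hypothetical helper wrapping the
// mapper/reducer/format setup shown above.
Path intermediate = new Path("files/out/step1/");
Path finalOut = new Path("files/out/step2/");

Job first = buildJob(conf, new Path("files/toMap/"), intermediate);
if (!first.waitForCompletion(true)) {
    System.exit(1); // stop the chain if the first job failed
}

Job second = buildJob(conf, intermediate, finalOut);
System.exit(second.waitForCompletion(true) ? 0 : 1);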

If you are using an external cluster, put the following information into your configuration:

// this should be like defined in your mapred-site.xml
conf.set("mapred.job.tracker", "jobtracker.com:50001"); 
// like defined in hdfs-site.xml
conf.set("fs.default.name", "hdfs://namenode.com:9000");

This should be no problem as long as hadoop-core.jar is on your application container's classpath. But I think you should put some kind of progress indicator on your web page, because it may take minutes to hours to complete a Hadoop job ;)
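
For such a progress indicator, a non-blocking alternative to waitForCompletion(true) is to submit the job and poll it. A minimal sketch using the Job API's progress methods (the polling loop and output are illustrative only):

// submit() returns immediately instead of blocking like
// waitForCompletion(true), so the job can be polled, e.g. from a
// status endpoint of your web application.
job.submit();
while (!job.isComplete()) {
    // both progress values are fractions between 0.0f and 1.0f
    System.out.printf("map: %.0f%%, reduce: %.0f%%%n",
            job.mapProgress() * 100, job.reduceProgress() * 100);
    Thread.sleep(5000);
}
System.out.println(job.isSuccessful() ? "job done" : "job failed");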

For YARN (> Hadoop 2)

For YARN, the following configurations need to be set:

// this should be like defined in your yarn-site.xml
conf.set("yarn.resourcemanager.address", "yarn-manager.com:50001"); 

// framework is now "yarn", should be defined like this in mapred-site.xml
conf.set("mapreduce.framework.name", "yarn");

// like defined in hdfs-site.xml (the key is named fs.defaultFS in newer releases)
conf.set("fs.default.name", "hdfs://namenode.com:9000");

Answer 2

Calling a MapReduce job from a Java web application (Servlet)

You can call a MapReduce job from a web application using the Java API. Here is a small example of calling a MapReduce job from a servlet. The steps are given below:

Step 1: First create a MapReduce driver servlet class, and also develop your map and reduce classes (a hypothetical mapper/reducer pair is sketched after the servlet below). Here is a sample code snippet:

CallJobFromServlet.java

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class CallJobFromServlet extends HttpServlet {

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        Configuration conf = new Configuration();
        Job job = new Job(conf, "CallJobFromServlet");
        job.setJarByClass(CallJobFromServlet.class);
        job.setJobName("Job Name");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(Map.class);      // replace Map.class with your Mapper class
        job.setNumReduceTasks(30);
        job.setReducerClass(Reducer.class); // replace Reducer.class with your Reducer class
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // Job input path
        FileInputFormat.addInputPath(job,
                new Path("hdfs://localhost:54310/user/hduser/input/"));
        // Job output path
        FileOutputFormat.setOutputPath(job,
                new Path("hdfs://localhost:54310/user/hduser/output"));

        try {
            job.waitForCompletion(true);
        } catch (InterruptedException | ClassNotFoundException e) {
            throw new ServletException(e);
        }
    }
}
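
The servlet above only wires in placeholder Map and Reducer classes; what they contain depends on the application. As a purely hypothetical sketch matching the Text/Text key-value types configured in the servlet (class names and logic are illustrative, and each class goes in its own .java file):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical mapper: splits each input line into a first token (the key)
// and the rest of the line (the value).
public class MyMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] parts = line.toString().split("\\s+", 2);
        if (parts.length == 2) {
            context.write(new Text(parts[0]), new Text(parts[1]));
        }
    }
}

// Hypothetical reducer: concatenates all values seen for a key.
public class MyReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        StringBuilder sb = new StringBuilder();
        for (Text value : values) {
            sb.append(value.toString()).append(' ');
        }
        context.write(key, new Text(sb.toString().trim()));
    }
}

With these in place, the servlet would call job.setMapperClass(MyMapper.class) and job.setReducerClass(MyReducer.class).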

Step 2: Place all the relevant jar files (hadoop and application-specific jars) in the lib folder of the web server (e.g. Tomcat). This is mandatory for accessing the Hadoop configuration (the hadoop 'conf' folder contains the configuration xml files, i.e. core-site.xml, hdfs-site.xml etc.). Just copy the jars from the hadoop lib folder to the web server (tomcat) lib directory. The list of jar names is as follows:

1.  commons-beanutils-1.7.0.jar
2.  commons-beanutils-core-1.8.0.jar
3.  commons-cli-1.2.jar
4.  commons-collections-3.2.1.jar
5.  commons-configuration-1.6.jar
6.  commons-httpclient-3.0.1.jar
7.  commons-io-2.1.jar
8.  commons-lang-2.4.jar
9.  commons-logging-1.1.1.jar
10. hadoop-client-1.0.4.jar
11. hadoop-core-1.0.4.jar
12. jackson-core-asl-1.8.8.jar
13. jackson-mapper-asl-1.8.8.jar
14. jersey-core-1.8.jar

Step 3: Deploy the web application to the web server (in Tomcat's "webapps" folder).

Step 4: Create a jsp file and link the servlet class (CallJobFromServlet.java) in the form's action attribute. Here is a sample code snippet:

index.jsp

<form id="trigger_hadoop" name="trigger_hadoop" method="post" action="./CallJobFromServlet">
      <span class="back">Trigger Hadoop Job from Web Page </span> 
      <input type="submit" name="submit" value="Trigger Job" />      
</form>
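
For the form's action URL to reach the servlet, the servlet also has to be mapped to that URL, either in web.xml or, on a Servlet 3.0+ container, with an annotation. A sketch of the annotation variant, assuming a Servlet 3.0+ Tomcat:

import javax.servlet.annotation.WebServlet;

// maps the servlet to ./CallJobFromServlet, as used by the form above
@WebServlet("/CallJobFromServlet")
public class CallJobFromServlet extends HttpServlet {
    // ... doPost as shown in Step 1
}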
