
Thursday, December 11, 2014

Hadoop sequence file looks odd?

Sometimes a file is delivered to HDFS for storage through Flume (using the spooling directory source), with contents like:
a.txt
a file is here
b.txt
b file is here

When you view the resulting Hadoop sequence file on HDFS with
$ hdfs dfs -cat <SEQUENCEFILE>
it displays:

SEQ!org.apache.hadoop.io.LongWritableorg.apache.hadoop.io.TextK▒*▒▒     ▒▒▒▒͇J<▒▒a file is herJ<▒▒b file is here


This is because the data is stored as a sequence file: the header begins with "SEQ", and "!org.apache.hadoop.io.LongWritableorg.apache.hadoop.io.Text" records the key/value class types contained in this sequence data.

To see the text content, use
$ hdfs dfs -text <SEQUENCEFILE>

1418355275278   a file is here
1418355275280   b file is here


If you download it (it is still a sequence file, so you need tools such as strings, od, etc. to inspect it):
$ hdfs dfs -get <SEQUENCEFILE>
$ strings <SEQUENCEFILE>

!org.apache.hadoop.io.LongWritable
org.apache.hadoop.io.Text
a file is here
b file is here
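
For reference, a minimal sketch that reads the records programmatically with the Hadoop 2.x SequenceFile.Reader API (assuming the LongWritable/Text key/value types shown in the header above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class ReadSeq {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Key/value types match the file header: LongWritable (event timestamp) and Text (event body)
        SequenceFile.Reader reader =
                new SequenceFile.Reader(conf, SequenceFile.Reader.file(new Path(args[0])));
        LongWritable key = new LongWritable();
        Text value = new Text();
        while (reader.next(key, value)) {
            System.out.println(key.get() + "\t" + value);
        }
        reader.close();
    }
}

Run it with the HDFS path of the sequence file as the only argument; the output should match what hdfs dfs -text prints.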

[Reference]
http://stackoverflow.com/questions/23827051/sequence-and-vectors-from-csv-file

Wednesday, October 1, 2014

Eclipse with Hadoop plugin

[Reference]
https://github.com/winghc/hadoop2x-eclipse-plugin  (package download)
http://blog.jourk.com/compile-hadoop-eclipse-plugin-2-4-1-plugin.html  (method)
http://blog.csdn.net/yueritian/article/details/23868175  (Debug)


Software:
eclipse: eclipse-java-luna-R-linux-gtk-x86_64.tar.gz
Hadoop: 2.4.1
# Hadoop 2.4.1 does not ship with an Eclipse plugin, so we need to build one ourselves.
Building the plugin:
Download: https://github.com/winghc/hadoop2x-eclipse-plugin
1. Modify:
(As of 2014/09/15 this tool targets Hadoop 2.2.0, so it needs to be modified for 2.4.1.)
[Change 1]
  Find ~/hadoop2x-eclipse-plugin-master/src/contrib/eclipse-plugin/build.xml
  Add the commons-collections-3.2.1.jar setting among the existing <copy> entries:
...
<!--     Add this Line         -->
<copy file="${hadoop.home}/share/hadoop/common/lib/commons-collections-${commons-collections.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<!--                           -->
...

    In the same build.xml, change
        lib/commons-lang-2.5.jar
    to
        lib/commons-lang-2.6.jar

    and add
        lib/commons-collections-${commons-collections.version}.jar,
 
[Change 2]
Find ~/hadoop2x-eclipse-plugin-master/ivy/libraries.properties
Add
commons-collections.version=3.2.1

2. Build the jar file
$cd src/contrib/eclipse-plugin
$ant jar -Dversion=2.4.1 -Declipse.home=/home/hdp2/eclipse -Dhadoop.home=/home/hdp2/hadoop-2.4.1

The jar file will then appear in ~/hadoop2x-eclipse-plugin-master/build/contrib/eclipse-plugin

# -Dversion : the installed Hadoop version
  -Declipse.home : the dir of ECLIPSE_HOME
  -Dhadoop.home : the dir of HADOOP_HOME

3. Move the plugin to eclipse/plugins
$cp build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.4.1.jar /home/hdp2/eclipse/plugins

4. Start eclipse with debug parameter:
$ eclipse/eclipse -clean -consolelog -debug

[Note] These flags are important: if nothing happens when you click a Map/Reduce location after Eclipse starts, you can check the error info from the terminal!

5. After Eclipse starts
- Windows > Open Perspective > Other > Map/Reduce
- Windows > Show View > Other > MapReduce Tools > Map/Reduce Locations
- Click the blue elephant icon at the bottom right > configure the Hadoop server location
[Note] If no settings dialog pops up when you click the elephant, check the error messages in the terminal

- Setting:
Location name : master
MapReduce Master >>  Host: 192.168.0.7  Port: 8032
DFS Master     >>  Host:(use M/R Master host) Port:9000
User name : hduser (hadoop user name)

Eclipse Submit Job from Hadoop Client to remote Hadoop Master

[Software]
    Hadoop2.4.1
    Eclipse IDE for Java Developers Luna Release (4.4.0)
[Problem]
If you want to submit a job from the local side to a remote Hadoop server by running the Java application directly in Eclipse, without sending a jar file to the Hadoop server (e.g. Eclipse on 192.168.0.51 and the Hadoop master on 192.168.0.7):

1. Point the configuration at the jar file that will be created in a later step
Add the following line where you set up your Configuration:

conf.set("mapred.jar", "/home/hdp2/workspace/HBaseGet/HbaseGet_v3.jar");

[Reference]
http://stackoverflow.com/questions/21793565/class-not-found-exception-in-eclipse-wordcount-program

[NOTICE 1]
"/home/hdp2/workspace/HBaseGet/HbaseGet_v3.jar" is the jar file's location on the local side.

[NOTICE 2]
If you skip this step, Eclipse may throw an error like:
[Error]
... Class org.apache....  Map not found ...
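
For context, a rough driver sketch showing where that mapred.jar line fits (Mp and Rd are the Mapper/Reducer classes from the WordCount-on-Eclipse post below; the class name RemoteSubmit and the job name are just placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the framework which local jar holds the job classes,
        // so the remote cluster can load them (see NOTICE 1 above).
        conf.set("mapred.jar", "/home/hdp2/workspace/HBaseGet/HbaseGet_v3.jar");

        Job job = new Job(conf, "remote-submit-example");
        job.setJarByClass(RemoteSubmit.class);
        job.setMapperClass(Mp.class);
        job.setReducerClass(Rd.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With the yarn-site.xml entries from step 2 below on the classpath, running this as a Java application in Eclipse submits the job to the remote ResourceManager instead of running it locally.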


2. Set the YARN master location
 Add the following to yarn-site.xml:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>

3. Set Configuration for Client Side
 Select "Run Configurations" > "Classpath" > "Advanced" > "Add External Folder" > select the Hadoop and HBase conf folders (e.g. $HADOOP_HOME/etc/hadoop and $HBASE_HOME/conf)

[IMPORTANT!]
The settings in both the Hadoop and HBase conf folders MUST be consistent with the conf on the Hadoop and HBase master!!
# The simplest method is to copy the conf folders from the master to the local machine.

[Notice]
Because Eclipse is running locally without the correct server configuration,
skipping this step leads to errors like the following:
[Error 1]
Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
[Error 2]
"eclipse"  ... Retrying connect to server: 0.0.0.0/0.0.0.0:8032 ....

4. Export Jar File
"File" > "Export..." > "Java" > "JAR file" > select resource to export > "Finish"

5. Run Application
"Run Configuration" > Set "Arguments" > "Run"


(6. Staging problem (multiple users in Hadoop)
If you are a Hadoop client user rather than the master user (e.g. client_username="hdp2", Hadoop_username="hduser"), executing the app may produce an error like:

2012-10-09 10:06:31,233 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=READ_EXECUTE, inode=”system”:mapred:supergroup:rwx——

[NEED TO DO]
http://amalgjose.wordpress.com/2013/02/09/setting-up-multiple-users-in-hadoop-clusters/
1. Create a new user(hdp2) at Hadoop master server
$sudo adduser hdp2

2. Create a user for client on HDFS
$hdfs dfs -mkdir /user/hdp2

3. Log in to the Hadoop master server as the new username (hdp2) and execute any example jar (such as WordCount).
This creates the new user's staging folder under /tmp on HDFS.
(Not sure why this is necessary......)
4. Change ownership and permissions
$hdfs dfs -chown -R hdp2:hadoop /tmp/hadoop-yarn/staging/hdp2/
$hdfs dfs -chmod 777 /tmp/

Tuesday, September 30, 2014

Hadoop-2.4.1 Example(WordCount) on Eclipse

[Software]
    Hadoop2.4.1
    Eclipse IDE for Java Developers Luna Release (4.4.0)


1. Open a Map/Reduce Project

2. Add lib jar:
- Right click the project > Build Path > Configure Build Path > Java Build Path > Libraries
 > Add External JARS  (including jars in following dir):
  - share/hadoop/common
  - share/hadoop/common/lib
  - share/hadoop/mapreduce
  - share/hadoop/mapreduce/lib
  - share/hadoop/yarn
  - share/hadoop/yarn/lib
  --------additional-----------
  - HDFS lib
  - HBase lib

3. On this project, add new:
- Mapper: Mp.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Mapper.Context;

public class Mp extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable ikey, Text ivalue, Context context)
            throws IOException, InterruptedException {
        String line = ivalue.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }

}


- Reducer: Rd.java

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.Reducer.Context;

public class Rd extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text _key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // process values
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(_key, new IntWritable(sum));
    }

}

- MapReduce Driver: WC.java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WC {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "wordcount");
        //Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WC.class);

        // TODO: specify a mapper
        job.setMapperClass(Mp.class);
        // TODO: specify a reducer
        job.setReducerClass(Rd.class);

        // TODO: specify output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // TODO: specify input and output DIRECTORIES (not files)
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        if (!job.waitForCompletion(true))
            return;
    }
}

4. Create jar file
- File > Export > JAR file
- Select resources and the JAR file location

5. Run application
- Select "Run Configurations" > check "Java Application", "Name", "Project", "Main class"
- Enter "Arguments" > add "file1 Output" in Program arguments
 [Note] Because main() does not specify the input/output paths, they must be set for the app here;
  this is equivalent to running $ hadoop jar project.jar file1 Output1 from the terminal.
  Without full paths, the default input and output locations are the local $ECLIPSE_WORKSPACE/PROJECT_FOLDER

- Click "Run"

[Question] How do you run the application on an existing Hadoop cluster instead of locally?

===> 2014/09/18: In my tests so far, exporting the jar file and running it on the master is the approach that works.

[Solution]

Spark vs Yarn (Simple Grep example)

Environment:
-HARDWARE:
Commands:
-cat /proc/cpuinfo # show CPU info
-cat /proc/meminfo # show memory info
-sudo smartctl -i /dev/sda # show hard disk model and specs; apt-get install smartmontools


HOSTNAME       IPADDRESS     CPU  CORE  MEM    DISK   OS
----------------------------------------------------------------------
master         192.168.0.7   4    8     3.5GB  500GB  Ubuntu 14.04.1 LTS
regionserver2  192.168.0.23  2    4     3.5GB  500GB  Ubuntu 14.04.1 LTS

-SOFTWARE:
-Hadoop 2.4.1
-Spark 1.0.2
-Scala 2.10.4
-java version: 1.7.0_65

Test Info:
-INPUT
Total Size: 2.8GB
INFO: Linux Redhat / Fedora, Snort NIDS, iptables firewall log file(2006-allog.1 ~ 2006-allog.9)
Date Collected: Sep - Dec 2006
DOWNLOAD: http://log-sharing.dreamhosters.com/   (Bundle 5)

* put data into HDFS
$hdfs dfs -put DATA_DIR/DATA_FOLDER /user/hduser/LogFile
$hdfs dfs -du /user/hduser  # Get the size of "LogFile" folder

-Example : GREP (Count "Dec" in log file)
Using Spark:
$spark-shell --master yarn-client
scala> val textFile = sc.textFile("/user/hduser/LogFile/2006-allog.1")
scala> textFile.filter(line => line.contains("Dec")).count()
scala> exit

Using Hadoop:
$hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep LogFile/2006-allog.1 OutPut "Dec"

# This example will execute two jobs, Grep and Sort. We only check the running time of the Grep job.

Result

Data Size             Spark      Hadoop
----------------------------------------
119MB (2006-allog.1)   3.8 sec    21 sec
686MB (2006-allog.9)   5.8 sec    60 sec
2.8GB (LogFile)       25.3 sec   194 sec

Hadoop Simple WordCount Example

[Software]
    Hadoop2.4.1

[Reference]
http://azure.microsoft.com/zh-tw/documentation/articles/hdinsight-develop-deploy-java-mapreduce/

1. Open Notepad.
2. Copy the following program, paste it into Notepad, and save it as WordCount.java.

package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper 
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer 
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, 
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

3.Make Dir:
$mkdir Word_Count
4.Compile java:
$javac -classpath $HADOOP_INSTALL/share/hadoop/common/hadoop-common-2.4.1.jar:$HADOOP_INSTALL/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.1.jar:$HADOOP_INSTALL/share/hadoop/common/lib/commons-cli-1.2.jar:$HADOOP_INSTALL/share/hadoop/common/lib/hadoop-annotations-2.4.1.jar -d Word_Count WordCount.java

5.Create jar file:
$jar -cvf WordCount.jar -C Word_Count/ . 

Then the WordCount.jar will be created at the current DIR.

6.Put input file to HDFS:
$hdfs dfs -put WordCount.java /WordCount/Input/file1

7.Execute the jar
$hadoop jar WordCount.jar org.apache.hadoop.examples.WordCount /WordCount/Input /WordCount/Output

8.Check Result:
$hdfs dfs -cat /WordCount/Output/part-r-00000

Monday, November 26, 2012

TaskTracker send heartbeat cases

You can see there are two cases for when the TaskTracker sends a heartbeat:


TaskTracker.java
...
private long getHeartbeatInterval(int numFinishedTasks) {
    return (heartbeatInterval / (numFinishedTasks * oobHeartbeatDamper + 1));
  }

...

heartbeatInterval defaults to HEARTBEAT_INTERVAL_MIN, which is 3*1000 ms = 3 sec
numFinishedTasks is the number of tasks this TaskTracker has finished so far
oobHeartbeatDamper defaults to 1,000,000

So the value returned by getHeartbeatInterval(int numFinishedTasks) is:
1. When the TaskTracker has not yet finished any task >>> 3000/(0*1,000,000+1) = 3000 ms = 3 sec
    i.e. a heartbeat is sent only every 3 seconds

2. When the TaskTracker has finished one or more tasks >>> 3000/(1*1,000,000+1) = 0.00299 ≈ 0 sec
    i.e. a heartbeat is sent immediately to tell the JobTracker that a task has completed
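
A toy re-statement of that arithmetic (constants hard-coded from the values quoted above; not the real TaskTracker code):

public class HeartbeatDemo {
    static final long HEARTBEAT_INTERVAL = 3 * 1000;   // HEARTBEAT_INTERVAL_MIN, in ms
    static final long OOB_HEARTBEAT_DAMPER = 1000000;  // oobHeartbeatDamper default

    static long getHeartbeatInterval(int numFinishedTasks) {
        return HEARTBEAT_INTERVAL / (numFinishedTasks * OOB_HEARTBEAT_DAMPER + 1);
    }

    public static void main(String[] args) {
        System.out.println(getHeartbeatInterval(0));  // 3000 ms -> next heartbeat in 3 sec
        System.out.println(getHeartbeatInterval(1));  // 0 ms   -> out-of-band heartbeat sent right away
    }
}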

Sunday, September 30, 2012

When are reduce tasks assigned?

In Hadoop, a reduce task is not assigned only after every map task has finished.
By default:

JobInProgress.java
...
public synchronized Task obtainNewReduceTask(TaskTrackerStatus tts, int clusterSize,
int numUniqueHosts) throws IOException {
...
if (!scheduleReduces()) {
      return null;
    }
...
}
...
public synchronized boolean scheduleReduces() {
    return finishedMapTasks >= completedMapsForReduceSlowstart;
  }

completedMapsForReduceSlowstart
= (DEFAULT_COMPLETED_MAPS_PERCENT_FOR_REDUCE_SLOWSTART, 0.05 by default) * numMapTasks
= 0.05 * the number of map tasks of this job

Purpose: ensure we have sufficient map outputs ready to shuffle before scheduling reduces

...............................................................................................................
For example, suppose a job has 40 map tasks and 1 reduce task.
Then its completedMapsForReduceSlowstart = 0.05*40 = 2,

which means at least 2 map tasks of this job must be completed before the reduce task can be assigned.
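
A tiny standalone illustration of the same condition (values taken from the example above; not the real JobInProgress code):

public class SlowstartDemo {
    public static void main(String[] args) {
        int numMapTasks = 40;
        double slowstart = 0.05;  // DEFAULT_COMPLETED_MAPS_PERCENT_FOR_REDUCE_SLOWSTART
        int completedMapsForReduceSlowstart = (int) (slowstart * numMapTasks);  // = 2

        for (int finishedMapTasks = 0; finishedMapTasks <= 3; finishedMapTasks++) {
            boolean scheduleReduces = finishedMapTasks >= completedMapsForReduceSlowstart;
            System.out.println(finishedMapTasks + " finished maps -> schedule reduces? " + scheduleReduces);
        }
    }
}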

Monday, June 18, 2012

task complete percent

While running a hadoop job, you can watch the job's running status from the admin web page,
and you can also see how each of the job's tasks is doing.
What made me curious is where each task's "Complete" percentage comes from.

A quick look at the jobtasks.jsp source code shows that it uses getProgress() from the TaskReport class in the Hadoop API.

getProgress() expresses a task's completion as a value from 0 to 1,
so in the following part of jobtasks.jsp

--------
...
for (int i = start_index ; i < end_index; i++) {
          TaskReport report = reports[i];
          out.print("<tr><td><a href=\"taskdetails.jsp?tipid=" +
            report.getTaskID() + "\">"  + report.getTaskID() + "</a></td>");
         out.print("<td>" + StringUtils.formatPercent(report.getProgress(),2) +
                   ServletUtil.percentageGraph(report.getProgress() * 100f, 80) + "</td>");
...
----------

the progress value is multiplied by 100 to make the completion percentage easy to read.
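
A tiny check of that conversion (my own demo, assuming the Hadoop jars are on the classpath):

import org.apache.hadoop.util.StringUtils;

public class ProgressDemo {
    public static void main(String[] args) {
        double progress = 0.4237;  // what TaskReport.getProgress() returns, in the 0~1 range
        // Same call as in jobtasks.jsp above; prints the progress as a percentage, e.g. 42.37%
        System.out.println(StringUtils.formatPercent(progress, 2));
    }
}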

Sunday, June 10, 2012

Ignore the yes/no check when connecting to a host via ssh for the first time

[Problem]
Every time you connect to a new IP/host for the first time, the following message appears:

user@host:$ ssh 192.168.1.1
The authenticity of host '192.168.1.1 (192.168.1.1)' can't be established.
ECDSA key fingerprint is c7:16:8b:55:c8:39:24:5a:db:cd:e9:79:8c:24:59:39.
Are you sure you want to continue connecting (yes/no)?

At this point you have to type "yes" once more before the connection completes.

If you would rather connect directly without typing yes, use the following method.

[Method]
Create a new file named config in ~/.ssh/
and add the following to it:

    Host 192.168.1.1
        StrictHostKeyChecking no
        UserKnownHostsFile=/dev/null

Save it.

Afterwards, the first time you ssh to the host 192.168.1.1,
the message above no longer appears and you can connect directly without typing yes.


[Note]
One situation where this helps:
when building a cluster with a large number of nodes, such as Hadoop,
the master must first scp authorized_keys to every slave. Doing each scp by hand takes too long,
and a shell script gets stuck on the prompt described above.
In that case, first add all the slaves to the master's ~/.ssh/config using the method above.

Wednesday, June 6, 2012

Modifying the Hadoop admin web pages

Sometimes simply dumping log files and then collecting and inspecting them is not convenient.
When I have time, I want to look into how to modify the JobTracker and NameNode monitoring pages.

[Reference]
Possibility of modifying the Hadoop web admin interface templates?

VirtualBox - Build multiple VMs across multiple physical servers

[Motivation]
When you want to simulate a cluster of dozens or even hundreds of VMs, a single physical server is limited by its CPU count and memory size and cannot host that many VMs, so you have to span multiple physical servers. How should the VMs then be deployed, and how should their networking be set up?

[Virtual Machine]
This post uses VirtualBox 4.1

[Physical Machine]
HP ProLiant DL380 G6 (2 cores, 20 GB mem, 500 GB HD) x 3
each running Ubuntu 12.04

[Method]
1. Install VirtualBox on each physical server
    See: http://lifenote332.blogspot.tw/2012/06/virtualbox-install-on-ubuntu.html

2. Start VirtualBox and create a new VM
    a. Set the [name] and choose the [operating system] and [version]
        (The name here has nothing to do with the OS inside the VM; it is only for VirtualBox to identify it.)
    b. Set the [memory size] (1 GB here)
    c. Start-up disk --> choose [Create new hard disk]
    d. File type --> choose [VDI]
    e. Storage details --> choose [Dynamically allocated]
    f. Virtual disk file location --> choose where the vdi file about to be created will be stored; size --> choose (50 GB here)
    g. Finish creating the VM

3. Give this VM two network interface cards
    a. NAT (Network Address Translation):
        Concept: the VM pretends to be the physical server itself (the host OS) so it can reach the outside network, but the physical server cannot detect the VM (the guest OS), and so the VM cannot connect to VMs on other physical servers either.
        Purpose: lets this VM go online for updates and downloads.

    b. Bridge
        Concept: the VM behaves as if it sits under the same hub or router as the physical server (the host OS), so the physical server can detect it; with the same setting on VMs on other physical servers, they can all detect one another.
        Purpose: lets the VM connect not only to VMs on the same physical server but also to VMs on other physical servers.

4. Install the OS and configure the network on the VM
     a. Install a Linux OS here (Ubuntu 11.10).
     b. For the network settings, choose [Edit Connections]. Because the VM was given two NICs there are two connections, and they should be mapped as follows:
         I.  Device MAC address: (eth0), i.e. the NAT NIC --> IPv4 settings: Automatic (DHCP)
         II. Device MAC address: (eth1), i.e. the Bridge NIC --> IPv4 settings: Manual
             address: 192.168.X.X (pick your own; 192.168.123.1 here)
             mask:    255.255.X.X (255.255.255.0 here)
             gateway: 192.168.X.254 (192.168.123.254 here; .254 seems to be commonly used)
             DNS server: (same as the gateway)

5. Clone the VDI
    Once a VM is built, all of its settings are packed into a single vdi file. You can copy this vdi to start another new VM with the same settings as the original, but you cannot copy it directly; you must go through VirtualBox commands.
    a. Run: vboxmanage clonevdi old.vdi new.vdi
        This produces a new vdi that can be used to start a new VM.
    b. Run: vboxmanage internalcommands sethduuid new.vdi
        Because the old and new VMs might end up with duplicate UUIDs, which VirtualBox refuses to open, give new.vdi a random new UUID.

6. Deploy new VMs
    A cloned vdi can be opened as a new VM on the local machine, or sent to another physical server via scp to start a new VM there.
    a. Do step 2 again, except that at step 2.c you choose [Use existing hard disk] and select the vdi file to use.
        (Because VirtualBox creates a folder for the new VM at the "start-up disk" step, I like to move the vdi into that folder first and then select it, so the new VM's folder matches the new vdi, which is easier to manage.)
    b. Do step 3.
    c. Do step 4, but there is no need to install the OS again (it is already there); just adjust the NICs and the IP.
        (Set the IP to 192.168.X.X as before.)
    d. Change the hostname: run sudo gedit /etc/hostname, and reboot the VM after the change.
        (Just make sure it does not clash with other VMs and is easy for you to recognize.)

    Repeat steps 5 and 6 to deploy a large number of VMs.

[Recommendation]
A. If you want every VM to have the same packages (e.g. Hadoop and Java) or settings, download and install them when building the very first VM; the vdi cloned from that VM will then carry the same installations.

B. If you want passwordless ssh, first set up passwordless ssh from the first VM to itself.

C. If you want to build a large interconnected cluster, to save trouble with host settings, configure
    /etc/hosts on the first VM with all the hosts you want to interconnect, so later VMs do not need their hosts edited one by one.

D. Each time a new VM is finished (i.e. NICs, hostname, etc. are all set),
    1. First ssh to a previously built VM; if that works, the Bridge setup is fine.
    2. Then open a web browser; if external pages load normally, NAT is fine.

[Further thoughts]
X. I once wondered: when VMs are spread across many physical machines, managing them means constantly switching between physical machines, which is a hassle. I later found a tool called "RemoteBox" that lets a single server connect to the VMs on every physical machine. In my attempts it could indeed connect to the VMs on each machine, but it could not open a VM's display window; I have not got past that yet.

RemoteBox official site:
remotebox.knobgoblin.org.uk/downloads.cgi

[Reference]
Building a VM Hadoop cluster across multiple physical machines
http://changyy.pixnet.net/blog/post/25612440-%5Blinux%5D-%E5%AE%89%E8%A3%9D-hadoop-0.20.1-multi-node-cluster-@-ubuntu-9.1

RemoteBox introduction
http://www.openfoundry.org/index.php?option=com_content&task=view&id=8319&Itemid=40

vboxmanage command documentation
http://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevm

vboxmanage clonevdi command introduction
http://lbt95.pixnet.net/blog/post/30970746-%5Bvirtualbox%5D-%E4%BD%BF%E7%94%A8vboxmanage-%E8%A4%87%E8%A3%BD%E5%87%BA%E7%AC%AC%E4%BA%8C%E5%80%8B%E4%BD%9C%E6%A5%AD%E7%B3%BB%E7%B5%B1

Introduction to VirtualBox NIC types
http://www.wretch.cc/blog/mk99/22949462

Tuesday, May 29, 2012

The comparator in the fair scheduler

When Hadoop's fair scheduler assigns a task,
it first sorts all jobs and then picks which job gets to assign a task.
The comparator used to compare any two elements during that sort is a little involved; the idea, briefly:

1. First compare whether each job has met its min share
    a. A job that has met it has lower assignment priority.
    b. If neither has met it --> the job closer to meeting its min-share ratio has lower assignment priority.
    c. If both have met their min share --> the job with the smaller weight has lower assignment priority.
2. If all of the above are equal, compare start times (and if still tied, job names).

Note:
When both weights in c. are the same, it comes down to comparing the two jobs' current numbers of running tasks.
This is exactly the idea behind Hadoop's fair scheduler:
1. satisfy min share first,
2. then satisfy fair share.
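
A simplified sketch of that ordering (an illustration of the idea only, not the actual Hadoop source; the JobInfo class and its fields are hypothetical):

import java.util.Comparator;

class JobInfo {
    int runningTasks;
    int minShare;
    double weight;
    long startTime;
    String name;
}

class FairSchedulerStyleComparator implements Comparator<JobInfo> {
    @Override
    public int compare(JobInfo j1, JobInfo j2) {
        boolean j1BelowMinShare = j1.runningTasks < j1.minShare;
        boolean j2BelowMinShare = j2.runningTasks < j2.minShare;

        // A job still below its min share sorts first (higher assignment priority).
        if (j1BelowMinShare && !j2BelowMinShare) return -1;
        if (!j1BelowMinShare && j2BelowMinShare) return 1;

        double r1, r2;
        if (j1BelowMinShare && j2BelowMinShare) {
            // Both below min share: the further from meeting it, the higher the priority.
            r1 = j1.runningTasks / Math.max(1.0, j1.minShare);
            r2 = j2.runningTasks / Math.max(1.0, j2.minShare);
        } else {
            // Both satisfied: fall back to fair share, i.e. running tasks relative to weight
            // (with equal weights this reduces to comparing running task counts, as noted above).
            r1 = j1.runningTasks / j1.weight;
            r2 = j2.runningTasks / j2.weight;
        }
        if (r1 != r2) return r1 < r2 ? -1 : 1;

        // Tie-breakers: earlier start time first, then job name.
        if (j1.startTime != j2.startTime) return j1.startTime < j2.startTime ? -1 : 1;
        return j1.name.compareTo(j2.name);
    }
}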

[Reference]
Analysis of the fair scheduler algorithm in Hadoop 0.21.0

Wednesday, April 18, 2012

Safe mode will be turned off automatically

Yesterday, after setting up the Hadoop cluster (hadoop1) that some junior labmates will use, I found today that when starting my own Hadoop (hadoop), the third datanode could never be detected, the jobtracker detected 0 tasktrackers, and nothing would run. Looking at the jobtracker log, the following message kept appearing:
 ...
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-hadoop/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.6726 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
...
This means it never manages to leave safe mode. Some searching explains:
when the Hadoop system starts, the namenode checks the data blocks on each datanode,
and if the fraction of missing blocks exceeds 1 - 0.999, it stays in safe mode.

There are two ways to get the system out of safe mode:
1.
    Use
                         hadoop dfsadmin -safemode leave
    to leave safe mode
2.
    Or lower the threshold for the missing-block ratio

After using method 1, the jobtracker detected two tasktrackers but still not the third.
I then went straight to the tasktracker log recorded on the third machine
and found it saying it could not resolve my namenode's hostname!

Only then did I remember that while setting things up for them yesterday I had modified /etc/hosts
and forgotten to add my own namenode's hostname and IP back,
so this tasktracker could not tell who the namenode was while running.
Adding it back solved the problem.... silly mistake

Thursday, March 8, 2012

remotely submit job to jobtracker

A question I had wondered about for a while: how do you submit a job to the master node's JobTracker from a remote node rather than from the master node itself?

The Hadoop book (Chapter 6: How MapReduce Works) mentions that a JobClient submits the job to the JobTracker for execution, and in Figure 6-1 the JobClient is boxed and labeled as the client node while the JobTracker is labeled as the jobtracker node. That immediately raised the question: can a job be submitted from a slave node of the Hadoop cluster (i.e. one that already has the whole hadoop folder copied over from the master)?

So I started hadoop from the master (adalab) and then ran the following on the slave (algoq-ProLiant-DL380-G6):

bin/hadoop jar hadoop-examples-0.20.205.0.jar grep ../../gridmix/data/MonsterQueryBlockCompressed/ g_outtest 's'

And it worked!

Monday, March 5, 2012

Fairscheduler modifications had no effect

[Version]
hadoop 0.20.205.0

[Problem]
I had been trying to modify the fairscheduler source code. Compiling with the ant package command produces a jar file under HADOOP_HOME/build/contrib/fairscheduler, and according to the official manual the fairscheduler jar should be put into HADOOP_HOME/lib (which also matched my past experience and intuition: classes that get referenced should go into lib).

http://hadoop.apache.org/common/docs/r0.20.205.0/fair_scheduler.html

Installation


To run the fair scheduler in your Hadoop installation, you need to put it on the CLASSPATH. The easiest way is to copy the hadoop-*-fairscheduler.jar from HADOOP_HOME/build/contrib/fairscheduler to HADOOP_HOME/lib. Alternatively you can modify HADOOP_CLASSPATH to include this jar, in HADOOP_CONF_DIR/hadoop-env.sh


However, no matter how I modified the src, nothing ever changed, and nothing I added showed up in HADOOP_HOME/logs/fairscheduler/hadoop-hadoop-fairscheduler-adalab.log.

I then noticed that even after removing the fairscheduler jar from HADOOP_HOME/lib, http://<JobTracker>:50030/scheduler showed FairScheduler still running!
That means the lib Hadoop reads is not HADOOP_HOME/lib!

I went on to remove every fairscheduler jar in the hadoop tree one by one, and FairScheduler only stopped working when the one in HADOOP_HOME/share/hadoop/lib/ was removed, i.e. the system reads the lib at that path.

[Solution]
1. I checked HADOOP_HOME/conf/hadoop-env.sh, found HADOOP_CLASSPATH commented out, and enabled it:
    export HADOOP_CLASSPATH=/home/hadoop/hadoop-0.20.205.0/lib:${HADOOP_CLASSPATH}
    But this was a misguided attempt, since we already know it does not read HADOOP_HOME/lib.

2. I searched the hadoop folder for anything that references the "/share" path and found:

HADOOP_HOME/bin/hadoop:
=================================
if [ -e $HADOOP_PREFIX/share/hadoop/hadoop-core-* ]; then
......
  # add libs to CLASSPATH
  for f in $HADOOP_PREFIX/share/hadoop/lib/*.jar; do
    CLASSPATH=${CLASSPATH}:$f;
  done
......
else
......
  # add libs to CLASSPATH
  for f in $HADOOP_HOME/lib/*.jar; do
    CLASSPATH=${CLASSPATH}:$f;
  done
=================================

if [ -e file ]; then
    A
else
    B
means: if the specified file exists, do A; otherwise do B.
So when hadoop runs, it first sees that the share folder exists and therefore points the system at the hadoop lib under share. In other words, by default it never looks at jar files placed in HADOOP_HOME/lib, which is exactly why putting the fairscheduler jar there had no effect!

The simplest fix is to put the compiled jar file into HADOOP_HOME/share/hadoop/lib instead of HADOOP_HOME/lib.


After placing it in HADOOP_HOME/share/hadoop/lib, the modifications did indeed show up:

HADOOP_HOME/logs/fairscheduler/hadoop-hadoop-fairscheduler-adalab.log
...
2012-03-05 18:49:22,803    PREEMPT_VARS    default    MAP    0    0
2012-03-05 18:49:22,803    PREEMPT_VARS    default    REDUCE    0    0
2012-03-05 18:49:22,803    HELLOOOOOOOOOOOOOOOOOOOOOO!!!!!
2012-03-05 18:49:23,303    PREEMPT_VARS    default    MAP    0    0
2012-03-05 18:49:23,303    PREEMPT_VARS    default    REDUCE    0    0
2012-03-05 18:49:23,303    HELLOOOOOOOOOOOOOOOOOOOOOO!!!!!
2012-03-05 18:49:23,368    HEARTBEAT    tracker_algoq-ProLiant-DL380-G6:localhost/127.0.0.1:60564
2012-03-05 18:49:23,368    RUNNABLE_TASKS    0    0    0    0
2012-03-05 18:49:23,368    INFO    Can't assign another MAP to HEEEEEEEEEEEELLLLLOOOOO!!!!!!!tracker_algoq-ProLiant-DL380-G6:localhost/127.0.0.1:60564
2012-03-05 18:49:23,368    HEEEEEEEEEEEELLLLLOOOOO!!!!!!!
... 

[Reference]
Compiling the hadoop scheduler

Tuesday, February 28, 2012

setup/cleanup task

When Hadoop runs a job, besides the job's own map/reduce tasks,
a setup task runs before execution starts; only after the setup task finishes does the job enter the RUNNING state.
And when all map/reduce tasks have finished, there is a cleanup task;
once the cleanup task finishes, the job moves into the SUCCEEDED/FAILED/KILLED states.

Note: the exact functions of the setup/cleanup tasks still need to be filled in.