HBase Operations (Shell and the Java API)

Reposted from: http://blog.csdn.net/u013980127/article/details/52443155

The code below has been verified on Hadoop 2.6.4 + HBase 1.2.2 + CentOS 6.5 + JDK 1.8.

HBase Operations

General Operations

Command   Description
status    Show the cluster status. Options: 'summary', 'simple', or 'detailed'. Default: 'summary'.
hbase> status
hbase> status 'simple'
hbase> status 'summary'
hbase> status 'detailed'
version   Show the HBase version.
hbase> version
whoami    Show the current user and groups.
hbase> whoami
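
In the Java API, status and version map to Admin#getClusterStatus. The sketch below is not from the original post; it assumes the same shared `configuration` field used by the Java examples later in this article.

// Sketch: cluster status and version via the Java Admin API.
public static void clusterInfo() throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        ClusterStatus status = admin.getClusterStatus();              // status
        System.out.println("servers: " + status.getServersSize()
                + ", regions: " + status.getRegionsCount());
        System.out.println("version: " + status.getHBaseVersion());  // version
    }
}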

Table Management

1. alter

The table must be disabled before its schema can be altered.

Shell:

Syntax: alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}
A column family must be specified. Examples:
Set (or add) VERSIONS => 5 on column family f1 of table t1:

hbase> alter 't1', NAME => 'f1', VERSIONS => 5

Several column families can be modified in one command:

hbase> alter 't1', 'f1', {NAME => 'f2', IN_MEMORY => true}, {NAME => 'f3', VERSIONS => 5}

Delete column family f1 from table t1:

hbase> alter 't1', NAME => 'f1', METHOD => 'delete'
or
hbase> alter 't1', 'delete' => 'f1'

Table-scope attributes can also be changed, e.g. MAX_FILESIZE, READONLY,
MEMSTORE_FLUSHSIZE, DEFERRED_LOG_FLUSH.
For example, set the maximum region size to 128MB:

hbase> alter 't1', MAX_FILESIZE => '134217728'

A coprocessor attribute can also be set on the table:

hbase> alter 't1',
'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'

Multiple coprocessors can be set; a sequence number is appended automatically so that each coprocessor is uniquely identified.

Coprocessor attribute syntax:
[coprocessor jar file location] | class name | [priority] | [arguments]

A CONFIGURATION can also be set on a table or column family:

hbase> alter 't1', CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}
hbase> alter 't1', {NAME => 'f2', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}

Table-scope attributes can also be unset:

hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'MAX_FILESIZE'

hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'coprocessor$1'

Multiple changes can be made in a single command:

hbase> alter 't1', { NAME => 'f1', VERSIONS => 3 },
{ MAX_FILESIZE => '134217728' }, { METHOD => 'delete', NAME => 'f2' },
OWNER => 'johndoe', METADATA => { 'mykey' => 'myvalue' }

Java implementation:

/**
 * Alter the table schema: add a column family
 *
 * @param tableName table name
 * @param family    column family
 *
 * @throws IOException
 */
public static void putFamily(String tableName, String family) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName tblName = TableName.valueOf(tableName);
        if (admin.tableExists(tblName)) {
            admin.disableTable(tblName);
            HColumnDescriptor cf = new HColumnDescriptor(family);
            admin.addColumn(tblName, cf);
            admin.enableTable(tblName);
        } else {
            log.warn(tableName + " does not exist.");
        }
    }
}

// Example call
putFamily("blog", "note");

  

2. create

Create a table.

Shell:

Syntax:
create 'table', { NAME => 'family', VERSIONS => VERSIONS } [, { NAME => 'family', VERSIONS => VERSIONS }]

Examples:

hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}

Java example:

/**
 * Create a table
 *
 * @param tableName   table name
 * @param familyNames column families
 *
 * @throws IOException
 */
public static void createTable(String tableName, String[] familyNames) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        if (admin.tableExists(table)) {
            log.info(tableName + " already exists");
        } else {
            HTableDescriptor hTableDescriptor = new HTableDescriptor(table);
            for (String family : familyNames) {
                hTableDescriptor.addFamily(new HColumnDescriptor(family));
            }
            admin.createTable(hTableDescriptor);
        }
    }
}

// Example call
createTable("blog", new String[]{"author", "contents"});

 

3. describe

Show the schema of a table.

hbase> describe 't1'
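
The original gives no Java example here; a minimal sketch, assuming the shared `configuration` field, reads the schema through Admin#getTableDescriptor:

// Sketch: print a table's column families, roughly what `describe` shows.
public static void describe(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        HTableDescriptor descriptor = admin.getTableDescriptor(TableName.valueOf(tableName));
        for (HColumnDescriptor family : descriptor.getColumnFamilies()) {
            System.out.println(family);
        }
    }
}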

4. disable

Disable the specified table.

hbase> disable 't1'

5. disable_all

Disable all tables matching the given regex.

hbase> disable_all 't.*'

6. is_disabled

Check whether the specified table is disabled.

hbase> is_disabled 't1'

7. drop

Drop a table. The table must be disabled first.

Shell:

hbase> drop 't1'

Java implementation:

/**
 * Drop a table
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void dropTable(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        if (admin.tableExists(table)) {
            admin.disableTable(table);
            admin.deleteTable(table);
        }
    }
}

// Example call
dropTable("blog");

 

8. drop_all

Drop all tables matching the given regex.

hbase> drop_all 't.*'

9. enable

Enable the specified table.

hbase> enable 't1'

10. enable_all

Enable all tables matching the given regex.

hbase> enable_all 't.*'

11. is_enabled

Check whether the specified table is enabled.

hbase> is_enabled 't1'

12. exists

Check whether the specified table exists.

hbase> exists 't1'

13. list

List all tables in HBase. An optional regex filters the output. A Java sketch covering this and the disable/enable commands above follows the examples.

hbase> list
hbase> list 'abc.*'
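
A minimal sketch consolidating sections 4 through 13 in Java, assuming the shared `configuration` field used by the other examples:

// Sketch: exists / disable / is_disabled / enable / is_enabled / list via Admin.
public static void tableAdminDemo(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        System.out.println("exists: " + admin.tableExists(table));        // exists
        admin.disableTable(table);                                        // disable
        System.out.println("disabled: " + admin.isTableDisabled(table));  // is_disabled
        admin.enableTable(table);                                         // enable
        System.out.println("enabled: " + admin.isTableEnabled(table));    // is_enabled
        for (TableName name : admin.listTableNames()) {                   // list
            System.out.println(name.getNameAsString());
        }
    }
}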

14. show_filters

List all available filters.

hbase> show_filters
DependentColumnFilter
KeyOnlyFilter
ColumnCountGetFilter
SingleColumnValueFilter
PrefixFilter
SingleColumnValueExcludeFilter
FirstKeyOnlyFilter
ColumnRangeFilter
TimestampsFilter
FamilyFilter
QualifierFilter
ColumnPrefixFilter
RowFilter
MultipleColumnPrefixFilter
InclusiveStopFilter
PageFilter
ValueFilter
ColumnPaginationFilter

 

15. alter_status

Get the status of the last alter command.
Syntax: alter_status 'tableName'

hbase> alter_status 't1'

16. alter_async

Run alter asynchronously; check progress with alter_status.

Data Operations

1. count

Count the number of rows in a table.

Shell:

This command may take a long time (for large tables, run the counting MapReduce job instead: '$HADOOP_HOME/bin/hadoop jar hbase.jar rowcount').
By default the current count is shown every 1000 rows (the interval can be changed). Scan
caching is enabled by default with a size of 10; it can also be set explicitly:

hbase> count 't1'
hbase> count 't1', INTERVAL => 100000
hbase> count 't1', CACHE => 1000
hbase> count 't1', INTERVAL => 10, CACHE => 1000

It can also be run on a table reference:

hbase> t.count
hbase> t.count INTERVAL => 100000
hbase> t.count CACHE => 1000
hbase> t.count INTERVAL => 10, CACHE => 1000

Java implementation:

/**
 * Count rows
 *
 * @param tableName table name
 *
 * @return row count
 *
 * @throws IOException
 */
public static long count(String tableName) throws IOException {
    final long[] rowCount = {0};
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Scan scan = new Scan();
        // FirstKeyOnlyFilter returns only the first cell of each row,
        // so each Result contributes exactly one cell to the count.
        scan.setFilter(new FirstKeyOnlyFilter());
        ResultScanner resultScanner = table.getScanner(scan);
        resultScanner.forEach(result -> {
            rowCount[0] += result.size();
        });
    }
    System.out.println("Row count: " + rowCount[0]);
    return rowCount[0];
}

// Example call
count("blog");

 

2. delete

Delete the specified data.

Shell:

Syntax: delete 'table', 'rowkey', 'family:column' [, 'timestamp']
Delete the value in table t1 at row r1, column c1, timestamp ts1:

hbase> delete 't1', 'r1', 'c1', ts1

The command can also be invoked on a table reference:

hbase> t.delete 'r1', 'c1', ts1

Java implementation:

/**
 * Delete specified data
 * <p>
 * If columns is empty, delete all data of the given column family;
 * if family is empty, delete all data of the given row key.
 * </p>
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param columns   column names
 *
 * @throws IOException
 */
public static void deleteData(String tableName, String rowKey, String family, String[] columns)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Delete delete = new Delete(Bytes.toBytes(rowKey));
        if (null != family && !"".equals(family)) {
            if (null != columns && columns.length > 0) {
                // delete the specified columns
                for (String column : columns) {
                    delete.addColumn(Bytes.toBytes(family), Bytes.toBytes(column));
                }
            } else {
                // delete the specified column family
                delete.addFamily(Bytes.toBytes(family));
            }
        }
        // with neither family nor columns set, the whole row is deleted
        table.delete(delete);
    }
}

// Example calls
deleteData("blog", "rk12", "author", new String[] { "name", "school" });
deleteData("blog", "rk11", "author", new String[] { "name" });
deleteData("blog", "rk10", "author", null);
deleteData("blog", "rk9", null, null);

 

3. deleteall

Delete an entire row.

Syntax: deleteall 'tableName', 'rowkey' [, 'column', 'timestamp']

hbase> deleteall 't1', 'r1'
hbase> deleteall 't1', 'r1', 'c1'
hbase> deleteall 't1', 'r1', 'c1', ts1

The command can also be invoked on a table reference:

hbase> t.deleteall 'r1'
hbase> t.deleteall 'r1', 'c1'
hbase> t.deleteall 'r1', 'c1', ts1
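
In Java, the equivalent of deleteall 't1', 'r1' is a Delete built from just the row key (the deleteData example in the previous section already covers this case; shown standalone here as a sketch, assuming the shared `configuration` field):

// Sketch: delete a whole row, like the shell's deleteall.
public static void deleteRow(String tableName, String rowKey) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        table.delete(new Delete(Bytes.toBytes(rowKey)));
    }
}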

4. get

Get data from a row.

Shell:

Syntax:
get 'tableName', 'rowkey' [, options]
Options include: a set of columns, a timestamp, a time range, or versions.
Examples:

hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 'blog', 'rk1', 'author:name'
hbase> get 'blog', 'rk1', { COLUMN => 'author:name' }
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']

A FORMATTER can also be specified per column; the default is toStringBinary.
Predefined methods from org.apache.hadoop.hbase.util.Bytes can be used (e.g. toInt, toString),
or a custom method such as 'c(MyFormatterClass).format'.
For example, for cf:qualifier1 and cf:qualifier2:

hbase> get 't1', 'r1', {COLUMN => ['cf:qualifier1:toInt',
'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }

Note: a FORMATTER can only be specified per column, not for all columns of a column family.

A table reference (obtained via get_table or
create_table) can also be used with the get command. For example, if
t is a reference to table t1 (t = get_table 't1'):

hbase> t.get 'r1'
hbase> t.get 'r1', {TIMERANGE => [ts1, ts2]}
hbase> t.get 'r1', {COLUMN => 'c1'}
hbase> t.get 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> t.get 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> t.get 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> t.get 'r1', 'c1'
hbase> t.get 'r1', 'c1', 'c2'
hbase> t.get 'r1', ['c1', 'c2']

Java implementation:

/**
 * Get specified data
 * <p>
 * If columns is empty, fetch all data of the given column family;
 * if family is empty, fetch all data of the given row key.
 * </p>
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param columns   column names
 *
 * @throws IOException
 */
public static void getData(String tableName, String rowKey, String family, String[] columns)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Get get = new Get(Bytes.toBytes(rowKey));
        Result result = table.get(get);
        if (null != family && !"".equals(family)) {
            if (null != columns && columns.length > 0) {
                // values of the specified columns in the family
                for (String column : columns) {
                    byte[] rb = result.getValue(Bytes.toBytes(family), Bytes.toBytes(column));
                    System.out.println(Bytes.toString(rb));
                }
            } else {
                // all values of the specified family
                Map<byte[], byte[]> columnMap = result.getFamilyMap(Bytes.toBytes(family));
                for (Map.Entry<byte[], byte[]> entry : columnMap.entrySet()) {
                    System.out.println(Bytes.toString(entry.getKey())
                            + " " + Bytes.toString(entry.getValue()));
                }
            }
        } else {
            // all values of the row key
            Cell[] cells = result.rawCells();
            for (Cell cell : cells) {
                System.out.println("family => "
                        + Bytes.toString(cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength()) + "\n"
                        + "qualifier => "
                        + Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength()) + "\n"
                        + "value => "
                        + Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
            }
        }
    }
}

// Example calls
getData("blog", "rk1", null, null);
getData("blog", "rk1", "author", null);
getData("blog", "rk1", "author", new String[] { "name", "school" });

 

5. get_counter

Get the value of a counter.

Syntax: get_counter 'tableName', 'row', 'column'
Example:

hbase> get_counter 't1', 'r1', 'c1'

It can likewise be used on a table reference:

hbase> t.get_counter 'r1', 'c1'
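
The original has no Java example here. As a sketch (assuming the shared `configuration` field): HBase stores counters as 8-byte big-endian longs, so the value can be read with a plain Get and decoded with Bytes.toLong:

// Sketch: read a counter value, like the shell's get_counter.
public static long getCounter(String tableName, String rowKey, String family, String column)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Result result = table.get(new Get(Bytes.toBytes(rowKey)));
        byte[] value = result.getValue(Bytes.toBytes(family), Bytes.toBytes(column));
        return value == null ? 0 : Bytes.toLong(value);
    }
}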

6. incr

Increment a counter.

Shell:

Syntax: incr 'tableName', 'row', 'column', value
For example, increment row r1, column c1 of table t1 by 1 (the default; the value may be omitted) or by 10:

hbase> incr 't1', 'r1', 'c1'
hbase> incr 't1', 'r1', 'c1', 1
hbase> incr 't1', 'r1', 'c1', 10

It can likewise be used on a table reference:

hbase> t.incr 'r1', 'c1'
hbase> t.incr 'r1', 'c1', 1
hbase> t.incr 'r1', 'c1', 10

Java implementation:

/**
 * Increment a counter
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param column    column
 * @param value     increment amount
 *
 * @throws IOException
 */
public static void incr(String tableName, String rowKey, String family, String column, long value)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        long count = table.incrementColumnValue(Bytes.toBytes(rowKey),
                Bytes.toBytes(family), Bytes.toBytes(column), value);
        System.out.println("Value after increment: " + count);
    }
}

// Example call
incr("scores", "lisi", "courses", "eng", 2);

 

7. put

Insert data.

Shell:

Syntax: put 'table', 'rowkey', 'family:column', 'value' [, 'timestamp']
For example, insert 'value' into table t1 at row r1, column c1, with timestamp ts1:

hbase> put 't1', 'r1', 'c1', 'value', ts1


It can likewise be used on a table reference:

hbase> t.put 'r1', 'c1', 'value', ts1

Java implementation:

/**
 * Insert data
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param familys   family data (key: column family; value: map of column name to value)
 */
public static void putData(String tableName, String rowKey,
                           Map<String, Map<String, String>> familys) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Put put = new Put(Bytes.toBytes(rowKey));
        for (Map.Entry<String, Map<String, String>> family : familys.entrySet()) {
            for (Map.Entry<String, String> column : family.getValue().entrySet()) {
                put.addColumn(Bytes.toBytes(family.getKey()),
                        Bytes.toBytes(column.getKey()),
                        Bytes.toBytes(column.getValue()));
            }
        }
        table.put(put);
    }
}

// Example call
// row key 1
Map<String, Map<String, String>> map1 = new HashMap<>();
// columns of family "author"
Map<String, String> author1 = new HashMap<>();
author1.put("name", "Zhang San");
author1.put("school", "MIT");
map1.put("author", author1);
// columns of family "contents"
Map<String, String> contents1 = new HashMap<>();
contents1.put("content", "Have you eaten yet?");
map1.put("contents", contents1);
putData("blog", "rk1", map1);

 

8. scan

Scan a table.

Syntax: scan 'table' [, {COLUMNS => ['family:column', ...], LIMIT => num}]
The following qualifiers can be used:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH,
COLUMNS, CACHE.
Without any qualifier, the whole table is scanned.
If the column of a column family is left empty, all data in that family is scanned ('col_family:').
Filter conditions can be specified in two ways:
1. as a filter string – see the HBASE-4176 JIRA (https://issues.apache.org/jira/browse/HBASE-4176) for details;
2. as the fully qualified class name of a filter.

Examples:

hbase> scan '.META.'
hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND
(QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER =>
org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}

CACHE_BLOCKS: toggles block caching for the scan; enabled by default.
Example:

hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}


RAW: return all cells, including delete markers and deleted-but-not-yet-collected cells.
This option cannot be combined with COLUMNS. Disabled by default.
Example:

hbase> scan 't1', {RAW => true, VERSIONS => 10}

toStringBinary is used for formatting by default; scan also supports a custom FORMATTER per column.
FORMATTER conventions:

1. use a method of org.apache.hadoop.hbase.util.Bytes (e.g. toInt, toString);
2. use a method of a custom class, e.g. 'c(MyFormatterClass).format'.

For example, for cf:qualifier1 and cf:qualifier2:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }

Note: a FORMATTER can only be specified per column, not for all columns of a column family.

The command can be invoked through a table reference:

hbase> t = get_table 't'
hbase> t.scan

Java implementation:

/**
 * Full-table scan
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void scan(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Scan scan = new Scan();
        ResultScanner resultScanner = table.getScanner(scan);
        for (Result result : resultScanner) {
            List<Cell> cells = result.listCells();
            for (Cell cell : cells) {
                System.out.println("row => " + Bytes.toString(CellUtil.cloneRow(cell)) + "\n"
                        + "family => " + Bytes.toString(CellUtil.cloneFamily(cell)) + "\n"
                        + "qualifier => " + Bytes.toString(CellUtil.cloneQualifier(cell)) + "\n"
                        + "value => " + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
    }
}

// Example call
scan("blog");
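
The shell examples above use filters; as a sketch of the same idea in Java (assuming the shared `configuration` field), a PrefixFilter can restrict the scan to row keys with a given prefix:

// Sketch: scan with a row-key PrefixFilter, mirroring the shell's
// {FILTER => "PrefixFilter ('row2')"} example above.
public static void scanWithPrefix(String tableName, String prefix) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))
    ) {
        Scan scan = new Scan();
        scan.setFilter(new PrefixFilter(Bytes.toBytes(prefix)));
        try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result result : scanner) {
                System.out.println(Bytes.toString(result.getRow()));
            }
        }
    }
}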

 

9. truncate

Disable, drop, and recreate the specified table.

Shell:

hbase> truncate 't1'

Java example:
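
A minimal sketch, assuming the shared `configuration` field; Admin#truncateTable requires the table to be disabled first:

/**
 * Truncate a table (disable it, then drop and recreate it)
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void truncate(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        admin.disableTable(table);
        // false: do not preserve the existing region split points
        admin.truncateTable(table, false);
    }
}

// Example call
truncate("blog");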

Tools

Command   Description
assign    Assign a region. If the region is already assigned, it will be forcibly reassigned.
hbase> assign 'REGION_NAME'
balancer  Trigger the cluster balancer.
hbase> balancer
balance_switch  Enable or disable the balancer.
hbase> balance_switch true
hbase> balance_switch false
close_region  Close a region.
hbase> close_region 'REGIONNAME', 'SERVER_NAME'
compact   Compact all regions in a table:
hbase> compact 't1'
Compact an entire region:
hbase> compact 'r1'
Compact only a column family within a region:
hbase> compact 'r1', 'c1'
Compact a column family within a table:
hbase> compact 't1', 'c1'
flush     Flush all regions in the passed table, or pass a region row to flush an individual region.
For example:
hbase> flush 'TABLENAME'
hbase> flush 'REGIONNAME'
major_compact  Compact all regions in a table:
hbase> major_compact 't1'
Compact an entire region:
hbase> major_compact 'r1'
Compact a single column family within a region:
hbase> major_compact 'r1', 'c1'
Compact a single column family within a table:
hbase> major_compact 't1', 'c1'
move      Move a region to a random region server:
hbase> move 'ENCODED_REGIONNAME'
Move a region to the specified server:
hbase> move 'ENCODED_REGIONNAME', 'SERVER_NAME'
split     Split an entire table, or pass a region to split an individual region. With the second parameter, you can specify an explicit split key for the region.
Examples:
split 'tableName'
split 'regionName' # format: 'tableName,startKey,id'
split 'tableName', 'splitKey'
split 'regionName', 'splitKey'
unassign  Unassign a region. Unassign will close the region in its current location and then reopen it. Pass 'true' to force the unassignment ('force' will clear all in-memory state in the master before the reassign; if this results in a double assignment, use hbck -fix to resolve it). Use with caution; for expert use only.
Examples:
hbase> unassign 'REGIONNAME'
hbase> unassign 'REGIONNAME', true
hlog_roll  Roll the log writer, i.e. start writing log messages to a new file. The name of the regionserver is passed as the parameter. A 'server_name' is the host, port plus startcode of a regionserver, e.g.
host187.example.com,60020,1289493121758 (find the servername in the master UI or in the output of detailed status in the shell).
hbase> hlog_roll
zk_dump   Dump the status of the HBase cluster as seen by ZooKeeper. Example:
hbase> zk_dump
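
Several of these tools are also exposed on the Java Admin interface. A sketch, assuming the shared `configuration` field; note these administrative calls are asynchronous:

// Sketch: flush / compact / major_compact / split from Java.
public static void maintain(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()
    ) {
        TableName table = TableName.valueOf(tableName);
        admin.flush(table);         // flush 'TABLENAME'
        admin.compact(table);       // compact 't1'
        admin.majorCompact(table);  // major_compact 't1'
        admin.split(table);         // split 'tableName'
    }
}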

Cluster Replication

Command   Description
add_peer  Add a peer cluster to replicate to. The id must be a short string, and the cluster key is composed as follows:
hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent — the full path HBase uses to connect to the other cluster. Examples:
hbase> add_peer '1', "server1.cie.com:2181:/hbase"
hbase> add_peer '2', "zk1,zk2,zk3:2182:/hbase-prod"
remove_peer  Stop the specified replication stream and delete all the meta information kept about it. Example:
hbase> remove_peer '1'
list_peers  List all replication peer clusters.
hbase> list_peers
enable_peer  Restart replication to the specified peer cluster, continuing from where it was disabled. Example:
hbase> enable_peer '1'
disable_peer  Stop the replication stream to the specified cluster, but keep tracking new edits to replicate. Example:
hbase> disable_peer '1'
start_replication  Restart all replication features. The state in which each stream starts is undetermined.
WARNING: start/stop replication is only meant to be used in critical load situations. Example:
hbase> start_replication
stop_replication  Stop all replication features. The state in which each stream stops is undetermined.
WARNING: start/stop replication is only meant to be used in critical load situations. Example:
hbase> stop_replication

Access Control

Command   Description
grant     Grant a user the specified permissions.
Syntax: any combination of characters from the set 'RWXCA':
READ ('R')
WRITE ('W')
EXEC ('X')
CREATE ('C')
ADMIN ('A')
For example:
hbase> grant 'bobsmith', 'RWXCA'
hbase> grant 'bobsmith', 'RW', 't1', 'f1', 'col1'
revoke    Revoke a user's permissions.
Syntax: revoke 'user' [, 'table', 'family', 'column']
hbase> revoke 'bobsmith', 't1', 'f1', 'col1'
user_permission  Show a user's permissions.
Syntax: user_permission 'table'
hbase> user_permission 'table1'
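
The shell's grant/revoke have a Java counterpart in org.apache.hadoop.hbase.security.access.AccessControlClient. A sketch, assuming the AccessController coprocessor is enabled on the cluster and the shared `configuration` field is available; note these methods declare `throws Throwable`:

// Sketch: the Java counterpart of grant 'bobsmith', 'RW', 't1', 'f1', 'col1'.
public static void grantReadWrite(String user, String tableName, String family, String column)
        throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration)) {
        AccessControlClient.grant(connection, TableName.valueOf(tableName), user,
                Bytes.toBytes(family), Bytes.toBytes(column),
                Permission.Action.READ, Permission.Action.WRITE);
    } catch (Throwable t) {
        throw new IOException(t);
    }
}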


