-- Linux usually ships libsnappy.so.1, which the build needs. If the system lacks libsnappy.so.1, copy the compiled .so into $HADOOP_HOME/lib/native (which also makes it easy to sync to other machines).
--
-- https://www.rpmfind.net/linux/rpm2html/search.php?query=snappy&submit=Search+...&system=&arch=
-- Check there which snappy releases exist for your system version, then download and build the matching one.
-- http://google.github.io/snappy/
--
[root@cu2 ~]# yum install -y libtool*
[root@cu2 ~]# exit
logout
[hadoop@cu2 snappy-1.1.3]$ ./autogen.sh
[hadoop@cu2 snappy-1.1.3]$
[hadoop@cu2 snappy-1.1.3]$ ./configure --prefix=/home/hadoop/snappy
[hadoop@cu2 snappy-1.1.3]$ make
[hadoop@cu2 snappy-1.1.3]$ make install
# -Dbundle.snappy=true -Dsnappy.lib=/usr/lib64
[hadoop@cu2 hadoop-2.6.3-src]$ mvn package -Pdist -Pnative -Dtar -Dmaven.javadoc.skip=true -DskipTests -Dsnappy.prefix=/home/hadoop/snappy -Drequire.snappy=true
[hadoop@cu2 ~]$ tar zxvf sources/hadoop-2.6.3-src/hadoop-dist/target/hadoop-2.6.3.tar.gz
[hadoop@cu2 ~]$ cd hadoop-2.6.3
[hadoop@cu2 hadoop-2.6.3]$ bin/hadoop checknative
16/01/09 19:25:46 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
16/01/09 19:25:46 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /home/hadoop/hadoop-2.6.3/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4: true revision:99
bzip2: false
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
[hadoop@cu2 ~]$ for h in hadoop-slaver1 hadoop-slaver2 hadoop-slaver3 ; do rsync -vaz --delete --exclude=logs hadoop-2.6.3 $h:~/ ; done
1) build snappy
Build Snappy, then copy/sync its libs into Hadoop's native directory.
tar zxf snappy-1.1.1.tar.gz
cd snappy-1.1.1
./configure --prefix=/home/hadoop/snappy
make
make install
cd ~/snappy/lib/
# copy into hadoop/lib/native
rsync -vaz * ~/hadoop-2.2.0/lib/native/
2) rebuild hadoop common project
Rebuild Hadoop's native library and overwrite the original files.
[hadoop@master1 hadoop-common]$ mvn package -Dmaven.javadoc.skip=true -DskipTests -Dsnappy.prefix=/home/hadoop/snappy -Drequire.snappy=true -Pnative
[hadoop@master1 hadoop-common]$ cd ~/hadoop-2.2.0-src/hadoop-common-project/hadoop-common/
[hadoop@master1 hadoop-common]$ cd target/native/target/usr/local/lib/
[hadoop@master1 lib]$ ll
total 1252
-rw-rw-r--. 1 hadoop hadoop 820824 Jul 30 00:18 libhadoop.a
lrwxrwxrwx. 1 hadoop hadoop 18 Jul 30 00:18 libhadoop.so -> libhadoop.so.1.0.0
-rwxrwxr-x. 1 hadoop hadoop 455542 Jul 30 00:18 libhadoop.so.1.0.0
[hadoop@master1 lib]$ rsync -vaz * ~/hadoop-2.2.0/lib/native/
sending incremental file list
libhadoop.a
libhadoop.so.1.0.0
sent 409348 bytes received 53 bytes 818802.00 bytes/sec
total size is 1276384 speedup is 3.12
[hadoop@master1 lib]$
3) check
Check whether snappy is now configured correctly.
[hadoop@master1 ~]$ hadoop checknative -a
14/07/30 00:22:14 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
14/07/30 00:22:14 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /home/hadoop/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /home/hadoop/hadoop-2.2.0/lib/native/libsnappy.so.1
lz4: true revision:43
bzip2: false
14/07/30 00:22:14 INFO util.ExitUtil: Exiting with status 1
[hadoop@master1 ~]$
hbase(main):001:0> create 'st1', 'f1'
hbase(main):005:0> alter 'st1', {NAME=>'f1', COMPRESSION=>'snappy'}
Updating all regions with the new schema...
0/1 regions updated.
1/1 regions updated.
Done.
0 row(s) in 2.7880 seconds
hbase(main):010:0> create 'sst1','f1'
0 row(s) in 0.5730 seconds
=> Hbase::Table - sst1
hbase(main):011:0> flush 'sst1'
0 row(s) in 2.5380 seconds
hbase(main):012:0> flush 'st1'
0 row(s) in 7.5470 seconds
[hadoop@master1 hadoop]$ hadoop fs -put slaves /
14/07/29 15:18:21 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /slaves._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
#!/bin/sh
#
# Simple Redis init.d script conceived to work on linux systems
# as it does use of the /proc filesystem.
REDISPORT=6379
EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF=/etc/redis/${REDISPORT}.conf
case "$1" in
start)
if [ -f $PIDFILE ]
then
echo "$PIDFILE exists, process is already running or crashed"
else
echo "Starting Redis server..."
$EXEC $CONF
fi
;;
stop)
if [ ! -f $PIDFILE ]
then
echo "$PIDFILE does not exist, process is not running"
else
PID=$(cat $PIDFILE)
echo "Stopping..."
$CLIEXEC -p $REDISPORT shutdown
while [ -x /proc/$PID ]
do
echo "Waiting for Redis to shutdown..."
sleep 1
done
echo "Redis stopped"
fi
;;
*)
echo "Please use start or stop as first argument"
;;
esac
DEL can delete multiple keys at once and returns the number of keys actually deleted. DEL does not accept glob patterns, but you can batch-delete from the shell with redis-cli DEL $(redis-cli KEYS "user:*") (subject to the shell's argument-length limit), which works better than piping through xargs.
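As a rough illustration of DEL's multi-key semantics, here is a local sketch against a plain dict (hypothetical stand-in, not the Redis implementation): delete each named key and return how many actually existed.

```python
# Sketch of DEL key [key ...]: remove each key that exists,
# return the count of keys that were actually deleted.
def del_keys(store, *keys):
    deleted = 0
    for k in keys:
        if k in store:
            del store[k]
            deleted += 1
    return deleted

store = {"user:1": "a", "user:2": "b"}
print(del_keys(store, "user:1", "user:2", "user:3"))  # -> 2
```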
Get the type of a key's value
127.0.0.1:6379> set foo 1
OK
127.0.0.1:6379> lpush foo 1
(error) WRONGTYPE Operation against a key holding the wrong kind of value
127.0.0.1:6379> lpush foa 1
(integer) 1
127.0.0.1:6379> type foo
string
127.0.0.1:6379> type foa
list
set key value
get key
incr key # the stored value must be an integer
set foo 1
incr foo
set foo b
incr foo
# (error) ERR value is not an integer or out of range
# increment by a given integer
incrby key increment
decr key
decrby key decrement
incrbyfloat key increment
append key value
strlen key # length in bytes, unlike a Java String's length (which counts characters)
mget key [key ...]
mset key value [key value ...]
getbit key offset
setbit key offset value
bitcount key [start] [end]
bitop operation destkey key [key ...] # AND OR XOR NOT
set foo1 bar
set foo2 aar
BITOP OR res foo1 foo2 # bit operations can store boolean flags very compactly
GET res
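What BITOP OR does can be sketched locally as a byte-wise OR over the two values, padding the shorter operand with zero bytes (a simulation, not the Redis implementation):

```python
# Sketch of BITOP OR destkey key1 key2: OR the operands byte by byte,
# treating the shorter value as zero-padded on the right.
def bitop_or(a: bytes, b: bytes) -> bytes:
    n = max(len(a), len(b))
    a = a.ljust(n, b"\x00")
    b = b.ljust(n, b"\x00")
    return bytes(x | y for x, y in zip(a, b))

print(bitop_or(b"bar", b"aar"))  # 'b'|'a' = 0x62|0x61 = 0x63 = 'c', so b"car"
```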
Hashes
hset key field value
hget key field
hmset key field value [field value ...]
hmget key field [field ...]
hgetall key
hexists key field
hsetnx key field value # set only when the field does not exist (if-not-exists)
hincrby key field increment
hdel key field [field ...]
hkeys key # field names only
hvals key # values only
hlen key # number of fields
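The if-not-exists semantics of HSETNX can be sketched on a plain dict (a local stand-in for a Redis hash):

```python
# Sketch of HSETNX: set a hash field only when it is absent;
# returns 1 when the field was set, 0 when it already existed.
def hsetnx(h, field, value):
    if field in h:
        return 0
    h[field] = value
    return 1

h = {}
print(hsetnx(h, "f1", "a"))  # -> 1, field created
print(hsetnx(h, "f1", "b"))  # -> 0, existing value kept
```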
Lists
Redis lists behave as double-ended queues.
lpush key value [value ...]
rpush key value [value ...]
lpop key
rpop key
llen key
lrange key start stop # indexes are 0-based, negative indexes allowed, and the stop element is included
lrem key count value
# removes up to count elements equal to value; returns how many were actually removed
# a negative count removes from the tail
# count == 0 removes every element equal to value
# get/set the element at a given index
lindex key index # a negative index counts from the right
lset key index value
ltrim key start end # keep only the given slice of the list
linsert key BEFORE|AFTER pivot value
rpoplpush source destination # move an element from one list to another
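LREM's count convention is the easiest to get wrong, so here is a sketch of it on a plain Python list (a simulation of the documented behavior, not the Redis source):

```python
# Sketch of LREM key count value:
#   count > 0  -> remove up to count matches from the head
#   count < 0  -> remove up to |count| matches from the tail
#   count == 0 -> remove every match
# Returns the number of elements removed.
def lrem(lst, count, value):
    removed = 0
    if count >= 0:
        limit = count if count > 0 else len(lst)
        i = 0
        while i < len(lst) and removed < limit:
            if lst[i] == value:
                del lst[i]
                removed += 1
            else:
                i += 1
    else:
        i = len(lst) - 1
        while i >= 0 and removed < -count:
            if lst[i] == value:
                del lst[i]
                removed += 1
            i -= 1
    return removed

l = ["a", "b", "a", "c", "a"]
print(lrem(l, 2, "a"), l)  # -> 2 ['b', 'c', 'a']
```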
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set key 1
QUEUED
127.0.0.1:6379> sadd key 2
QUEUED
127.0.0.1:6379> set key 3
QUEUED
127.0.0.1:6379> exec
1) OK
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
3) OK
127.0.0.1:6379> get key
"3"
127.0.0.1:6379> watch key
OK
127.0.0.1:6379> set key 2
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set key 3
QUEUED
127.0.0.1:6379> exec
(nil)
127.0.0.1:6379> get key
"2"
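The WATCH transcript above shows optimistic locking: because the watched key was modified between WATCH and EXEC, EXEC replies (nil) and the queued SET is discarded. A minimal sketch of that mechanism, using a hypothetical in-memory store with per-key version counters (not the Redis client API):

```python
# Sketch of WATCH/MULTI/EXEC: EXEC applies the queued writes only
# if the watched key has not been modified since WATCH.
class MiniStore:
    def __init__(self):
        self.data = {}
        self.version = {}          # per-key modification counter

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

    def watch(self, key):
        return (key, self.version.get(key, 0))

    def exec_tx(self, watched, queued):
        key, seen = watched
        if self.version.get(key, 0) != seen:
            return None            # like EXEC replying (nil): aborted
        for k, v in queued:
            self.set(k, v)
        return "OK"

s = MiniStore()
s.set("key", 1)
w = s.watch("key")
s.set("key", 2)                    # the watched key changes after WATCH
print(s.exec_tx(w, [("key", 3)]))  # -> None, transaction aborted
```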
127.0.0.1:6379> lpush tag:ruby:posts 1 2 3
(integer) 3
127.0.0.1:6379> hmset post:1 time 140801 name HelloWorld
OK
127.0.0.1:6379> hmset post:2 time 140802 name HelloWorld2
OK
127.0.0.1:6379> hmset post:3 time 140803 name HelloWorld3
OK
127.0.0.1:6379> sort tag:ruby:posts BY post:*->time desc
1) "3"
2) "2"
3) "1"
127.0.0.1:6379> sort tag:ruby:posts BY post:*->time DESC GET post:*->name
1) "HelloWorld3"
2) "HelloWorld2"
3) "HelloWorld"
A single SORT command may take multiple GET parameters (but only one BY parameter), so you can also do:
127.0.0.1:6379> sort tag:ruby:posts BY post:*->time desc GET post:*->name GET post:*->time
1) "HelloWorld3"
2) "140803"
3) "HelloWorld2"
4) "140802"
5) "HelloWorld"
6) "140801"
If you also need the post IDs, use GET # , which returns the element itself:
127.0.0.1:6379> sort tag:ruby:posts BY post:*->time desc GET post:*->name GET post:*->time GET #
1) "HelloWorld3"
2) "140803"
3) "3"
4) "HelloWorld2"
5) "140802"
6) "2"
7) "HelloWorld"
8) "140801"
9) "1"
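The SORT ... BY hash->field GET ... pattern above can be simulated in Python to make the projection order clear (the "post:" key prefix and the data are made up to match the transcript; this is a sketch, not how Redis sorts internally):

```python
# Simulate: SORT ids BY post:*->time [DESC] GET post:*->field / GET #
# Sort the ids by an external hash field, then for each id emit
# every GET projection in order ("#" means the id itself).
def sort_by_get(ids, hashes, by_field, gets, desc=False):
    ordered = sorted(ids, key=lambda i: hashes["post:" + i][by_field],
                     reverse=desc)
    out = []
    for i in ordered:
        for g in gets:
            out.append(i if g == "#" else hashes["post:" + i][g])
    return out

hashes = {
    "post:1": {"time": "140801", "name": "HelloWorld"},
    "post:2": {"time": "140802", "name": "HelloWorld2"},
    "post:3": {"time": "140803", "name": "HelloWorld3"},
}
print(sort_by_get(["1", "2", "3"], hashes, "time",
                  ["name", "time", "#"], desc=True))
```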
local times=redis.call('incr', KEYS[1])
if times==1 then
redis.call('expire', KEYS[1], ARGV[1])
end
if times>tonumber(ARGV[2]) then
return 0
end
return 1
# redis-cli --eval ratelimiting.lua rate.limiting:127.0.0.1 , 10 3   # keys come before the comma, arguments after
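The rate-limiting logic in the Lua script can be paraphrased in Python as a local sketch (real deployments should keep it in Redis so the check-and-increment stays atomic; the class and method names here are made up):

```python
import time

# Sketch of the script's logic: count calls per key, start a
# fixed window on the first call, reject once the count exceeds
# the limit inside that window, reset when the window expires.
class RateLimiter:
    def __init__(self, window_seconds, limit):
        self.window = window_seconds   # ARGV[1] in the Lua script
        self.limit = limit             # ARGV[2] in the Lua script
        self.counters = {}             # key -> (count, window_start)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        count, start = self.counters.get(key, (0, now))
        if now - start >= self.window:  # window expired: reset
            count, start = 0, now
        count += 1
        self.counters[key] = (count, start)
        return count <= self.limit

rl = RateLimiter(window_seconds=10, limit=3)
print([rl.allow("ip", now=t) for t in (0, 1, 2, 3)])  # -> [True, True, True, False]
```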
Lua syntax (somewhat shell-like, but more concise)
Local variables: local x=10
Comments: --xxx
Block comments: --[[xxxx]]
Multiple assignment: local a,b=1,2 -- a=1, b=2
local a={1,2,3}
a[1]=5
Arithmetic operators automatically convert string operands to numbers
tonumber
tostring
Logical operators treat any operand other than nil or false as true, everything else as false!
String concatenation uses ..
Length: print(#"hello") -- 5
local result={}
for i,v in ipairs(KEYS) do
result[i]=redis.call("HGETALL", v)
end
return result
Get and remove the element with the lowest score from a sorted set
local element=redis.call('ZRANGE', KEYS[1], 0, 0)[1]
if element then
redis.call('ZREM', KEYS[1], element)
end
return element
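The same "pop the lowest-scored member" idea can be sketched in Python with a heap standing in for the sorted set (the Lua version exists precisely so that the read and the delete happen atomically inside Redis):

```python
import heapq

# Sketch of pop-min: return the member with the smallest score,
# removing it, or None when the set is empty. Entries are
# (score, member) tuples in a heapified list.
def pop_min(heap):
    if not heap:
        return None
    score, member = heapq.heappop(heap)
    return member

heap = [(3, "c"), (1, "a"), (2, "b")]
heapq.heapify(heap)
print(pop_min(heap))  # -> a
```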
Handling JSON
local sum=0
local users=redis.call('mget', unpack(KEYS))
for _,user in ipairs(users) do
local courses=cjson.decode(user).course
for _,score in pairs(courses) do
sum=sum+score
end
end
return sum
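For reference, the cjson script above maps directly onto Python: each stored value is assumed to be a JSON string with a "course" object mapping course name to score, and we sum every score (the sample data is invented for illustration):

```python
import json

# Python analogue of the Lua/cjson script: decode each user's JSON
# payload and add up all scores under its "course" object.
def sum_scores(raw_users):
    total = 0
    for raw in raw_users:
        for score in json.loads(raw)["course"].values():
            total += score
    return total

users = ['{"course": {"math": 90, "english": 80}}',
         '{"course": {"math": 70}}']
print(sum_scores(users))  # -> 240
```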