Redis
Note:
This document targets Redis version 7.2.1.
Overview
Redis is an open-source, in-memory key-value database, commonly used as a cache and for high-speed reads and writes.
Deployment
The following describes how to install Redis on CentOS 7. It is advisable to disable the firewall and SELinux before installing.
Docker installation
Redis can be installed and run with Docker.
[root@stone ~]# docker run -itd --name redis -p 6378:6379 redis
Source installation
Download the latest stable release from the official website and build it.
[root@stone ~]# yum -y install gcc wget
[root@stone ~]# wget https://download.redis.io/redis-stable.tar.gz
[root@stone ~]# tar -xvzf redis-stable.tar.gz
[root@stone ~]# cd redis-stable
[root@stone redis-stable]# make
[root@stone redis-stable]# make install
[root@stone redis-stable]# redis-server
7986:C 22 Sep 2023 10:21:53.820 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
7986:C 22 Sep 2023 10:21:53.820 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7986:C 22 Sep 2023 10:21:53.820 * Redis version=7.2.1, bits=64, commit=00000000, modified=0, pid=7986, just started
7986:C 22 Sep 2023 10:21:53.820 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
7986:M 22 Sep 2023 10:21:53.821 * Increased maximum number of open files to 10032 (it was originally set to 1024).
7986:M 22 Sep 2023 10:21:53.821 * monotonic clock: POSIX clock_gettime
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 7.2.1 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 7986
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | https://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
7986:M 22 Sep 2023 10:21:53.821 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
7986:M 22 Sep 2023 10:21:53.823 * Server initialized
7986:M 22 Sep 2023 10:21:53.823 * Ready to accept connections tcp
Configuration
After starting a source-built Redis, the server runs in the foreground and its log output contains several warnings, so the related settings need adjusting.
Adjust the kernel parameters:
[root@stone ~]# vi /etc/sysctl.conf
vm.overcommit_memory=1
net.core.somaxconn=551
[root@stone ~]# sysctl -p
Adjust the resource limits:
[root@stone ~]# vi /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
[root@stone ~]# logout
[root@stone ~]# ulimit -n
65535
Edit the configuration file:
[root@stone ~]# cp /root/redis-stable/redis.conf /root/redis-stable/redis.conf.bak
[root@stone ~]# vi /root/redis-stable/redis.conf
bind 127.0.0.1 -::1 192.168.92.128
daemonize yes
protected-mode no
logfile /var/log/redis.log
Where:
- bind: the addresses Redis listens on; the server is reachable through these bound addresses
- daemonize: run as a background daemon
- protected-mode: disabling it allows remote clients to connect
- logfile: path of the log file
Then start Redis again:
[root@stone ~]# redis-server /root/redis-stable/redis.conf
Stop Redis:
[root@stone ~]# redis-cli
127.0.0.1:6379> shutdown
Service
Create a systemd service for Redis.
[root@stone ~]# vi /usr/lib/systemd/system/redis.service
[Unit]
Description=Redis Server
After=network.target
[Service]
Type=forking
ExecStart=/usr/local/bin/redis-server /root/redis-stable/redis.conf
PrivateTmp=true
[Install]
WantedBy=multi-user.target
[root@stone ~]# systemctl daemon-reload
[root@stone ~]# systemctl start redis
[root@stone ~]# systemctl status redis
● redis.service - Redis Server
Loaded: loaded (/usr/lib/systemd/system/redis.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2023-09-22 14:14:20 CST; 15s ago
Process: 14971 ExecStart=/usr/local/bin/redis-server /root/redis-stable/redis.conf (code=exited, status=0/SUCCESS)
Main PID: 14972 (redis-server)
CGroup: /system.slice/redis.service
└─14972 /usr/local/bin/redis-server 127.0.0.1:6379
Sep 22 14:14:20 stone systemd[1]: Starting Redis Server...
Sep 22 14:14:20 stone systemd[1]: Started Redis Server.
[root@stone ~]# systemctl enable redis
Created symlink from /etc/systemd/system/multi-user.target.wants/redis.service to /usr/lib/systemd/system/redis.service.
Clients
A client is used to interact with Redis.
Command-line client
Installing Redis also installs the command-line client redis-cli.
Syntax:
redis-cli [OPTIONS] [cmd [arg [arg ...]]]
Common options:
- -h: server address to connect to, default 127.0.0.1
- -p: server port, default 6379
- -a: password
- -n: database index
[root@stone ~]# redis-cli ping
PONG
If cmd is omitted, redis-cli enters interactive mode.
[root@stone ~]# redis-cli
127.0.0.1:6379> ping
PONG
Data types
As a key-value database, Redis generally uses String keys, while values come in five main types:
- String: the most basic Redis type. Strings are binary-safe, so a value can hold any data, such as an image or a serialized object.
- Hash: a collection of field-value pairs.
- List: an ordered collection of strings, much like an array of strings.
- Set: an unordered collection of unique strings with fast lookups.
- Zset: a sorted set; like a Set, but every member carries a double score that determines its order.
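As a rough analogy only (not an implementation, and not the redis client API), the five value types map onto familiar Python structures:

```python
# Rough Python analogues of the Redis value types (an analogy, not Redis internals).
redis_types = {
    "String": bytes,  # binary-safe byte string
    "Hash":   dict,   # field -> value mapping
    "List":   list,   # ordered, allows duplicates
    "Set":    set,    # unordered, unique members
    "Zset":   dict,   # member -> float score, iterated in score order
}

print(redis_types["Set"])  # <class 'set'>
```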
Commands
Redis has a large command set. Usage is documented on the official site, and the help command in redis-cli shows usage as well:
[root@stone ~]# redis-cli
127.0.0.1:6379> help
redis-cli 7.2.1
To get help about Redis commands type:
"help @<group>" to get a list of commands in <group>
"help <command>" for help on <command>
"help <tab>" to get a list of possible help topics
"quit" to exit
To set redis-cli preferences:
":set hints" enable online hints
":set nohints" disable online hints
Set your preferences in ~/.redisclirc
Generic commands
Generic commands can be used with every data type.
KEYS
Use the KEYS command to return all key names matching a pattern.
Syntax:
KEYS pattern
Supported patterns:
- ?: matches any single character
- *: matches any number of characters
- []: matches any one of the listed characters
- [^]: matches any character not listed
Example:
127.0.0.1:6379> MSET firstname Jack lastname Stuntman age 35
OK
127.0.0.1:6379> KEYS *name*
1) "firstname"
2) "lastname"
127.0.0.1:6379> KEYS a??
1) "age"
127.0.0.1:6379> KEYS *
1) "firstname"
2) "lastname"
3) "age"
Note:
Use this command with caution in production; it scans the entire keyspace and can hurt performance.
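The glob patterns above can be approximated with Python's fnmatch module; this is a sketch of the matching semantics only. One assumed difference: Redis writes character-class negation as [^...], while fnmatch uses [!...], so the pattern is translated first (naively, which would misfire on a literal "[^").

```python
from fnmatch import fnmatchcase

def keys_matching(keys, pattern):
    """Approximate Redis KEYS glob matching with fnmatch."""
    # Redis negates character classes with [^...]; fnmatch uses [!...].
    pattern = pattern.replace("[^", "[!")
    return [k for k in keys if fnmatchcase(k, pattern)]

names = ["firstname", "lastname", "age"]
print(keys_matching(names, "*name*"))  # ['firstname', 'lastname']
print(keys_matching(names, "a??"))     # ['age']
```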
DEL
Use the DEL command to delete one or more keys.
Syntax:
DEL key [key ...]
Example:
127.0.0.1:6379> SET key1 "Hello"
OK
127.0.0.1:6379> SET key2 "World"
OK
127.0.0.1:6379> DEL key1 key2 key3
(integer) 2
UNLINK
Use the UNLINK command to delete one or more keys asynchronously; it is often used to remove big keys without blocking.
Syntax:
UNLINK key [key ...]
Example:
127.0.0.1:6379> SET key1 "Hello"
OK
127.0.0.1:6379> SET key2 "World"
OK
127.0.0.1:6379> UNLINK key1 key2 key3
(integer) 2
EXISTS
Use the EXISTS command to check whether one or more keys exist.
Syntax:
EXISTS key [key ...]
Example:
127.0.0.1:6379> SET key1 "Hello"
OK
127.0.0.1:6379> EXISTS key1
(integer) 1
127.0.0.1:6379> EXISTS nosuchkey
(integer) 0
127.0.0.1:6379> SET key2 "World"
OK
127.0.0.1:6379> EXISTS key1 key2 nosuchkey
(integer) 2
EXPIRE
Use the EXPIRE command to set a key's time to live, in seconds.
Syntax:
EXPIRE key seconds [NX|XX|GT|LT]
Where:
- NX: set the expiry only when the key has no expiry
- XX: set the expiry only when the key already has one
- GT: set the expiry only when the new expiry is greater than the current one
- LT: set the expiry only when the new expiry is less than the current one
Example:
127.0.0.1:6379> SET mykey "Hello"
OK
127.0.0.1:6379> EXPIRE mykey 10
(integer) 1
127.0.0.1:6379> TTL mykey
(integer) 2
127.0.0.1:6379> SET mykey "Hello World"
OK
127.0.0.1:6379> TTL mykey
(integer) -1
127.0.0.1:6379> EXPIRE mykey 10 XX
(integer) 0
127.0.0.1:6379> TTL mykey
(integer) -1
127.0.0.1:6379> EXPIRE mykey 10 NX
(integer) 1
127.0.0.1:6379> TTL mykey
(integer) 5
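The flag behavior shown in the transcript above can be sketched as a small decision function; this is a model of the NX/XX/GT/LT checks under the assumption that a persistent key (no TTL) counts as having an infinite TTL for the GT/LT comparisons, not Redis source code.

```python
def expire_allowed(current_ttl, new_ttl, flag=None):
    """Model EXPIRE's NX/XX/GT/LT checks.

    current_ttl is None when the key is persistent (no expiry set).
    """
    if flag == "NX":
        return current_ttl is None
    if flag == "XX":
        return current_ttl is not None
    if flag == "GT":
        # A persistent key is treated as infinite, so GT never applies to it.
        return current_ttl is not None and new_ttl > current_ttl
    if flag == "LT":
        # A persistent key is treated as infinite, so LT always applies to it.
        return current_ttl is None or new_ttl < current_ttl
    return True

# Matches the transcript: XX on a persistent key fails, NX succeeds.
print(expire_allowed(None, 10, "XX"))  # False -> (integer) 0
print(expire_allowed(None, 10, "NX"))  # True  -> (integer) 1
```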
TTL
Use the TTL command to return a key's remaining time to live in seconds; it returns -1 if the key has no expiry and -2 if the key does not exist.
Syntax:
TTL key
Example:
127.0.0.1:6379> SET mykey "Hello"
OK
127.0.0.1:6379> EXPIRE mykey 10
(integer) 1
127.0.0.1:6379> TTL mykey
(integer) 6
127.0.0.1:6379> TTL mykey
(integer) 2
127.0.0.1:6379> TTL mykey
(integer) -2
String
A String value is a character string; by format it can be treated as:
- string: a plain string
- int: an integer, which supports increment and decrement operations
- float: a floating-point number, which supports increment and decrement operations
SET
Use the SET command to set a String key to a value.
Syntax:
SET key value [NX|XX] [GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]
Where:
- NX: set the value only if the key does not exist
- XX: set the value only if the key already exists
- GET: return the old value
- EX seconds: expiry in seconds
- PX milliseconds: expiry in milliseconds
- EXAT unix-time-seconds: expiry as a Unix timestamp in seconds
- PXAT unix-time-milliseconds: expiry as a Unix timestamp in milliseconds
- KEEPTTL: keep the key's existing time to live
Example:
127.0.0.1:6379> SET mykey "Hello"
OK
127.0.0.1:6379> GET mykey
"Hello"
127.0.0.1:6379> SET anotherkey "will expire in a minute" EX 60
OK
GET
Use the GET command to get the value of a String key.
Syntax:
GET key
Example:
127.0.0.1:6379> GET nonexisting
(nil)
127.0.0.1:6379> SET mykey "Hello"
OK
127.0.0.1:6379> GET mykey
"Hello"
MSET
Use the MSET command to set multiple String keys at once.
Syntax:
MSET key value [key value ...]
Example:
127.0.0.1:6379> MSET key1 "Hello" key2 "World"
OK
127.0.0.1:6379> GET key1
"Hello"
127.0.0.1:6379> GET key2
"World"
MGET
Use the MGET command to get the values of multiple String keys.
Syntax:
MGET key [key ...]
Example:
127.0.0.1:6379> SET key1 "Hello"
OK
127.0.0.1:6379> SET key2 "World"
OK
127.0.0.1:6379> MGET key1 key2 nonexisting
1) "Hello"
2) "World"
3) (nil)
INCR
Use the INCR command to increment an integer value by one.
Syntax:
INCR key
Example:
127.0.0.1:6379> SET mykey "10"
OK
127.0.0.1:6379> INCR mykey
(integer) 11
127.0.0.1:6379> GET mykey
"11"
INCRBY
Use the INCRBY command to increment an integer value by a given amount.
Syntax:
INCRBY key increment
Example:
127.0.0.1:6379> SET mykey "10"
OK
127.0.0.1:6379> INCRBY mykey 5
(integer) 15
127.0.0.1:6379> GET mykey
"15"
INCRBYFLOAT
Use the INCRBYFLOAT command to increment a floating-point value by a given amount.
Syntax:
INCRBYFLOAT key increment
Example:
127.0.0.1:6379> SET mykey 10.50
OK
127.0.0.1:6379> INCRBYFLOAT mykey 0.1
"10.6"
127.0.0.1:6379> INCRBYFLOAT mykey -5
"5.6"
127.0.0.1:6379> SET mykey 5.0e3
OK
127.0.0.1:6379> INCRBYFLOAT mykey 2.0e2
"5200"
Hash
A Hash value is an unordered mapping of fields to values.
HSET
Use the HSET command to set one or more fields of a Hash.
Syntax:
HSET key field value [field value ...]
Example:
127.0.0.1:6379> HSET myhash field1 "Hello"
(integer) 1
127.0.0.1:6379> HGET myhash field1
"Hello"
127.0.0.1:6379> HSET myhash field2 "Hi" field3 "World"
(integer) 2
127.0.0.1:6379> HGET myhash field2
"Hi"
127.0.0.1:6379> HGET myhash field3
"World"
127.0.0.1:6379> HSET auth:user:1 id 1 name stone age 18
(integer) 3
HGET
Use the HGET command to get the value of a Hash field.
Syntax:
HGET key field
Example:
127.0.0.1:6379> HSET myhash field1 "foo"
(integer) 0
127.0.0.1:6379> HGET myhash field1
"foo"
127.0.0.1:6379> HGET myhash field2
"Hi"
HGETALL
Use the HGETALL command to get all fields and values of a Hash.
Syntax:
HGETALL key
Example:
127.0.0.1:6379> HSET myhash field1 "Hello"
(integer) 1
127.0.0.1:6379> HSET myhash field2 "World"
(integer) 1
127.0.0.1:6379> HGETALL myhash
1) "field1"
2) "Hello"
3) "field2"
4) "World"
127.0.0.1:6379> HGETALL auth:user:1
1) "id"
2) "1"
3) "name"
4) "stone"
5) "age"
6) "18"
HKEYS
Use the HKEYS command to get all fields of a Hash.
Syntax:
HKEYS key
Example:
127.0.0.1:6379> HSET myhash field1 "Hello"
(integer) 0
127.0.0.1:6379> HSET myhash field2 "World"
(integer) 0
127.0.0.1:6379> HKEYS myhash
1) "field1"
2) "field2"
127.0.0.1:6379> HKEYS auth:user:1
1) "id"
2) "name"
3) "age"
HVALS
Use the HVALS command to get all values of a Hash.
Syntax:
HVALS key
Example:
127.0.0.1:6379> HSET myhash field1 "Hello"
(integer) 0
127.0.0.1:6379> HSET myhash field2 "World"
(integer) 0
127.0.0.1:6379> HVALS myhash
1) "Hello"
2) "World"
127.0.0.1:6379> HVALS auth:user:1
1) "1"
2) "stone"
3) "18"
HINCRBY
Use the HINCRBY command to increment an integer field value by a given amount.
Syntax:
HINCRBY key field increment
Example:
127.0.0.1:6379> HSET myhash field 5
(integer) 1
127.0.0.1:6379> HINCRBY myhash field 1
(integer) 6
127.0.0.1:6379> HINCRBY myhash field -1
(integer) 5
127.0.0.1:6379> HINCRBY myhash field -10
(integer) -5
127.0.0.1:6379> HINCRBY auth:user:1 age 10
(integer) 28
HSETNX
Use the HSETNX command to set a Hash field only if the field does not yet exist.
Syntax:
HSETNX key field value
Example:
127.0.0.1:6379> HSETNX myhash field "Hello"
(integer) 1
127.0.0.1:6379> HSETNX myhash field "World"
(integer) 0
127.0.0.1:6379> HGET myhash field
"Hello"
List
The List type is similar to Java's LinkedList: ordered and allowing duplicates. It can be viewed as a doubly linked list, so it supports traversal from either end.
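Since a Redis List behaves like a doubly linked list, Python's collections.deque is a close analogue; the sketch below models the push/pop commands of this section (a local model, not the redis client API).

```python
from collections import deque

mylist = deque()
mylist.appendleft("world")   # LPUSH mylist "world"
mylist.appendleft("hello")   # LPUSH mylist "hello"
print(list(mylist))          # ['hello', 'world']  (LRANGE mylist 0 -1)

mylist.append("!")           # RPUSH mylist "!"
print(mylist.popleft())      # hello  (LPOP mylist)
print(mylist.pop())          # !      (RPOP mylist)
```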
LPUSH
Use the LPUSH command to insert one or more elements at the left end of a list.
Syntax:
LPUSH key element [element ...]
Example:
127.0.0.1:6379> LPUSH mylist "world"
(integer) 1
127.0.0.1:6379> LPUSH mylist "hello"
(integer) 2
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello"
2) "world"
RPUSH
Use the RPUSH command to insert one or more elements at the right end of a list.
Syntax:
RPUSH key element [element ...]
Example:
127.0.0.1:6379> RPUSH mylist "hello"
(integer) 1
127.0.0.1:6379> RPUSH mylist "world"
(integer) 2
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello"
2) "world"
LPOP
Use the LPOP command to remove and return one or more elements from the left end of a list; it returns nil when the list is empty.
Syntax:
LPOP key [count]
Example:
127.0.0.1:6379> RPUSH mylist "one" "two" "three" "four" "five"
(integer) 5
127.0.0.1:6379> LPOP mylist
"one"
127.0.0.1:6379> LPOP mylist 2
1) "two"
2) "three"
127.0.0.1:6379> LRANGE mylist 0 -1
1) "four"
2) "five"
RPOP
Use the RPOP command to remove and return one or more elements from the right end of a list; it returns nil when the list is empty.
Syntax:
RPOP key [count]
Example:
127.0.0.1:6379> RPUSH mylist "one" "two" "three" "four" "five"
(integer) 5
127.0.0.1:6379> RPOP mylist
"five"
127.0.0.1:6379> RPOP mylist 2
1) "four"
2) "three"
127.0.0.1:6379> LRANGE mylist 0 -1
1) "one"
2) "two"
LRANGE
Use the LRANGE command to return a range of elements from a list.
Syntax:
LRANGE key start stop
Example:
127.0.0.1:6379> RPUSH mylist "one"
(integer) 1
127.0.0.1:6379> RPUSH mylist "two"
(integer) 2
127.0.0.1:6379> RPUSH mylist "three"
(integer) 3
127.0.0.1:6379> LRANGE mylist 0 0
1) "one"
127.0.0.1:6379> LRANGE mylist -3 2
1) "one"
2) "two"
3) "three"
127.0.0.1:6379> LRANGE mylist -100 100
1) "one"
2) "two"
3) "three"
127.0.0.1:6379> LRANGE mylist 5 10
(empty array)
Set
The Set type is similar to Java's HashSet: unordered with unique members, like a HashMap whose values are all empty. Lookups are fast, and intersection, union, and difference operations are supported.
SADD
Use the SADD command to add one or more members to a set.
Syntax:
SADD key member [member ...]
Example:
127.0.0.1:6379> SADD myset "Hello"
(integer) 1
127.0.0.1:6379> SADD myset "World"
(integer) 1
127.0.0.1:6379> SADD myset "World"
(integer) 0
127.0.0.1:6379> SMEMBERS myset
1) "Hello"
2) "World"
SREM
Use the SREM command to remove one or more members from a set; the set itself is deleted once its last member is removed.
Syntax:
SREM key member [member ...]
Example:
127.0.0.1:6379> SADD myset "one"
(integer) 1
127.0.0.1:6379> SADD myset "two"
(integer) 1
127.0.0.1:6379> SADD myset "three"
(integer) 1
127.0.0.1:6379> SREM myset "one"
(integer) 1
127.0.0.1:6379> SREM myset "four"
(integer) 0
127.0.0.1:6379> SMEMBERS myset
1) "two"
2) "three"
SCARD
Use the SCARD command to return the number of members in a set.
Syntax:
SCARD key
Example:
127.0.0.1:6379> SADD myset "Hello"
(integer) 1
127.0.0.1:6379> SADD myset "World"
(integer) 1
127.0.0.1:6379> SCARD myset
(integer) 2
SISMEMBER
Use the SISMEMBER command to check whether a member belongs to a set.
Syntax:
SISMEMBER key member
Example:
127.0.0.1:6379> SADD myset "one"
(integer) 1
127.0.0.1:6379> SISMEMBER myset "one"
(integer) 1
127.0.0.1:6379> SISMEMBER myset "two"
(integer) 0
SMEMBERS
Use the SMEMBERS command to return all members of a set.
Syntax:
SMEMBERS key
Example:
127.0.0.1:6379> SADD myset "Hello"
(integer) 1
127.0.0.1:6379> SADD myset "World"
(integer) 1
127.0.0.1:6379> SMEMBERS myset
1) "Hello"
2) "World"
SINTER
Use the SINTER command to return the intersection of multiple sets.
Syntax:
SINTER key [key ...]
Example:
127.0.0.1:6379> SADD key1 "a" "b" "c"
(integer) 3
127.0.0.1:6379> SADD key2 "c" "d" "e"
(integer) 3
127.0.0.1:6379> SINTER key1 key2
1) "c"
SDIFF
Use the SDIFF command to return the difference of multiple sets.
Syntax:
SDIFF key [key ...]
Example:
127.0.0.1:6379> SADD key1 "a" "b" "c"
(integer) 3
127.0.0.1:6379> SADD key2 "c" "d" "e"
(integer) 3
127.0.0.1:6379> SDIFF key1 key2
1) "a"
2) "b"
SUNION
Use the SUNION command to return the union of multiple sets.
Syntax:
SUNION key [key ...]
Example:
127.0.0.1:6379> SADD key1 "a" "b" "c"
(integer) 3
127.0.0.1:6379> SADD key2 "c" "d" "e"
(integer) 3
127.0.0.1:6379> SUNION key1 key2
1) "a"
2) "b"
3) "c"
4) "d"
5) "e"
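The three set operations above map directly onto Python's set operators, which is a handy way to reason about their results (a local analogy, not the redis client API):

```python
# Same data as the SINTER/SDIFF/SUNION examples above.
key1 = {"a", "b", "c"}
key2 = {"c", "d", "e"}

print(sorted(key1 & key2))  # ['c']                       SINTER key1 key2
print(sorted(key1 - key2))  # ['a', 'b']                  SDIFF key1 key2
print(sorted(key1 | key2))  # ['a', 'b', 'c', 'd', 'e']   SUNION key1 key2
```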
SortedSet
The SortedSet type is similar to Java's TreeSet: ordered with unique members and fast lookups. It is commonly used for features such as leaderboards.
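A sorted set can be modeled as a member-to-score mapping whose members are iterated by (score, member); the sketch below models ZRANGE's ordering only (stop is inclusive, as in Redis) and makes no claim about Redis internals or edge cases such as out-of-range negative indexes.

```python
def zrange(zset, start, stop):
    """Model ZRANGE over a member -> score dict, ordered by (score, member)."""
    ordered = sorted(zset, key=lambda m: (zset[m], m))
    n = len(ordered)
    if start < 0:
        start += n
    if stop < 0:
        stop += n
    return ordered[start:stop + 1]  # stop is inclusive in Redis

# Same data as the ZADD example below: equal scores fall back to
# lexicographic member order ("one" before "uno").
myzset = {"one": 1, "uno": 1, "two": 2, "three": 3}
print(zrange(myzset, 0, -1))  # ['one', 'uno', 'two', 'three']
```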
ZADD
Use the ZADD command to add one or more members to a sorted set, or to update their scores.
Syntax:
ZADD key [NX|XX] [GT|LT] [CH] [INCR] score member [score member ...]
Where:
- NX: only add new members, never update existing ones
- XX: only update existing members, never add new ones
- GT: only update a member when the new score is greater than its current score; does not prevent adding new members
- LT: only update a member when the new score is less than its current score; does not prevent adding new members
- CH: return the number of added plus changed members, instead of the default (added only)
- INCR: increment the member's score, like ZINCRBY
Example:
127.0.0.1:6379> ZADD myzset 1 "one"
(integer) 1
127.0.0.1:6379> ZADD myzset 1 "uno"
(integer) 1
127.0.0.1:6379> ZADD myzset 2 "two" 3 "three"
(integer) 2
127.0.0.1:6379> ZRANGE myzset 0 -1 WITHSCORES
1) "one"
2) "1"
3) "uno"
4) "1"
5) "two"
6) "2"
7) "three"
8) "3"
ZREM
Use the ZREM command to remove one or more members from a sorted set; the set itself is deleted once its last member is removed.
Syntax:
ZREM key member [member ...]
Example:
127.0.0.1:6379> ZADD myzset 1 "one"
(integer) 1
127.0.0.1:6379> ZADD myzset 2 "two"
(integer) 1
127.0.0.1:6379> ZADD myzset 3 "three"
(integer) 1
127.0.0.1:6379> ZREM myzset "two"
(integer) 1
127.0.0.1:6379> ZRANGE myzset 0 -1 WITHSCORES
1) "one"
2) "1"
3) "three"
4) "3"
ZSCORE
Use the ZSCORE command to return the score of a member in a sorted set.
Syntax:
ZSCORE key member
Example:
127.0.0.1:6379> ZADD myzset 1 "one"
(integer) 1
127.0.0.1:6379> ZSCORE myzset "one"
"1"
ZCARD
Use the ZCARD command to return the number of members in a sorted set.
Syntax:
ZCARD key
Example:
127.0.0.1:6379> ZADD myzset 1 "one"
(integer) 1
127.0.0.1:6379> ZADD myzset 2 "two"
(integer) 1
127.0.0.1:6379> ZCARD myzset
(integer) 2
ZCOUNT
Use the ZCOUNT command to return the number of members in a sorted set whose scores fall within a given range.
Syntax:
ZCOUNT key min max
Example:
127.0.0.1:6379> ZADD myzset 1 "one"
(integer) 1
127.0.0.1:6379> ZADD myzset 2 "two"
(integer) 1
127.0.0.1:6379> ZADD myzset 3 "three"
(integer) 1
127.0.0.1:6379> ZCOUNT myzset 2 3
(integer) 2
127.0.0.1:6379> ZCOUNT myzset -inf +inf
(integer) 3
ZINCRBY
Use the ZINCRBY command to increment the score of a member in a sorted set.
Syntax:
ZINCRBY key increment member
Example:
127.0.0.1:6379> ZADD myzset 1 "one"
(integer) 1
127.0.0.1:6379> ZADD myzset 2 "two"
(integer) 1
127.0.0.1:6379> ZINCRBY myzset 2 "one"
"3"
127.0.0.1:6379> ZRANGE myzset 0 -1 WITHSCORES
1) "two"
2) "2"
3) "one"
4) "3"
ZRANK / ZREVRANK
Use the ZRANK or ZREVRANK command to return a member's rank in a sorted set, in ascending or descending order respectively.
Syntax:
ZRANK key member [WITHSCORE]
ZREVRANK key member [WITHSCORE]
Example:
127.0.0.1:6379> ZADD myzset 1 "one"
(integer) 1
127.0.0.1:6379> ZADD myzset 2 "two"
(integer) 1
127.0.0.1:6379> ZADD myzset 3 "three"
(integer) 1
127.0.0.1:6379> ZRANK myzset "three"
(integer) 2
127.0.0.1:6379> ZREVRANK myzset "one"
(integer) 2
ZRANGE
Use the ZRANGE command to return a range of members from a sorted set, sorted in ascending or descending order.
Syntax:
ZRANGE key start stop [BYSCORE|BYLEX] [REV] [LIMIT offset count] [WITHSCORES]
Where:
- BYSCORE: interpret start and stop as a score range
- BYLEX: interpret start and stop as a lexicographical range, useful when members share the same score
- REV: sort in descending order
- LIMIT offset count: skip offset members and return at most count
- WITHSCORES: include scores in the reply
Example:
127.0.0.1:6379> ZADD myzset 1 "one" 2 "two" 3 "three"
(integer) 3
127.0.0.1:6379> ZRANGE myzset 0 -1
1) "one"
2) "two"
3) "three"
127.0.0.1:6379> ZRANGE myzset 2 3
1) "three"
127.0.0.1:6379> ZRANGE myzset -2 -1
1) "two"
2) "three"
127.0.0.1:6379> ZRANGE myzset 0 1 WITHSCORES
1) "one"
2) "1"
3) "two"
4) "2"
127.0.0.1:6379> ZRANGE myzset (1 +inf BYSCORE LIMIT 1 1
1) "three"
ZINTER
Use the ZINTER command to return the intersection of multiple sorted sets.
Syntax:
ZINTER numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX] [WITHSCORES]
Example:
127.0.0.1:6379> ZADD zset1 1 "one" 2 "two"
(integer) 2
127.0.0.1:6379> ZADD zset2 1 "one" 2 "two" 3 "three"
(integer) 3
127.0.0.1:6379> ZINTER 2 zset1 zset2
1) "one"
2) "two"
127.0.0.1:6379> ZINTER 2 zset1 zset2 WITHSCORES
1) "one"
2) "2"
3) "two"
4) "4"
ZDIFF
Use the ZDIFF command to return the difference of multiple sorted sets.
Syntax:
ZDIFF numkeys key [key ...] [WITHSCORES]
Example:
127.0.0.1:6379> ZADD zset1 1 "one" 2 "two" 3 "three"
(integer) 3
127.0.0.1:6379> ZADD zset2 1 "one" 2 "two"
(integer) 2
127.0.0.1:6379> ZDIFF 2 zset1 zset2
1) "three"
127.0.0.1:6379> ZDIFF 2 zset1 zset2 WITHSCORES
1) "three"
2) "3"
ZUNION
Use the ZUNION command to return the union of multiple sorted sets.
Syntax:
ZUNION numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX] [WITHSCORES]
Example:
127.0.0.1:6379> ZADD zset1 1 "one" 2 "two"
(integer) 2
127.0.0.1:6379> ZADD zset2 1 "one" 2 "two" 3 "three"
(integer) 3
127.0.0.1:6379> ZUNION 2 zset1 zset2
1) "one"
2) "three"
3) "two"
127.0.0.1:6379> ZUNION 2 zset1 zset2 WITHSCORES
1) "one"
2) "2"
3) "three"
4) "3"
5) "two"
6) "4"
Server administration
The following commands can be used to administer Redis.
SHUTDOWN
Use the SHUTDOWN command to shut down Redis.
Syntax:
SHUTDOWN [NOSAVE|SAVE] [NOW] [FORCE] [ABORT]
DBSIZE
Use the DBSIZE command to return the number of keys in the current database.
Syntax:
DBSIZE
Example:
127.0.0.1:6379> DBSIZE
(integer) 12
FLUSHALL
Use the FLUSHALL command to remove all keys from all databases.
Syntax:
FLUSHALL [ASYNC|SYNC]
FLUSHDB
Use the FLUSHDB command to remove all keys from the current database.
Syntax:
FLUSHDB [ASYNC|SYNC]
SAVE
Use the SAVE command to synchronously save the dataset to disk.
Syntax:
SAVE
SLOWLOG LEN
Use the SLOWLOG LEN command to return the number of entries in the slow log.
Syntax:
SLOWLOG LEN
SLOWLOG GET
Use the SLOWLOG GET command to return slow log entries.
Syntax:
SLOWLOG GET [count]
The slow log is configured with these parameters:
- slowlog-log-slower-than 10000: execution-time threshold in microseconds; with the default of 10000 (10 ms), only commands taking longer than 10 ms are logged.
- slowlog-max-len 128: maximum number of entries kept in the slow log.
SLOWLOG RESET
Use the SLOWLOG RESET command to clear the slow log.
Syntax:
SLOWLOG RESET
INFO
Use the INFO command to get information about the Redis server.
Syntax:
INFO [section [section ...]]
Example:
127.0.0.1:6379> INFO
# Server
redis_version:7.2.1
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:ca862484a497ae3f
redis_mode:standalone
os:Linux 3.10.0-1160.el7.x86_64 x86_64
arch_bits:64
monotonic_clock:POSIX clock_gettime
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:4.8.5
process_id:990
process_supervised:no
run_id:14d6a821a67474fe74a190d980a1414fe2dab06b
tcp_port:6379
server_time_usec:1696989082377603
uptime_in_seconds:166
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:2490266
executable:/usr/local/bin/redis-server
config_file:/root/redis-stable/redis.conf
io_threads_active:0
listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
# Clients
connected_clients:1
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:20480
client_recent_max_output_buffer:0
blocked_clients:0
tracking_clients:0
clients_in_timeout_table:0
total_blocking_keys:0
total_blocking_keys_on_nokey:0
# Memory
used_memory:916904
used_memory_human:895.41K
used_memory_rss:10964992
used_memory_rss_human:10.46M
used_memory_peak:1134960
used_memory_peak_human:1.08M
used_memory_peak_perc:80.79%
used_memory_overhead:868576
used_memory_startup:865824
used_memory_dataset:48328
used_memory_dataset_perc:94.61%
allocator_allocated:1824256
allocator_active:1978368
allocator_resident:4378624
total_system_memory:1907716096
total_system_memory_human:1.78G
used_memory_lua:31744
used_memory_vm_eval:31744
used_memory_lua_human:31.00K
used_memory_scripts_eval:0
number_of_cached_scripts:0
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:32768
used_memory_vm_total:64512
used_memory_vm_total_human:63.00K
used_memory_functions:184
used_memory_scripts:184
used_memory_scripts_human:184B
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.08
allocator_frag_bytes:154112
allocator_rss_ratio:2.21
allocator_rss_bytes:2400256
rss_overhead_ratio:2.50
rss_overhead_bytes:6586368
mem_fragmentation_ratio:12.26
mem_fragmentation_bytes:10070976
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_total_replication_buffers:0
mem_clients_slaves:0
mem_clients_normal:1928
mem_cluster_links:0
mem_aof_buffer:0
mem_allocator:jemalloc-5.3.0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
# Persistence
loading:0
async_loading:0
current_cow_peak:0
current_cow_size:0
current_cow_size_age:0
current_fork_perc:0.00
current_save_keys_processed:0
current_save_keys_total:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1696988916
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_saves:0
rdb_last_cow_size:0
rdb_last_load_keys_expired:0
rdb_last_load_keys_loaded:12
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_rewrites:0
aof_rewrites_consecutive_failures:0
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0
# Stats
total_connections_received:1
total_commands_processed:1
instantaneous_ops_per_sec:0
total_net_input_bytes:41
total_net_output_bytes:204421
total_net_repl_input_bytes:0
total_net_repl_output_bytes:0
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
instantaneous_input_repl_kbps:0.00
instantaneous_output_repl_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:4
evicted_keys:0
evicted_clients:0
total_eviction_exceeded_time:0
current_eviction_exceeded_time:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
pubsubshard_channels:0
latest_fork_usec:0
total_forks:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
total_active_defrag_time:0
current_active_defrag_time:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:0
dump_payload_sanitizations:0
total_reads_processed:2
total_writes_processed:3
io_threaded_reads_processed:0
io_threaded_writes_processed:0
reply_buffer_shrinks:1
reply_buffer_expands:0
eventloop_cycles:1642
eventloop_duration_sum:376560
eventloop_duration_cmd_sum:2686
instantaneous_eventloop_cycles_per_sec:9
instantaneous_eventloop_duration_usec:205
acl_access_denied_auth:0
acl_access_denied_cmd:0
acl_access_denied_key:0
acl_access_denied_channel:0
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:0f458be3719e91d9bf44e41b6913f80064379354
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:0.184887
used_cpu_user:0.305308
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000
used_cpu_sys_main_thread:0.174446
used_cpu_user_main_thread:0.315345
# Modules
# Errorstats
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=12,expires=0,avg_ttl=0
127.0.0.1:6379> INFO memory
# Memory
used_memory:939968
used_memory_human:917.94K
used_memory_rss:10964992
used_memory_rss_human:10.46M
used_memory_peak:1134960
used_memory_peak_human:1.08M
used_memory_peak_perc:82.82%
used_memory_overhead:868576
used_memory_startup:865824
used_memory_dataset:71392
used_memory_dataset_perc:96.29%
allocator_allocated:2031104
allocator_active:2191360
allocator_resident:4595712
total_system_memory:1907716096
total_system_memory_human:1.78G
used_memory_lua:31744
used_memory_vm_eval:31744
used_memory_lua_human:31.00K
used_memory_scripts_eval:0
number_of_cached_scripts:0
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:32768
used_memory_vm_total:64512
used_memory_vm_total_human:63.00K
used_memory_functions:184
used_memory_scripts:184
used_memory_scripts_human:184B
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.08
allocator_frag_bytes:160256
allocator_rss_ratio:2.10
allocator_rss_bytes:2404352
rss_overhead_ratio:2.39
rss_overhead_bytes:6369280
mem_fragmentation_ratio:11.93
mem_fragmentation_bytes:10045696
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_total_replication_buffers:0
mem_clients_slaves:0
mem_clients_normal:1928
mem_cluster_links:0
mem_aof_buffer:0
mem_allocator:jemalloc-5.3.0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
MEMORY STATS
Use the MEMORY STATS command to get memory usage details.
Syntax:
MEMORY STATS
Example:
127.0.0.1:6379> MEMORY STATS
1) "peak.allocated"
2) (integer) 1134960
3) "total.allocated"
4) (integer) 939880
5) "startup.allocated"
6) (integer) 865824
7) "replication.backlog"
8) (integer) 0
9) "clients.slaves"
10) (integer) 0
11) "clients.normal"
12) (integer) 1928
13) "cluster.links"
14) (integer) 0
15) "aof.buffer"
16) (integer) 0
17) "lua.caches"
18) (integer) 0
19) "functions.caches"
20) (integer) 184
21) "db.0"
22) 1) "overhead.hashtable.main"
2) (integer) 608
3) "overhead.hashtable.expires"
4) (integer) 32
5) "overhead.hashtable.slot-to-keys"
6) (integer) 0
23) "overhead.total"
24) (integer) 868576
25) "keys.count"
26) (integer) 12
27) "keys.bytes-per-key"
28) (integer) 6171
29) "dataset.bytes"
30) (integer) 71304
31) "dataset.percentage"
32) "96.28388977050781"
33) "peak.percentage"
34) "82.81172943115234"
35) "allocator.allocated"
36) (integer) 2031104
37) "allocator.active"
38) (integer) 2191360
39) "allocator.resident"
40) (integer) 4595712
41) "allocator-fragmentation.ratio"
42) "1.078900933265686"
43) "allocator-fragmentation.bytes"
44) (integer) 160256
45) "allocator-rss.ratio"
46) "2.097196340560913"
47) "allocator-rss.bytes"
48) (integer) 2404352
49) "rss-overhead.ratio"
50) "2.385917901992798"
51) "rss-overhead.bytes"
52) (integer) 6369280
53) "fragmentation"
54) "11.927181243896484"
55) "fragmentation.bytes"
56) (integer) 10045664
MEMORY USAGE
Use the MEMORY USAGE command to estimate the memory consumed by a key.
Syntax:
MEMORY USAGE key [SAMPLES count]
Example:
127.0.0.1:6379> SET "" ""
OK
127.0.0.1:6379> MEMORY USAGE ""
(integer) 56
127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> MEMORY USAGE foo
(integer) 56
127.0.0.1:6379> SET foo2 mybar
OK
127.0.0.1:6379> MEMORY USAGE foo2
(integer) 64
Persistence
Redis persistence means saving the in-memory data to disk so that it survives a crash or restart of the server. Redis provides two persistence mechanisms:
- RDB (Redis DataBase Backup File)
- AOF (Append Only File)
RDB
RDB (Redis DataBase Backup File) persistence takes a snapshot of the in-memory data at intervals and saves it to disk. Under the default configuration, Redis writes an RDB file when:
- The default rule save 3600 1 300 100 60 10000 is met, i.e. the given number of writes occurs within the given time window.
- The BGSAVE command is run; Redis writes the RDB file asynchronously in the background without blocking the main thread.
- The SAVE command is run; Redis blocks the main thread until the RDB file is written.
RDB files are fast to generate and load, which makes them well suited to periodic backup and restore. However, because snapshots are taken periodically, data written since the last snapshot can be lost if Redis crashes.
Related settings:
- save "": disable RDB.
- rdbcompression yes: enable compression.
- dbfilename dump.rdb: name of the RDB file.
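The default save rule can be read as three (seconds, writes) pairs: snapshot when at least that many writes have happened within that many seconds. A minimal sketch of that check (a model of the rule's meaning, not Redis's scheduler):

```python
def should_snapshot(elapsed_seconds, changes,
                    rules=((3600, 1), (300, 100), (60, 10000))):
    """Evaluate the default `save 3600 1 300 100 60 10000` rules.

    A snapshot is due when, for any (seconds, writes) pair, at least
    `writes` changes occurred and at least `seconds` have elapsed since
    the last save.
    """
    return any(elapsed_seconds >= secs and changes >= n for secs, n in rules)

print(should_snapshot(3600, 1))    # True  (1 write in an hour)
print(should_snapshot(60, 10000))  # True  (heavy write burst)
print(should_snapshot(59, 9999))   # False (no rule matched yet)
```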
AOF
AOF (Append Only File) persistence appends every write command to a file; when Redis restarts, it replays the commands in the AOF file to restore the dataset to its pre-crash state.
AOF persistence is disabled by default. To enable it, set appendonly to yes in the configuration file and specify the AOF file name with appendfilename "appendonly.aof".
AOF supports several fsync policies:
- appendfsync always: sync every write command to the AOF file immediately.
- appendfsync everysec: sync buffered write commands to the AOF file once per second; the default policy.
- appendfsync no: let the operating system decide when to sync buffered write commands to the AOF file.
Generating and loading an AOF file is slower than an RDB file, but the AOF offers stronger durability guarantees because it records every write command; after a crash, the data is recovered by replaying them.
Because it records every write, an AOF file grows much larger than an RDB file, and it keeps every write to a key even though only the last one matters. Running the BGREWRITEAOF command rewrites the AOF so that the same final state is reproduced with the fewest possible commands.
127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started
Related settings:
- auto-aof-rewrite-percentage 100: trigger a rewrite when the AOF has grown by this percentage since the last rewrite.
- auto-aof-rewrite-min-size 64mb: minimum AOF size before a rewrite may be triggered.
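The idea behind the rewrite can be illustrated with a toy model that handles string SET/DEL only (an illustration of the compaction principle, not Redis's rewrite algorithm, which works from the in-memory dataset):

```python
def rewrite_aof(commands):
    """Toy AOF rewrite: replay the command log into a state dict,
    then emit the minimal commands that rebuild the final state."""
    state = {}
    for cmd, *args in commands:
        if cmd == "SET":
            state[args[0]] = args[1]
        elif cmd == "DEL":
            state.pop(args[0], None)
    return [("SET", k, v) for k, v in state.items()]

log = [("SET", "num", "1"), ("SET", "num", "2"),
       ("SET", "tmp", "x"), ("DEL", "tmp"), ("SET", "num", "3")]
print(rewrite_aof(log))  # [('SET', 'num', '3')] -- five commands become one
```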
Note that on startup Redis prefers the AOF file for recovery; the RDB file is loaded only if the AOF file is absent or fails to load.
RDB versus AOF:
| | RDB | AOF |
|---|---|---|
| Persistence method | Periodic snapshot of the whole dataset | Log of every write command |
| Data completeness | Incomplete; writes between snapshots are lost | Relatively complete, depending on the fsync policy |
| File size | Compressed, small | Command log, much larger |
| Recovery speed | Fast | Slow |
| Recovery priority | Low, since its data is less complete than AOF | High, since its data is more complete |
| Resource usage | High; heavy CPU and memory during snapshots | Low, but AOF rewrites consume heavy CPU and memory |
| Use case | Minutes of data loss tolerable; faster startup desired | Scenarios with high data-safety requirements |
Recommendations:
- For Redis instances used purely as caches, avoid enabling persistence at all.
- When persistence is needed, disable RDB and use AOF.
- Use a scheduled script to take RDB snapshots on a replica as backups.
- Set sensible rewrite thresholds to avoid frequent BGREWRITEAOF runs.
- Set no-appendfsync-on-rewrite yes to skip AOF fsync during rewrites and avoid AOF-induced blocking.
Replication
In Redis master-replica mode, a group of Redis servers consists of one master and one or more replicas (slaves). The master handles write operations and propagates the data to the replicas; the replicas handle read operations.
When a client sends a write to the master, the master applies it in memory and replicates the operation to one or more replicas. Replicas periodically send heartbeat messages to the master to confirm they are still online and receiving writes; if the master receives no heartbeat from a replica for a while, it considers that replica offline and stops sending it writes.
One benefit of master-replica mode is read-write splitting and load balancing. Because the master handles only writes, it can devote more resources to them and improve write performance; the replicas handle only reads and can likewise devote their resources to reads. And since there can be several replicas, read traffic can be balanced across them, improving overall system throughput.
Another benefit is data backup and failure recovery. Since replicas continuously copy data from the master, backups can be taken on a replica, and if the master fails, a replica can be promoted to master. With multiple replicas there are multiple copies of the data, which improves reliability and availability.
Note that master-replica mode also has limitations. If the master fails, the replicas can no longer replicate, and data lost on the master is lost on the replicas as well. Also, because replication is asynchronous, the data on master and replicas can be briefly inconsistent.
Deployment
The environment consists of one master and two replicas.
| No. | HostName | IP | Port | Role |
|---|---|---|---|---|
| 1 | master | 192.168.92.128 | 6379 | Master |
| 2 | replica1 | 192.168.92.129 | 6379 | Replica |
| 3 | replica2 | 192.168.92.130 | 6379 | Replica |
Install Redis on each host as described earlier, making sure RDB is enabled and AOF is disabled, and remember to disable the host firewall or open the relevant ports.
Adjust the master's Redis settings:
[root@master ~]# vi redis-stable/redis.conf
bind 127.0.0.1 -::1 192.168.92.128
protected-mode no
[root@master ~]# systemctl restart redis
Adjust the replicas' Redis settings:
[root@replica1 ~]# vi redis-stable/redis.conf
bind 127.0.0.1 -::1 192.168.92.129
protected-mode no
replicaof 192.168.92.128 6379
[root@replica1 ~]# systemctl restart redis
[root@replica2 ~]# vi redis-stable/redis.conf
bind 127.0.0.1 -::1 192.168.92.130
protected-mode no
replicaof 192.168.92.128 6379
[root@replica2 ~]# systemctl restart redis
Then check the replication status on the master:
[root@master ~]# redis-cli
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.92.129,port=6379,state=online,offset=126,lag=1
slave1:ip=192.168.92.130,port=6379,state=online,offset=126,lag=0
master_failover_state:no-failover
master_replid:8173d2bca386dbba59c3979b4660e06202860cd4
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:126
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:126
Write data on the master:
[root@master ~]# redis-cli
127.0.0.1:6379> set num 123
OK
Read the data back on a replica:
[root@replica1 ~]# redis-cli
127.0.0.1:6379> get num
"123"
Sentinel
Redis Sentinel is the officially recommended high-availability solution for Redis; its main job is automatic failover for a master-replica deployment.
A sentinel is a standalone process that monitors the masters and replicas in the deployment. It continuously checks their state, and when it detects that the master has failed, it automatically promotes a replica to master and updates client configuration so that clients connect to the new master.
The main sentinel capabilities are:
- Monitoring: sentinels continuously check the state of the master and replicas, including master liveness and replication status.
- Automatic failover: when the master fails, sentinels promote a replica to master and update client configuration.
- Notification: sentinels can report failover results to clients and to other sentinel processes so that they can update their configuration in time.
It is usually advisable to run several sentinel processes for availability and fault tolerance; they cooperate to monitor the deployment and perform failover when needed. For reliability, the sentinel processes should also run on different hosts.
Sentinels track service health with a heartbeat, sending a ping command to every instance in the deployment once per second:
- Subjectively down: a sentinel considers an instance subjectively down if it does not respond within the configured time.
- Objectively down: once at least the configured quorum of sentinels consider an instance subjectively down, it is marked objectively down. The quorum is best set to more than half of the number of sentinel instances.
Once the master is found to be down, Sentinel must pick one of the replicas as the new master, using the following criteria:
- First it checks how long each replica has been disconnected from the master; replicas disconnected longer than `down-after-milliseconds * 10` are excluded.
- Then it compares the replicas' `slave-priority`: the lower the value, the higher the priority; a value of 0 means the replica never takes part in the election.
- If the `slave-priority` values are equal, it compares the replication `offset`: the larger the offset, the fresher the data and the higher the priority.
- Finally it compares the replicas' run IDs: the smaller the ID, the higher the priority.
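The selection rules above can be sketched as a filter plus a sort key. This is an illustration under the stated rules, not Sentinel's source code; the record fields are hypothetical:

```python
DOWN_AFTER_MS = 5000  # down-after-milliseconds from sentinel.conf

def pick_new_master(replicas):
    """Pick the replica Sentinel would prefer as the new master.

    Each replica is a dict with:
      disconnect_ms - how long it has been disconnected from the master
      priority      - slave-priority (0 = never elect)
      offset        - replication offset (larger = fresher data)
      runid         - the replica's run ID (smaller wins as tie-breaker)
    """
    # Rule 1: drop replicas disconnected too long, and priority-0 replicas.
    eligible = [r for r in replicas
                if r["disconnect_ms"] <= DOWN_AFTER_MS * 10
                and r["priority"] != 0]
    # Rules 2-4: lowest priority value, then largest offset, then smallest runid.
    return min(eligible,
               key=lambda r: (r["priority"], -r["offset"], r["runid"]))

replicas = [
    {"disconnect_ms": 1000, "priority": 100, "offset": 970153, "runid": "bbb"},
    {"disconnect_ms": 1000, "priority": 100, "offset": 970010, "runid": "aaa"},
    {"disconnect_ms": 99999, "priority": 100, "offset": 999999, "runid": "ccc"},
]
# The long-disconnected replica is excluded; of the remaining two,
# the larger offset wins even though its runid sorts later.
print(pick_new_master(replicas)["runid"])  # prints "bbb"
```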
After a replica has been chosen as the new master, failover proceeds as follows:
- Sentinel sends `slaveof no one` to the chosen replica, turning it into a master.
- Sentinel sends `slaveof masterIP masterPort` to all other replicas, making them replicas of the new master and starting synchronization from it.
- Finally, Sentinel marks the failed node as a replica; when it recovers, it automatically becomes a replica of the new master.
Note that although Sentinel improves the availability and reliability of a master-replica deployment, it has limitations and drawbacks. For example, brief data inconsistency or unavailability can occur during failover, and configuring and maintaining Sentinel adds cost and complexity.
Deployment
Build a 3-node Sentinel cluster to monitor the master-replica setup above.
No. | HostName | IP | Port |
---|---|---|---|
1 | master | 192.168.92.128 | 26379 |
2 | replica1 | 192.168.92.129 | 26379 |
3 | replica2 | 192.168.92.130 | 26379 |
Adjust the Sentinel configuration on all three nodes and start them:
[root@master ~]# vi redis-stable/sentinel.conf
daemonize yes
logfile /var/log/sentinel.log
sentinel announce-ip 192.168.92.128
sentinel monitor mymaster 192.168.92.128 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
[root@master ~]# redis-sentinel redis-stable/sentinel.conf
[root@replica1 ~]# vi redis-stable/sentinel.conf
daemonize yes
logfile /var/log/sentinel.log
sentinel announce-ip 192.168.92.129
sentinel monitor mymaster 192.168.92.128 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
[root@replica1 ~]# redis-sentinel redis-stable/sentinel.conf
[root@replica2 ~]# vi redis-stable/sentinel.conf
daemonize yes
logfile /var/log/sentinel.log
sentinel announce-ip 192.168.92.130
sentinel monitor mymaster 192.168.92.128 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
[root@replica2 ~]# redis-sentinel redis-stable/sentinel.conf
Here, `sentinel monitor mymaster 192.168.92.128 6379 2` identifies the master to monitor:
- `mymaster`: a custom name for the master
- `192.168.92.128 6379`: the master's address and port
- `2`: the quorum value
Once started, check the status:
[root@master ~]# redis-cli -p 26379
127.0.0.1:26379> INFO sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_tilt_since_seconds:-1
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.92.128:6379,slaves=2,sentinels=3
Testing
Stop Redis on the master and watch the Sentinel log:
[root@master ~]# systemctl stop redis
[root@master ~]# tail -f /var/log/sentinel.log
1547:X 08 Oct 2023 11:18:35.374 # +sdown master mymaster 192.168.92.128 6379
1547:X 08 Oct 2023 11:18:35.466 # +odown master mymaster 192.168.92.128 6379 #quorum 2/2
1547:X 08 Oct 2023 11:18:35.466 # +new-epoch 1
1547:X 08 Oct 2023 11:18:35.466 # +try-failover master mymaster 192.168.92.128 6379
1547:X 08 Oct 2023 11:18:35.475 * Sentinel new configuration saved on disk
1547:X 08 Oct 2023 11:18:35.475 # +vote-for-leader 336e24794bc8d2b52610dd2a5a7c1a8ba0db9c1b 1
1547:X 08 Oct 2023 11:18:35.479 * 663bc4788e48214d657da6b25a943d7c49c28b1c voted for 8bfdbc77e4e88cb19dac5d01f1b3687602a9322c 1
1547:X 08 Oct 2023 11:18:35.479 * 8bfdbc77e4e88cb19dac5d01f1b3687602a9322c voted for 8bfdbc77e4e88cb19dac5d01f1b3687602a9322c 1
1547:X 08 Oct 2023 11:18:36.318 # +config-update-from sentinel 8bfdbc77e4e88cb19dac5d01f1b3687602a9322c 192.168.92.130 26379 @ mymaster 192.168.92.128 6379
1547:X 08 Oct 2023 11:18:36.319 # +switch-master mymaster 192.168.92.128 6379 192.168.92.129 6379
1547:X 08 Oct 2023 11:18:36.321 * +slave slave 192.168.92.130:6379 192.168.92.130 6379 @ mymaster 192.168.92.129 6379
1547:X 08 Oct 2023 11:18:36.321 * +slave slave 192.168.92.128:6379 192.168.92.128 6379 @ mymaster 192.168.92.129 6379
1547:X 08 Oct 2023 11:18:36.342 * Sentinel new configuration saved on disk
1547:X 08 Oct 2023 11:19:06.366 # +sdown slave 192.168.92.128:6379 192.168.92.128 6379 @ mymaster 192.168.92.129 6379
One of the replicas has been chosen as the new master:
[root@replica1 ~]# redis-cli
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.92.130,port=6379,state=online,offset=970010,lag=0
master_failover_state:no-failover
master_replid:31ba4f06b05fb69109da0553d057eff66a2c4462
master_replid2:0032f8ac0adc55a2e6525c94c8c5daaa70a9ad9f
master_repl_offset:970153
second_repl_offset:870102
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:832527
repl_backlog_histlen:137627
When the original master is started again, it is converted into a replica:
[root@master ~]# systemctl start redis
[root@master ~]# redis-cli
127.0.0.1:6379> INFO replication
# Replication
role:slave
master_host:192.168.92.129
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_read_repl_offset:1029770
slave_repl_offset:1029770
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:31ba4f06b05fb69109da0553d057eff66a2c4462
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1029770
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1016794
repl_backlog_histlen:12977
Cluster
Redis Cluster is the official distributed solution for Redis, designed to provide horizontal scaling with high availability and high concurrency.
In Redis Cluster, the keyspace is divided into 16384 slots, each responsible for a portion of the data. Slots can be assigned to different Redis instances, sharding the data so that each instance stores only part of it and the cluster can make full use of the capacity of all its hosts.
Redis Cluster uses a decentralized architecture in which every node is connected to every other node. Nodes exchange state information over the Gossip protocol. When a node fails, the cluster performs automatic failover, assigning the failed node's slots to other healthy nodes to keep the service available.
Every node in the cluster can serve both reads and writes, increasing overall concurrency. Redis Cluster also supports online scaling in and out: nodes can be added or removed dynamically without interrupting the service.
Deployment
Build a 3-master, 3-replica Redis Cluster, the minimum required to create a cluster.
No. | HostName | IP | Port | Role |
---|---|---|---|---|
1 | master1 | 192.168.92.131 | 6379 | Master |
2 | master2 | 192.168.92.132 | 6379 | Master |
3 | master3 | 192.168.92.133 | 6379 | Master |
4 | replica1 | 192.168.92.141 | 6379 | Replica |
5 | replica2 | 192.168.92.142 | 6379 | Replica |
6 | replica3 | 192.168.92.143 | 6379 | Replica |
Install Redis on each host as described in the earlier chapters. Make sure RDB persistence is enabled and AOF is disabled, and remember to stop the host firewall or open the relevant ports.
Adjust the configuration on all nodes, then start Redis:
# vi redis-stable/redis.conf
bind * -::*
protected-mode no
daemonize yes
logfile /var/log/redis.log
cluster-enabled yes
cluster-config-file /root/redis-stable/nodes.conf
# systemctl restart redis
Create the cluster:
[root@master1 ~]# redis-cli --cluster create 192.168.92.131:6379 192.168.92.132:6379 192.168.92.133:6379 192.168.92.141:6379 192.168.92.142:6379 192.168.92.143:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.92.142:6379 to 192.168.92.131:6379
Adding replica 192.168.92.143:6379 to 192.168.92.132:6379
Adding replica 192.168.92.141:6379 to 192.168.92.133:6379
M: f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379
slots:[0-5460] (5461 slots) master
M: 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379
slots:[5461-10922] (5462 slots) master
M: 7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379
slots:[10923-16383] (5461 slots) master
S: bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379
replicates 7556c643524152a8ee59f75deed1006cfc186d75
S: cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379
replicates f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e
S: fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379
replicates 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 192.168.92.131:6379)
M: f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379
slots: (0 slots) slave
replicates f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e
M: 7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379
slots: (0 slots) slave
replicates 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182
S: bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379
slots: (0 slots) slave
replicates 7556c643524152a8ee59f75deed1006cfc186d75
M: 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Here:
- `create`: creates the cluster
- `--cluster-replicas`: the number of replicas per master
If you hit the following error:
[ERR] Node 192.168.92.131:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
flush the database on that node before recreating the cluster (if the node already knows other nodes, `redis-cli cluster reset` may also be needed):
# redis-cli flushdb
OK
Inspect the cluster:
[root@master1 ~]# redis-cli cluster nodes
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696816309000 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696816307000 1 connected 0-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696816310000 3 connected 10923-16383
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 slave 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 0 1696816310683 2 connected
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696816309000 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 master - 0 1696816311690 2 connected 5461-10922
The three masters own slots 0-5460, 5461-10922, and 10923-16383 respectively.
When connecting to the cluster with the `redis-cli` client, add the `-c` flag:
[root@master1 ~]# redis-cli -c
127.0.0.1:6379> set a 1
-> Redirected to slot [15495] located at 192.168.92.133:6379
OK
The key `a` hashed to slot 15495. To make keys belonging to the same business land in the same slot, wrap a shared tag in `{}`; only the part inside `{}` is hashed.
[root@master1 ~]# redis-cli -c
127.0.0.1:6379> set a 1
-> Redirected to slot [15495] located at 192.168.92.133:6379
OK
192.168.92.133:6379> set {a}b 2
OK
192.168.92.133:6379> set {a}c 3
OK
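The slot assignment can be reproduced by hand: Redis Cluster computes `CRC16(key) mod 16384`, using the XMODEM CRC16 variant, and when a key contains a non-empty `{...}` hash tag, only the tag's content is hashed. A self-contained sketch:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only non-empty tags count
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(keyslot("a"))     # 15495, matching the redirect above
print(keyslot("{a}b"))  # 15495, same slot thanks to the hash tag
print(keyslot("{a}c"))  # 15495
```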
Failover
When any master fails, its replica is automatically promoted to master.
Cluster state before the failure:
[root@master1 ~]# redis-cli cluster nodes
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696819565000 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696819562000 1 connected 0-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696819564430 3 connected 10923-16383
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 slave 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 0 1696819566464 2 connected
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696819565451 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 master - 0 1696819564000 2 connected 5461-10922
Shut down one of the masters:
[root@master2 ~]# redis-cli shutdown
Check the cluster state:
[root@master1 ~]# redis-cli cluster nodes
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696819672855 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696819670000 1 connected 0-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696819672000 3 connected 10923-16383
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 master - 0 1696819673874 7 connected 5461-10922
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696819671826 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 master,fail - 1696819638245 1696819634190 2 disconnected
After master 192.168.92.132 was shut down, its replica 192.168.92.143 became a master.
Start Redis on node 192.168.92.132 again:
[root@master2 ~]# systemctl start redis
Check the cluster state:
[root@master1 ~]# redis-cli cluster nodes
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696819876270 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696819875000 1 connected 0-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696819873000 3 connected 10923-16383
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 master - 0 1696819874000 7 connected 5461-10922
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696819875269 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 slave fd3f2e44a931639cec632f5427312e4fe80bf302 0 1696819873205 7 connected
Node 192.168.92.132 has become a replica of 192.168.92.143.
Adding Nodes
Add one master and one replica to the cluster.
No. | HostName | IP | Port | Role |
---|---|---|---|---|
1 | master4 | 192.168.92.134 | 6379 | Master |
2 | replica4 | 192.168.92.144 | 6379 | Replica |
Install Redis on the new nodes as described in the earlier chapters. Make sure RDB persistence is enabled and AOF is disabled, and remember to stop the host firewall or open the relevant ports.
Adjust the configuration on these two nodes, then start Redis:
# vi redis-stable/redis.conf
bind * -::*
protected-mode no
daemonize yes
logfile /var/log/redis.log
cluster-enabled yes
cluster-config-file /root/redis-stable/nodes.conf
# systemctl restart redis
Run the following command to add the master:
[root@master1 ~]# redis-cli --cluster add-node 192.168.92.134:6379 192.168.92.131:6379
>>> Adding node 192.168.92.134:6379 to cluster 192.168.92.131:6379
>>> Performing Cluster Check (using node 192.168.92.131:6379)
M: f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379
slots: (0 slots) slave
replicates f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e
M: 7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379
slots: (0 slots) slave
replicates 7556c643524152a8ee59f75deed1006cfc186d75
S: 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379
slots: (0 slots) slave
replicates fd3f2e44a931639cec632f5427312e4fe80bf302
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Getting functions from cluster
>>> Send FUNCTION LIST to 192.168.92.134:6379 to verify there is no functions in it
>>> Send FUNCTION RESTORE to 192.168.92.134:6379
>>> Send CLUSTER MEET to node 192.168.92.134:6379 to make it join the cluster.
[OK] New node added correctly.
Inspect the cluster:
[root@master1 ~]# redis-cli cluster nodes
cb0d86df6dfbd010ce9ff560bab871af87cadf19 192.168.92.134:6379@16379 master - 0 1696823418000 0 connected
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696823420056 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696823416000 1 connected 0-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696823418033 3 connected 10923-16383
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 master - 0 1696823419000 7 connected 5461-10922
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696823419044 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 slave fd3f2e44a931639cec632f5427312e4fe80bf302 0 1696823418000 7 connected
Run the following command to add the replica:
[root@master1 ~]# redis-cli --cluster add-node 192.168.92.144:6379 192.168.92.131:6379 --cluster-slave --cluster-master-id cb0d86df6dfbd010ce9ff560bab871af87cadf19
>>> Adding node 192.168.92.144:6379 to cluster 192.168.92.131:6379
>>> Performing Cluster Check (using node 192.168.92.131:6379)
M: f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: cb0d86df6dfbd010ce9ff560bab871af87cadf19 192.168.92.134:6379
slots: (0 slots) master
S: cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379
slots: (0 slots) slave
replicates f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e
M: 7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379
slots: (0 slots) slave
replicates 7556c643524152a8ee59f75deed1006cfc186d75
S: 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379
slots: (0 slots) slave
replicates fd3f2e44a931639cec632f5427312e4fe80bf302
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.92.144:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 192.168.92.134:6379.
[OK] New node added correctly.
Inspect the cluster:
[root@master1 ~]# redis-cli cluster nodes
cb0d86df6dfbd010ce9ff560bab871af87cadf19 192.168.92.134:6379@16379 master - 0 1696823716935 0 connected
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696823718000 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696823716000 1 connected 0-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696823717000 3 connected 10923-16383
da600f6ce82617e6e7501002d247e3bfbf947431 192.168.92.144:6379@16379 slave cb0d86df6dfbd010ce9ff560bab871af87cadf19 0 1696823717944 0 connected
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 master - 0 1696823717000 7 connected 5461-10922
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696823719965 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 slave fd3f2e44a931639cec632f5427312e4fe80bf302 0 1696823719000 7 connected
Migrating Slots
After adding the nodes, the new master owns 0 slots.
[root@master1 ~]# redis-cli --cluster info 192.168.92.131:6379
192.168.92.131:6379 (f8e6be12...) -> 0 keys | 5461 slots | 1 slaves.
192.168.92.134:6379 (cb0d86df...) -> 0 keys | 0 slots | 1 slaves.
192.168.92.133:6379 (7556c643...) -> 3 keys | 5461 slots | 1 slaves.
192.168.92.143:6379 (fd3f2e44...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 3 keys in 4 masters.
0.00 keys per slot on average.
Use `reshard` to move slots from the existing masters to the new master:
[root@master1 ~]# redis-cli --cluster reshard 192.168.92.131:6379
>>> Performing Cluster Check (using node 192.168.92.131:6379)
M: f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: cb0d86df6dfbd010ce9ff560bab871af87cadf19 192.168.92.134:6379
slots: (0 slots) master
1 additional replica(s)
S: cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379
slots: (0 slots) slave
replicates f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e
M: 7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: da600f6ce82617e6e7501002d247e3bfbf947431 192.168.92.144:6379
slots: (0 slots) slave
replicates cb0d86df6dfbd010ce9ff560bab871af87cadf19
M: fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379
slots: (0 slots) slave
replicates 7556c643524152a8ee59f75deed1006cfc186d75
S: 8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379
slots: (0 slots) slave
replicates fd3f2e44a931639cec632f5427312e4fe80bf302
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000
What is the receiving node ID? cb0d86df6dfbd010ce9ff560bab871af87cadf19
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all
Check the migration result:
[root@master1 ~]# redis-cli --cluster info 192.168.92.131:6379
192.168.92.131:6379 (f8e6be12...) -> 0 keys | 4795 slots | 1 slaves.
192.168.92.134:6379 (cb0d86df...) -> 0 keys | 1999 slots | 1 slaves.
192.168.92.133:6379 (7556c643...) -> 3 keys | 4795 slots | 1 slaves.
192.168.92.143:6379 (fd3f2e44...) -> 0 keys | 4795 slots | 1 slaves.
[OK] 3 keys in 4 masters.
0.00 keys per slot on average.
Alternatively, use `rebalance` to even out the slot counts across the masters:
[root@master1 ~]# redis-cli --cluster rebalance 192.168.92.131:6379
>>> Performing Cluster Check (using node 192.168.92.131:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 4 nodes. Total weight = 4.00
Moving 699 slots from 192.168.92.143:6379 to 192.168.92.134:6379
Moving 699 slots from 192.168.92.133:6379 to 192.168.92.134:6379
Moving 699 slots from 192.168.92.131:6379 to 192.168.92.134:6379
If the new master has not been assigned any slots yet, add the `--cluster-use-empty-masters` flag.
Check the result:
[root@master1 ~]# redis-cli --cluster info 192.168.92.131:6379
192.168.92.131:6379 (f8e6be12...) -> 0 keys | 4096 slots | 1 slaves.
192.168.92.134:6379 (cb0d86df...) -> 0 keys | 4096 slots | 1 slaves.
192.168.92.133:6379 (7556c643...) -> 3 keys | 4096 slots | 1 slaves.
192.168.92.143:6379 (fd3f2e44...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 3 keys in 4 masters.
0.00 keys per slot on average.
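The 4096-slot split follows from simple arithmetic: 16384 slots divided as evenly as possible among the masters. A sketch of that arithmetic (not of redis-cli's actual migration planner):

```python
def balanced_slot_counts(num_masters: int, total_slots: int = 16384):
    """Split the slot space as evenly as possible: the first
    `total_slots % num_masters` masters get one extra slot."""
    base, extra = divmod(total_slots, num_masters)
    return [base + 1 if i < extra else base for i in range(num_masters)]

print(balanced_slot_counts(4))  # [4096, 4096, 4096, 4096]
print(balanced_slot_counts(3))  # [5462, 5461, 5461]
```

With 4 masters the split is exact; with 3 masters one master ends up with 5462 slots, matching the original cluster layout.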
Removing Nodes
A replica can be removed directly:
[root@master1 ~]# redis-cli cluster nodes
cb0d86df6dfbd010ce9ff560bab871af87cadf19 192.168.92.134:6379@16379 master - 0 1696831123041 8 connected 0-1364 5461-6826 10923-12287
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696831119000 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696831120000 1 connected 1365-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696831117000 3 connected 12288-16383
da600f6ce82617e6e7501002d247e3bfbf947431 192.168.92.144:6379@16379 slave cb0d86df6dfbd010ce9ff560bab871af87cadf19 0 1696831121021 8 connected
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 master - 0 1696831120000 7 connected 6827-10922
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696831122032 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 slave fd3f2e44a931639cec632f5427312e4fe80bf302 0 1696831120010 7 connected
[root@master1 ~]# redis-cli --cluster del-node 192.168.92.131:6379 da600f6ce82617e6e7501002d247e3bfbf947431
>>> Removing node da600f6ce82617e6e7501002d247e3bfbf947431 from cluster 192.168.92.131:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@master1 ~]# redis-cli cluster nodes
cb0d86df6dfbd010ce9ff560bab871af87cadf19 192.168.92.134:6379@16379 master - 0 1696831217240 8 connected 0-1364 5461-6826 10923-12287
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696831214000 1 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696831211000 1 connected 1365-5460
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696831213199 3 connected 12288-16383
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 master - 0 1696831216230 7 connected 6827-10922
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696831214209 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 slave fd3f2e44a931639cec632f5427312e4fe80bf302 0 1696831215217 7 connected
A master that still owns slots cannot be removed directly:
[root@master1 ~]# redis-cli --cluster del-node 192.168.92.131:6379 cb0d86df6dfbd010ce9ff560bab871af87cadf19
>>> Removing node cb0d86df6dfbd010ce9ff560bab871af87cadf19 from cluster 192.168.92.131:6379
[ERR] Node 192.168.92.134:6379 is not empty! Reshard data away and try again.
First move its slots to other masters:
[root@master1 ~]# redis-cli --cluster reshard 192.168.92.131:6379 \
> --cluster-from cb0d86df6dfbd010ce9ff560bab871af87cadf19 \
> --cluster-to f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e \
> --cluster-slots 4096 \
> --cluster-yes
[root@master1 ~]# redis-cli --cluster info 192.168.92.131:6379
192.168.92.131:6379 (f8e6be12...) -> 0 keys | 8192 slots | 2 slaves.
192.168.92.133:6379 (7556c643...) -> 3 keys | 4096 slots | 1 slaves.
192.168.92.143:6379 (fd3f2e44...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 3 keys in 3 masters.
0.00 keys per slot on average.
Then remove it:
[root@master1 ~]# redis-cli --cluster del-node 192.168.92.131:6379 cb0d86df6dfbd010ce9ff560bab871af87cadf19
>>> Removing node cb0d86df6dfbd010ce9ff560bab871af87cadf19 from cluster 192.168.92.131:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@master1 ~]# redis-cli cluster nodes
cd3ebef2e8b5bba46833f50257568f703e65271b 192.168.92.142:6379@16379 slave f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 0 1696831799528 9 connected
f8e6be12bf4971b1badc961f6e1ea3c931ff2c0e 192.168.92.131:6379@16379 myself,master - 0 1696831798000 9 connected 0-6826 10923-12287
7556c643524152a8ee59f75deed1006cfc186d75 192.168.92.133:6379@16379 master - 0 1696831796000 3 connected 12288-16383
fd3f2e44a931639cec632f5427312e4fe80bf302 192.168.92.143:6379@16379 master - 0 1696831797507 7 connected 6827-10922
bdaa52d5827e02ec88699c46626797514bf093bd 192.168.92.141:6379@16379 slave 7556c643524152a8ee59f75deed1006cfc186d75 0 1696831800554 3 connected
8a92ad6e8fbf8d06b5a777fb0f9368c0f1547182 192.168.92.132:6379@16379 slave fd3f2e44a931639cec632f5427312e4fe80bf302 0 1696831799000 7 connected
The masters' slot counts are now uneven; use `rebalance` to balance them:
[root@master1 ~]# redis-cli --cluster rebalance 192.168.92.131:6379
>>> Performing Cluster Check (using node 192.168.92.131:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 3 nodes. Total weight = 3.00
Moving 1366 slots from 192.168.92.131:6379 to 192.168.92.133:6379
Moving 1365 slots from 192.168.92.131:6379 to 192.168.92.143:6379
[root@master1 ~]# redis-cli --cluster info 192.168.92.131:6379
192.168.92.131:6379 (f8e6be12...) -> 0 keys | 5461 slots | 1 slaves.
192.168.92.133:6379 (7556c643...) -> 3 keys | 5462 slots | 1 slaves.
192.168.92.143:6379 (fd3f2e44...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 3 keys in 3 masters.
0.00 keys per slot on average.