Why learn this project
The traditional webserver project is a dime a dozen; with nothing but a webserver on your resume, you are unlikely to land a job. This article shares a more advanced C++/Linux project: Qedis, a from-scratch imitation of Redis. Redis is core, must-have knowledge for C++ Linux development, and studying the open-source Qedis project not only deepens your understanding of Redis but also levels up your programming skills: a C++11 thread pool that accepts any callable as a task, a C++ encapsulation of the Reactor network model, a time-wheel timer, implementations of the Redis data structures, and more.
1 Project Introduction

Many of the Qedis build instructions found online are wrong.
C++ backend development leans toward infrastructure services; positions involving, say, distributed storage essentially all use C++.
If you want a distributed-storage role out of school, you need a deep grasp of MySQL/Redis; and even outside distributed storage, Redis is mandatory for C++ backend work. So consider following the Qedis project to imitate Redis: it sets you apart from everyone else who wrote a webserver.

Qedis is a Redis server implemented in C++11. It supports clustering and uses LevelDB as its persistent storage. The project aims to offer a high-performance, scalable in-memory database for applications that need fast data access and processing.
Qedis fits any scenario that demands high-performance data storage and access, for example:
- Real-time analytics: high throughput and low latency make Qedis a good fit for real-time data analysis.
- Caching: Qedis can act as a cache layer to speed up application data access.
- Message queues: combined with Redis's publish/subscribe functionality, Qedis can back an efficient message queue.
2 Project Quick Start
Make sure a C++11-capable compiler is installed on the system.
2.1 Install the LevelDB library
2.1.0 About LevelDB
LevelDB is a fast key-value storage library developed by Google to provide high-performance persistent storage. It is commonly used as the underlying database engine for all kinds of systems, including distributed databases, blockchains, and distributed file systems.
2.1.1 Download LevelDB
git clone https://gitclone.com/github.com/google/leveldb
2.1.2 Download googletest and benchmark (needed to build LevelDB)
1. Enter the downloaded leveldb directory
cd leveldb
2. Enter the third_party directory
cd third_party
3. Download both dependencies
git clone https://gitclone.com/github.com/google/googletest
git clone https://gitclone.com/github.com/google/benchmark
2.1.3 Build and install
Go back to the leveldb directory:
cd ../
mkdir build && cd build
# Step 4: generate the build files with CMake; Debug mode is used here as an example
cmake -DCMAKE_BUILD_TYPE=Debug -DBUILD_SHARED_LIBS=1 ..
make
sudo make install
2.1.4 Refresh the linker cache
sudo ldconfig
2.1.5 C++ test example
To test LevelDB, create a file named hello.cc with the following content:
#include <assert.h>
#include <string.h>
#include <leveldb/db.h>
#include <iostream>
using namespace leveldb;
int main() {
    leveldb::DB* db;
    leveldb::Options options;
    options.create_if_missing = true;

    // Open a database, creating it if it does not exist
    leveldb::Status status = leveldb::DB::Open(options, "/tmp/testdb", &db);
    assert(status.ok());

    // Insert a key-value pair
    status = db->Put(leveldb::WriteOptions(), "hello", "LevelDB");
    assert(status.ok());

    // Read the key-value pair back
    std::string value;
    status = db->Get(leveldb::ReadOptions(), "hello", &value);
    assert(status.ok());
    std::cout << value << std::endl;

    delete db;
    return 0;
}
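One caveat with the example above: assert() compiles away when NDEBUG is defined (typical of Release builds), so failures would pass silently. A more explicit check, using only the leveldb::Status API already shown, could replace each assert:

if (!status.ok()) {
    std::cerr << status.ToString() << std::endl; // prints LevelDB's error description
    return 1;
}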
Compile it:
g++ hello.cc -o hello -lpthread -lleveldb
Run it:
./hello
Output:
LevelDB
2.2 Download and build Qedis
git clone https://github.com/loveyacper/Qedis.git
cd Qedis
mkdir build
cd build
# Building in Debug mode makes it easier to debug the source later
cmake -DCMAKE_BUILD_TYPE=Debug ..
make
The compiled executable lands in the Qedis/bin directory, so switch there:
lqf@ubuntu:~/long/Qedis/build$ cd ../bin/
2.3 Start and test Qedis
Make sure Redis itself is installed beforehand, because testing requires redis-cli; but do not start redis-server, since it would occupy port 6379, the same port Qedis listens on.
2.3.1 Start Qedis the default way
lqf@ubuntu:~/long/Qedis/bin$ ./qedis_server
It prints:
_____ _____ _____ _ _____
/ _ \ | ____| | _ \ | | / ___/
| | | | | |__ | | | | | | | |___ Qedis(1.0.0) 64 bits, another redis written in C++11
| | | | | __| | | | | | | \___ \ Port: 6379
| |_| |_ | |___ | |_| | | | ___| | Author: Bert Young
\_______| |_____| |_____/ |_| /_____/ https://github.com/loveyacper/Qedis
start log thread
2024-11-27[14:49:20.419][USR]:Success: listen on (127.0.0.1:6379)
Test it from the redis-cli console:
# Start the redis-cli console
lqf@ubuntu:~/long/Qedis/bin$ redis-cli
# Set a key
127.0.0.1:6379> set key qedis
OK
# Get the key back; works as expected
127.0.0.1:6379> get key
"qedis"
127.0.0.1:6379>
2.3.2 Start Qedis with a config file
To use LevelDB for persistence, edit the config file Qedis/qedis.conf and set backend to 1:
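The relevant lines should look roughly like the sketch below. Only backend is confirmed by this article; backendpath and backendhz are assumed option names inferred from the g_config.backendPath / g_config.backendHz fields read in QStore::InitDumpBackends (section 6), so verify them against your qedis.conf:

# 0: in-memory only (default); 1: persist to LevelDB
backend 1
# path prefix for the per-DB LevelDB dumps (assumed name; yields dump0..dump15)
backendpath dump
# dump timer frequency in Hz (assumed name)
backendhz 10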

Then restart qedis_server, minding your config file path:
./qedis_server ~/long/Qedis/qedis.conf
It prints:
Load libqedismodule.dylib failed because runtime error
Load libnotexist.dylib failed because runtime error
(the ASCII-art startup banner prints again, as above)
start log thread
2024-11-28[18:32:21.938][USR]:Success: listen on (127.0.0.1:6379)
2024-11-28[18:32:21.946][USR]:Open leveldb dump0
2024-11-28[18:32:21.951][USR]:Open leveldb dump1
2024-11-28[18:32:21.957][USR]:Open leveldb dump2
2024-11-28[18:32:21.961][USR]:Open leveldb dump3
2024-11-28[18:32:21.967][USR]:Open leveldb dump4
2024-11-28[18:32:21.971][USR]:Open leveldb dump5
2024-11-28[18:32:21.976][USR]:Open leveldb dump6
2024-11-28[18:32:21.980][USR]:Open leveldb dump7
2024-11-28[18:32:21.984][USR]:Open leveldb dump8
2024-11-28[18:32:21.989][USR]:Open leveldb dump9
2024-11-28[18:32:21.994][USR]:Open leveldb dump10
2024-11-28[18:32:21.999][USR]:Open leveldb dump11
2024-11-28[18:32:22.004][USR]:Open leveldb dump12
2024-11-28[18:32:22.009][USR]:Open leveldb dump13
2024-11-28[18:32:22.014][USR]:Open leveldb dump14
2024-11-28[18:32:22.019][USR]:Open leveldb dump15
3 How to imitate Redis
- Get familiar with how Redis works. This does not mean understanding every line of its code; the recommended book is 《Redis设计与实现》 (Redis Design and Implementation).
- Get familiar with LevelDB, again not line by line; first understand how to use it. I have compiled LevelDB materials; add WeChat laoliao6668 to get them.
- Get familiar with the Qedis source and imitate Redis by following it. Early on it is fine to copy the Qedis source outright; what follows is my walkthrough of the Qedis control flow.
4 Qedis framework walkthrough
This is the classic routine for debugging any open-source server project; it is not specific to the Qedis source.
4.0 Start the server under gdb
Launch qedis_server under gdb:
gdb ./qedis_server
If a config file is needed, set the program arguments after gdb starts so it gets loaded:
(gdb) set args ../qedis.conf
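Equivalently, gdb can take the program arguments at launch time:

gdb --args ./qedis_server ../qedis.conf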
4.1 Which file holds the main function
4.1.1 Locate the source file containing main
Set a breakpoint on main:
(gdb) b main
Breakpoint 1 at 0x5f60: file /home/lqf/long/Qedis/QedisSvr/Qedis.cc, line 447.
This instantly pinpoints the file containing main. Combined with VS Code, clicking the path jumps straight to the main function; very convenient.
Since we passed no config file argument here, the server runs in the foreground by default, and foreground mode is more convenient for debugging.
4.1.2 Run the program with r
We type r to run up to the main breakpoint before setting further breakpoints; enter r and press Enter:
Breakpoint 1 at 0x5f60: file /home/lqf/long/Qedis/QedisSvr/Qedis.cc, line 447.
(gdb) r
Starting program: /home/lqf/long/Qedis/bin/qedis_server
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Breakpoint 1, main (ac=1, av=0x7fffffffe038) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:447
447 {
(gdb)
Next, we set breakpoints following the standard TCP-server playbook.
4.2 The Qedis service framework

Setting breakpoints
Every TCP server goes through the same set of functions:
- bind
- listen
- accept
- epoll_wait
- epoll_ctl
- sending: send or write (projects differ; start by breaking on send)
- receiving: recv or read (projects differ; start by breaking on recv)
(gdb) b bind
Breakpoint 2 at 0x7ffff7b71380: file ../sysdeps/unix/syscall-template.S, line 78.
(gdb) b listen
Breakpoint 3 at 0x7ffff7b714e0: file ../sysdeps/unix/syscall-template.S, line 78.
(gdb) b accept
Breakpoint 4 at 0x7ffff798e4b0: accept. (2 locations)
(gdb) b epoll_wait
Breakpoint 5 at 0x7ffff7b70630: file ../sysdeps/unix/sysv/linux/epoll_wait.c, line 28.
(gdb) b epoll_ctl
Breakpoint 6 at 0x7ffff7b70d10: file ../sysdeps/unix/syscall-template.S, line 78.
(gdb) b send
Breakpoint 7 at 0x7ffff798e770: send. (2 locations)
(gdb) b recv
Breakpoint 8 at 0x7ffff798e5f0: recv. (2 locations)
Inspecting the call stack at bind and listen
With the breakpoints in place, type c to continue:
Thread 1 "qedis_server" hit Breakpoint 2, bind () at ../sysdeps/unix/syscall-template.S:78
78 ../sysdeps/unix/syscall-template.S: No such file or directory.
Inspect the stack with bt:
#0 bind () at ../sysdeps/unix/syscall-template.S:78
#1 0x00007ffff7f11289 in Internal::ListenSocket::Bind (this=this@entry=0x5555555759a0, addr=...)
at /home/lqf/long/Qedis/QBase/ListenSocket.cc:48
#2 0x00007ffff7f1935a in Server::TCPBind (this=<optimized out>, addr=..., tag=1)
at /home/lqf/long/Qedis/QBase/Server.cc:57
#3 0x000055555555a64e in Qedis::_Init (this=0x7fffffffde10) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:290
#4 0x00007ffff7f18924 in Server::MainLoop (this=0x7fffffffde10, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:128
#5 0x0000555555559fc5 in main (ac=1, av=0x7fffffffe038) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
The backtrace shows clearly how the server binds and listens.

For listen, likewise use bt to view the stack.

Line 63 there also shows that this is wrapped in NetThreadPool, a network thread pool; the source shows it holds:
- recvThread_, the receive thread;
- sendThread_, the send thread.
class NetThreadPool
{
    std::shared_ptr<RecvThread> recvThread_;
    std::shared_ptr<SendThread> sendThread_;
    // ...
};
Receiving is handled by:
RecvThread::Run()
Sending is handled by:
SendThread::Run()
Continue running; epoll_ctl is triggered next:
#0 epoll_ctl () at ../sysdeps/unix/syscall-template.S:78
#1 0x00007ffff7f10880 in Epoll::AddSocket (epfd=<optimized out>, socket=socket@entry=5, events=events@entry=1,
ptr=ptr@entry=0x5555555759a0) at /home/lqf/long/Qedis/QBase/EPoller.cc:28
#2 0x00007ffff7f108d1 in Epoller::AddSocket (this=0x555555573f50, sock=5, events=1, userPtr=0x5555555759a0)
at /home/lqf/long/Qedis/QBase/EPoller.cc:75
#3 0x00007ffff7f118a8 in Internal::NetThread::_AddSocket (this=this@entry=0x555555573c20,
task=std::shared_ptr<Socket> (use count 2, weak count 1) = {...}, events=<optimized out>)
at /home/lqf/long/Qedis/QBase/NetThreadPool.cc:73
#4 0x00007ffff7f126f5 in Internal::NetThread::_TryAddNewTasks (this=0x555555573c20)
at /home/lqf/long/Qedis/QBase/NetThreadPool.cc:67
#5 0x00007ffff7f13348 in Internal::RecvThread::Run (this=0x555555573c20)
at /home/lqf/long/Qedis/QBase/NetThreadPool.cc:93
The stack shows the current fd being added to epoll for handling. Let's also look at Qedis/QBase/NetThreadPool.cc:
void NetThread::_TryAddNewTasks()
{
    if (newCnt_ > 0 && mutex_.try_lock())
    {
        NewTasks tmp;
        newTasks_.swap(tmp); // newTasks_ is the task queue; trace where tasks get pushed
        newCnt_ = 0;
        mutex_.unlock();

        auto iter(tmp.begin()),
             end (tmp.end());
        for (; iter != end; ++iter)
            _AddSocket(iter->first, iter->second);
    }
}
Searching this file for the newTasks_ keyword leads to NetThread::AddSocket, where you can add another breakpoint:
void NetThread::AddSocket(PSOCKET task, uint32_t events)
{
    std::lock_guard<std::mutex> guard(mutex_);

    newTasks_.push_back(std::make_pair(task, events));
    ++newCnt_;

    assert(newCnt_ == static_cast<int>(newTasks_.size()));
}
This queues the socket task for the matching network thread to process; for example, SendThread::Run later calls tcpSock->Send() to push data to the client.
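Taken together, AddSocket and _TryAddNewTasks form a low-contention handoff: producers append under the lock, while the polling thread uses try_lock plus swap so it never blocks and drains a whole batch at once. Below is a minimal standalone sketch of the same pattern; the class and names are mine, not Qedis's:

#include <atomic>
#include <deque>
#include <iostream>
#include <mutex>

class TaskQueue {
public:
    void Push(int task) {                       // producer side, like AddSocket
        std::lock_guard<std::mutex> guard(mutex_);
        pending_.push_back(task);
        ++count_;
    }
    // Called from the hot loop: never blocks, drains everything in one swap.
    void TryDrain() {
        if (count_ > 0 && mutex_.try_lock()) {
            std::deque<int> batch;
            batch.swap(pending_);
            count_ = 0;
            mutex_.unlock();                    // lock held only for the swap
            for (int t : batch)
                std::cout << "process " << t << "\n";
        }
    }
private:
    std::mutex mutex_;
    std::deque<int> pending_;
    std::atomic<int> count_{0};                 // cheap "is there work?" check
};

int main() {
    TaskQueue q;
    q.Push(1); q.Push(2);
    q.TryDrain();                               // prints: process 1 / process 2
}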
Continue running; epoll_wait is triggered:
Thread 3 "qedis_server" hit Breakpoint 5, epoll_wait (epfd=3, events=0x5555555759f0, maxevents=maxevents@entry=1,
timeout=timeout@entry=1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:28
28 ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.
(gdb) bt
#0 epoll_wait (epfd=3, events=0x5555555759f0, maxevents=maxevents@entry=1, timeout=timeout@entry=1)
at ../sysdeps/unix/sysv/linux/epoll_wait.c:28
#1 0x00007ffff7f10c3a in Epoller::Poll (timeoutMs=<optimized out>, maxEvent=<optimized out>, events=...,
this=<optimized out>) at /usr/include/c++/10/bits/stl_vector.h:1043
#2 Epoller::Poll (this=0x555555573f50, events=std::vector of length 0, capacity 0, maxEvent=1, timeoutMs=1)
at /home/lqf/long/Qedis/QBase/EPoller.cc:99
#3 0x00007ffff7f1339d in Internal::RecvThread::Run (this=0x555555573c20)
It is now clear that RecvThread::Run is the thread epoll_wait runs in. Note that tasks_ holds class Socket objects; the thread pool later takes a Socket from it and calls the corresponding handler.
void RecvThread::Run()
{
    // init log
    g_logLevel = logALL;
    g_logDest = logFILE;
    if (g_logLevel && g_logDest)
    {
        g_log = LogManager::Instance().CreateLog(g_logLevel, g_logDest, "recvthread_log");
    }

    std::deque<PSOCKET>::iterator it;

    int loopCount = 0;
    while (IsAlive())
    {
        _TryAddNewTasks();

        if (tasks_.empty())
        {
            std::this_thread::sleep_for(std::chrono::microseconds(100));
            continue;
        }

        const int nReady = poller_->Poll(firedEvents_, static_cast<int>(tasks_.size()), 1);
        for (int i = 0; i < nReady; ++i)
        {
            assert(!(firedEvents_[i].events & EventTypeWrite));

            Socket* sock = (Socket*)firedEvents_[i].userdata;
            if (firedEvents_[i].events & EventTypeRead)
            {
                if (!sock->OnReadable()) // also handles new connections
                {
                    sock->OnError();
                }
            }

            if (firedEvents_[i].events & EventTypeError)
            {
                sock->OnError();
            }
        }

        if (nReady == 0)
            loopCount *= 2;

        if (++loopCount < 100000)
            continue;

        loopCount = 0;
        for (auto it(tasks_.begin()); // sweep the tasks, checking which sockets are still valid
             it != tasks_.end();
             )
        {
            if ((*it)->Invalid())
            {
                NetThreadPool::Instance().DisableRead(*it); // stop read events
                RemoveSocket(*it, EventTypeRead);           // remove the socket fd from epoll
                it = tasks_.erase(it);                      // drop invalid tasks from the queue
            }
            else
            {
                ++it;
            }
        }
    }
}
Once you know which thread epoll_wait lives in, you can disable its breakpoint for now.
Then continue running: as long as no client sends anything, nothing further triggers.
The send thread and the receive thread are independent of each other.
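From the transcript above, epoll_wait was breakpoint 5, so it can be toggled by number:

(gdb) disable 5
(gdb) enable 5

disable keeps the breakpoint defined but inactive; enable turns it back on when needed.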
Connecting to Qedis with redis-cli triggers accept:
#0 __libc_accept (fd=5, addr=addr@entry=..., len=len@entry=0x7ffff7024b54) at ../sysdeps/unix/sysv/linux/accept.c:24
#1 0x00007ffff7f11591 in Internal::ListenSocket::_Accept (this=this@entry=0x5555555759a0)
at /home/lqf/long/Qedis/QBase/ListenSocket.cc:78
#2 0x00007ffff7f115de in Internal::ListenSocket::OnReadable (this=0x5555555759a0)
at /home/lqf/long/Qedis/QBase/ListenSocket.cc:85
#3 0x00007ffff7f133d3 in Internal::RecvThread::Run (this=0x555555573c20)
at /home/lqf/long/Qedis/QBase/NetThreadPool.cc:110
That is:
bool ListenSocket::OnReadable()
{
    while (true)
    {
        int connfd = _Accept();
        if (connfd >= 0)
        {
            // bind the new connection to a connection object
            Server::Instance()->NewConnection(connfd, tag_);
        }
        // ... (EAGAIN / error handling omitted here)
    }
}

void Server::NewConnection(int sock, int tag, const std::function<void ()>& cb)
{
    if (sock == INVALID_SOCKET)
        return;

    auto conn = _OnNewConnection(sock, tag);
    if (!conn)
    {
        Socket::CloseSocket(sock);
        return;
    }

    conn->SetOnDisconnect(cb);

    if (NetThreadPool::Instance().AddSocket(conn, EventTypeRead | EventTypeWrite)) // a new connection registers both read and write events
        tasks_.AddTask(conn); // and is added to the task queue
}
The code in QBase/Server.cc deserves real study time. Key function:
Server::_RunLogic()
To find where it is called, set a breakpoint on it.
Also study class StreamSocket in StreamSocket.h closely.
Finding where client data is received
The recv breakpoint never fires, so break on read or readv instead.
StreamSocket::Recv() calls readv:
int StreamSocket::Recv()
{
    if (recvBuf_.Capacity() == 0)
    {
        recvBuf_.InitCapacity(64 * 1024); // First recv data, allocate buffer
    }

    BufferSequence buffers;
    recvBuf_.GetSpace(buffers);
    if (buffers.count == 0)
    {
        WRN << "Recv buffer is full";
        return 0;
    }

    int ret = static_cast<int>(::readv(localSock_, buffers.buffers, static_cast<int>(buffers.count)));
    if (ret == ERRORSOCKET && (EAGAIN == errno || EWOULDBLOCK == errno))
        return 0;

    if (ret > 0)
        recvBuf_.AdjustWritePtr(ret);

    return (0 == ret) ? EOFSOCKET : ret;
}
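Recv uses readv(2) rather than a plain read because the free space GetSpace returns can be two separate segments (a ring buffer whose write position has wrapped), and a scatter read fills both with a single syscall. A minimal standalone sketch of the idea; this is my illustration, not Qedis code:

#include <sys/uio.h>   // readv, struct iovec
#include <cstdio>

int main() {
    char head[8], tail[8];        // stand-ins for the two free segments of a ring buffer
    iovec iov[2];
    iov[0].iov_base = head; iov[0].iov_len = sizeof(head);
    iov[1].iov_base = tail; iov[1].iov_len = sizeof(tail);

    // One syscall fills head first, then spills the rest into tail.
    ssize_t n = ::readv(0 /* stdin, just for the demo */, iov, 2);
    std::printf("read %zd bytes across 2 buffers\n", n);
    return 0;
}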
So the breakpoint to set is StreamSocket::Recv.
When it triggers:
Thread 3 "qedis_server" hit Breakpoint 11, StreamSocket::Recv (this=this@entry=0x7ffff0001820)
at /home/lqf/long/Qedis/QBase/StreamSocket.cc:46
46 {
(gdb) bt
#0 StreamSocket::Recv (this=this@entry=0x7ffff0001820) at /home/lqf/long/Qedis/QBase/StreamSocket.cc:46
#1 0x00007ffff7f1b0b4 in StreamSocket::OnReadable (this=0x7ffff0001820) at /home/lqf/long/Qedis/QBase/StreamSocket.cc:122
#2 0x00007ffff7f133d3 in Internal::RecvThread::Run (this=0x555555573c20)
at /home/lqf/long/Qedis/QBase/NetThreadPool.cc:110
The data read by StreamSocket::Recv is handed to StreamSocket::DoMsgParse, so break on that as well:
#0 StreamSocket::DoMsgParse (this=0x7ffff0001820) at /home/lqf/long/Qedis/QBase/StreamSocket.cc:219
#1 0x00007ffff7f1cafb in Internal::TaskManager::DoMsgParse (this=0x7fffffffde20)
at /home/lqf/long/Qedis/QBase/TaskManager.cc:99
#2 0x00007ffff7f18ace in Server::MainLoop (this=0x7fffffffde10, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:139
#3 0x0000555555559fc5 in main (ac=1, av=0x7fffffffe038) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
So the Redis protocol is parsed on the main thread, in
StreamSocket::DoMsgParse
From there, keep tracing into:
QClient::_HandlePacket (as the backtraces below show)
Finding where data is sent to the client
The send breakpoint never fires either, so break on write or writev.
Searching the source shows that
StreamSocket::_Send calls writev:
int StreamSocket::_Send(const BufferSequence& bf)
{
    auto total = bf.TotalBytes();
    if (total == 0)
        return 0;

    int ret = static_cast<int>(::writev(localSock_, bf.buffers, static_cast<int>(bf.count)));
    if (ERRORSOCKET == ret && (EAGAIN == errno || EWOULDBLOCK == errno))
    {
        epollOut_ = true;
        ret = 0;
    }
    else if (ret > 0 && static_cast<size_t>(ret) < total)
    {
        epollOut_ = true;
    }
    else if (static_cast<size_t>(ret) == total)
    {
        epollOut_ = false;
    }

    return ret;
}
Note the epollOut_ flag: write-event interest is only raised after a short or would-block write and cleared once everything has been flushed, the standard level-triggered epoll idiom. So set a breakpoint on StreamSocket::_Send:
#0 StreamSocket::_Send (this=this@entry=0x7ffff0001820, bf=...) at /home/lqf/long/Qedis/QBase/StreamSocket.cc:72
#1 0x00007ffff7f1a900 in StreamSocket::Send (this=this@entry=0x7ffff0001820)
at /home/lqf/long/Qedis/QBase/StreamSocket.cc:148
#2 0x00007ffff7f1395c in Internal::SendThread::Run (this=0x555555573f80)
5 Threads started by default
5.1 Main thread
Thread 1 "qedis_server" received signal SIGINT, Interrupt.
0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7fffffffdd20, rem=rem@entry=0x7fffffffdd20) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 in ../sysdeps/unix/sysv/linux/clock_nanosleep.c
(gdb) thread 1
[Switching to thread 1 (Thread 0x7ffff7827780 (LWP 11433))]
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7fffffffdd20, rem=rem@entry=0x7fffffffdd20) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 in ../sysdeps/unix/sysv/linux/clock_nanosleep.c
(gdb) bt
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7fffffffdd20, rem=rem@entry=0x7fffffffdd20) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
#1 0x00007ffff7b33ec7 in __GI___nanosleep (requested_time=requested_time@entry=0x7fffffffdd20,
remaining=remaining@entry=0x7fffffffdd20) at nanosleep.c:27
#2 0x00007ffff7f18b15 in std::this_thread::sleep_for<long, std::ratio<1l, 1000000l> > (__rtime=..., __rtime=...)
at /usr/include/c++/10/thread:401
#3 Server::MainLoop (this=0x7fffffffde10, daemon=<optimized out>) at /home/lqf/long/Qedis/QBase/Server.cc:140
#4 0x0000555555559fc5 in main (ac=1, av=0x7fffffffe038) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
5.2 Monitor thread: ThreadPool::_MonitorRoutine()
(gdb) thread 2
[Switching to thread 2 (Thread 0x7ffff7826700 (LWP 11775))]
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff7825e20, rem=rem@entry=0x7ffff7825e20) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 in ../sysdeps/unix/sysv/linux/clock_nanosleep.c
(gdb) bt
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff7825e20, rem=rem@entry=0x7ffff7825e20) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
#1 0x00007ffff7b33ec7 in __GI___nanosleep (requested_time=requested_time@entry=0x7ffff7825e20,
remaining=remaining@entry=0x7ffff7825e20) at nanosleep.c:27
#2 0x00007ffff7f2348d in std::this_thread::sleep_for<long, std::ratio<1l, 1l> > (__rtime=..., __rtime=...)
at /usr/include/c++/10/thread:401
#3 ThreadPool::_MonitorRoutine (this=0x7ffff7f33600 <ThreadPool::Instance()::pool>)
at /home/lqf/long/Qedis/QBase/Threads/ThreadPool.cc:94
#4 0x00007ffff7d4e793 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff7983609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6 0x00007ffff7b70353 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
5.3 Send thread: SendThread::Run
This actually runs on a thread created by the thread pool, but it occupies that pool worker permanently.
(gdb) thread 3
[Switching to thread 3 (Thread 0x7ffff7025700 (LWP 11776))]
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff7024c20, rem=rem@entry=0x7ffff7024c20) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 in ../sysdeps/unix/sysv/linux/clock_nanosleep.c
(gdb) bt
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff7024c20, rem=rem@entry=0x7ffff7024c20) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
#1 0x00007ffff7b33ec7 in __GI___nanosleep (requested_time=requested_time@entry=0x7ffff7024c20,
remaining=remaining@entry=0x7ffff7024c20) at nanosleep.c:27
#2 0x00007ffff7f13a61 in std::this_thread::sleep_for<long, std::ratio<1l, 1000000l> > (__rtime=..., __rtime=...)
at /usr/include/c++/10/thread:401
#3 Internal::SendThread::Run (this=0x555555573f80) at /home/lqf/long/Qedis/QBase/NetThreadPool.cc:194
...
#27 ThreadPool::_WorkerRoutine (this=0x7ffff7f33600 <ThreadPool::Instance()::pool>)
at /home/lqf/long/Qedis/QBase/Threads/ThreadPool.cc:83
5.4 Receive thread: RecvThread::Run
This too runs on a thread created by the thread pool, and likewise occupies its worker permanently.
(gdb) thread 4
[Switching to thread 4 (Thread 0x7ffff6824700 (LWP 11777))]
#0 0x00007ffff7b7068e in epoll_wait (epfd=3, events=0x5555555759f0, maxevents=maxevents@entry=1, timeout=timeout@entry=1)
at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
30 ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.
(gdb) bt
#0 0x00007ffff7b7068e in epoll_wait (epfd=3, events=0x5555555759f0, maxevents=maxevents@entry=1, timeout=timeout@entry=1)
at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
#1 0x00007ffff7f10c3a in Epoller::Poll (timeoutMs=<optimized out>, maxEvent=<optimized out>, events=...,
this=<optimized out>) at /usr/include/c++/10/bits/stl_vector.h:1043
#2 Epoller::Poll (this=0x555555573f50, events=std::vector of length 0, capacity 0, maxEvent=1, timeoutMs=1)
at /home/lqf/long/Qedis/QBase/EPoller.cc:99
#3 0x00007ffff7f1339d in Internal::RecvThread::Run (this=0x555555573c20)
at /home/lqf/long/Qedis/QBase/NetThreadPool.cc:101
...
#25 std::__future_base::_Task_state<std::_Bind<std::_Bind<void (Internal::RecvThread::*(std::shared_ptr<Internal::RecvThread>))()> ()>, std::allocator<int>, void ()>::_M_run() (this=0x555555574960) at /usr/include/c++/10/future:1459
#26 0x00007ffff7f2322a in std::function<void ()>::operator()() const (this=0x7ffff6823e10)
at /usr/include/c++/10/bits/std_function.h:622
#27 ThreadPool::_WorkerRoutine (this=0x7ffff7f33600 <ThreadPool::Instance()::pool>)
at /home/lqf/long/Qedis/QBase/Threads/ThreadPool.cc:83
As you can see, the send and receive threads both run on workers handed out by the thread pool. Worker creation:
void ThreadPool::_CreateWorker()
{
    std::thread t([this]() { this->_WorkerRoutine(); });
    workers_.push_back(std::move(t));
}
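This explains the repeated note above that Run() "permanently occupies" a pool worker: the task handed to the pool is itself an endless loop, so that worker never returns to take other tasks. A minimal illustration, with a plain std::thread standing in for the pool's worker:

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> alive{true};

void Run() {                       // like RecvThread::Run / SendThread::Run
    while (alive) {                // loops until shutdown...
        std::this_thread::sleep_for(std::chrono::microseconds(100));
    }
}                                  // ...so the worker executing it never frees up

int main() {
    std::thread worker(Run);       // a pool would hand Run to one of its workers
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    alive = false;                 // shutdown: let Run return
    worker.join();
}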
5.5 Persistence thread: AOFThread::Run
Again this runs on a pool-created thread and occupies that worker permanently.
(gdb) thread 5
[Switching to thread 5 (Thread 0x7ffff5d7e700 (LWP 11778))]
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff5d7daf0, rem=rem@entry=0x7ffff5d7daf0) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 ../sysdeps/unix/sysv/linux/clock_nanosleep.c: No such file or directory.
(gdb) bt
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff5d7daf0, rem=rem@entry=0x7ffff5d7daf0) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
#1 0x00007ffff7b33ec7 in __GI___nanosleep (requested_time=requested_time@entry=0x7ffff5d7daf0,
remaining=remaining@entry=0x7ffff5d7daf0) at nanosleep.c:27
#2 0x00007ffff7f64db1 in std::this_thread::sleep_for<long, std::ratio<1l, 1000l> > (__rtime=..., __rtime=...)
at /usr/include/c++/10/thread:401
#3 qedis::QAOFThreadController::AOFThread::Run (this=0x55555558c920) at /home/lqf/long/Qedis/QedisCore/QAOF.cc:167
#4 0x00007ffff7f66b53 in std::__invoke_impl<void, void (qedis::QAOFThreadController::AOFThread::*&)(), std::shared_ptr<qedis::QAOFThreadController::AOFThread>&> (__f=<optimized out>, __f=<optimized out>, __t=...)
at /usr/include/c++/10/bits/invoke.h:73
#5 std::__invoke<void (qedis::QAOFThreadController::AOFThread::*&)(), std::shared_ptr<qedis::QAOFThreadController::AOFThread>&> (__fn=<optimized out>) at /usr/include/c++/10/bits/invoke.h:95
#6 std::_Bind<void (qedis::QAOFThreadController::AOFThread::*(std::shared_ptr<qedis::QAOFThreadController::AOFThread>))()>::__call<void, , 0ul>(std::tuple<>&&, std::_Index_tuple<0ul>) (__args=..., this=<optimized out>)
at /usr/include/c++/10/functional:416
...
#31 0x00007ffff7f2322a in std::function<void ()>::operator()() const (this=0x7ffff5d7de10)
at /usr/include/c++/10/bits/std_function.h:622
#32 ThreadPool::_WorkerRoutine (this=0x7ffff7f33600 <ThreadPool::Instance()::pool>)
at /home/lqf/long/Qedis/QBase/Threads/ThreadPool.cc:83
5.6 Log thread: LogThread::Run
Again this runs on a pool-created thread and occupies that worker permanently.
(gdb) thread 6
[Switching to thread 6 (Thread 0x7ffff557d700 (LWP 11779))]
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff557cc30, rem=rem@entry=0x7ffff557cc30) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
78 in ../sysdeps/unix/sysv/linux/clock_nanosleep.c
(gdb) bt
#0 0x00007ffff7b2e23f in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,
req=req@entry=0x7ffff557cc30, rem=rem@entry=0x7ffff557cc30) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
#1 0x00007ffff7b33ec7 in __GI___nanosleep (requested_time=requested_time@entry=0x7ffff557cc30,
remaining=remaining@entry=0x7ffff557cc30) at nanosleep.c:27
#2 0x00007ffff7f208fd in std::this_thread::sleep_for<long, std::ratio<1l, 1000l> > (__rtime=..., __rtime=...)
at /usr/include/c++/10/thread:401
#3 LogManager::LogThread::Run (this=0x555555573320) at /home/lqf/long/Qedis/QBase/Log/Logger.cc:662
#4 0x00007ffff7f20dc3 in std::__invoke_impl<void, void (LogManager::LogThread::*&)(), std::shared_ptr<LogManager::LogThread>&> (__f=<optimized out>, __f=<optimized out>, __t=...) at /usr/include/c++/10/bits/invoke.h:73
#5 std::__invoke<void (LogManager::LogThread::*&)(), std::shared_ptr<LogManager::LogThread>&> (__fn=<optimized out>)
at /usr/include/c++/10/bits/invoke.h:95
#6 std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>::__call<void, , 0ul>(std::tuple<>&&, std::_Index_tuple<0ul>) (__args=..., this=<optimized out>) at /usr/include/c++/10/functional:416
#7 std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>::operator()<, void>() (
this=<optimized out>) at /usr/include/c++/10/functional:499
#8 std::__invoke_impl<void, std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>&>(std::__invoke_other, std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>&) (__f=...)
at /usr/include/c++/10/bits/invoke.h:60
#9 std::__invoke<std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>&>(std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>&) (__fn=...) at /usr/include/c++/10/bits/invoke.h:95
#10 std::_Bind<std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()> ()>::__call<void>(std::tuple<>&&, std::_Index_tuple<>) (__args=..., this=<optimized out>) at /usr/include/c++/10/functional:416
#11 std::_Bind<std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()> ()>::operator()<, void>() (this=<optimized out>) at /usr/include/c++/10/functional:499
#12 std::__invoke_impl<void, std::_Bind<std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))
...
#30 std::_Function_handler<void (), ThreadPool::ExecuteTask<std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>>(std::_Bind<void (LogManager::LogThread::*(std::shared_ptr<LogManager::LogThread>))()>&&)::{lambda()#1}>::_M_invoke(std::_Any_data const&) (__functor=...) at /usr/include/c++/10/bits/std_function.h:291
#31 0x00007ffff7f2322a in std::function<void ()>::operator()() const (this=0x7ffff557ce10)
at /usr/include/c++/10/bits/std_function.h:622
#32 ThreadPool::_WorkerRoutine (this=0x7ffff7f33600 <ThreadPool::Instance()::pool>)
at /home/lqf/long/Qedis/QBase/Threads/ThreadPool.cc:83
6 How commands operate on LevelDB
The mapping from Redis commands to the LevelDB wrapper.
Command table and alias registration:
#0 qedis::QCommandTable::AliasCommand (aliases=std::map with 0 elements)
at /home/lqf/long/Qedis/QedisCore/QCommand.cc:208
#1 0x000055555555a6f1 in Qedis::_Init (this=0x7fffffffde10)
at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:297
#2 0x00007ffff7f18924 in Server::MainLoop (this=0x7fffffffde10, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:128
#3 0x0000555555559fc5 in main (ac=1, av=0x7fffffffe038)
at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
Initializing the DBs:
#0 0x0000555555559930 in qedis::QStore::Init(int)@plt ()
#1 0x000055555555a704 in Qedis::_Init (this=0x7fffffffde10)
at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:298
#2 0x00007ffff7f18924 in Server::MainLoop (this=0x7fffffffde10, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:128
#3 0x0000555555559fc5 in main (ac=1, av=0x7fffffffe038)
at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
The role of my_hash
Thread 1 "qedis_server" hit Breakpoint 6, qedis::dictGenHashFunction (key=0x55555558d710, len=4)
at /home/lqf/long/Qedis/QedisCore/QHelper.cc:26
26 unsigned int dictGenHashFunction(const void* key, int len) {
(gdb) bt
#0 qedis::dictGenHashFunction (key=0x55555558d710, len=4) at /home/lqf/long/Qedis/QedisCore/QHelper.cc:26
#1 0x00007ffff7f7b206 in qedis::my_hash::operator() (this=this@entry=0x555555578a90, str="key2")
at /home/lqf/long/Qedis/QedisCore/QHelper.cc:71
... (frames #2-#4 elided) at /usr/include/c++/10/bits/unordered_map.h:985
#5 qedis::QStore::SetValue (this=this@entry=0x7ffff7fc66c0 <qedis::QStore::Instance()::store>, key="key2",
value=...) at /home/lqf/long/Qedis/QedisCore/QStore.cc:612
#6 0x00007ffff7fa4488 in qedis::SetValue (key="key2", value="d", exclusive=<optimized out>)
at /home/lqf/long/Qedis/QedisCore/QString.cc:82
#7 0x00007ffff7fa4a7a in qedis::set (params=..., reply=0x7ffff0001730)
at /home/lqf/long/Qedis/QedisCore/QString.cc:89
#8 0x00007ffff7f69e45 in qedis::QClient::_HandlePacket (this=0x7ffff00015c0,
start=0x7ffff0011960 "*3\r\n$3\r\nset\r\n$4\r\nkey2\r\n$1\r\nd\r\n", bytes=<optimized out>)
at /home/lqf/long/Qedis/QedisCore/QClient.h:48
#9 0x00007ffff7f1a680 in StreamSocket::DoMsgParse (this=0x7ffff00015c0)
--Type <RET> for more, q to quit, c to continue without paging--
at /home/lqf/long/Qedis/QBase/StreamSocket.cc:227
#10 0x00007ffff7f1cafb in Internal::TaskManager::DoMsgParse (this=0x7fffffffde20)
at /home/lqf/long/Qedis/QBase/TaskManager.cc:99
#11 0x00007ffff7f18ace in Server::MainLoop (this=this@entry=0x7fffffffde10, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:139
#12 0x0000555555559fc5 in main (ac=1, av=0x7fffffffe038) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
What is this hash for, and why insert an extra layer of abstraction here? It appears that when persistent storage is not required, LevelDB is never touched: the hash only feeds the in-memory store.
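What the backtrace does show: my_hash is a hash functor that plugs qedis::dictGenHashFunction into std::unordered_map, so the in-memory store hashes keys with a Redis-style hash function rather than std::hash. A self-contained sketch of that adapter; the placeholder hash body below is mine, the real implementation lives in QedisCore/QHelper.cc:

#include <iostream>
#include <string>
#include <unordered_map>

// Stand-in for qedis::dictGenHashFunction (see QedisCore/QHelper.cc for the real one).
unsigned int dictGenHashFunction(const void* key, int len) {
    unsigned int h = 5381;                      // djb2, purely as a placeholder
    const char* p = static_cast<const char*>(key);
    while (len--) h = ((h << 5) + h) + *p++;
    return h;
}

// The adapter: lets std::unordered_map hash keys via the Redis-style function.
struct my_hash {
    size_t operator()(const std::string& str) const {
        return dictGenHashFunction(str.data(), static_cast<int>(str.size()));
    }
};

int main() {
    std::unordered_map<std::string, std::string, my_hash> store;
    store["key2"] = "d";                        // same call chain as the backtrace
    std::cout << store["key2"] << std::endl;
}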
When does LevelDB actually get initialized?
Only when persistence is enabled in the config file:
#0 qedis::QStore::InitDumpBackends (this=0x7ffff7fc66c0 <qedis::QStore::Instance()::store>)
at /home/lqf/long/Qedis/QedisCore/QStore.cc:787
#1 0x000055555555a738 in Qedis::_Init (this=0x7fffffffe410) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:302
#2 0x00007ffff7f18924 in Server::MainLoop (this=0x7fffffffe410, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:128
#3 0x0000555555559fc5 in main (ac=2, av=0x7fffffffe638) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
void QStore::InitDumpBackends()
{
    assert(waitSyncKeys_.empty());

    if (g_config.backend == BackEndNone) // the default: no persistence
        return;

    if (g_config.backend == BackEndLeveldb)
    {
        waitSyncKeys_.resize(store_.size());
        for (size_t i = 0; i < store_.size(); ++i)
        {
            std::unique_ptr<QLeveldb> db(new QLeveldb);
            QString dbpath = g_config.backendPath + std::to_string(i);
            if (!db->Open(dbpath.data()))
                assert(false);
            else
                USR << "Open leveldb " << dbpath;

            backends_.push_back(std::move(db));
        }
    }
    else
    {
        // ERROR: unsupported backend
        return;
    }

    for (int i = 0; i < static_cast<int>(backends_.size()); ++i)
    {
        auto timer = TimerManager::Instance().CreateTimer();
        timer->Init(1000 / g_config.backendHz);
        timer->SetCallback([&, i]() {
            int oldDb = QSTORE.SelectDB(i);
            QSTORE.DumpToBackends(i);
            QSTORE.SelectDB(oldDb);
        });

        TimerManager::Instance().AddTimer(timer);
    }
}
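A quick worked example of the timer interval: timer->Init(1000 / g_config.backendHz) takes milliseconds, so with backendHz = 10 (an illustrative value; check your qedis.conf) each timer fires every 1000 / 10 = 100 ms, meaning each of the 16 databases is dumped to its LevelDB backend ten times per second.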
How does persistence happen?
The data is flushed periodically by the timers registered above:
(gdb) bt
#0 qedis::QLeveldb::Put (this=0x55555558cec0, key="key", obj=..., absttl=-1)
at /home/lqf/long/Qedis/QedisCore/QLeveldb.cc:74
#1 0x00007ffff7f9f665 in qedis::QStore::DumpToBackends (
this=0x7ffff7fc66c0 <qedis::QStore::Instance()::store>, dbno=<optimized out>)
at /home/lqf/long/Qedis/QedisCore/QStore.cc:850
#2 0x00007ffff7f9f8dd in operator() (__closure=0x5555556dcc30)
at /home/lqf/long/Qedis/QedisCore/QStore.cc:820
#3 std::__invoke_impl<void, const qedis::QStore::InitDumpBackends()::<lambda()>&> (__f=...)
at /usr/include/c++/10/bits/invoke.h:60
#4 std::__invoke<const qedis::QStore::InitDumpBackends()::<lambda()>&> (__fn=...)
at /usr/include/c++/10/bits/invoke.h:95
#5 std::_Bind<qedis::QStore::InitDumpBackends()::<lambda()>()>::__call_c<void> (__args=...,
this=0x5555556dcc30) at /usr/include/c++/10/functional:427
#6 std::_Bind<qedis::QStore::InitDumpBackends()::<lambda()>()>::operator()<> (this=0x5555556dcc30)
at /usr/include/c++/10/functional:511
#7 operator() (this=0x5555556dcc30) at /home/lqf/long/Qedis/QBase/Timer.h:61
#8 std::__invoke_impl<void, Timer::SetCallback<qedis::QStore::InitDumpBackends()::<lambda()>, {}>::<lambda()>&> (__f=...) at /usr/include/c++/10/bits/invoke.h:60
#9 std::__invoke_r<void, Timer::SetCallback<qedis::QStore::InitDumpBackends()::<lambda()>, {}>::<lambda()>&>
(__fn=...) at /usr/include/c++/10/bits/invoke.h:153
#10 std::_Function_handler<void(), Timer::SetCallback<qedis::QStore::InitDumpBackends()::<lambda()>, {}>::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...) at /usr/include/c++/10/bits/std_function.h:291
#11 0x00007ffff7f1d7ca in std::function<void ()>::operator()() const (this=0x5555556dcc30)
at /usr/include/c++/10/bits/std_function.h:622
#12 Timer::OnTimer (this=0x5555556dcc30) at /home/lqf/long/Qedis/QBase/Timer.cc:201
#13 Timer::OnTimer (this=0x5555556dcc30) at /home/lqf/long/Qedis/QBase/Timer.cc:193
#14 0x00007ffff7f1e181 in TimerManager::UpdateTimers (this=0x7ffff7f31a60 <TimerManager::Instance()::mgr>,
now=...) at /home/lqf/long/Qedis/QBase/Timer.cc:330
#15 0x000055555555b7c3 in Qedis::_RunLogic (this=0x7fffffffdde0)
at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:431
#16 0x00007ffff7f18ace in Server::MainLoop (this=0x7fffffffdde0, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:139
#17 0x0000555555559fc5 in main (ac=2, av=0x7fffffffe008) at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
Persistence ultimately goes through the QStore::DumpToBackends function, whether keys are added or deleted.
Which thread executes the actual k-v operations?
The main thread does. Set a breakpoint:
b ExecuteCmd
then run set and get commands from redis-cli:
(gdb) bt
#0 qedis::QCommandTable::ExecuteCmd (params=std::vector of length 3, capacity 4 = {...},
info=0x7ffff7fc4d90 <qedis::QCommandTable::s_info+1872>, reply=0x7ffff0001730)
at /home/lqf/long/Qedis/QedisCore/QCommand.cc:249
#1 0x00007ffff7f69e45 in qedis::QClient::_HandlePacket (this=0x7ffff00015c0,
start=0x7ffff00119a4 "*3\r\n$3\r\nset\r\n$3\r\nkey\r\n$1\r\nd\r\n",
bytes=<optimized out>) at /home/lqf/long/Qedis/QedisCore/QClient.h:48
#2 0x00007ffff7f1a680 in StreamSocket::DoMsgParse (this=0x7ffff00015c0)
at /home/lqf/long/Qedis/QBase/StreamSocket.cc:227
#3 0x00007ffff7f1cafb in Internal::TaskManager::DoMsgParse (this=0x7fffffffde20)
at /home/lqf/long/Qedis/QBase/TaskManager.cc:99
#4 0x00007ffff7f18ace in Server::MainLoop (this=0x7fffffffde10, daemon=<optimized out>)
at /home/lqf/long/Qedis/QBase/Server.cc:139
#5 0x0000555555559fc5 in main (ac=1, av=0x7fffffffe038)
at /home/lqf/long/Qedis/QedisSvr/Qedis.cc:464
QError QCommandTable::ExecuteCmd(const std::vector<QString>& params, const QCommandInfo* info, UnboundedBuffer* reply)
{
    if (params.empty())
    {
        ReplyError(QError_param, reply);
        return QError_param;
    }

    if (!info)
    {
        ReplyError(QError_unknowCmd, reply);
        return QError_unknowCmd;
    }

    if (!info->CheckParamsCount(static_cast<int>(params.size())))
    {
        ReplyError(QError_param, reply);
        return QError_param;
    }

    return info->handler(params, reply); // step into this to see which handler runs
}
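ExecuteCmd is classic table-driven dispatch: QCommandTable maps each command name to a QCommandInfo entry (the s_info array in the backtrace) holding arity constraints and a handler pointer. A minimal sketch of the pattern with simplified types; the names below are mine, not the actual Qedis definitions:

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Params = std::vector<std::string>;

struct CommandInfo {
    int minParams;                                     // arity check
    std::function<void(const Params&)> handler;        // the command body
};

static std::map<std::string, CommandInfo> s_table = {
    {"set", {3, [](const Params& p) { std::cout << "SET " << p[1] << "=" << p[2] << "\n"; }}},
    {"get", {2, [](const Params& p) { std::cout << "GET " << p[1] << "\n"; }}},
};

void ExecuteCmd(const Params& params) {
    if (params.empty()) return;                        // cf. QError_param
    auto it = s_table.find(params[0]);
    if (it == s_table.end()) return;                   // cf. QError_unknowCmd
    if (static_cast<int>(params.size()) < it->second.minParams) return;
    it->second.handler(params);                        // cf. info->handler(...)
}

int main() {
    ExecuteCmd({"set", "key", "qedis"});               // prints: SET key=qedis
    ExecuteCmd({"get", "key"});                        // prints: GET key
}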
set and get are string operations, so they trigger the handlers in QedisCore/QString.cc:
QError set(const std::vector<QString>& params, UnboundedBuffer* reply)
{
    SetValue(params[1], params[2]); // store the key-value pair
    FormatOK(reply);
    return QError_ok;
}
So k-v updates are indeed all handled on the main thread, as the backtrace confirms.
The LevelDB wrapper itself lives in:
QedisCore/QLeveldb.h
QedisCore/QLeveldb.cc
Breakpoint:
QLeveldb::Put
7 Performance testing
Note that recent versions of redis-benchmark do not work against this Qedis project; I used the redis-benchmark from Redis 3.2 (and bundled that source package as well).
./redis-benchmark -t set,get -n 1000000 -c 1
The throughput was poor at first, so I used tcpdump to analyze the client requests and server responses:
sudo tcpdump -A -i lo host 127.0.0.1 and port 6379
Output (raw hex payloads trimmed, annotations translated):
10:59:39.162891 IP localhost.55876 > localhost.6379: length 45: RESP "SET" "key:rand_int" "xxx"
10:59:39.192597 IP localhost.6379 > localhost.55876: length 5: RESP "OK"    <- reply only after ~29 ms
10:59:39.192741 IP localhost.55876 > localhost.6379: length 45: RESP "SET" "key:rand_int" "xxx"    <- next request 0.14 ms later
10:59:39.207409 IP localhost.6379 > localhost.55876: length 5: RESP "OK"    <- reply after ~14.6 ms
10:59:39.207589 IP localhost.55876 > localhost.6379: length 45: RESP "SET" "key:rand_int" "xxx"    <- next request 0.18 ms later
10:59:39.222298 IP localhost.6379 > localhost.55876: length 5: RESP "OK"    <- reply after ~14.8 ms
10:59:39.222530 IP localhost.55876 > localhost.6379: length 45: RESP "SET" "key:rand_int" "xxx"
10:59:39.230025 IP localhost.55876 > localhost.6379: Flags [F.], length 0
10:59:39.237170 IP localhost.6379 > localhost.55876: length 5: RESP "OK"
10:59:39.237296 IP localhost.55876 > localhost.6379: Flags [R], length 0
The pattern is clear: the client fires its next request within 0.2 ms, but each reply takes roughly 15-29 ms, so the server's reply latency is the bottleneck; at ~15 ms per round trip a single connection tops out around 1000 / 15 ≈ 66 requests per second.
./redis-benchmark -t set,get -n 1000000 -c 1 -P 100
In pipeline mode throughput improves to around 10k QPS, but redis-server reaches roughly 500k QPS in the same mode. The pipelined requests on the wire (excerpt):
*3
$3
SET
$16
key:__rand_int__
$3
xxx
*3
$3
SET
$16
For the remaining options, consult redis-benchmark's usage help.