I have been working with Suricata recently, so I decided to read through its source code and try to work out how its key modules operate. There are plenty of good Suricata source-code walkthroughs online already, and they have helped me a lot; I am not hoping to write a better one, the main goal is to deepen my own understanding of how an IDS/IPS engine runs. Suricata is a large, sprawling project and I cannot analyse every important piece of code in detail, so I will just pick out the modules I find interesting and write them up.
The following analysis is based on the latest stable release, Suricata 7.0.8.
References
docs.suricata.io/en/suricata...
Running `suricata --list-runmodes` prints the supported run modes:

```sh
------------------------------------- Runmodes ------------------------------------------
| RunMode Type | Custom Mode | Description
|----------------------------------------------------------------------------------------
| PCAP_DEV | single | Single threaded pcap live mode
| ---------------------------------------------------------------------
| | autofp | Multi-threaded pcap live mode. Packets from each flow are assigned to a consistent detection thread
| ---------------------------------------------------------------------
| | workers | Workers pcap live mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| PCAP_FILE | single | Single threaded pcap file mode
| ---------------------------------------------------------------------
| | autofp | Multi-threaded pcap file mode. Packets from each flow are assigned to a consistent detection thread
|----------------------------------------------------------------------------------------
| PFRING(DISABLED) | autofp | Multi threaded pfring mode. Packets from each flow are assigned to a single detect thread, unlike "pfring_auto" where packets from the same flow can be processed by any detect thread
| ---------------------------------------------------------------------
| | single | Single threaded pfring mode
| ---------------------------------------------------------------------
| | workers | Workers pfring mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| NFQ | autofp | Multi threaded NFQ IPS mode with respect to flow
| ---------------------------------------------------------------------
| | workers | Multi queue NFQ IPS mode with one thread per queue
|----------------------------------------------------------------------------------------
| NFLOG | autofp | Multi threaded nflog mode
| ---------------------------------------------------------------------
| | single | Single threaded nflog mode
| ---------------------------------------------------------------------
| | workers | Workers nflog mode
|----------------------------------------------------------------------------------------
| IPFW | autofp | Multi threaded IPFW IPS mode with respect to flow
| ---------------------------------------------------------------------
| | workers | Multi queue IPFW IPS mode with one thread per queue
|----------------------------------------------------------------------------------------
| ERF_FILE | single | Single threaded ERF file mode
| ---------------------------------------------------------------------
| | autofp | Multi threaded ERF file mode. Packets from each flow are assigned to a single detect thread
|----------------------------------------------------------------------------------------
| ERF_DAG | autofp | Multi threaded DAG mode. Packets from each flow are assigned to a single detect thread, unlike "dag_auto" where packets from the same flow can be processed by any detect thread
| ---------------------------------------------------------------------
| | single | Singled threaded DAG mode
| ---------------------------------------------------------------------
| | workers | Workers DAG mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| AF_PACKET_DEV | single | Single threaded af-packet mode
| ---------------------------------------------------------------------
| | workers | Workers af-packet mode, each thread does all tasks from acquisition to logging
| ---------------------------------------------------------------------
| | autofp | Multi socket AF_PACKET mode. Packets from each flow are assigned to a single detect thread.
|----------------------------------------------------------------------------------------
| AF_XDP_DEV | single | Single threaded af-xdp mode
| ---------------------------------------------------------------------
| | workers | Workers af-xdp mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| NETMAP(DISABLED) | single | Single threaded netmap mode
| ---------------------------------------------------------------------
| | workers | Workers netmap mode, each thread does all tasks from acquisition to logging
| ---------------------------------------------------------------------
| | autofp | Multi-threaded netmap mode. Packets from each flow are assigned to a single detect thread.
|----------------------------------------------------------------------------------------
| DPDK(DISABLED) | workers | Workers DPDK mode, each thread does all tasks from acquisition to logging
|----------------------------------------------------------------------------------------
| UNIX_SOCKET | single | Unix socket mode
| ---------------------------------------------------------------------
| | autofp | Unix socket mode
|----------------------------------------------------------------------------------------
| WINDIVERT(DISABLED) | autofp | Multi-threaded WinDivert IPS mode load-balanced by flow
|----------------------------------------------------------------------------------------
```

In the run-mode listing above, note the RunMode Type and Custom Mode columns: they correspond to the capture method (packet source) and to the threading mode, respectively. A brief look at each capture method:
- PCAP_DEV: live capture using the libpcap library (a minimal libpcap sketch follows this list).
- PCAP_FILE: analyses traffic recorded in a pcap file.
- PFRING: PF_RING, a high-performance packet-processing framework.
- NFQ: IPS mode via the NFQUEUE mechanism of the Linux netfilter framework.
- NFLOG: uses the NFLOG mechanism of the Linux netfilter framework.
- IPFW: queue-based IPS mode that hooks into the FreeBSD ipfw firewall.
- ERF_FILE: processes packet files in ERF format.
- ERF_DAG: captures ERF traffic from DAG devices.
- AF_PACKET_DEV: live capture through the AF_PACKET socket interface.
- AF_XDP_DEV: uses the AF_XDP (Address Family eXpress Data Path) interface, bypassing most of the traditional network stack.
- NETMAP: an efficient framework for packet capture and transmission. (github.com/luigirizzo/...
- DPDK: the Data Plane Development Kit, a set of libraries and interfaces for fast packet processing that is quite popular nowadays. (github.com/DPDK/dpdk)
- UNIX_SOCKET: presumably refers to the unix-socket control interface; it is not entirely clear why it is listed here as a run mode.
- WINDIVERT: a user-mode packet capture/diversion framework for Windows. (github.com/basil00/Win...
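For reference, here is roughly what the simplest capture method (PCAP_DEV) relies on underneath: a minimal, self-contained libpcap capture loop. This is generic libpcap usage, not Suricata code; the interface name eth0 is just a placeholder. Build with something like `gcc cap.c -lpcap` and run it with sufficient privileges.

```c
#include <pcap.h>
#include <stdio.h>

/* Called by pcap_loop() for every captured packet. */
static void on_packet(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes (wire length %u)\n", h->caplen, h->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Open the device in promiscuous mode: 64KB snaplen, 1s read timeout. */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }

    /* Process packets until an error occurs or the loop is broken. */
    pcap_loop(handle, -1, on_packet, NULL);

    pcap_close(handle);
    return 0;
}
```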
Threading modes: the difference between single, autofp and workers
Judging from the descriptions of the three modes:
single: Single threaded pcap live mode. Everything runs in one thread; convenient for development and troubleshooting.
autofp: Multi-threaded pcap live mode. Packets from each flow are assigned to a consistent detection thread. Multiple threads process the packets captured from the interface in parallel, and all packets belonging to the same flow are handed to the same detection thread (a conceptual sketch of this flow-based dispatch follows this list).
workers: Workers pcap live mode, each thread does all tasks from acquisition to logging. Each thread runs the whole pipeline, from packet acquisition through processing to logging; the work is parallelised across CPU cores, which usually gives the best throughput.
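To make the autofp idea concrete, the sketch below shows the gist of flow-based load balancing: a hash computed symmetrically over the flow tuple, so both directions of a connection map to the same worker index. This is only a conceptual illustration with a made-up hash; Suricata's real flow hash (src/flow-hash.c) and queue handling are more involved.

```c
#include <stdint.h>
#include <stdio.h>

/* Conceptual sketch only: a direction-insensitive hash over the flow tuple.
 * Using sums makes hash(a->b) == hash(b->a), so both directions of a flow
 * map to the same worker. This is NOT Suricata's actual hash function. */
static uint32_t toy_flow_hash(uint32_t ip1, uint32_t ip2,
                              uint16_t p1, uint16_t p2, uint8_t proto)
{
    uint32_t h = (ip1 + ip2) ^ ((uint32_t)(p1 + p2) << 16) ^ proto;
    h ^= h >> 13;
    h *= 0x5bd1e995u;      /* mixing constant borrowed from MurmurHash2 */
    h ^= h >> 15;
    return h;
}

int main(void)
{
    const uint32_t n_workers = 4;
    /* The same TCP flow seen in both directions lands on the same worker. */
    uint32_t fwd = toy_flow_hash(0x0a000001, 0x0a000002, 44321, 80, 6);
    uint32_t rev = toy_flow_hash(0x0a000002, 0x0a000001, 80, 44321, 6);
    printf("fwd -> worker %u, rev -> worker %u\n", fwd % n_workers, rev % n_workers);
    return 0;
}
```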
Next, let's see how these three modes differ in the code, using AF_PACKET_DEV as the example.
First, the run modes are registered:

```c
void RunModeIdsAFPRegister(void)
{
RunModeRegisterNewRunMode(RUNMODE_AFP_DEV, "single", "Single threaded af-packet mode",
RunModeIdsAFPSingle, AFPRunModeEnableIPS);
RunModeRegisterNewRunMode(RUNMODE_AFP_DEV, "workers",
"Workers af-packet mode, each thread does all"
" tasks from acquisition to logging",
RunModeIdsAFPWorkers, AFPRunModeEnableIPS);
RunModeRegisterNewRunMode(RUNMODE_AFP_DEV, "autofp",
"Multi socket AF_PACKET mode. Packets from "
"each flow are assigned to a single detect "
"thread.",
RunModeIdsAFPAutoFp, AFPRunModeEnableIPS);
return;
}
```
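RunModeRegisterNewRunMode essentially records a (driver, mode name, description, init callback) tuple in a global run-mode table; --runmode selects an entry from it at startup, and --list-runmodes walks it to produce the listing shown earlier. The sketch below only illustrates that idea: the struct and function names are made up, and the real registration (which also stores the IPS-enable callback such as AFPRunModeEnableIPS) lives in src/runmodes.c.

```c
#include <string.h>

/* Simplified sketch of a run-mode registry; all names here are illustrative. */
typedef struct ToyRunMode_ {
    int driver;                    /* e.g. RUNMODE_AFP_DEV */
    const char *name;              /* "single", "workers", "autofp" */
    const char *description;
    int (*InitFunc)(void);         /* e.g. RunModeIdsAFPSingle */
} ToyRunMode;

static ToyRunMode registry[128];
static int registry_used;

static void ToyRegisterRunMode(int driver, const char *name,
                               const char *desc, int (*init)(void))
{
    registry[registry_used++] = (ToyRunMode){ driver, name, desc, init };
}

/* At startup, the entry matching the selected driver and --runmode name is
 * looked up and its InitFunc is called to build the thread layout. */
static int ToyDispatchRunMode(int driver, const char *custom_mode)
{
    for (int i = 0; i < registry_used; i++) {
        if (registry[i].driver == driver &&
                strcmp(registry[i].name, custom_mode) == 0)
            return registry[i].InitFunc();
    }
    return -1;   /* unknown run mode */
}

static int FakeSingleInit(void) { return 0; }

int main(void)
{
    ToyRegisterRunMode(1 /* stand-in for RUNMODE_AFP_DEV */, "single",
                       "Single threaded af-packet mode", FakeSingleInit);
    return ToyDispatchRunMode(1, "single");
}
```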
single
The startup log shows i: threads: Threads created -> W: 1 FM: 1 FR: 1 followed by Engine started (W = worker thread, FM = flow manager, FR = flow recycler). The call chain is RunModeIdsAFPSingle --> RunModeSetLiveCaptureSingle --> RunModeSetLiveCaptureWorkersForDevice:

```c
static int RunModeSetLiveCaptureWorkersForDevice(ConfigIfaceThreadsCountFunc ModThreadsCount,
const char *recv_mod_name,
const char *decode_mod_name, const char *thread_name,
const char *live_dev, void *aconf,
unsigned char single_mode)
{
// Both single and workers call this function; single passes single_mode = 1
//......
if (single_mode) {
threads_count = 1;
} else {
threads_count = MIN(ModThreadsCount(aconf), thread_max);
SCLogInfo("%s: creating %" PRId32 " thread%s", live_dev, threads_count,
threads_count > 1 ? "s" : "");
}
// threads_count = 1 means the thread-creation loop below runs only once
//......
for (int thread = 0; thread < threads_count; thread++) {
//...... create the thread
if (single_mode) {
snprintf(tname, sizeof(tname), "%s#01-%s", thread_name, visual_devname);
snprintf(printable_threadname, strlen(thread_name)+5+strlen(live_dev), "%s#01-%s",
thread_name, live_dev);
} else {
snprintf(tname, sizeof(tname), "%s#%02d-%s", thread_name,
thread+1, visual_devname);
snprintf(printable_threadname, strlen(thread_name)+5+strlen(live_dev), "%s#%02d-%s",
thread_name, thread+1, live_dev);
}
ThreadVars *tv = TmThreadCreatePacketHandler(tname,
"packetpool", "packetpool",
"packetpool", "packetpool",
"pktacqloop");
if (tv == NULL) {
FatalError("TmThreadsCreate failed");
}
tv->printable_name = printable_threadname;
tm_module = TmModuleGetByName(recv_mod_name);
if (tm_module == NULL) {
FatalError("TmModuleGetByName failed for %s", recv_mod_name);
}
TmSlotSetFuncAppend(tv, tm_module, aconf);
tm_module = TmModuleGetByName(decode_mod_name);
if (tm_module == NULL) {
FatalError("TmModuleGetByName %s failed", decode_mod_name);
}
TmSlotSetFuncAppend(tv, tm_module, NULL);
tm_module = TmModuleGetByName("FlowWorker");
if (tm_module == NULL) {
FatalError("TmModuleGetByName for FlowWorker failed");
}
TmSlotSetFuncAppend(tv, tm_module, NULL);
tm_module = TmModuleGetByName("RespondReject");
if (tm_module == NULL) {
FatalError("TmModuleGetByName RespondReject failed");
}
TmSlotSetFuncAppend(tv, tm_module, NULL);
TmThreadSetCPU(tv, WORKER_CPU_SET);
if (TmThreadSpawn(tv) != TM_ECODE_OK) {
FatalError("TmThreadSpawn failed");
}
}
}
```
Here you can see that the four modules (recv_mod, decode_mod, FlowWorker and RespondReject) are all appended to the same thread.
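Conceptually, the packet-handler thread built here runs a "pktacqloop": it keeps acquiring packets and pushes each one through the appended slots in order, all on the same thread. The sketch below is only an illustration of that idea with made-up types and stage functions; the real slot machinery lives in src/tm-threads.c.

```c
#include <stdio.h>

/* Illustrative sketch only: placeholder types and stage functions,
 * not Suricata's real ThreadVars/TmSlot structures. */
typedef struct ToyPacket_ { int len; } ToyPacket;

typedef struct ToySlot_ {
    void (*SlotFunc)(ToyPacket *p, const char *name);  /* one pipeline stage */
    const char *name;
    struct ToySlot_ *next;
} ToySlot;

static void stage(ToyPacket *p, const char *name)
{
    printf("%s handled a packet of %d bytes\n", name, p->len);
}

/* In single/workers mode one thread owns the whole chain:
 * ReceiveAFP -> DecodeAFP -> FlowWorker -> RespondReject. */
static void toy_pkt_acq_loop(ToySlot *chain, int n_packets)
{
    for (int i = 0; i < n_packets; i++) {
        ToyPacket p = { .len = 60 + i };          /* stand-in for a captured frame */
        for (ToySlot *s = chain; s != NULL; s = s->next)
            s->SlotFunc(&p, s->name);             /* every stage runs in-line */
    }
}

int main(void)
{
    ToySlot reject = { stage, "RespondReject", NULL };
    ToySlot flow   = { stage, "FlowWorker",    &reject };
    ToySlot decode = { stage, "DecodeAFP",     &flow };
    ToySlot recv   = { stage, "ReceiveAFP",    &decode };
    toy_pkt_acq_loop(&recv, 3);
    return 0;
}
```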
workers
The startup log shows i: threads: Threads created -> W: 4 FM: 1 FR: 1 followed by Engine started. The call chain is RunModeIdsAFPWorkers --> RunModeSetLiveCaptureWorkers --> RunModeSetLiveCaptureWorkersForDevice.
This is the same RunModeSetLiveCaptureWorkersForDevice function as in single mode, only with single_mode set to 0, so threads_count is taken from the interface configuration (capped at thread_max) instead of being forced to 1; no need to go through it again.
autofp
The startup log shows i: threads: Threads created -> RX: 4 W: 4 FM: 1 FR: 1 followed by Engine started (RX = receive threads, W = worker/detect threads, FM = flow manager, FR = flow recycler).
Listing all threads with gdb:

```less
(gdb) info threads
Id Target Id Frame
* 1 Thread 0x7ffff7a6e240 (LWP 54297) "Suricata-Main" 0x00007ffff78e57f8 in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0, req=req@entry=0x7fffffffe2d0,
rem=rem@entry=0x0) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
2 Thread 0x7ffff6400640 (LWP 55145) "RX#01" 0x00007ffff7918bcf in __GI___poll (fds=fds@entry=0x7ffff63ff268, nfds=nfds@entry=1, timeout=timeout@entry=100)
at ../sysdeps/unix/sysv/linux/poll.c:29
3 Thread 0x7ffff5a00640 (LWP 55242) "RX#02" 0x00007ffff7918bcf in __GI___poll (fds=fds@entry=0x7ffff59ff268, nfds=nfds@entry=1, timeout=timeout@entry=100)
at ../sysdeps/unix/sysv/linux/poll.c:29
4 Thread 0x7ffff5000640 (LWP 55373) "RX#03" 0x00007ffff7918bcf in __GI___poll (fds=fds@entry=0x7ffff4fff268, nfds=nfds@entry=1, timeout=timeout@entry=100)
at ../sysdeps/unix/sysv/linux/poll.c:29
5 Thread 0x7fffefe00640 (LWP 55374) "RX#04" 0x00007ffff7918bcf in __GI___poll (fds=fds@entry=0x7fffefdff268, nfds=nfds@entry=1, timeout=timeout@entry=100)
at ../sysdeps/unix/sysv/linux/poll.c:29
6 Thread 0x7fffef400640 (LWP 55375) "W#01" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x555556dde148)
at ./nptl/futex-internal.c:57
7 Thread 0x7fffee200640 (LWP 55376) "W#02" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x555556dde1fc)
at ./nptl/futex-internal.c:57
8 Thread 0x7fffed000640 (LWP 55377) "W#03" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x555556dde2a8)
at ./nptl/futex-internal.c:57
9 Thread 0x7fffe3600640 (LWP 55378) "W#04" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x555556dde378)
at ./nptl/futex-internal.c:57
10 Thread 0x7fffe2400640 (LWP 55379) "FM#01" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fffe23ff270, op=393, expected=0,
futex_word=0x555555e33d08 <flow_manager_ctrl_cond+40>) at ./nptl/futex-internal.c:57
11 Thread 0x7fffe1a00640 (LWP 55380) "FR#01" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fffe19ff270, op=393, expected=0,
futex_word=0x555555e33c88 <flow_recycler_ctrl_cond+40>) at ./nptl/futex-internal.c:57
12 Thread 0x7fffe1000640 (LWP 55382) "CW" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fffe0fff2f0, op=393, expected=0, futex_word=0x555555ed4ce8)
at ./nptl/futex-internal.c:57
13 Thread 0x7fffd7e00640 (LWP 55383) "CS" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fffd7dff2f0, op=393, expected=0, futex_word=0x555555ed5a08)
at ./nptl/futex-internal.c:57
```

The call chain is RunModeIdsAFPAutoFp --> RunModeSetLiveCaptureAutoFp:

```c
int RunModeSetLiveCaptureAutoFp(ConfigIfaceParserFunc ConfigParser,
ConfigIfaceThreadsCountFunc ModThreadsCount, const char *recv_mod_name,
const char *decode_mod_name, const char *thread_name, const char *live_dev)
{
if ((nlive <= 1) && (live_dev != NULL)) {
SCLogDebug("live_dev %s", live_dev);
void *aconf = ConfigParser(live_dev);
if (aconf == NULL) {
FatalError("Failed to allocate config for %s", live_dev);
}
// threads_count is 4 here (returned by ModThreadsCount for this interface)
int threads_count = ModThreadsCount(aconf);
SCLogInfo("Going to use %" PRId32 " %s receive thread(s)",
threads_count, recv_mod_name);
/* create the threads */
for (int thread = 0; thread < threads_count; thread++) {
//......
snprintf(tname, sizeof(tname), "%s#%02d", thread_name, thread+1);
// tname = RX#0x; this is where the RX threads seen in gdb are started
ThreadVars *tv_receive =
TmThreadCreatePacketHandler(tname,
"packetpool", "packetpool",
queues, "flow", "pktacqloop");
if (tv_receive == NULL) {
FatalError("TmThreadsCreate failed");
}
TmModule *tm_module = TmModuleGetByName(recv_mod_name);
if (tm_module == NULL) {
FatalError("TmModuleGetByName failed for %s", recv_mod_name);
}
TmSlotSetFuncAppend(tv_receive, tm_module, aconf);
tm_module = TmModuleGetByName(decode_mod_name);
if (tm_module == NULL) {
FatalError("TmModuleGetByName %s failed", decode_mod_name);
}
TmSlotSetFuncAppend(tv_receive, tm_module, NULL);
TmThreadSetCPU(tv_receive, RECEIVE_CPU_SET);
if (TmThreadSpawn(tv_receive) != TM_ECODE_OK) {
FatalError("TmThreadSpawn failed");
}
}
for (uint16_t thread = 0; thread < thread_max; thread++) {
snprintf(tname, sizeof(tname), "%s#%02u", thread_name_workers, (uint16_t)(thread + 1));
snprintf(qname, sizeof(qname), "pickup%u", (uint16_t)(thread + 1));
SCLogDebug("tname %s, qname %s", tname, qname);
ThreadVars *tv_detect_ncpu =
TmThreadCreatePacketHandler(tname,
qname, "flow",
"packetpool", "packetpool",
"varslot");
if (tv_detect_ncpu == NULL) {
FatalError("TmThreadsCreate failed");
}
TmModule *tm_module = TmModuleGetByName("FlowWorker");
if (tm_module == NULL) {
FatalError("TmModuleGetByName for FlowWorker failed");
}
TmSlotSetFuncAppend(tv_detect_ncpu, tm_module, NULL);
TmThreadSetCPU(tv_detect_ncpu, WORKER_CPU_SET);
TmThreadSetGroupName(tv_detect_ncpu, "Detect");
tm_module = TmModuleGetByName("RespondReject");
if (tm_module == NULL) {
FatalError("TmModuleGetByName RespondReject failed");
}
TmSlotSetFuncAppend(tv_detect_ncpu, tm_module, NULL);
if (TmThreadSpawn(tv_detect_ncpu) != TM_ECODE_OK) {
FatalError("TmThreadSpawn failed");
}
}
}
```
In autofp mode, recv_mod and decode_mod are appended to the receive threads, while FlowWorker and RespondReject are appended to separate worker threads; the receive threads hand packets to the workers through the flow queue handler and the per-worker pickup queues (a toy sketch of this handoff follows).
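To contrast this with the single/workers pipeline above, the toy program below models the autofp handoff: an "RX" side enqueues decoded packets onto a pickup queue and a worker thread consumes them. Everything here (the queue type, the helper names) is made up purely for illustration; Suricata's real packet queues and the "flow" queue handler that picks a pickup queue per flow hash live elsewhere in the tree (e.g. src/tmqh-flow.c). Build with -pthread.

```c
#include <pthread.h>
#include <stdio.h>

/* Toy model of the autofp handoff; queue layout and names are illustrative only. */
#define QUEUE_CAP 8

typedef struct ToyPickupQueue_ {
    int pkts[QUEUE_CAP];
    int head, tail, len;
    int done;                        /* RX side signals "no more packets" */
    pthread_mutex_t lock;
    pthread_cond_t cond;
} ToyPickupQueue;

/* RX side: after capture + decode, hand the packet to a worker's queue.
 * (No overflow check; QUEUE_CAP is large enough for this demo.) */
static void enqueue(ToyPickupQueue *q, int pkt)
{
    pthread_mutex_lock(&q->lock);
    q->pkts[q->tail] = pkt;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->len++;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

/* Worker side: pop packets and run the detection stages on them. */
static void *worker_thread(void *arg)
{
    ToyPickupQueue *q = arg;
    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (q->len == 0 && !q->done)
            pthread_cond_wait(&q->cond, &q->lock);
        if (q->len == 0 && q->done) {
            pthread_mutex_unlock(&q->lock);
            break;
        }
        int pkt = q->pkts[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->len--;
        pthread_mutex_unlock(&q->lock);
        printf("W#01: FlowWorker/RespondReject on packet %d\n", pkt);
    }
    return NULL;
}

int main(void)
{
    static ToyPickupQueue q = {
        .lock = PTHREAD_MUTEX_INITIALIZER, .cond = PTHREAD_COND_INITIALIZER
    };
    pthread_t w;
    pthread_create(&w, NULL, worker_thread, &q);

    for (int pkt = 1; pkt <= 5; pkt++)   /* stand-in for the RX capture loop */
        enqueue(&q, pkt);

    pthread_mutex_lock(&q.lock);
    q.done = 1;                          /* tell the worker we are finished */
    pthread_cond_broadcast(&q.cond);
    pthread_mutex_unlock(&q.lock);

    pthread_join(w, NULL);
    return 0;
}
```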
That is a brief overview of how threads are created under the different run modes. single is simply the single-threaded case of workers, putting all the work on one thread, whereas autofp splits the work across threads: capture and decoding on the receive threads, detection and the rest on the worker threads.