Maxun Crawler Bot: Introduction and Deployment

Software Introduction

A bot-based crawling tool that sidesteps coding entirely, capturing screenshots directly from web pages and parsing their content.

Maxun is a brand-new no-code web data extraction platform: without writing any code you can scrape data from websites, with support for list/text extraction, screenshots, custom proxies, and automatic handling of pagination and scrolling. As a young open-source project its feature set is still iterating, with planned additions such as adapting to website layout changes and extracting data behind logins.

Repository:

https://github.com/getmaxun/maxun?tab=readme-ov-file

Software Deployment

1. Install Git

yum install git

2. Clone the code

git clone https://github.com/getmaxun/maxun.git

3. Create the configuration file

cd maxun
touch .env

Add the following content to .env:

# App Setup
NODE_ENV=production                     # Set to 'development' or 'production' as required
JWT_SECRET=a9Z$kLq7^f03GzNw!bP9dH4xV6sT2yXl3O8vR@uYq3          # Replace with a secure JWT secret key
DB_NAME=maxun                           # Your PostgreSQL database name
DB_USER=postgres                        # PostgreSQL username
DB_PASSWORD=postgres                    # PostgreSQL password
DB_HOST=postgres                        # Host for PostgreSQL in Docker
DB_PORT=5432                            # Port for PostgreSQL (default: 5432)
ENCRYPTION_KEY=f4d5e6a7b8c9d0e1f23456789abcdef01234567890abcdef123456789abcdef0      # Key for encrypting sensitive data (passwords and proxies)
MINIO_ENDPOINT=minio                    # MinIO endpoint in Docker
MINIO_PORT=9000                         # Port for MinIO (default: 9000)
MINIO_CONSOLE_PORT=9001                 # Web UI Port for MinIO (default: 9001)
MINIO_ACCESS_KEY=minio_access_key       # MinIO access key
MINIO_SECRET_KEY=minio_secret_key       # MinIO secret key
REDIS_HOST=redis                        # Redis host in Docker
REDIS_PORT=6379                         # Redis port (default: 6379)

# Backend and Frontend URLs and Ports
BACKEND_PORT=8080 # Port to run backend on. Needed for Docker setup 
FRONTEND_PORT=5173 # Port to run frontend on. Needed for Docker setup 
BACKEND_URL=http://localhost:8080       # URL on which the backend runs. You can change it based on your needs. 
PUBLIC_URL=http://localhost:5173        # URL on which the frontend runs. You can change it based on your needs. 
VITE_BACKEND_URL=http://localhost:8080  # URL used by frontend to connect to backend. It should always have the same value as BACKEND_URL
VITE_PUBLIC_URL=http://localhost:5173   # URL used by backend to connect to frontend. It should always have the same value as PUBLIC_URL

# Optional Google OAuth settings for Google Sheet Integration
GOOGLE_CLIENT_ID=your_google_client_id
GOOGLE_CLIENT_SECRET=your_google_client_secret
GOOGLE_REDIRECT_URI=your_google_redirect_uri

# Telemetry Settings - Please keep it enabled. Keeping it enabled helps us understand how the product is used and assess the impact of any new changes. 
MAXUN_TELEMETRY=true
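If the app will be reached from another machine rather than from localhost, the URL variables above need to point at the server's address instead. A minimal sketch, assuming a hypothetical server IP of 192.168.1.100:

BACKEND_URL=http://192.168.1.100:8080
PUBLIC_URL=http://192.168.1.100:5173
VITE_BACKEND_URL=http://192.168.1.100:8080   # keep identical to BACKEND_URL
VITE_PUBLIC_URL=http://192.168.1.100:5173    # keep identical to PUBLIC_URL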

Option 1: Manual deployment

Deploy the dependent services first; the author's reference guides are listed below (a quick Docker sketch for these services follows the list).

Deploying MinIO with Docker (CSDN blog)

Deploying Redis with Docker (CSDN blog)

Deploying PostgreSQL with Docker (CSDN blog)

Deploying Node.js on Linux (CSDN blog)
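If those guides are not at hand, here is a minimal sketch of bringing up the three services with plain docker run, reusing the credentials and ports from the .env above (the image tags are assumptions; pick whatever versions you standardize on). Note that when the app itself runs directly on the host, as in this option, DB_HOST, MINIO_ENDPOINT and REDIS_HOST in .env should point to localhost rather than the Docker service names.

# PostgreSQL with the database name, user and password from .env
docker run -d --name postgres -p 5432:5432 \
  -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=maxun \
  postgres:15

# Redis on its default port
docker run -d --name redis -p 6379:6379 redis:7

# MinIO with the access/secret keys from .env; 9000 is the API port, 9001 the web console
docker run -d --name minio -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minio_access_key -e MINIO_ROOT_PASSWORD=minio_secret_key \
  minio/minio server /data --console-address ":9001"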

Project deployment

git clone https://github.com/getmaxun/maxun

# change directory to the project root
cd maxun

# install dependencies
npm install

# change directory to maxun-core to install dependencies
cd maxun-core 
npm install

# get back to the root directory
cd ..

# make sure playwright is properly initialized
npx playwright install
npx playwright install-deps

# start frontend and backend together
npm run start
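Once the processes are up, a quick sanity check from another terminal (ports assumed to match the .env above):

curl -I http://localhost:8080   # backend; any HTTP response means it is listening
curl -I http://localhost:5173   # frontend dev server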

Option 2: Docker Compose deployment

a. Install Docker Compose

yum -y update
yum install -y docker-compose-plugin
yum install -y python-pip   
docker compose version
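The commands above only install the Compose plugin; Docker Engine itself must already be present, and the docker-ce repository must be configured for yum to find the plugin. If it is not, a sketch of the standard Docker CE install on CentOS:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
systemctl enable --now docker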

b. Start the containers

docker compose --env-file .env up -d
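To confirm the stack started, the standard Docker Compose checks:

docker compose ps        # all services should be listed as running
docker compose logs -f   # follow the logs of every service (Ctrl+C to stop)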

4. Test

Open http://localhost:5173/ in a browser.

If accessing remotely, replace localhost with the server's IP address.
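On a server with an active firewall (firewalld on CentOS/RHEL, matching the yum commands used above), the frontend and backend ports also need to be opened. A sketch assuming firewalld:

firewall-cmd --permanent --add-port=5173/tcp
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload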

Deployment reference:

https://github.com/getmaxun/maxun
