Scenario: upload a folder of 1,000 images in one go, totaling as much as 10 GB 😭
This is no small task. The images are not only numerous, each one can approach 10 MB, so the process needs careful design to stay smooth, stable, and fault-tolerant.
1. How do we detect duplicate images?
Our first idea was to use a hash to decide whether an image is a duplicate: if two files hash to the same value, they are almost certainly the same image. The problem is speed. Hashing a 10 MB image takes roughly 30-60 ms, so 1,000 images would mean a 30-60 second wait, far too long for a good user experience.
We then took a different approach. Since we support folder upload, we already get each image's relative path, file name, and file size while walking the folder. Combining those three values gives us a fingerprint for deciding whether an image has been uploaded before. As soon as the user picks a folder, we can send the file list straight to the backend, which responds with which images are duplicates and which are new. That quickly narrows the work down to only the images that actually need uploading, saving both time and bandwidth.
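A minimal sketch of this fingerprint idea (`fileFingerprint` and `filterNewFiles` are illustrative names; `webkitRelativePath` is the field a folder-picker input populates):

```javascript
// Build a lightweight fingerprint from relative path + name + size.
// Unlike hashing, this never reads file contents, so it is effectively instant.
function fileFingerprint(file) {
  // webkitRelativePath is populated for <input type="file" webkitdirectory>
  return `${file.webkitRelativePath}|${file.name}|${file.size}`;
}

// Keep only the files whose fingerprint the backend has not seen before
function filterNewFiles(files, knownFingerprints) {
  const known = new Set(knownFingerprints);
  return files.filter((f) => !known.has(fileFingerprint(f)));
}
```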
2. Why persist a cache on the frontend?
Uploading this many files does not happen in one shot, and plenty can go wrong along the way: the tab gets closed by accident, or the browser crashes. Without a persistent cache, the user would have to re-select the folder from scratch, which is a real nuisance. So we need storage that, like localStorage, survives the browser being closed. But localStorage itself won't do: it cannot store files, and its quota is too small to hold even a single image.
We settled on IndexedDB: it can store files and offers ample capacity, which solves the problem nicely.
3. How do we control upload concurrency?
We cannot fire off all 1,000 images at once, or the user would be stuck waiting with nothing else working. Over HTTP/1.1 we have to cap concurrency: in our tests Chrome allows at most 6 concurrent connections to the same origin, so we set the limit to 5, leaving some network headroom for foreground tasks, which noticeably improves the experience.
In theory HTTP/2 removes the need for a concurrency cap, but in practice the main bottleneck is bandwidth, and HTTP/2's advantages (multiplexing, header compression) do not show in this scenario: total upload time came out about the same. So we stayed with HTTP/1.1.
4. Handling failures and interruptions
This mostly relies on the persistent local storage described above. When we first receive the backend's list of files that need uploading, we store the list, files included, in IndexedDB. Each time an image uploads successfully, we delete its record from IndexedDB. Whatever remains at the end is exactly the set of failed uploads, ready for follow-up handling.
We can also attach a catch to each upload task and automatically retry failures.
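A minimal sketch of such catch-based auto-retry (the names are illustrative, not from our codebase; the task is any function returning a promise):

```javascript
// Retry a failing upload task up to `retries` more times before giving up.
// If it still fails, the record stays in IndexedDB for later handling.
async function uploadWithRetry(task, retries = 3) {
  try {
    return await task();
  } catch (err) {
    if (retries <= 0) throw err;
    return uploadWithRetry(task, retries - 1);
  }
}
```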
Wrapping the IndexedDB operations
JavaScript
// Shape assumed from usage: `raw` holds the original File object
interface fileInfoWithId {
  id?: number;
  name: string;
  raw: File;
}

class FileStorage {
  private db: IDBDatabase | null = null;
  private dbName = 'fileStore';
  private storeName = 'files';
  private openCallback: () => void;

  constructor(open = () => {}) {
    this.openCallback = open;
    this.openDatabase();
  }

  // Open (or create) the database
  private openDatabase(): void {
    const request = indexedDB.open(this.dbName, 1);
    request.onupgradeneeded = (event) => {
      this.db = (event.target as IDBOpenDBRequest).result;
      if (!this.db.objectStoreNames.contains(this.storeName)) {
        this.db.createObjectStore(this.storeName, {
          keyPath: 'id',
          autoIncrement: true,
        });
      }
    };
    request.onsuccess = (event) => {
      this.db = (event.target as IDBOpenDBRequest).result;
      this.openCallback();
    };
    request.onerror = (event) => {
      console.error('Error opening database:', event);
    };
  }

  // Insert a whole fileList array
  public async insertFileList(fileList: fileInfoWithId[]): Promise<void> {
    const fileContents = await Promise.all(
      fileList.map((file) => this.readFileAsArrayBuffer(file.raw))
    );
    await this.transaction(async (transaction) => {
      const objectStore = transaction.objectStore(this.storeName);
      for (let i = 0; i < fileList.length; i++) {
        const file = fileList[i];
        const content = fileContents[i];
        await this.insertObjectStore(objectStore, { ...file, content });
      }
    });
  }

  // Insert a single fileInfo
  public async insertFile(fileInfo: fileInfoWithId): Promise<void> {
    await this.insertFileList([fileInfo]);
  }

  // Read a file into an ArrayBuffer
  private readFileAsArrayBuffer(file: File): Promise<ArrayBuffer> {
    return new Promise((resolve, reject) => {
      const reader = new FileReader();
      reader.readAsArrayBuffer(file);
      reader.onload = () => resolve(reader.result as ArrayBuffer);
      reader.onerror = (error) => reject(error);
    });
  }

  // Add an object to the objectStore
  private insertObjectStore(
    objectStore: IDBObjectStore,
    file: fileInfoWithId & { content: ArrayBuffer }
  ): Promise<void> {
    return new Promise((resolve, reject) => {
      const request = objectStore.add(file);
      request.onsuccess = () => resolve();
      request.onerror = () => reject(request.error);
    });
  }

  // Delete records by a list of IDs
  public async deleteByIds(ids: number[]): Promise<void> {
    await this.transaction(async (transaction) => {
      const objectStore = transaction.objectStore(this.storeName);
      for (const id of ids) {
        const request = objectStore.delete(id);
        await this.waitForRequest(request);
      }
    });
  }

  // Delete a record by a single ID
  public async deleteById(id: number): Promise<void> {
    await this.transaction(async (transaction) => {
      const objectStore = transaction.objectStore(this.storeName);
      const exists = await this.checkIfExists(objectStore, id);
      if (!exists) {
        console.warn(`No file found with id ${id}`);
        return;
      }
      const request = objectStore.delete(id);
      await this.waitForRequest(request);
    });
  }

  // Check whether a record with the given ID exists
  private checkIfExists(
    objectStore: IDBObjectStore,
    id: number
  ): Promise<boolean> {
    return new Promise((resolve) => {
      const getRequest = objectStore.get(id);
      getRequest.onsuccess = () => {
        // An undefined result means no record with this ID
        resolve(getRequest.result !== undefined);
      };
      getRequest.onerror = () => {
        console.error('Check existence error:', getRequest.error);
        resolve(false);
      };
    });
  }

  // Fetch all records
  public async findAll(): Promise<fileInfoWithId[]> {
    return await this.transaction(async (transaction) => {
      const objectStore = transaction.objectStore(this.storeName);
      const request = objectStore.getAll();
      return await this.waitForRequest(request);
    });
  }

  // Clear every record in the object store
  public async clearDatabase(): Promise<void> {
    await this.transaction(async (transaction) => {
      const objectStore = transaction.objectStore(this.storeName);
      const request = objectStore.clear();
      await this.waitForRequest(request);
    });
  }

  // Paginated lookup via a cursor
  public async findWithPagination(
    page: number,
    pageSize: number
  ): Promise<fileInfoWithId[]> {
    return await this.transaction(async (transaction) => {
      const objectStore = transaction.objectStore(this.storeName);
      const start = (page - 1) * pageSize;
      const result: fileInfoWithId[] = [];
      let currentIndex = 0;
      const cursorRequest = objectStore.openCursor();
      return new Promise<fileInfoWithId[]>((resolve, reject) => {
        cursorRequest.onerror = () => reject(cursorRequest.error);
        cursorRequest.onsuccess = (event) => {
          const cursor = (event.target as IDBRequest).result;
          if (cursor) {
            // Collect only the values inside the requested page
            if (currentIndex >= start && currentIndex < start + pageSize) {
              result.push(cursor.value);
            }
            if (currentIndex < start + pageSize) {
              currentIndex++;
              cursor.continue();
            } else {
              resolve(result);
            }
          } else {
            resolve(result);
          }
        };
      });
    });
  }

  // Open a transaction and run the callback against it
  private transaction<T>(
    callback: (transaction: IDBTransaction) => Promise<T>
  ): Promise<T> {
    return new Promise((resolve, reject) => {
      if (!this.db) {
        reject(new Error('Database not initialized'));
        return;
      }
      const transaction = this.db.transaction([this.storeName], 'readwrite');
      callback(transaction).then(resolve).catch(reject);
      transaction.onerror = () => {
        console.error('Transaction error:', transaction.error);
        reject(transaction.error);
      };
    });
  }

  // Resolve when an IDBRequest finishes
  private waitForRequest<T>(request: IDBRequest<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => {
        console.error('Request error:', request.error);
        reject(request.error);
      };
    });
  }
}

export default FileStorage;
Concurrency control class
JavaScript
// Concurrency control: runs at most `maxConcurrency` tasks at a time
class ConcurrencyControl {
  private maxConcurrency: number;
  private queue: Array<() => unknown>;
  private running: number;
  private onAllTasksCompleted: () => void;

  constructor(
    maxConcurrency: number,
    onAllTasksCompleted: () => void = () => {}
  ) {
    this.maxConcurrency = maxConcurrency;
    this.queue = [];
    this.running = 0;
    this.onAllTasksCompleted = onAllTasksCompleted;
  }

  addQueue(queue: Array<() => unknown>) {
    this.queue.push(...queue);
    this.run();
  }

  addTask(task: () => unknown) {
    this.queue.push(task);
    this.run();
  }

  run() {
    while (this.running < this.maxConcurrency && this.queue.length) {
      this.running++;
      const task = this.queue.shift()!;
      // When a task settles, free its slot and pull the next one
      Promise.resolve(task()).finally(() => {
        this.running--;
        this.run();
        if (this.running === 0 && this.queue.length === 0) {
          this.onAllTasksCompleted();
        }
      });
    }
  }
}

export default ConcurrencyControl;
5. Approach 2: pack, compress, and upload in slices
5.1 Overview
Here all the images are first packed into a single compressed archive, which is then sliced and uploaded to the server in batches. The benefits:
- Less data on the wire: compression reduces the total upload volume (though already-compressed formats such as JPEG gain little).
- Strong fault tolerance: if a slice fails to upload, only that slice is re-sent, not the whole file.
- Good user experience: the user can keep interacting with the page instead of being blocked by one long upload task.
5.2 Frontend implementation
Packing and compressing the files
JavaScript
// Pack the files with zip.js (npm package: @zip.js/zip.js)
import { ZipWriter, BlobWriter, BlobReader } from "@zip.js/zip.js";

async function compressFiles(fileList: File[]) {
  const zipWriter = new ZipWriter(new BlobWriter("application/zip"));
  const promises = [];
  for (let i = 0; i < fileList.length; i++) {
    // add() expects a Reader, so wrap each File in a BlobReader
    promises.push(
      zipWriter.add(fileList[i].name, new BlobReader(fileList[i]), { level: 6 })
    );
  }
  await Promise.all(promises);
  const compressedFile = await zipWriter.close();
  return compressedFile;
}
Slice upload logic
JavaScript
// Slice upload logic
async function uploadCompressedFile(compressedFile: Blob, sliceSize: number) {
  const totalSlices = Math.ceil(compressedFile.size / sliceSize);
  const tasks: Array<() => Promise<unknown>> = [];
  for (let i = 0; i < totalSlices; i++) {
    const start = i * sliceSize;
    const end = Math.min(start + sliceSize, compressedFile.size);
    const slice = compressedFile.slice(start, end);
    // Queue a task *function* so the request fires only when the pool runs it
    tasks.push(() => uploadSlice(slice, i, totalSlices));
  }
  // Cap concurrency at 5 and resolve once every slice has been uploaded
  await new Promise<void>((resolve) => {
    const concurrencyControl = new ConcurrencyControl(5, resolve);
    concurrencyControl.addQueue(tasks);
  });
  console.log("All slices uploaded!");
}

// Upload a single slice
function uploadSlice(slice: Blob, sliceIndex: number, totalSlices: number) {
  const formData = new FormData();
  formData.append("file", slice);
  formData.append("index", sliceIndex.toString());
  formData.append("totalSlices", totalSlices.toString());
  return fetch("/upload-slice", {
    method: "POST",
    body: formData,
  }).then((response) => {
    if (!response.ok) {
      throw new Error(`Slice ${sliceIndex} upload failed`);
    }
    return sliceIndex;
  });
}
5.3 Backend implementation (Node.js example)
JavaScript
const express = require("express");
const multer = require("multer");
const fs = require("fs");

const app = express();
const upload = multer({ dest: "uploads/" });
// Demo-only state: a real service would track this per upload session
let receivedSlices = 0;

app.post("/upload-slice", upload.single("file"), (req, res) => {
  // multer exposes form fields as strings, so convert them to numbers
  const index = Number(req.body.index);
  const totalSlices = Number(req.body.totalSlices);
  const filePath = `uploads/compressed.part${index}`;
  // Save the slice under its index
  fs.rename(req.file.path, filePath, (err) => {
    if (err) {
      return res.status(500).send("Failed to save slice");
    }
    // Count arrivals: with concurrent uploads the highest index is not
    // necessarily the last slice to arrive
    receivedSlices++;
    if (receivedSlices === totalSlices) {
      mergeSlices(totalSlices);
    }
    res.send("Slice uploaded");
  });
});

// Merge the slices sequentially so the bytes land in order
function mergeSlices(totalSlices) {
  const filePath = "uploads/compressed.zip";
  const writeStream = fs.createWriteStream(filePath);
  let i = 0;
  const appendNext = () => {
    if (i === totalSlices) {
      writeStream.end();
      return;
    }
    const slicePath = `uploads/compressed.part${i}`;
    const readStream = fs.createReadStream(slicePath);
    readStream.pipe(writeStream, { end: false });
    readStream.on("end", () => {
      fs.unlink(slicePath, (err) => {
        if (err) {
          console.error(`Failed to delete slice ${slicePath}: ${err}`);
        }
      });
      i++;
      appendNext();
    });
  };
  appendNext();
  writeStream.on("finish", () => {
    console.log("Compressed file merged");
    // Unpack the archive
    const unzip = require("unzipper");
    fs.createReadStream(filePath)
      .pipe(unzip.Extract({ path: "uploads/unzipped" }))
      .on("close", () => {
        console.log("Files extracted");
      });
  });
}

app.listen(3000, () => {
  console.log("Server listening on port 3000");
});
6. Summary
With these two approaches, we can handle large-scale file uploads efficiently: fast duplicate detection via file fingerprints, crash recovery via IndexedDB persistence, and bounded concurrency to keep the page responsive.
Finally
My skills are limited and this is far from perfect. I hope we can trade ideas and improve together with Chunye!!!