FFmpeg has three concatenation methods:
- concat video filter
Use this method if your inputs do not have the same parameters (width, height, etc.), do not share the same format/codec, or if you want to perform any filtering.
Note that this method re-encodes all inputs. If you want to avoid that, you can re-encode just the inputs that don't match so they share the same codec and other parameters, then use the concat demuxer to avoid re-encoding everything; a sketch of that approach follows the example command below.
ffmpeg -i opening.mkv -i episode.mkv -i ending.mkv \
-filter_complex "[0:v] [0:a] [1:v] [1:a] [2:v] [2:a] concat=n=3:v=1:a=1 [v] [a]" \
-map "[v]" -map "[a]" output.mkv
- concat demuxer
Use this method when you want to avoid a re-encode and your format does not support file-level concatenation (most files used by general users do not support file-level concatenation).
$ cat mylist.txt
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
$ ffmpeg -f concat -i mylist.txt -c copy output.mp4
For Windows:
(echo file 'first file.mp4' & echo file 'second file.mp4' )>list.txt
ffmpeg -safe 0 -f concat -i list.txt -c copy output.mp4
or
(for %i in (*.mp4) do @echo file '%i') > list.txt
ffmpeg -safe 0 -f concat -i list.txt -c copy output.mp4
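On Linux/macOS the list can be generated the same way with a shell loop. A minimal sketch (filenames containing single quotes would need extra escaping, and make sure output.mp4 is not already sitting in the same directory):
for f in *.mp4; do echo "file '$f'"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4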
- concat protocol
Use this method with formats that support file-level concatenation (MPEG-1, MPEG-2 PS, DV). Do not use with MP4.
ffmpeg -i "concat:input1|input2" -codec copy output.mkv
This method does not work for many formats, including MP4, due to the nature of these formats and the simplistic physical concatenation performed by this method. It's the equivalent of just raw joining the files.
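If your inputs are MP4 and you still need the concat protocol, a common workaround is to losslessly remux them to MPEG-TS first, concatenate the transport streams, and remux back to MP4. A rough sketch, assuming H.264 video and AAC audio:
ffmpeg -i input1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i input2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output.mp4
In most cases, though, the concat demuxer above is simpler and avoids the intermediate files.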
If in doubt about which method to use, try the concat demuxer.
Also see
FFmpeg FAQ: How can I join video files?
FFmpeg Wiki: How to concatenate (join, merge) media files
https://superuser.com/questions/1619992/audio-out-of-sync-when-using-ffmpeg-adelay-and-amix
You need to delay all the audio channels of each input by the same value using adelay=milliseconds:all=true, and pass -async 1 as an output option (before the output file name) so that ffmpeg only corrects the start of the audio stream instead of stretching/squeezing it.
So in your case:
ffmpeg \
-i input1.webm \
-i input2.webm \
-i input3.webm \
-i input4.webm \
-i input5.webm \
-filter_complex "
[0]adelay=1720:all=true[a0];
[1]adelay=48920:all=true[a1];
[2]adelay=76220:all=true[a2];
[3]adelay=3360980:all=true[a3];
[4]adelay=3689600:all=true[a4];
[a0][a1][a2][a3][a4]amix=inputs=5 [out]
" \
-map "[out]" -async 1 -y out.webm
As the documentation says:
-async samples_per_second
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps, the parameter is the maximum samples per second by which the audio is changed. -async 1 is a special case where only the start of the audio stream is corrected without any later correction.
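Note that more recent FFmpeg documentation marks -async as deprecated in favor of the aresample audio filter. A hedged sketch of the same idea, with aresample appended inside the filter graph instead (two inputs shown for brevity, reusing the first two delay values from above; assumes a build where aresample accepts the async option):
ffmpeg \
-i input1.webm \
-i input2.webm \
-filter_complex "
[0]adelay=1720:all=true[a0];
[1]adelay=48920:all=true[a1];
[a0][a1]amix=inputs=2,aresample=async=1[out]
" \
-map "[out]" -y out.webm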