diff -Nru auto-editor-22w28a+ds/ae-ffmpeg/LICENSE.txt auto-editor-22w52a+ds/ae-ffmpeg/LICENSE.txt
--- auto-editor-22w28a+ds/ae-ffmpeg/LICENSE.txt	1970-01-01 00:00:00.000000000 +0000
+++ auto-editor-22w52a+ds/ae-ffmpeg/LICENSE.txt	2022-12-31 17:05:14.000000000 +0000
@@ -0,0 +1,165 @@
+                   GNU LESSER GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+  This version of the GNU Lesser General Public License incorporates
+the terms and conditions of version 3 of the GNU General Public
+License, supplemented by the additional permissions listed below.
+
+  0. Additional Definitions.
+
+  As used herein, "this License" refers to version 3 of the GNU Lesser
+General Public License, and the "GNU GPL" refers to version 3 of the GNU
+General Public License.
+
+  "The Library" refers to a covered work governed by this License,
+other than an Application or a Combined Work as defined below.
+
+  An "Application" is any work that makes use of an interface provided
+by the Library, but which is not otherwise based on the Library.
+Defining a subclass of a class defined by the Library is deemed a mode
+of using an interface provided by the Library.
+
+  A "Combined Work" is a work produced by combining or linking an
+Application with the Library. The particular version of the Library
+with which the Combined Work was made is also called the "Linked
+Version".
+
+  The "Minimal Corresponding Source" for a Combined Work means the
+Corresponding Source for the Combined Work, excluding any source code
+for portions of the Combined Work that, considered in isolation, are
+based on the Application, and not on the Linked Version.
+
+  The "Corresponding Application Code" for a Combined Work means the
+object code and/or source code for the Application, including any data
+and utility programs needed for reproducing the Combined Work from the
+Application, but excluding the System Libraries of the Combined Work.
+
+  1. Exception to Section 3 of the GNU GPL.
+
+  You may convey a covered work under sections 3 and 4 of this License
+without being bound by section 3 of the GNU GPL.
+
+  2. Conveying Modified Versions.
+
+  If you modify a copy of the Library, and, in your modifications, a
+facility refers to a function or data to be supplied by an Application
+that uses the facility (other than as an argument passed when the
+facility is invoked), then you may convey a copy of the modified
+version:
+
+   a) under this License, provided that you make a good faith effort to
+   ensure that, in the event an Application does not supply the
+   function or data, the facility still operates, and performs
+   whatever part of its purpose remains meaningful, or
+
+   b) under the GNU GPL, with none of the additional permissions of
+   this License applicable to that copy.
+
+  3. Object Code Incorporating Material from Library Header Files.
+
+  The object code form of an Application may incorporate material from
+a header file that is part of the Library. You may convey such object
+code under terms of your choice, provided that, if the incorporated
+material is not limited to numerical parameters, data structure
+layouts and accessors, or small macros, inline functions and templates
+(ten or fewer lines in length), you do both of the following:
+
+   a) Give prominent notice with each copy of the object code that the
+   Library is used in it and that the Library and its use are
+   covered by this License.
+
+   b) Accompany the object code with a copy of the GNU GPL and this license
+   document.
+
+  4. Combined Works.
+
+  You may convey a Combined Work under terms of your choice that,
+taken together, effectively do not restrict modification of the
+portions of the Library contained in the Combined Work and reverse
+engineering for debugging such modifications, if you also do each of
+the following:
+
+   a) Give prominent notice with each copy of the Combined Work that
+   the Library is used in it and that the Library and its use are
+   covered by this License.
+
+   b) Accompany the Combined Work with a copy of the GNU GPL and this license
+   document.
+
+   c) For a Combined Work that displays copyright notices during
+   execution, include the copyright notice for the Library among
+   these notices, as well as a reference directing the user to the
+   copies of the GNU GPL and this license document.
+
+   d) Do one of the following:
+
+       0) Convey the Minimal Corresponding Source under the terms of this
+       License, and the Corresponding Application Code in a form
+       suitable for, and under terms that permit, the user to
+       recombine or relink the Application with a modified version of
+       the Linked Version to produce a modified Combined Work, in the
+       manner specified by section 6 of the GNU GPL for conveying
+       Corresponding Source.
+
+       1) Use a suitable shared library mechanism for linking with the
+       Library. A suitable mechanism is one that (a) uses at run time
+       a copy of the Library already present on the user's computer
+       system, and (b) will operate properly with a modified version
+       of the Library that is interface-compatible with the Linked
+       Version.
+
+   e) Provide Installation Information, but only if you would otherwise
+   be required to provide such information under section 6 of the
+   GNU GPL, and only to the extent that such information is
+   necessary to install and execute a modified version of the
+   Combined Work produced by recombining or relinking the
+   Application with a modified version of the Linked Version. (If
+   you use option 4d0, the Installation Information must accompany
+   the Minimal Corresponding Source and Corresponding Application
+   Code. If you use option 4d1, you must provide the Installation
+   Information in the manner specified by section 6 of the GNU GPL
+   for conveying Corresponding Source.)
+
+  5. Combined Libraries.
+
+  You may place library facilities that are a work based on the
+Library side by side in a single library together with other library
+facilities that are not Applications and are not covered by this
+License, and convey such a combined library under terms of your
+choice, if you do both of the following:
+
+   a) Accompany the combined library with a copy of the same work based
+   on the Library, uncombined with any other library facilities,
+   conveyed under the terms of this License.
+
+   b) Give prominent notice with the combined library that part of it
+   is a work based on the Library, and explaining where to find the
+   accompanying uncombined form of the same work.
+
+  6. Revised Versions of the GNU Lesser General Public License.
+
+  The Free Software Foundation may publish revised and/or new versions
+of the GNU Lesser General Public License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.
+
+  Each version is given a distinguishing version number. If the
+Library as you received it specifies that a certain numbered version
+of the GNU Lesser General Public License "or any later version"
+applies to it, you have the option of following the terms and
+conditions either of that published version or of any later version
+published by the Free Software Foundation. If the Library as you
+received it does not specify a version number of the GNU Lesser
+General Public License, you may choose any version of the GNU Lesser
+General Public License ever published by the Free Software Foundation.
+
+  If the Library as you received it specifies that a proxy can decide
+whether future versions of the GNU Lesser General Public License shall
+apply, that proxy's public statement of acceptance of any version is
+permanent authorization for you to choose that version for the
+Library.
diff -Nru auto-editor-22w28a+ds/ae-ffmpeg/README.md auto-editor-22w52a+ds/ae-ffmpeg/README.md
--- auto-editor-22w28a+ds/ae-ffmpeg/README.md	1970-01-01 00:00:00.000000000 +0000
+++ auto-editor-22w52a+ds/ae-ffmpeg/README.md	2022-12-31 17:05:14.000000000 +0000
@@ -0,0 +1,24 @@
+# AE-FFmpeg
+Static FFmpeg and FFprobe binaries for use with Auto-Editor.
+
+## Install
+```
+pip install ae-ffmpeg
+```
+
+## Copyright
+The FFmpeg/FFprobe binaries used are under the LGPLv3. Only libraries that are compatible with the LGPLv3 are included.
+
+## How to Compile on MacOS
+Use https://github.com/WyattBlue/ffmpeg-build-script/
+
+## How to Compile on Windows
+I use https://github.com/m-ab-s/media-autobuild_suite to compile on Windows.
+
+## Is There a Linux Build?
+Linux distros generally already ship ffmpeg, so having another ffmpeg would be redundant.
+
+## MacOS flags
+```
+--enable-videotoolbox --enable-libdav1d --enable-libvpx --enable-libzimg --enable-libmp3lame --enable-libopus --enable-libvorbis --disable-debug --disable-doc --disable-shared --disable-network --disable-indevs --disable-outdevs --disable-sdl2 --disable-xlib --disable-ffplay --enable-pthreads --enable-static --enable-version3 --extra-cflags=-I/Users/wyattblue/projects/ffmpeg-build-script/workspace/include --extra-ldexeflags= --extra-ldflags=-L/Users/wyattblue/projects/ffmpeg-build-script/workspace/lib --extra-libs='-ldl -lpthread -lm -lz' --pkgconfigdir=/Users/wyattblue/projects/ffmpeg-build-script/workspace/lib/pkgconfig --pkg-config-flags=--static --prefix=/Users/wyattblue/projects/ffmpeg-build-script/workspace --extra-version=5.0.1
+```
diff -Nru auto-editor-22w28a+ds/ae-ffmpeg/setup.py auto-editor-22w52a+ds/ae-ffmpeg/setup.py
--- auto-editor-22w28a+ds/ae-ffmpeg/setup.py	1970-01-01 00:00:00.000000000 +0000
+++ auto-editor-22w52a+ds/ae-ffmpeg/setup.py	2022-12-31 17:05:14.000000000 +0000
@@ -0,0 +1,65 @@
+import re
+
+from setuptools import find_packages, setup
+
+
+def pip_version():
+    with open("ae_ffmpeg/__init__.py") as f:
+        version_content = f.read()
+
+    version_match = re.search(
+        r"^__version__ = ['\"]([^'\"]*)['\"]", version_content, re.M
+    )
+
+    if version_match:
+        return version_match.group(1)
+
+    raise ValueError("Unable to find version string.")
+
+
+with open("README.md") as f:
+    long_description = f.read()
+
+setup(
+    name="ae-ffmpeg",
+    version=pip_version(),
+    description="Static FFmpeg binaries for Auto-Editor",
+    long_description=long_description,
+    long_description_content_type="text/markdown",
+    license="LGPLv3",
+    url="https://auto-editor.com",
+    project_urls={
+        "Bug Tracker": "https://github.com/WyattBlue/auto-editor/issues",
+        "Source Code": "https://github.com/WyattBlue/auto-editor",
+    },
+    author="WyattBlue",
+    author_email="wyattblue@auto-editor.com",
+    keywords="video audio media",
+    packages=find_packages(),
+    package_data={
+        "ae_ffmpeg": [
+            "LICENSE.txt",
+            "Windows/ffmpeg.exe",
+            "Windows/ffprobe.exe",
+            "Windows/libopenh264.dll",
+            "Darwin-x86_64/ffmpeg",
+            "Darwin-x86_64/ffprobe",
+            "Darwin-arm64/ffmpeg",
+            "Darwin-arm64/ffprobe",
+            "py.typed",
+        ],
+    },
+    include_package_data=True,
+    zip_safe=False,
+    python_requires=">=3.8",
+    classifiers=[
+        "Topic :: Multimedia :: Sound/Audio",
+        "Topic :: Multimedia :: Video",
+        "License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
+        "Intended Audience :: Developers",
+        "Operating System :: MacOS :: MacOS X",
+        "Operating System :: Microsoft :: Windows",
+        "Development Status :: 5 - Production/Stable",
+        "Programming Language :: Python :: 3",
+    ],
+)
diff -Nru auto-editor-22w28a+ds/auto_editor/analyze/audio.py auto-editor-22w52a+ds/auto_editor/analyze/audio.py
--- auto-editor-22w28a+ds/auto_editor/analyze/audio.py	2022-07-14 04:19:35.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/analyze/audio.py	1970-01-01 00:00:00.000000000 +0000
@@ -1,50 +0,0 @@
-from math import ceil
-
-import numpy as np
-import numpy.typing as npt
-
-from auto_editor.utils.log import Log
-from auto_editor.utils.progressbar import ProgressBar
-
-
-def get_max_volume(s: np.ndarray) -> float:
-    return max(float(np.max(s)), -float(np.min(s)))
-
-
-def audio_detection(
-    audio_samples: np.ndarray,
-    sample_rate: int,
-    fps: float,
-    progress: ProgressBar,
-    log: Log,
-) -> npt.NDArray[np.float_]:
-
-    max_volume = get_max_volume(audio_samples)
-
-    if max_volume == 0:
-        # Prevent dividing by zero
-        max_volume = 1
-
-    sample_count = audio_samples.shape[0]
-
-    sample_rate_per_frame = sample_rate / fps
-    audio_frame_count = ceil(sample_count / sample_rate_per_frame)
-    log.debug(f"Dur (Audio Analyze): {audio_frame_count}")
-
-    progress.start(audio_frame_count, "Analyzing audio volume")
-
-    threshold_list = np.zeros((audio_frame_count), dtype=np.float_)
-
-    # Calculate when the audio is loud or silent.
-    for i in range(audio_frame_count):
-
-        if i % 500 == 0:
-            progress.tick(i)
-
-        start = int(i * sample_rate_per_frame)
-        end = min(int((i + 1) * sample_rate_per_frame), sample_count)
-
-        threshold_list[i] = get_max_volume(audio_samples[start:end]) / max_volume
-
-    progress.end()
-    return threshold_list
diff -Nru auto-editor-22w28a+ds/auto_editor/analyze/motion.py auto-editor-22w52a+ds/auto_editor/analyze/motion.py
--- auto-editor-22w28a+ds/auto_editor/analyze/motion.py	2022-07-14 04:19:35.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/analyze/motion.py	1970-01-01 00:00:00.000000000 +0000
@@ -1,68 +0,0 @@
-from typing import Tuple
-
-import av
-import numpy as np
-from numpy.typing import NDArray
-from PIL import ImageChops, ImageFilter, ImageOps
-
-from auto_editor.utils.progressbar import ProgressBar
-
-
-def new_size(size: Tuple[int, int], width: int) -> Tuple[int, int]:
-    h, w = size
-    return width, int(h * (width / w))
-
-
-def motion_detection(
-    path: str, fps: float, progress: ProgressBar, width: int, blur: int
-) -> NDArray[np.float_]:
-    container = av.open(path, "r")
-
-    stream = container.streams.video[0]
-    stream.thread_type = "AUTO"
-
-    inaccurate_dur = int(stream.duration * stream.time_base * stream.rate)
-
-    progress.start(inaccurate_dur, "Analyzing motion")
-
-    prev_image = None
-    image = None
-    total_pixels = None
-    index = 0
-
-    threshold_list = np.zeros((1024), dtype=np.float_)
-
-    for frame in container.decode(stream):
-        if image is None:
-            prev_image = None
-        else:
-            prev_image = image
-
-        index = int(frame.time * fps)
-
-        progress.tick(index)
-
-        if index > len(threshold_list) - 1:
-            threshold_list = np.concatenate(
-                (threshold_list, np.zeros((len(threshold_list)), dtype=np.float_)),
-                axis=0,
-            )
-
-        image = frame.to_image()
-
-        if total_pixels is None:
-            total_pixels = image.size[0] * image.size[1]
-
-        image.thumbnail(new_size(image.size, width))
-        image = ImageOps.grayscale(image)
-
-        if blur > 0:
-            image = image.filter(ImageFilter.GaussianBlur(radius=blur))
-
-        if prev_image is not None:
-            count = np.count_nonzero(ImageChops.difference(prev_image, image))
-
-            threshold_list[index] = count / total_pixels
-
-    progress.end()
-    return threshold_list[:index]
diff -Nru auto-editor-22w28a+ds/auto_editor/analyze/pixeldiff.py auto-editor-22w52a+ds/auto_editor/analyze/pixeldiff.py
--- auto-editor-22w28a+ds/auto_editor/analyze/pixeldiff.py	2022-07-14 04:19:35.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/analyze/pixeldiff.py	1970-01-01 00:00:00.000000000 +0000
@@ -1,50 +0,0 @@
-import av
-import numpy as np
-from numpy.typing import NDArray
-from PIL import ImageChops
-
-from auto_editor.utils.progressbar import ProgressBar
-
-
-def pixel_difference(
-    path: str, fps: float, progress: ProgressBar
-) -> NDArray[np.uint64]:
-    container = av.open(path, "r")
-
-    stream = container.streams.video[0]
-    stream.thread_type = "AUTO"
-
-    inaccurate_dur = int(stream.duration * stream.time_base * stream.rate)
-
-    progress.start(inaccurate_dur, "Analyzing pixel diffs")
-
-    prev_image = None
-    image = None
-    index = 0
-
-    threshold_list = np.zeros((1024), dtype=np.uint64)
-
-    for frame in container.decode(stream):
-        if image is None:
-            prev_image = None
-        else:
-            prev_image = image
-
-        index = int(frame.time * fps)
-        progress.tick(index)
-
-        if index > len(threshold_list) - 1:
-            threshold_list = np.concatenate(
-                (threshold_list, np.zeros((len(threshold_list)), dtype=np.uint64)),
-                axis=0,
-            )
-
-        image = frame.to_image()
-
-        if prev_image is not None:
-            threshold_list[index] = np.count_nonzero(
-                ImageChops.difference(prev_image, image)
-            )
-
-    progress.end()
-    return threshold_list[:index]
diff -Nru auto-editor-22w28a+ds/auto_editor/analyze.py auto-editor-22w52a+ds/auto_editor/analyze.py
--- auto-editor-22w28a+ds/auto_editor/analyze.py	1970-01-01 00:00:00.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/analyze.py	2022-12-31 17:05:14.000000000 +0000
@@ -0,0 +1,444 @@
+from
__future__ import annotations + +import json +import os +from typing import TYPE_CHECKING + +import numpy as np + +from auto_editor.objs.edit import ( + Audio, + Motion, + Pixeldiff, + audio_builder, + motion_builder, + pixeldiff_builder, +) +from auto_editor.objs.util import _Vars, parse_dataclass +from auto_editor.utils.func import boolop +from auto_editor.wavfile import read + +if TYPE_CHECKING: + from fractions import Fraction + from typing import Any + + from numpy.typing import NDArray + + from auto_editor.ffwrapper import FileInfo + from auto_editor.interpreter import FileSetup + from auto_editor.output import Ensure + from auto_editor.utils.bar import Bar + from auto_editor.utils.log import Log + + +def link_nodes(*nodes: Any) -> None: + for c, n in zip(nodes, nodes[1:]): + c.link_to(n) + + +def to_threshold(arr: np.ndarray, t: int | float) -> NDArray[np.bool_]: + return np.fromiter((x >= t for x in arr), dtype=np.bool_) + + +def get_media_length( + ensure: Ensure, src: FileInfo, tb: Fraction, temp: str, log: Log +) -> int: + if src.audios: + if (arr := read_cache(src, tb, "audio", {"stream": 0}, temp)) is not None: + return len(arr) + + sr, samples = read(ensure.audio(f"{src.path.resolve()}", src.label, stream=0)) + samp_count = len(samples) + del samples + + samp_per_ticks = sr / tb + ticks = int(samp_count / samp_per_ticks) + log.debug(f"Audio Length: {ticks}") + log.debug(f"... without rounding: {float(samp_count / samp_per_ticks)}") + return ticks + + # If there's no audio, get length in video metadata. 
+ import av + + av.logging.set_level(av.logging.PANIC) + + with av.open(f"{src.path}") as cn: + if len(cn.streams.video) < 1: + log.error("Could not get media duration") + + video = cn.streams.video[0] + dur = int(video.duration * video.time_base * tb) + log.debug(f"Video duration: {dur}") + + return dur + + +def get_all( + ensure: Ensure, src: FileInfo, tb: Fraction, temp: str, log: Log +) -> NDArray[np.bool_]: + return np.zeros(get_media_length(ensure, src, tb, temp, log), dtype=np.bool_) + + +def get_none( + ensure: Ensure, src: FileInfo, tb: Fraction, temp: str, log: Log +) -> NDArray[np.bool_]: + return np.ones(get_media_length(ensure, src, tb, temp, log), dtype=np.bool_) + + +def _dict_tag(tag: str, tb: Fraction, obj: Any) -> tuple[str, dict]: + if isinstance(obj, dict): + obj_dict = obj.copy() + else: + obj_dict = obj.__dict__.copy() + if "threshold" in obj_dict: + del obj_dict["threshold"] + + key = f"{tag}:{tb}:" + for k, v in obj_dict.items(): + key += f"{k}={v}," + key = key[:-1] + + return key, obj_dict + + +def read_cache( + src: FileInfo, tb: Fraction, tag: str, obj: Any, temp: str +) -> None | np.ndarray: + from auto_editor import version + + workfile = os.path.join(os.path.dirname(temp), f"ae-{version}", "cache.json") + + try: + with open(workfile) as file: + cache = json.load(file) + except Exception: + return None + + if f"{src.path.resolve()}" not in cache: + return None + + key, obj_dict = _dict_tag(tag, tb, obj) + + if key not in (root := cache[f"{src.path.resolve()}"]): + return None + + return np.asarray(root[key]["arr"], dtype=root[key]["type"]) + + +def cache( + tag: str, tb: Fraction, obj: Any, arr: np.ndarray, src: FileInfo, temp: str +) -> np.ndarray: + from auto_editor import version + + workdur = os.path.join(os.path.dirname(temp), f"ae-{version}") + workfile = os.path.join(workdur, "cache.json") + if not os.path.exists(workdur): + os.mkdir(workdur) + + key, obj_dict = _dict_tag(tag, tb, obj) + + try: + with open(workfile) as file: + 
json_object = json.load(file) + except Exception: + json_object = {} + + entry = { + "type": str(arr.dtype), + "arr": arr.tolist(), + } + + src_key = f"{src.path}" + + if src_key in json_object: + json_object[src_key][key] = entry + else: + json_object[src_key] = {key: entry} + + with open(os.path.join(workdur, "cache.json"), "w") as file: + file.write(json.dumps(json_object)) + + return arr + + +def audio_levels( + ensure: Ensure, + src: FileInfo, + s: int, + tb: Fraction, + bar: Bar, + strict: bool, + temp: str, + log: Log, +) -> NDArray[np.float_]: + + if s > len(src.audios) - 1: + if strict: + log.error(f"Audio stream '{s}' does not exist.") + return np.zeros(get_media_length(ensure, src, tb, temp, log), dtype=np.float_) + + if (arr := read_cache(src, tb, "audio", {"stream": s}, temp)) is not None: + return arr + + sr, samples = read(ensure.audio(f"{src.path.resolve()}", src.label, s)) + + def get_max_volume(s: np.ndarray) -> float: + return max(float(np.max(s)), -float(np.min(s))) + + max_volume = get_max_volume(samples) + log.debug(f"Max volume: {max_volume}") + + samp_count = samples.shape[0] + samp_per_ticks = sr / tb + + audio_ticks = int(samp_count / samp_per_ticks) + log.debug(f"analyze: Audio Length: {audio_ticks}") + log.debug(f"... no rounding: {float(samp_count / samp_per_ticks)}") + + bar.start(audio_ticks, "Analyzing audio volume") + + threshold_list = np.zeros((audio_ticks), dtype=np.float_) + + if max_volume == 0: # Prevent dividing by zero + return threshold_list + + # Determine when audio is silent or loud. 
+ for i in range(audio_ticks): + if i % 500 == 0: + bar.tick(i) + + start = int(i * samp_per_ticks) + end = min(int((i + 1) * samp_per_ticks), samp_count) + + threshold_list[i] = get_max_volume(samples[start:end]) / max_volume + + bar.end() + return cache("audio", tb, {"stream": s}, threshold_list, src, temp) + + +def motion_levels( + ensure: Ensure, + src: FileInfo, + mobj: Any, + tb: Fraction, + bar: Bar, + strict: bool, + temp: str, + log: Log, +) -> NDArray[np.float_]: + import av + from PIL import ImageChops, ImageFilter + + av.logging.set_level(av.logging.PANIC) + + if mobj.stream >= len(src.videos): + if not strict: + return np.zeros( + get_media_length(ensure, src, tb, temp, log), dtype=np.float_ + ) + log.error(f"Video stream '{mobj.stream}' does not exist.") + + if (arr := read_cache(src, tb, "motion", mobj, temp)) is not None: + return arr + + container = av.open(f"{src.path}", "r") + + stream = container.streams.video[mobj.stream] + stream.thread_type = "AUTO" + + if stream.duration is None: + inaccurate_dur = 1 + else: + inaccurate_dur = int(stream.duration * stream.time_base * stream.average_rate) + + bar.start(inaccurate_dur, "Analyzing motion") + + prev_image = None + image = None + total_pixels = src.videos[0].width * src.videos[0].height + index = 0 + + graph = av.filter.Graph() + link_nodes( + graph.add_buffer(template=stream), + graph.add("scale", f"{mobj.width}:-1"), + graph.add("buffersink"), + ) + graph.configure() + + threshold_list = np.zeros((1024), dtype=np.float_) + + for unframe in container.decode(stream): + graph.push(unframe) + frame = graph.pull() + + prev_image = image + + index = int(frame.time * tb) + bar.tick(index) + + if index > len(threshold_list) - 1: + threshold_list = np.concatenate( + (threshold_list, np.zeros((len(threshold_list)), dtype=np.float_)), + axis=0, + ) + + image = frame.to_image().convert("L") + + if mobj.blur > 0: + image = image.filter(ImageFilter.GaussianBlur(radius=mobj.blur)) + + if prev_image is not 
None: + count = np.count_nonzero(ImageChops.difference(prev_image, image)) + + threshold_list[index] = count / total_pixels + + bar.end() + result = threshold_list[:index] + del threshold_list + + return cache("motion", tb, mobj, result, src, temp) + + +def pixeldiff_levels( + ensure: Ensure, + src: FileInfo, + pobj: Any, + tb: Fraction, + bar: Bar, + strict: bool, + temp: str, + log: Log, +) -> NDArray[np.uint64]: + import av + from PIL import ImageChops + + av.logging.set_level(av.logging.PANIC) + + if pobj.stream >= len(src.videos): + if not strict: + return np.zeros( + get_media_length(ensure, src, tb, temp, log), dtype=np.uint64 + ) + log.error(f"Video stream '{pobj.stream}' does not exist.") + + if (arr := read_cache(src, tb, "pixeldiff", pobj, temp)) is not None: + return arr + + container = av.open(f"{src.path}", "r") + + stream = container.streams.video[pobj.stream] + stream.thread_type = "AUTO" + + if stream.duration is None: + inaccurate_dur = 1 + else: + inaccurate_dur = int(stream.duration * stream.time_base * stream.average_rate) + + bar.start(inaccurate_dur, "Analyzing pixel diffs") + + prev_image = None + image = None + index = 0 + + threshold_list = np.zeros((1024), dtype=np.uint64) + + for frame in container.decode(stream): + prev_image = image + + index = int(frame.time * tb) + bar.tick(index) + + if index > len(threshold_list) - 1: + threshold_list = np.concatenate( + (threshold_list, np.zeros((len(threshold_list)), dtype=np.uint64)), + axis=0, + ) + + image = frame.to_image() + + if prev_image is not None: + threshold_list[index] = np.count_nonzero( + ImageChops.difference(prev_image, image) + ) + + bar.end() + result = threshold_list[:index] + del threshold_list + + return cache("pixeldiff", tb, pobj, result, src, temp) + + +def edit_method(val: str, filesetup: FileSetup) -> NDArray[np.bool_]: + src = filesetup.src + tb = filesetup.tb + ensure = filesetup.ensure + strict = filesetup.strict + + bar = filesetup.bar + temp = filesetup.temp + log 
= filesetup.log + + METHODS = ("audio", "motion", "pixeldiff", "none", "all") + + if ":" in val: + method, attrs = val.split(":") + if method not in METHODS: + log.error(f"'{method}' not allowed to have attributes") + else: + method, attrs = val, "" + + if method == "none": + return get_none(ensure, src, tb, temp, log) + + if method == "all": + return get_all(ensure, src, tb, temp, log) + + if method == "audio": + aobj = parse_dataclass(attrs, (Audio, audio_builder), log) + s = aobj.stream + if s == "all": + total_list: NDArray[np.bool_] | None = None + for s in range(len(src.audios)): + audio_list = to_threshold( + audio_levels(ensure, src, s, tb, bar, strict, temp, log), + aobj.threshold, + ) + if total_list is None: + total_list = audio_list + else: + total_list = boolop(total_list, audio_list, np.logical_or) + if total_list is None: + if strict: + log.error("Input has no audio streams.") + stream_data = get_all(ensure, src, tb, temp, log) + else: + stream_data = total_list + else: + stream_data = to_threshold( + audio_levels(ensure, src, s, tb, bar, strict, temp, log), + aobj.threshold, + ) + + return stream_data + + if method == "motion": + if src.videos: + _vars: _Vars = {"width": src.videos[0].width} + else: + _vars = {"width": 1} + + mobj = parse_dataclass(attrs, (Motion, motion_builder), log, _vars) + return to_threshold( + motion_levels(ensure, src, mobj, tb, bar, strict, temp, log), + mobj.threshold, + ) + + if method == "pixeldiff": + pobj = parse_dataclass(attrs, (Pixeldiff, pixeldiff_builder), log) + return to_threshold( + pixeldiff_levels(ensure, src, pobj, tb, bar, strict, temp, log), + pobj.threshold, + ) + + raise ValueError("Unreachable") diff -Nru auto-editor-22w28a+ds/auto_editor/edit.py auto-editor-22w52a+ds/auto_editor/edit.py --- auto-editor-22w28a+ds/auto_editor/edit.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/edit.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,41 +1,81 @@ +from __future__ import annotations + 
import os -from typing import List, Optional +from typing import Any from auto_editor.ffwrapper import FFmpeg, FileInfo -from auto_editor.timeline import Timeline, make_timeline +from auto_editor.make_layers import make_timeline +from auto_editor.objs.export import ( + ExAudio, + ExClipSequence, + ExDefault, + ExFinalCutPro, + ExJson, + Exports, + ExPremiere, + ExShotCut, + ExTimeline, +) +from auto_editor.objs.util import Attr +from auto_editor.output import Ensure +from auto_editor.timeline import Timeline +from auto_editor.utils.bar import Bar +from auto_editor.utils.chunks import Chunk, Chunks from auto_editor.utils.container import Container, container_constructor -from auto_editor.utils.log import Log -from auto_editor.utils.progressbar import ProgressBar -from auto_editor.utils.types import Args, Chunk, Chunks - - -def set_output_name(path: str, inp_ext: str, export: str) -> str: - root, ext = os.path.splitext(path) - - if export == "json": - return f"{root}.json" - if export == "final-cut-pro": - return f"{root}.fcpxml" - if export == "shotcut": - return f"{root}.mlt" - if export == "premiere": - return f"{root}.xml" - if export == "audio": - return f"{root}_ALTERED.wav" - if ext == "": - return root + inp_ext +from auto_editor.utils.log import Log, Timer +from auto_editor.utils.types import Args + + +def set_output( + out: str | None, _export: str | None, src: FileInfo | None, log: Log +) -> tuple[str, Exports]: + + export = None if _export is None else parse_export(_export, log) + + if src is None: + root, ext = "out", ".mp4" + else: + root, ext = os.path.splitext(str(src.path) if out is None else out) + if ext == "": + ext = src.path.suffix + + if export is None: + if ext == ".xml": + export = ExPremiere() + elif ext == ".fcpxml": + export = ExFinalCutPro() + elif ext == ".mlt": + export = ExShotCut() + elif ext == ".json": + export = ExJson() + else: + export = ExDefault() + + if isinstance(export, ExPremiere): + ext = ".xml" + if isinstance(export, 
ExFinalCutPro): + ext = ".fcpxml" + if isinstance(export, ExShotCut): + ext = ".mlt" + if isinstance(export, ExJson): + ext = ".json" + if isinstance(export, ExAudio): + ext = ".wav" - return f"{root}_ALTERED{ext}" + if out is None: + return f"{root}_ALTERED{ext}", export + + return f"{root}{ext}", export codec_error = "'{}' codec is not supported in '{}' container." def set_video_codec( - codec: str, inp: Optional[FileInfo], out_ext: str, ctr: Container, log: Log + codec: str, src: FileInfo | None, out_ext: str, ctr: Container, log: Log ) -> str: if codec == "auto": - codec = "h264" if (inp is None or not inp.videos) else inp.videos[0].codec + codec = "h264" if (src is None or not src.videos) else src.videos[0].codec if ctr.vcodecs is not None: if ctr.vstrict and codec not in ctr.vcodecs: return ctr.vcodecs[0] @@ -45,14 +85,11 @@ return codec if codec == "copy": - if inp is None: + if src is None: log.error("No input to copy its codec from.") - if not inp.videos: + if not src.videos: log.error("Input file does not have a video stream to copy codec from.") - codec = inp.videos[0].codec - - if codec == "uncompressed": - codec = "mpeg4" + codec = src.videos[0].codec if ctr.vstrict: assert ctr.vcodecs is not None @@ -66,10 +103,10 @@ def set_audio_codec( - codec: str, inp: Optional[FileInfo], out_ext: str, ctr: Container, log: Log + codec: str, src: FileInfo | None, out_ext: str, ctr: Container, log: Log ) -> str: if codec == "auto": - codec = "aac" if (inp is None or not inp.audios) else inp.audios[0].codec + codec = "aac" if (src is None or not src.audios) else src.audios[0].codec if ctr.acodecs is not None: if ctr.astrict and codec not in ctr.acodecs: return ctr.acodecs[0] @@ -79,11 +116,11 @@ return codec if codec == "copy": - if inp is None: + if src is None: log.error("No input to copy its codec from.") - if not inp.audios: + if not src.audios: log.error("Input file does not have an audio stream to copy codec from.") - codec = inp.audios[0].codec + codec = 
src.audios[0].codec if codec != "unset": if ctr.astrict: @@ -97,173 +134,212 @@ return codec +def make_sources( + paths: list[str], ffmpeg: FFmpeg, log: Log +) -> tuple[dict[str, FileInfo], list[int]]: + + used_paths: dict[str, int] = {} + sources: dict[str, FileInfo] = {} + inputs: list[int] = [] + + i = 0 + for path in paths: + if path in used_paths: + inputs.append(used_paths[path]) + else: + sources[str(i)] = FileInfo(path, ffmpeg, log, str(i)) + inputs.append(i) + used_paths[path] = i + i += 1 + return sources, inputs + + +def parse_export(export: str, log: Log) -> Exports: + from auto_editor.objs.util import parse_dataclass + from auto_editor.timeline import timeline_builder + + exploded = export.split(":", maxsplit=1) + if len(exploded) == 1: + name, attrs = exploded[0], "" + else: + name, attrs = exploded + + parsing: dict[str, tuple[Any, list[Attr]]] = { + "default": (ExDefault, []), + "premiere": (ExPremiere, []), + "final-cut-pro": (ExFinalCutPro, []), + "shotcut": (ExShotCut, []), + "json": (ExJson, timeline_builder), + "timeline": (ExTimeline, timeline_builder), + "audio": (ExAudio, []), + "clip-sequence": (ExClipSequence, []), + } + + if name in parsing: + return parse_dataclass(attrs, parsing[name], log) + + log.error(f"'{name}': Export must be [{', '.join([s for s in parsing.keys()])}]") + + def edit_media( - paths: List[str], ffmpeg: FFmpeg, args: Args, temp: str, log: Log -) -> Optional[str]: + paths: list[str], ffmpeg: FFmpeg, args: Args, temp: str, log: Log +) -> None: - progress = ProgressBar(args.progress) + timer = Timer(args.quiet) + bar = Bar(args.progress) timeline = None if paths: - path_ext = os.path.splitext(paths[0])[1] - if path_ext == ".json": + path_ext = os.path.splitext(paths[0])[1].lower() + if path_ext == ".xml": + from auto_editor.formats.premiere import premiere_read_xml + + timeline = premiere_read_xml(paths[0], ffmpeg, log) + src: FileInfo | None = next(iter(timeline.sources.items()))[1] + sources = timeline.sources + + elif 
path_ext == ".mlt": + from auto_editor.formats.shotcut import shotcut_read_mlt + + timeline = shotcut_read_mlt(paths[0], ffmpeg, log) + src = next(iter(timeline.sources.items()))[1] + sources = timeline.sources + + elif path_ext == ".json": from auto_editor.formats.json import read_json timeline = read_json(paths[0], ffmpeg, log) - inputs: List[FileInfo] = timeline.inputs - else: - inputs = [FileInfo(path, ffmpeg, log) for path in paths] - else: - inputs = [] - del paths - - inp = None if not inputs else inputs[0] - - if inp is None: - output = "out.mp4" if args.output_file is None else args.output_file - else: - if args.output_file is None: - output = set_output_name(inp.path, inp.ext, args.export) + sources = timeline.sources + src = sources["0"] else: - output = args.output_file - if os.path.splitext(output)[1] == "": - output = set_output_name(output, inp.ext, args.export) + sources, inputs = make_sources(paths, ffmpeg, log) + src = None if not inputs else sources[str(inputs[0])] - out_ext = os.path.splitext(output)[1].replace(".", "") - - # Check if export options make sense. 
- ctr = container_constructor(out_ext) + del paths - if ctr.samplerate is not None and args.sample_rate not in ctr.samplerate: - log.error(f"'{out_ext}' container only supports samplerates: {ctr.samplerate}") + output, export = set_output(args.output_file, args.export, src, log) - args.video_codec = set_video_codec(args.video_codec, inp, out_ext, ctr, log) - args.audio_codec = set_audio_codec(args.audio_codec, inp, out_ext, ctr, log) + if isinstance(export, ExTimeline): + log.quiet = True + timer.quiet = True - if args.keep_tracks_separate and ctr.max_audios == 1: - log.warning(f"'{out_ext}' container doesn't support multiple audio tracks.") + if not args.preview: + log.conwrite("Starting") - if not args.preview and not args.timeline: if os.path.isdir(output): log.error("Output path already has an existing directory!") - if os.path.isfile(output) and inputs[0].path != output: + if os.path.isfile(output) and src is not None and src.path != output: log.debug(f"Removing already existing file: {output}") os.remove(output) - # Extract subtitles in their native format. 
- if inp is not None and len(inp.subtitles) > 0: - cmd = ["-i", inp.path, "-hide_banner"] - for s, sub in enumerate(inp.subtitles): - cmd.extend(["-map", f"0:s:{s}"]) - for s, sub in enumerate(inp.subtitles): - cmd.extend([os.path.join(temp, f"{s}s.{sub.ext}")]) - ffmpeg.run(cmd) - del inp - - log.conwrite("Extracting audio") - - cmd = [] - for i, inp in enumerate(inputs): - cmd.extend(["-i", inp.path]) - cmd.append("-hide_banner") - if args.sample_rate is None: - if inputs: - samplerate = inputs[0].get_samplerate() + if timeline is None: + samplerate = 48000 if src is None else src.get_samplerate() else: - samplerate = 48000 + samplerate = timeline.samplerate else: samplerate = args.sample_rate - for i, inp in enumerate(inputs): - for s in range(len(inp.audios)): - cmd.extend( - [ - "-map", - f"{i}:a:{s}", - "-ac", - "2", - "-ar", - f"{samplerate}", - "-rf64", - "always", - os.path.join(temp, f"{i}-{s}.wav"), - ] - ) - - ffmpeg.run(cmd) + ensure = Ensure(ffmpeg, samplerate, temp, log) if timeline is None: - timeline = make_timeline(inputs, args, samplerate, progress, temp, log) + # Extract subtitles in their native format. 
+ if src is not None and len(src.subtitles) > 0: + cmd = ["-i", f"{src.path}", "-hide_banner"] + for s, sub in enumerate(src.subtitles): + cmd.extend(["-map", f"0:s:{s}"]) + for s, sub in enumerate(src.subtitles): + cmd.extend([os.path.join(temp, f"{s}s.{sub.ext}")]) + ffmpeg.run(cmd) + + timeline = make_timeline( + sources, inputs, ffmpeg, ensure, args, samplerate, bar, temp, log + ) - if args.timeline: + if isinstance(export, ExTimeline): from auto_editor.formats.json import make_json_timeline - make_json_timeline(args.api, 0, timeline, log) - return None + make_json_timeline(export, 0, timeline, log) + return if args.preview: from auto_editor.preview import preview - preview(timeline, temp, log) - return None + preview(ensure, timeline, temp, log) + return - if args.export == "json": + if isinstance(export, ExJson): from auto_editor.formats.json import make_json_timeline - make_json_timeline(args.api, output, timeline, log) - return output + make_json_timeline(export, output, timeline, log) + return - if args.export == "premiere": - from auto_editor.formats.premiere import premiere_xml + if isinstance(export, ExPremiere): + from auto_editor.formats.premiere import premiere_write_xml - premiere_xml(temp, output, timeline) - return output + premiere_write_xml(ensure, output, timeline) + return - if args.export == "final-cut-pro": + if isinstance(export, ExFinalCutPro): from auto_editor.formats.final_cut_pro import fcp_xml fcp_xml(output, timeline) - return output + return + + if isinstance(export, ExShotCut): + from auto_editor.formats.shotcut import shotcut_write_mlt + + shotcut_write_mlt(output, timeline) + return + + out_ext = os.path.splitext(output)[1].replace(".", "") - if args.export == "shotcut": - from auto_editor.formats.shotcut import shotcut_xml + # Check if export options make sense. 
+ ctr = container_constructor(out_ext.lower()) + + if ctr.samplerate is not None and args.sample_rate not in ctr.samplerate: + log.error(f"'{out_ext}' container only supports samplerates: {ctr.samplerate}") - shotcut_xml(output, timeline) - return output + args.video_codec = set_video_codec(args.video_codec, src, out_ext, ctr, log) + args.audio_codec = set_audio_codec(args.audio_codec, src, out_ext, ctr, log) + + if args.keep_tracks_separate and ctr.max_audios == 1: + log.warning(f"'{out_ext}' container doesn't support multiple audio tracks.") def make_media(timeline: Timeline, output: str) -> None: from auto_editor.output import mux_quality_media from auto_editor.render.video import render_av + assert src is not None + visual_output = [] audio_output = [] + sub_output = [] apply_later = False - inp = timeline.inputs[0] if ctr.allow_subtitle: - from auto_editor.render.subtitle import cut_subtitles + from auto_editor.render.subtitle import make_new_subtitles - cut_subtitles(ffmpeg, timeline, temp, log) + sub_output = make_new_subtitles(timeline, ffmpeg, temp, log) if ctr.allow_audio: from auto_editor.render.audio import make_new_audio - audio_output = make_new_audio(timeline, progress, temp, log) + audio_output = make_new_audio(timeline, ensure, ffmpeg, bar, temp, log) if ctr.allow_video: if len(timeline.v) > 0: out_path, apply_later = render_av( - ffmpeg, timeline, args, progress, ctr, temp, log + ffmpeg, timeline, args, bar, ctr, temp, log ) visual_output.append((True, out_path)) - for v, vid in enumerate(inp.videos, start=1): + for v, vid in enumerate(src.videos, start=1): if ctr.allow_image and vid.codec in ("png", "mjpeg", "webp"): out_path = os.path.join(temp, f"{v}.{vid.codec}") # fmt: off - ffmpeg.run(["-i", inp.path, "-map", "0:v", "-map", "-0:V", + ffmpeg.run(["-i", f"{src.path}", "-map", "0:v", "-map", "-0:V", "-c", "copy", out_path]) # fmt: on visual_output.append((False, out_path)) @@ -273,21 +349,23 @@ ffmpeg, visual_output, audio_output, + sub_output, 
apply_later, ctr, output, + timeline.timebase, args, - inp, + src, temp, log, ) - if args.export == "clip-sequence": + if isinstance(export, ExClipSequence): chunks = timeline.chunks if chunks is None: log.error("Timeline to complex to use clip-sequence export") - from auto_editor.timeline import clipify, make_av + from auto_editor.make_layers import clipify, make_av from auto_editor.utils.func import append_filename def pad_chunk(chunk: Chunk, total: int) -> Chunks: @@ -302,10 +380,11 @@ continue _c = pad_chunk(chunk, total_frames) - vspace, aspace = make_av([clipify(_c, 0, 0)], [inp]) + + vspace, aspace = make_av([clipify(_c, "0")], timeline.sources, [0]) my_timeline = Timeline( - timeline.inputs, - timeline.fps, + timeline.sources, + timeline.timebase, timeline.samplerate, timeline.res, "#000", @@ -318,4 +397,16 @@ clip_num += 1 else: make_media(timeline, output) - return output + + timer.stop() + + if not args.no_open and isinstance(export, (ExDefault, ExAudio, ExClipSequence)): + if args.player is None: + from auto_editor.utils.func import open_with_system_default + + open_with_system_default(output, log) + else: + import subprocess + from shlex import split + + subprocess.run(split(args.player) + [output]) diff -Nru auto-editor-22w28a+ds/auto_editor/ffwrapper.py auto-editor-22w52a+ds/auto_editor/ffwrapper.py --- auto-editor-22w28a+ds/auto_editor/ffwrapper.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/ffwrapper.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,15 +1,16 @@ +from __future__ import annotations + import json import os.path import subprocess import sys from dataclasses import dataclass from fractions import Fraction +from pathlib import Path from platform import system from re import search from subprocess import PIPE, Popen -from typing import Any, Dict, List, Optional, Tuple - -import ae_ffmpeg +from typing import Any from auto_editor.utils.func import get_stdout from auto_editor.utils.log import Log @@ -23,18 +24,22 @@ 
def __init__( self, - ff_location: Optional[str] = None, + ff_location: str | None = None, my_ffmpeg: bool = False, debug: bool = False, - ) -> None: - - def _set_ff_path(ff_location: Optional[str], my_ffmpeg: bool) -> str: + ): + def _set_ff_path(ff_location: str | None, my_ffmpeg: bool) -> str: if ff_location is not None: return ff_location if my_ffmpeg: return "ffmpeg" - return ae_ffmpeg.get_path() + try: + import ae_ffmpeg + + return ae_ffmpeg.get_path() + except ImportError: + return "ffmpeg" self.debug = debug self.path = _set_ff_path(ff_location, my_ffmpeg) @@ -45,13 +50,12 @@ except FileNotFoundError: if system() == "Darwin": Log().error( - "No ffmpeg found, download via homebrew or restore the " - "included binary." + "No ffmpeg found, download via homebrew or install ae-ffmpeg." ) if system() == "Windows": Log().error( "No ffmpeg found, download ffmpeg with your favorite package " - "manager (ex chocolatey), or restore the included binary." + "manager (ex chocolatey), or install ae-ffmpeg." 
) Log().error("ffmpeg must be installed and on PATH.") @@ -60,11 +64,11 @@ if self.debug: sys.stderr.write(f"FFmpeg: {message}\n") - def print_cmd(self, cmd: List[str]) -> None: + def print_cmd(self, cmd: list[str]) -> None: if self.debug: sys.stderr.write(f"FFmpeg run: {' '.join(cmd)}\n") - def run(self, cmd: List[str]) -> None: + def run(self, cmd: list[str]) -> None: cmd = [self.path, "-y", "-hide_banner"] + cmd if not self.debug: cmd.extend(["-nostats", "-loglevel", "error"]) @@ -73,26 +77,18 @@ def run_check_errors( self, - cmd: List[str], + cmd: list[str], log: Log, show_out: bool = False, - path: Optional[str] = None, + path: str | None = None, ) -> None: - def _run(cmd: List[str]) -> str: - process = self.Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) - _, stderr = process.communicate() + process = self.Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) + _, stderr = process.communicate() - if process.stdin is not None: - process.stdin.close() - return stderr.decode("utf-8", "replace") - - output = _run(cmd) - - if "Try -allow_sw 1" in output: - cmd.insert(-1, "-allow_sw") - cmd.insert(-1, "1") - output = _run(cmd) + if process.stdin is not None: + process.stdin.close() + output = stderr.decode("utf-8", "replace") error_list = [ r"Unknown encoder '.*'", @@ -111,8 +107,7 @@ print(f"stderr: {output}") for item in error_list: - check = search(item, output) - if check: + if check := search(item, output): log.error(check.group()) if path is not None and not os.path.isfile(path): @@ -120,12 +115,14 @@ elif show_out and not self.debug: print(f"stderr: {output}") - def Popen(self, cmd: List[str], stdin=None, stdout=PIPE, stderr=None) -> Popen: + def Popen( + self, cmd: list[str], stdin: Any = None, stdout: Any = PIPE, stderr: Any = None + ) -> Popen: cmd = [self.path] + cmd self.print_cmd(cmd) return Popen(cmd, stdin=stdin, stdout=stdout, stderr=stderr) - def pipe(self, cmd: List[str]) -> str: + def pipe(self, cmd: list[str]) -> str: cmd = [self.path, "-y"] + cmd 
self.print_cmd(cmd) @@ -139,75 +136,77 @@ width: int height: int codec: str - fps: float + fps: Fraction + duration: str | None + sar: str | None time_base: Fraction pix_fmt: str - color_range: Optional[str] - color_space: Optional[str] - color_primaries: Optional[str] - color_transfer: Optional[str] - bitrate: Optional[str] - lang: Optional[str] + color_range: str | None + color_space: str | None + color_primaries: str | None + color_transfer: str | None + bitrate: str | None + lang: str | None @dataclass class AudioStream: codec: str samplerate: int - bitrate: Optional[str] - lang: Optional[str] + duration: str | None + bitrate: str | None + lang: str | None @dataclass class SubtitleStream: codec: str ext: str - lang: Optional[str] + lang: str | None class FileInfo: __slots__ = ( "path", - "abspath", - "basename", - "dirname", - "name", - "ext", + "modified", "bitrate", + "duration", "description", "videos", "audios", "subtitles", + "label", ) - def get_res(self) -> Tuple[int, int]: + def get_res(self) -> tuple[int, int]: if len(self.videos) > 0: return self.videos[0].width, self.videos[0].height return 1920, 1080 - def get_fps(self) -> float: - fps = None + def get_fps(self) -> Fraction: if len(self.videos) > 0: - fps = self.videos[0].fps - - return 30 if fps is None else fps + return self.videos[0].fps + return Fraction(30) def get_samplerate(self) -> int: if len(self.audios) > 0: return self.audios[0].samplerate return 48000 - def __init__(self, path: str, ffmpeg: FFmpeg, log: Log): - self.path = path - self.abspath = os.path.abspath(path) - self.basename = os.path.basename(path) - self.dirname = os.path.dirname(os.path.abspath(path)) - self.name, self.ext = os.path.splitext(path) - - self.videos: List[VideoStream] = [] - self.audios: List[AudioStream] = [] - self.subtitles: List[SubtitleStream] = [] + def __init__(self, path: str, ffmpeg: FFmpeg, log: Log, label: str = ""): + self.label = label + self.path = Path(path) + self.videos: list[VideoStream] = [] + 
self.audios: list[AudioStream] = [] + self.subtitles: list[SubtitleStream] = [] self.description = None + self.duration = "" + + try: + stats = os.stat(path) + self.modified = stats.st_mtime + except OSError: + log.error(f"Could not access: {path}") _dir = os.path.dirname(ffmpeg.path) _ext = os.path.splitext(ffmpeg.path)[1] @@ -227,9 +226,9 @@ ] ) except FileNotFoundError: - log.error(f"Could not find: {ffprobe}") + log.nofile(ffprobe) - def get_attr(name: str, dic: Dict[str, Any], default=-1) -> str: + def get_attr(name: str, dic: dict[Any, Any], default: Any = -1) -> str: if name in dic: if isinstance(dic[name], str): return dic[name] @@ -251,7 +250,7 @@ except Exception as e: log.error(f"{path}: Could not read ffprobe JSON: {e}") - self.bitrate: Optional[str] = None + self.bitrate: str | None = None if "bit_rate" in json_info["format"]: self.bitrate = json_info["format"]["bit_rate"] if ( @@ -260,6 +259,9 @@ ): self.description = json_info["format"]["tags"]["description"] + if "duration" in json_info["format"]: + self.duration = json_info["format"]["duration"] + for stream in json_info["streams"]: lang = None br = None @@ -275,32 +277,34 @@ if codec_type == "video": pix_fmt = get_attr("pix_fmt", stream) + vduration = get_attr("duration", stream, default=None) color_range = get_attr("color_range", stream, default=None) color_space = get_attr("color_space", stream, default=None) color_primaries = get_attr("color_primaries", stream, default=None) color_transfer = get_attr("color_transfer", stream, default=None) - fps_str = get_attr("avg_frame_rate", stream) + sar = get_attr("sample_aspect_ratio", stream, default=None) + fps_str = get_attr("r_frame_rate", stream) time_base_str = get_attr("time_base", stream) try: - fps = float(Fraction(fps_str)) + fps = Fraction(fps_str) except ZeroDivisionError: - fps = 0 + fps = Fraction(0) except ValueError: - log.error(f"Could not convert fps '{fps_str}' to float") + log.error(f"Could not convert fps '{fps_str}' to Fraction.") if 
fps < 1: if codec in IMG_CODECS: - fps = 25 - else: - log.error("fps cannot be less than 1.") + fps = Fraction(25) + elif fps == 0: + fps = Fraction(30) try: time_base = Fraction(time_base_str) except (ValueError, ZeroDivisionError): if codec not in IMG_CODECS: log.error( - f"Could not convert time_base '{time_base_str}' to Fraction" + f"Could not convert time_base '{time_base_str}' to Fraction." ) time_base = Fraction(0, 1) @@ -310,6 +314,8 @@ stream["height"], codec, fps, + vduration, + sar, time_base, pix_fmt, color_range, @@ -322,7 +328,8 @@ ) if codec_type == "audio": sr = int(stream["sample_rate"]) - self.audios.append(AudioStream(codec, sr, br, lang)) + adur = get_attr("duration", stream, default=None) + self.audios.append(AudioStream(codec, sr, adur, br, lang)) if codec_type == "subtitle": ext = SUB_EXTS.get(codec, "vtt") self.subtitles.append(SubtitleStream(codec, ext, lang)) diff -Nru auto-editor-22w28a+ds/auto_editor/formats/final_cut_pro.py auto-editor-22w52a+ds/auto_editor/formats/final_cut_pro.py --- auto-editor-22w28a+ds/auto_editor/formats/final_cut_pro.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/formats/final_cut_pro.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,3 +1,12 @@ +from __future__ import annotations + +from fractions import Fraction + +from auto_editor.ffwrapper import FileInfo +from auto_editor.timeline import Timeline + +from .utils import indent + """ Export a FCPXML 9 file readable with Final Cut Pro 10.4.9 or later. 
@@ -6,23 +15,14 @@ """ -from pathlib import Path, PureWindowsPath -from platform import system -from typing import Union -from auto_editor.ffwrapper import FileInfo -from auto_editor.timeline import Timeline - -from .utils import indent - - -def get_colorspace(inp: FileInfo) -> str: +def get_colorspace(src: FileInfo) -> str: # See: https://developer.apple.com/documentation/professional_video_applications/fcpxml_reference/asset#3686496 - if len(inp.videos) == 0: + if len(src.videos) == 0: return "1-1-1 (Rec. 709)" - s = inp.videos[0] + s = src.videos[0] if s.pix_fmt == "rgb24": return "sRGB IEC61966-2.1" if s.color_space == "smpte170m": @@ -38,20 +38,12 @@ return "1-1-1 (Rec. 709)" -def fraction(_a: Union[int, float], _fps: float) -> str: - from fractions import Fraction - +def fraction(_a: float, tb: Fraction) -> str: if _a == 0: return "0s" - if isinstance(_a, float): - a = Fraction(_a) - else: - a = _a - - fps = Fraction(_fps) - - frac = Fraction(a, fps).limit_denominator() + a = Fraction(_a) + frac = Fraction(a, tb).limit_denominator() num = frac.numerator dem = frac.denominator @@ -73,28 +65,24 @@ def fcp_xml(output: str, timeline: Timeline) -> None: - inp = timeline.inp - fps = timeline.fps + src = timeline.sources["0"] + tb = timeline.timebase chunks = timeline.chunks if chunks is None: raise ValueError("Timeline too complex") total_dur = chunks[-1][1] - - if system() == "Windows": - pathurl = "file://localhost/" + PureWindowsPath(inp.abspath).as_posix() - else: - pathurl = Path(inp.abspath).as_uri() + pathurl = src.path.resolve().as_uri() width, height = timeline.res - frame_duration = fraction(1, fps) + frame_duration = fraction(1, tb) - audio_file = len(inp.videos) == 0 and len(inp.audios) > 0 - group_name = "Auto-Editor {} Group".format("Audio" if audio_file else "Video") - name = inp.basename + audio_file = len(src.videos) == 0 and len(src.audios) > 0 + group_name = f"Auto-Editor {'Audio' if audio_file else 'Video'} Group" + name = src.path.stem - 
colorspace = get_colorspace(inp) + colorspace = get_colorspace(src) with open(output, "w", encoding="utf-8") as outfile: outfile.write('\n') @@ -102,7 +90,7 @@ outfile.write('\n') outfile.write("\t\n") outfile.write( - f'\t\t\n' @@ -110,7 +98,7 @@ outfile.write( f'\t\t\n' + f'duration="{fraction(total_dur, tb)}">\n' ) outfile.write( f'\t\t\t\n' @@ -134,7 +122,7 @@ continue clip_dur = (clip[1] - clip[0] + 1) / clip[2] - dur = fraction(clip_dur, fps) + dur = fraction(clip_dur, tb) close = "/" if clip[2] == 1 else "" @@ -146,8 +134,8 @@ ) ) else: - start = fraction(clip[0] / clip[2], fps) - off = fraction(last_dur, fps) + start = fraction(clip[0] / clip[2], tb) + off = fraction(last_dur, tb) outfile.write( indent( 6, @@ -161,8 +149,8 @@ # See the "Time Maps" section. # https://developer.apple.com/library/archive/documentation/FinalCutProX/Reference/FinalCutProXXMLFormat/StoryElements/StoryElements.html - frac_total = fraction(total_dur, fps) - speed_dur = fraction(total_dur / clip[2], fps) + frac_total = fraction(total_dur, tb) + speed_dur = fraction(total_dur / clip[2], tb) outfile.write( indent( diff -Nru auto-editor-22w28a+ds/auto_editor/formats/json.py auto-editor-22w52a+ds/auto_editor/formats/json.py --- auto-editor-22w28a+ds/auto_editor/formats/json.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/formats/json.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,16 +1,20 @@ -""" -Make a pre-edited file reference that can be inputted back into auto-editor. 
-""" +from __future__ import annotations import json import os import sys -from typing import Any, Optional, Union +from fractions import Fraction +from typing import Any from auto_editor.ffwrapper import FFmpeg, FileInfo -from auto_editor.timeline import Timeline, clipify, make_av +from auto_editor.objs.export import ExJson, ExTimeline +from auto_editor.objs.util import parse_dataclass +from auto_editor.timeline import Timeline, audio_objects, visual_objects from auto_editor.utils.log import Log -from auto_editor.utils.types import Chunks + +""" +Make a pre-edited file reference that can be inputted back into auto-editor. +""" def check_attrs(data: object, log: Log, *attrs: str) -> None: @@ -21,48 +25,18 @@ log.error(f"'{attr}' attribute not found!") -def check_file(path: str, log: Log): +def check_file(path: str, log: Log) -> None: if not os.path.isfile(path): log.error(f"Could not locate media file: '{path}'") -def validate_chunks(chunks: object, log: Log) -> Chunks: - if not isinstance(chunks, (list, tuple)): - log.error("Chunks must be a list") - - if len(chunks) == 0: - log.error("Chunks are empty!") - - new_chunks = [] - prev_end: Optional[int] = None - - for i, chunk in enumerate(chunks): - if len(chunk) != 3: - log.error("Chunk must have a length of 3.") - - if i == 0 and chunk[0] != 0: - log.error("First chunk must start with 0") - - if chunk[1] - chunk[0] < 1: - log.error("Chunk duration must be at least 1") - - if chunk[2] <= 0 or chunk[2] > 99999: - log.error("Chunk speed range must be >0 and <=99999") - - if prev_end is not None and chunk[0] != prev_end: - log.error(f"Chunk disjointed at {chunk}") - - prev_end = chunk[1] - - new_chunks.append((chunk[0], chunk[1], float(chunk[2]))) - - return new_chunks - - class Version: __slots__ = ("major", "minor", "micro") def __init__(self, val: str, log: Log) -> None: + if val.startswith("unstable:"): + val = val[9:] + ver_str = val.split(".") if len(ver_str) > 3: log.error("Version string: Too many 
separators!") @@ -90,72 +64,94 @@ check_attrs(data, log, "version") version = Version(data["version"], log) - if version == (1, 0) or version == (0, 1): - check_attrs(data, log, "source", "chunks") - check_file(data["source"], log) - - chunks = validate_chunks(data["chunks"], log) - inp = FileInfo(data["source"], ffmpeg, log) - - vspace, aspace = make_av([clipify(chunks, 0, 0)], [inp]) - - fps = inp.get_fps() - sr = inp.get_samplerate() - res = inp.get_res() - - return Timeline([inp], fps, sr, res, "#000", vspace, aspace, chunks) - - if version == (2, 0) or version == (0, 2): + if version == (3, 0): check_attrs(data, log, "timeline") - # check_file(data["source"], log) - # return data["background"], data["source"], chunks + tl = data["timeline"] + check_attrs( + tl, + log, + "sources", + "background", + "v", + "a", + "timebase", + "resolution", + "samplerate", + ) + + sources: dict[str, FileInfo] = {} + for _id, path in tl["sources"].items(): + check_file(path, log) + sources[_id] = FileInfo(path, ffmpeg, log) + + bg = tl["background"] + sr = tl["samplerate"] + res = (tl["resolution"][0], tl["resolution"][1]) + tb = Fraction(tl["timebase"]) + + v: Any = [] + a: Any = [] + + def dict_to_args(d: dict) -> str: + attrs = [] + for k, v in d.items(): + if k != "name": + attrs.append(f"{k}={v}") + return ",".join(attrs) + + for vlayers in tl["v"]: + if vlayers: + v_out = [] + for vdict in vlayers: + if "name" not in vdict: + log.error("Invalid video object: name not specified") + if vdict["name"] not in visual_objects: + log.error(f"Unknown video object: {vdict['name']}") + my_obj = visual_objects[vdict["name"]] + attr_str = dict_to_args(vdict) + vobj = parse_dataclass(attr_str, my_obj, log, None, True) + v_out.append(vobj) + v.append(v_out) + + for alayers in tl["a"]: + if alayers: + a_out = [] + for adict in alayers: + if "name" not in adict: + log.error("Invalid audio object: name not specified") + if adict["name"] not in audio_objects: + log.error(f"Unknown audio 
object: {adict['name']}") + my_obj = audio_objects[adict["name"]] + attr_str = dict_to_args(adict) + aobj = parse_dataclass(attr_str, my_obj, log, None, True) + a_out.append(aobj) + a.append(a_out) - raise ValueError("Incomplete") + return Timeline(sources, tb, sr, res, bg, v, a) - log.error(f"Unsupported version: {version}") + log.error(f"Importing version {version} timelines is not supported.") def make_json_timeline( - _version: str, - out: Union[str, int], - timeline: Timeline, + obj: ExJson | ExTimeline, + out: str | int, + tl: Timeline, log: Log, ) -> None: - version = Version(_version, log) - - if version == (1, 0) or version == (0, 1): - if timeline.chunks is None: - log.error("Timeline too complex to convert to version 1.0") - - data: Any = { - "version": "1.0.0", - "source": os.path.abspath(timeline.inp.path), - "chunks": timeline.chunks, - } - elif version == (2, 0) or version == (0, 2): - sources = [os.path.abspath(inp.path) for inp in timeline.inputs] - data = { - "version": "2.0.0", - "sources": sources, - "timeline": { - "background": timeline.background, - "resolution": timeline.res, - "fps": timeline.fps, - "samplerate": timeline.samplerate, - "video": timeline.v, - "audio": timeline.a, - }, - } - else: + if (version := Version(obj.api, log)) != (3, 0): log.error(f"Version {version} is not supported!") if isinstance(out, str): if not out.endswith(".json"): log.error("Output extension must be .json") + outfile: Any = open(out, "w") + else: + outfile = sys.stdout + + json.dump(tl.as_dict(), outfile, indent=2, default=lambda o: o.__dict__) - with open(out, "w") as outfile: - json.dump(data, outfile, indent=2, default=lambda o: o.__dict__) + if isinstance(out, str): + outfile.close() else: - json.dump(data, sys.stdout, indent=2, default=lambda o: o.__dict__) print("") # Flush stdout diff -Nru auto-editor-22w28a+ds/auto_editor/formats/premiere.py auto-editor-22w52a+ds/auto_editor/formats/premiere.py --- 
auto-editor-22w28a+ds/auto_editor/formats/premiere.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/formats/premiere.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,12 +1,18 @@ +from __future__ import annotations + import os.path -from os.path import abspath -from platform import system +import xml.etree.ElementTree as ET +from fractions import Fraction +from math import ceil from shutil import move -from urllib.parse import quote +from xml.etree.ElementTree import Element -from auto_editor.timeline import Timeline +from auto_editor.ffwrapper import FFmpeg, FileInfo +from auto_editor.output import Ensure +from auto_editor.timeline import ASpace, Timeline, TlAudio, TlVideo, VSpace +from auto_editor.utils.log import Log -from .utils import indent, safe_mkdir +from .utils import Validator, safe_mkdir, show """ Premiere Pro uses the Final Cut Pro 7 XML Interchange Format @@ -24,358 +30,418 @@ DEPTH = "16" -def fix_url(path: str) -> str: - if system() == "Windows": - return "file://localhost/" + quote(abspath(path)).replace("%5C", "/") - return f"file://localhost{abspath(path)}" - - -def speedup(speed: float) -> str: - return indent( - 6, - "", - "\t", - "\t\tTime Remap", - "\t\ttimeremap", - "\t\tmotion", - "\t\tmotion", - "\t\tvideo", - '\t\t', - "\t\t\tvariablespeed", - "\t\t\tvariablespeed", - "\t\t\t0", - "\t\t\t1", - "\t\t\t0", - "\t\t", - '\t\t', - "\t\t\tspeed", - "\t\t\tspeed", - "\t\t\t-100000", - "\t\t\t100000", - f"\t\t\t{speed}", - "\t\t", - '\t\t', - "\t\t\treverse", - "\t\t\treverse", - "\t\t\tFALSE", - "\t\t", - '\t\t', - "\t\t\tframeblending", - "\t\t\tframeblending", - "\t\t\tFALSE", - "\t\t", - "\t", - "", +def uri_to_path(uri: str) -> str: + from urllib.parse import urlparse + from urllib.request import url2pathname + + parsed = urlparse(uri) + host = "{0}{0}{mnt}{0}".format(os.path.sep, mnt=parsed.netloc) + return os.path.normpath(os.path.join(host, url2pathname(parsed.path))) + + # 
/Users/wyattblue/projects/auto-editor/example.mp4 + # file:///Users/wyattblue/projects/auto-editor/example.mp4 + # file://localhost/Users/wyattblue/projects/auto-editor/example.mp4 + + +def set_tb_ntsc(tb: Fraction) -> tuple[int, str]: + # See chart: https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/FinalCutPro_XML/FrameRate/FrameRate.html#//apple_ref/doc/uid/TP30001158-TPXREF103 + if tb == Fraction(24000, 1001): + return 24, "TRUE" + if tb == Fraction(30000, 1001): + return 30, "TRUE" + if tb == Fraction(60000, 1001): + return 60, "TRUE" + + ctb = ceil(tb) + if ctb not in (24, 30, 60) and ctb * Fraction(999, 1000) == tb: + return ctb, "TRUE" + + return int(tb), "FALSE" + + +def read_tb_ntsc(tb: int, ntsc: bool) -> Fraction: + if ntsc: + if tb == 24: + return Fraction(24000, 1001) + if tb == 30: + return Fraction(30000, 1001) + if tb == 60: + return Fraction(60000, 1001) + return tb * Fraction(999, 1000) + + return Fraction(tb) + + +def speedup(speed: float) -> Element: + fil = Element("filter") + effect = ET.SubElement(fil, "effect") + ET.SubElement(effect, "name").text = "Time Remap" + ET.SubElement(effect, "effectid").text = "timeremap" + + para = ET.SubElement(effect, "parameter", authoringApp="PremierePro") + ET.SubElement(para, "parameterid").text = "variablespeed" + ET.SubElement(para, "name").text = "variablespeed" + ET.SubElement(para, "valuemin").text = "0" + ET.SubElement(para, "valuemax").text = "1" + ET.SubElement(para, "value").text = "0" + + para2 = ET.SubElement(effect, "parameter", authoringApp="PremierePro") + ET.SubElement(para2, "parameterid").text = "speed" + ET.SubElement(para2, "name").text = "speed" + ET.SubElement(para2, "valuemin").text = "-100000" + ET.SubElement(para2, "valuemax").text = "100000" + ET.SubElement(para2, "value").text = str(speed) + + para3 = ET.SubElement(effect, "parameter", authoringApp="PremierePro") + ET.SubElement(para3, "parameterid").text = "frameblending" + ET.SubElement(para3, 
"name").text = "frameblending" + ET.SubElement(para3, "value").text = "FALSE" + + return fil + + +def premiere_read_xml(path: str, ffmpeg: FFmpeg, log: Log) -> Timeline: + def xml_bool(val: str) -> bool: + if val == "TRUE": + return True + if val == "FALSE": + return False + raise TypeError("Value must be 'TRUE' or 'FALSE'") + + try: + tree = ET.parse(path) + except FileNotFoundError: + log.nofile(path) + + root = tree.getroot() + + valid = Validator(log) + + valid.check(root, "xmeml") + valid.check(root[0], "sequence") + result = valid.parse( + root[0], + { + "name": str, + "duration": int, + "rate": { + "timebase": Fraction, + "ntsc": xml_bool, + }, + "media": None, + }, ) + tb = read_tb_ntsc(result["rate"]["timebase"], result["rate"]["ntsc"]) -def premiere_xml( - temp: str, - output: str, - timeline: Timeline, -) -> None: + av = valid.parse( + result["media"], + { + "video": None, + "audio": None, + }, + ) - inp = timeline.inp - chunks = timeline.chunks + sources: dict[str, FileInfo] = {} + vobjs: VSpace = [] + aobjs: ASpace = [] + + vclip_schema = { + "format": { + "samplecharacteristics": { + "width": int, + "height": int, + }, + }, + "track": { + "__arr": "", + "clipitem": { + "__arr": "", + "start": int, + "end": int, + "in": int, + "out": int, + "file": None, + }, + }, + } + + aclip_schema = { + "format": {"samplecharacteristics": {"samplerate": int}}, + "track": { + "__arr": "", + "clipitem": { + "__arr": "", + "start": int, + "end": int, + "in": int, + "out": int, + "file": None, + }, + }, + } + + sr = 48000 + res = (1920, 1080) + + if "video" in av: + tracks = valid.parse(av["video"], vclip_schema) + + width = tracks["format"]["samplecharacteristics"]["width"] + height = tracks["format"]["samplecharacteristics"]["height"] + res = width, height + + for t, track in enumerate(tracks["track"]): + if len(track["clipitem"]) > 0: + vobjs.append([]) + for i, clipitem in enumerate(track["clipitem"]): + file_id = clipitem["file"].attrib["id"] + if file_id not in 
sources: + fileobj = valid.parse(clipitem["file"], {"pathurl": str}) + + if "pathurl" in fileobj: + sources[file_id] = FileInfo( + uri_to_path(fileobj["pathurl"]), + ffmpeg, + log, + str(len(sources)), + ) + else: + show(clipitem["file"], 3) + log.error( + f"'pathurl' child element not found in {clipitem['file'].tag}" + ) - if chunks is None: - raise ValueError("Timeline too complex") + start = clipitem["start"] + dur = clipitem["end"] - start + offset = clipitem["in"] + + vobjs[t].append(TlVideo(start, dur, file_id, offset, speed=1, stream=0)) + + if "audio" in av: + tracks = valid.parse(av["audio"], aclip_schema) + sr = tracks["format"]["samplecharacteristics"]["samplerate"] + + for t, track in enumerate(tracks["track"]): + if len(track["clipitem"]) > 0: + aobjs.append([]) + for i, clipitem in enumerate(track["clipitem"]): + file_id = clipitem["file"].attrib["id"] + if file_id not in sources: + fileobj = valid.parse(clipitem["file"], {"pathurl": str}) + sources[file_id] = FileInfo( + uri_to_path(fileobj["pathurl"]), ffmpeg, log, str(len(sources)) + ) - fps = timeline.fps - samplerate = timeline.samplerate + start = clipitem["start"] + dur = clipitem["end"] - start + offset = clipitem["in"] + + aobjs[t].append( + TlAudio(start, dur, file_id, offset, speed=1, volume=1, stream=0) + ) - audio_file = len(inp.videos) == 0 and len(inp.audios) == 1 + timeline = Timeline(sources, tb, sr, res, "#000", vobjs, aobjs, None) + return timeline - # This is not at all how timebase works in actual media but that's how it works here. 
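For reference, the float-equality timebase checks this diff deletes (comparisons against 23.98, 29.97, 59.94, ...) are superseded by the exact `Fraction` helpers added at the top of the new file. Reproduced standalone from the hunk above, with round-trip checks:

```python
from fractions import Fraction
from math import ceil

def set_tb_ntsc(tb: Fraction) -> tuple[int, str]:
    # Exact NTSC rates map to their integer timebase with ntsc="TRUE"
    if tb == Fraction(24000, 1001):
        return 24, "TRUE"
    if tb == Fraction(30000, 1001):
        return 30, "TRUE"
    if tb == Fraction(60000, 1001):
        return 60, "TRUE"
    ctb = ceil(tb)
    # Other pulled-down rates are recognized by the exact 999/1000 ratio
    if ctb not in (24, 30, 60) and ctb * Fraction(999, 1000) == tb:
        return ctb, "TRUE"
    return int(tb), "FALSE"

def read_tb_ntsc(tb: int, ntsc: bool) -> Fraction:
    # Inverse direction: recover the exact timebase from the XML fields
    if ntsc:
        if tb == 24:
            return Fraction(24000, 1001)
        if tb == 30:
            return Fraction(30000, 1001)
        if tb == 60:
            return Fraction(60000, 1001)
        return tb * Fraction(999, 1000)
    return Fraction(tb)

# 29.97 NTSC round-trips exactly; 25 fps stays non-NTSC
assert set_tb_ntsc(Fraction(30000, 1001)) == (30, "TRUE")
assert read_tb_ntsc(30, True) == Fraction(30000, 1001)
assert set_tb_ntsc(Fraction(25)) == (25, "FALSE")
```

Because the values are `Fraction`s rather than floats, equality is exact and no tolerance fudging is needed.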
- timebase = int(fps) - if fps == 23.98 or fps == 23.97602397 or fps == 23.976: - timebase = 24 - ntsc = "TRUE" - elif fps == 29.97 or fps == 29.97002997: - timebase = 30 - ntsc = "TRUE" - elif fps == 59.94 or fps == 59.94005994: - timebase = 60 - ntsc = "TRUE" - else: - ntsc = "FALSE" +def premiere_write_xml(ensure: Ensure, output: str, timeline: Timeline) -> None: - duration = chunks[-1][1] + if timeline.chunks is None: + raise ValueError("Timeline too complex") clips = [] - for chunk in chunks: + duration = timeline.chunks[-1][1] + for chunk in timeline.chunks: if chunk[2] != 99999: clips.append(chunk) - pathurls = [fix_url(inp.path)] + samplerate = timeline.samplerate + src = timeline.sources["0"] - tracks = len(inp.audios) + audio_file = len(src.videos) == 0 and len(src.audios) == 1 + timebase, ntsc = set_tb_ntsc(timeline.timebase) - if tracks > 1: - name_without_extension = inp.basename[: inp.basename.rfind(".")] + pathurls = [src.path.resolve().as_uri()] - fold = safe_mkdir(os.path.join(inp.dirname, f"{name_without_extension}_tracks")) + tracks = len(src.audios) + + if tracks > 1: + fold = src.path.parent / f"{src.path.stem}_tracks" + safe_mkdir(fold) for i in range(1, tracks): - newtrack = os.path.join(fold, f"{i}.wav") - move(os.path.join(temp, f"0-{i}.wav"), newtrack) - pathurls.append(fix_url(newtrack)) + newtrack = fold / f"{i}.wav" + move(ensure.audio(f"{src.path.resolve()}", "0", i), newtrack) + pathurls.append(newtrack.resolve().as_uri()) width, height = timeline.res group_name = f"Auto-Editor {'Audio' if audio_file else 'Video'} Group" - with open(output, "w", encoding="utf-8") as outfile: - outfile.write('\n\n') - outfile.write('\n') - outfile.write("\t\n") - outfile.write(f"\t\t{group_name}\n") - outfile.write(f"\t\t{duration}\n") - outfile.write("\t\t\n") - outfile.write(f"\t\t\t{timebase}\n") - outfile.write(f"\t\t\t{ntsc}\n") - outfile.write("\t\t\n") - outfile.write("\t\t\n") - outfile.write( - indent( - 3, - "" if len(inp.videos) == 0 else 
"\t", - ) - ) - - if len(inp.videos) > 0: - # Handle video clips + if clip[2] != 1: + clipitem.append(speedup(clip[2] * 100)) - total = 0.0 - for j, clip in enumerate(clips): - - clip_duration = (clip[1] - clip[0] + 1) / clip[2] - - _start = int(total) - _end = int(total) + int(clip_duration) - _in = int(clip[0] / clip[2]) - _out = int(clip[1] / clip[2]) - - total += clip_duration - - outfile.write( - indent( - 5, - f'', - "\tmasterclip-2", - f"\t{inp.basename}", - f"\t{_start}", - f"\t{_end}", - f"\t{_in}", - f"\t{_out}", - ) - ) - - if j == 0: - outfile.write( - indent( - 6, - '', - f"\t{inp.basename}", - f"\t{pathurls[0]}", - "\t", - f"\t\t{timebase}", - f"\t\t{ntsc}", - "\t", - f"\t{duration}", - "\t", - "\t\t", - "\t\t", - "\t", - "", - ) - ) - else: - outfile.write('\t\t\t\t\t\t\n') - - if clip[2] != 1: - outfile.write(speedup(clip[2] * 100)) - - # Linking for video blocks - for i in range(max(3, tracks + 1)): - outfile.write("\t\t\t\t\t\t\n") - outfile.write( - f"\t\t\t\t\t\t\tclipitem-{(i*(len(clips)))+j+1}\n" - ) - if i == 0: - outfile.write("\t\t\t\t\t\t\tvideo\n") - else: - outfile.write("\t\t\t\t\t\t\taudio\n") - if i == 2: - outfile.write("\t\t\t\t\t\t\t2\n") - else: - outfile.write("\t\t\t\t\t\t\t1\n") - outfile.write(f"\t\t\t\t\t\t\t{j+1}\n") - if i > 0: - outfile.write("\t\t\t\t\t\t\t1\n") - outfile.write("\t\t\t\t\t\t\n") - - outfile.write("\t\t\t\t\t\n") - outfile.write(indent(3, "\t", "")) - - # Audio Clips - outfile.write( - indent( - 3, - "\n") - outfile.write("\t\t\n") - outfile.write("\t\n") - outfile.write("\n") + tree = ET.ElementTree(xmeml) + ET.indent(tree, space="\t", level=0) + tree.write(output, xml_declaration=True, encoding="utf-8") diff -Nru auto-editor-22w28a+ds/auto_editor/formats/shotcut.py auto-editor-22w52a+ds/auto_editor/formats/shotcut.py --- auto-editor-22w28a+ds/auto_editor/formats/shotcut.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/formats/shotcut.py 2022-12-31 17:05:14.000000000 +0000 @@ 
-1,183 +1,182 @@ +from __future__ import annotations + +import xml.etree.ElementTree as ET + +from auto_editor.ffwrapper import FFmpeg from auto_editor.timeline import Timeline from auto_editor.utils.func import aspect_ratio, to_timecode +from auto_editor.utils.log import Log +from .utils import Validator, show -def timecode_to_frames(timecode: str, fps: float) -> int: - _h, _m, _s = timecode.split(":") - h = int(_h) - m = int(_m) - s = float(_s) - return round((h * 3600 + m * 60 + s) * fps) +""" +Shotcut uses the MLT timeline format +See docs here: +https://mltframework.org/docs/mltxml/ -def shotcut_xml( - output: str, - timeline: Timeline, -) -> None: - width, height = timeline.res - num, den = aspect_ratio(width, height) +""" + + +def shotcut_read_mlt(path: str, ffmpeg: FFmpeg, log: Log) -> Timeline: + try: + tree = ET.parse(path) + except FileNotFoundError: + log.nofile(path) + + Validator(log) + + root = tree.getroot() + + show(root, 10) - chunks = timeline.chunks - if chunks is None: + quit() + + +def shotcut_write_mlt(output: str, timeline: Timeline) -> None: + if timeline.chunks is None: raise ValueError("Timeline too complex") - fps = timeline.fps - inp = timeline.inp - global_out = to_timecode(timeline.out_len() / fps, "standard") - version = "21.05.18" + mlt = ET.Element( + "mlt", + attrib={ + "LC_NUMERIC": "C", + "version": "7.9.0", + "title": "Shotcut version 22.09.23", + "producer": "main_bin", + }, + ) - with open(output, "w", encoding="utf-8") as out: - out.write('\n') - out.write( - '\n' - ) - out.write( - '\t\n' - ) - out.write('\t\n') - out.write('\t\t1\n') - out.write("\t\n") - - # out was the new video length in the original xml - out.write(f'\t\n') - out.write(f'\t\t{global_out}\n') - out.write('\t\tpause\n') - out.write('\t\t0\n') - out.write('\t\t1\n') - out.write('\t\tcolor\n') - out.write('\t\trgba\n') - out.write('\t\t0\n') - out.write("\t\n") - - out.write('\t\n') # same for this out too. 
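The rewritten MLT writer builds its profile element and every in/out attribute from two helpers imported from `auto_editor.utils.func`. This sketch re-implements their assumed behavior (reduce a resolution to a display aspect ratio; render seconds as the `HH:MM:SS.mmm` "standard" timecode seen in the entry attributes). It is an illustration under those assumptions, not the project's actual code:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> tuple[int, int]:
    # Reduce e.g. 1920x1080 to 16:9 for display_aspect_num/den
    c = gcd(width, height)
    return width // c, height // c

def to_timecode_standard(secs: float) -> str:
    # "standard" timecodes in the MLT output look like HH:MM:SS.mmm
    m, s = divmod(secs, 60)
    h, m = divmod(int(m), 60)
    return f"{h:02d}:{m:02d}:{s:06.3f}"

assert aspect_ratio(1920, 1080) == (16, 9)
assert to_timecode_standard(90.5) == "00:01:30.500"
```

Note that the writer passes a `Fraction` timebase into the division (`clip[1] / speed / tb`), so the seconds value handed to `to_timecode` is already exact before it is rounded to milliseconds for display.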
- out.write( - f'\t\t\n' + width, height = timeline.res + num, den = aspect_ratio(width, height) + tb = timeline.timebase + src = timeline.sources["0"] + + profile = ET.SubElement( + mlt, + "profile", + attrib={ + "description": "automatic", + "width": f"{width}", + "height": f"{height}", + "progressive": "1", + "sample_aspect_num": "1", + "sample_aspect_den": "1", + "display_aspect_num": f"{num}", + "display_aspect_den": f"{den}", + "frame_rate_num": f"{tb.numerator}", + "frame_rate_den": f"{tb.denominator}", + "colorspace": "709", + }, + ) + + playlist_bin = ET.SubElement(mlt, "playlist", id="main_bin") + ET.SubElement(playlist_bin, "property", name="xml_retain").text = "1" + + global_out = to_timecode(timeline.out_len() / tb, "standard") + + producer = ET.SubElement(mlt, "producer", id="bg") + + ET.SubElement(producer, "property", name="length").text = global_out + ET.SubElement(producer, "property", name="eof").text = "pause" + ET.SubElement(producer, "property", name="resource").text = timeline.background + ET.SubElement(producer, "property", name="mlt_service").text = "color" + ET.SubElement(producer, "property", name="mlt_image_format").text = "rgba" + ET.SubElement(producer, "property", name="aspect_ratio").text = "1" + + playlist = ET.SubElement(mlt, "playlist", id="background") + ET.SubElement( + playlist, + "entry", + attrib={"producer": "bg", "in": "00:00:00.000", "out": global_out}, + ).text = "1" + + chains = 0 + producers = 0 + + for clip in timeline.chunks: + if clip[2] == 99999: + continue + + speed = clip[2] + _out = to_timecode(clip[1] / speed / tb, "standard") + length = to_timecode((clip[1] / speed + 1) / tb, "standard") + + if speed == 1: + resource = f"{src.path}" + caption = f"{src.path.stem}" + chain = ET.SubElement( + mlt, "chain", attrib={"id": f"chain{chains}", "out": f"{_out}"} + ) + else: + chain = ET.SubElement( + mlt, "producer", attrib={"id": f"producer{producers}", "out": f"{_out}"} + ) + + resource = f"{speed}:{src.path}" + 
caption = f"{src.path.stem} ({speed}x)" + + producers += 1 + + ET.SubElement(chain, "property", name="length").text = length + ET.SubElement(chain, "property", name="resource").text = resource + + if speed != 1: + ET.SubElement(chain, "property", name="warp_speed").text = str(speed) + ET.SubElement(chain, "property", name="warp_pitch").text = "1" + ET.SubElement(chain, "property", name="mlt_service").text = "timewarp" + + ET.SubElement(chain, "property", name="caption").text = caption + + chains += 1 + + main_playlist = ET.SubElement(mlt, "playlist", id="playlist0") + ET.SubElement(main_playlist, "property", name="shotcut:video").text = "1" + ET.SubElement(main_playlist, "property", name="shotcut:name").text = "V1" + + producers = 0 + i = 0 + for clip in timeline.chunks: + if clip[2] == 99999: + continue + + speed = clip[2] + + if speed == 1: + in_len: float = clip[0] - 1 + else: + in_len = max(clip[0] / speed, 0) + + out_len = max((clip[1] - 2) / speed, 0) + + _in = to_timecode(in_len / tb, "standard") + _out = to_timecode(out_len / tb, "standard") + + tag_name = f"chain{i}" + if speed != 1: + tag_name = f"producer{producers}" + producers += 1 + + ET.SubElement( + main_playlist, + "entry", + attrib={"producer": tag_name, "in": _in, "out": _out}, ) - out.write("\t\n") - chains = 0 - producers = 0 + i += 1 - # Speeds like [1.5, 3] don't work because of duration issues, too bad! 
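In the new `shotcut_write_mlt`, each chunk `(start, end, speed)` becomes either a plain `chain` element, a `producer` running the `timewarp` service (whose resource uses the `speed:path` syntax), or nothing at all when the speed is the 99999 cut sentinel. A standalone sketch of that dispatch, with an invented chunk list for illustration:

```python
def classify_chunks(chunks, path: str, stem: str):
    """Mirror the writer's per-chunk dispatch: (element kind, resource, caption)."""
    out = []
    for start, end, speed in chunks:
        if speed == 99999:
            continue  # cut sentinel: the section never enters the playlist
        if speed == 1:
            out.append(("chain", path, stem))
        else:
            # timewarp resources are written as "<speed>:<path>"
            out.append(("producer", f"{speed}:{path}", f"{stem} ({speed}x)"))
    return out

print(classify_chunks([(0, 30, 1), (30, 60, 99999), (60, 90, 2.0)], "in.mp4", "in"))
# -> [('chain', 'in.mp4', 'in'), ('producer', '2.0:in.mp4', 'in (2.0x)')]
```

The real writer additionally sets `warp_speed`, `warp_pitch`, and `mlt_service=timewarp` properties on the speed-changed producers; this sketch only shows the three-way classification.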
+ tractor = ET.SubElement( + mlt, + "tractor", + attrib={"id": "tractor0", "in": "00:00:00.000", "out": global_out}, + ) + ET.SubElement(tractor, "property", name="shotcut").text = "1" + ET.SubElement(tractor, "property", name="shotcut:projectAudioChannels").text = "2" + ET.SubElement(tractor, "track", producer="background") + ET.SubElement(tractor, "track", producer="playlist0") - for clip in chunks: - if clip[2] == 99999: - continue - - speed = clip[2] - - _out = to_timecode(clip[1] / speed / fps, "standard") - length = to_timecode((clip[1] / speed + 1) / fps, "standard") - - if speed == 1: - resource = inp.path - caption = inp.basename - out.write(f'\t\n') - else: - resource = f"{speed}:{inp.path}" - caption = f"{inp.basename} ({speed}x)" - out.write( - f'\t\n' - ) - producers += 1 - chains += 1 - - out.write(f'\t\t{length}\n') - out.write('\t\tpause\n') - out.write(f'\t\t{resource}\n') - - if speed == 1: - out.write( - '\t\tavformat-novalidate\n' - ) - out.write('\t\t1\n') - out.write('\t\t1\n') - out.write('\t\t0\n') - out.write('\t\t0\n') - out.write( - f'\t\t{caption}\n' - ) - out.write('\t\twas here\n') - else: - out.write('\t\t1\n') - out.write('\t\t1\n') - out.write('\t\t1\n') - out.write('\t\t0\n') - out.write('\t\t1\n') - out.write(f'\t\t{speed}\n') - out.write(f'\t\t{inp.path}\n') - out.write('\t\ttimewarp\n') - out.write('\t\tavformat\n') - out.write('\t\t0\n') - out.write( - f'\t\t{caption}\n' - ) - out.write('\t\twas here\n') - out.write('\t\t1\n') - - out.write("\t\n" if speed == 1 else "\t\n") - - out.write('\t\n') - out.write('\t\t1\n') - out.write('\t\tV1\n') - - producers = 0 - i = 0 - for clip in chunks: - if clip[2] == 99999: - continue - - speed = clip[2] - - if speed == 1: - in_len: float = clip[0] - 1 - else: - in_len = max(clip[0] / speed, 0) - - out_len = max((clip[1] - 2) / speed, 0) - - _in = to_timecode(in_len / fps, "standard") - _out = to_timecode(out_len / fps, "standard") - - tag_name = f"chain{i}" - if speed != 1: - tag_name = 
f"producer{producers}" - producers += 1 - - out.write(f'\t\t\n') - i += 1 - - out.write("\t\n") - - out.write( - f'\t\n' - ) - out.write('\t\t1\n') - out.write('\t\t2\n') - out.write('\t\t0\n') - out.write('\t\t\n') - out.write('\t\t\n') - out.write('\t\t\n') - out.write('\t\t\t0\n') - out.write('\t\t\t1\n') - out.write('\t\t\tmix\n') - out.write('\t\t\t1\n') - out.write('\t\t\t1\n') - out.write("\t\t\n") - out.write('\t\t\n') - out.write('\t\t\t0\n') - out.write('\t\t\t1\n') - out.write('\t\t\t0.9\n') - out.write('\t\t\tfrei0r.cairoblend\n') - out.write('\t\t\t0\n') - out.write('\t\t\t1\n') - out.write("\t\t\n") + tree = ET.ElementTree(mlt) + + ET.indent(tree, space="\t", level=0) - out.write("\t\n") - out.write("\n") + tree.write(output, xml_declaration=True, encoding="utf-8") diff -Nru auto-editor-22w28a+ds/auto_editor/formats/utils.py auto-editor-22w52a+ds/auto_editor/formats/utils.py --- auto-editor-22w28a+ds/auto_editor/formats/utils.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/formats/utils.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,4 +1,21 @@ -def safe_mkdir(path: str) -> str: +from __future__ import annotations + +from pathlib import Path +from xml.etree.ElementTree import Element + +from auto_editor.utils.log import Log + + +def show(ele: Element, limit: int, depth: int = 0) -> None: + print( + f"{' ' * (depth * 4)}<{ele.tag} {ele.attrib}> {ele.text.strip() if ele.text is not None else ''}" + ) + for child in ele: + if isinstance(child, Element) and depth < limit: + show(child, limit, depth + 1) + + +def safe_mkdir(path: str | Path) -> None: from os import mkdir from shutil import rmtree @@ -7,7 +24,6 @@ except OSError: rmtree(path) mkdir(path) - return path def indent(base: int, *lines: str) -> str: @@ -15,3 +31,43 @@ for line in lines: new_lines += ("\t" * base) + line + "\n" return new_lines + + +class Validator: + def __init__(self, log: Log): + self.log = log + + def parse(self, ele: Element, schema: dict) -> dict: + 
new: dict = {} + + for key, val in schema.items(): + if isinstance(val, dict) and "__arr" in val: + new[key] = [] + + is_arr = False + for child in ele: + if child.tag not in schema: + continue + + if schema[child.tag] is None: + new[child.tag] = child + continue + + if isinstance(schema[child.tag], dict): + val = self.parse(child, schema[child.tag]) + is_arr = "__arr" in schema[child.tag] + else: + val = schema[child.tag](child.text) + + if child.tag in new: + if not is_arr: + self.log.error(f"<{child.tag}> can only occur once") + new[child.tag].append(val) + else: + new[child.tag] = [val] if is_arr else val + + return new + + def check(self, ele: Element, tag: str) -> None: + if tag != ele.tag: + self.log.error(f"Expected '{tag}' tag, got '{ele.tag}'") diff -Nru auto-editor-22w28a+ds/auto_editor/help.json auto-editor-22w52a+ds/auto_editor/help.json --- auto-editor-22w28a+ds/auto_editor/help.json 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/help.json 1970-01-01 00:00:00.000000000 +0000 @@ -1,40 +0,0 @@ -{ - "Auto-Editor": { - "_": "Auto-Editor is an automatic video/audio creator and editor. By default, it will detect silence and create a new video with those sections cut out. By changing some of the options, you can export to a traditional editor like Premiere Pro and adjust the edits there, adjust the pacing of the cuts, and change the method of editing like using audio loudness and video motion to judge making cuts.\n\nRun:\n auto-editor --help\n\nTo get the list of options.", - "--add-ellipse": "The x and y coordinates specify a bounding box where the ellipse is drawn.", - "--add-image": "Opacity is how transparent or solid the image is. A transparency of 1 or 100% is completely solid. 
A transparency of 0 or 0% is completely transparent.\nThe anchor point tells how the image is placed relative to its x y coordinates.", - "--set-speed-for-range": "This option takes 3 arguments delimited with commas and they are as follows:\n Speed\n - How fast the media plays. Speeds 0 or below and 99999 or above will be cut completely.\n Start\n - When the speed first gets applied. The default unit is in frames, but second units can also be used.\n End\n - When the speed stops being applied. It can use both frame and second units.", - "--edit-based-on": "Editing Methods:\n - audio: General audio detection\n - motion: Motion detection specialized for real life noisy video\n - pixeldiff: Detect when a certain amount of pixels have changed between frames\n - random: Set silent/loud randomly based on a random or preset seed\n - none: Do not modify the media in anyway (Mark all sections as \"loud\")\n - all: Cut out everything out (Mark all sections as \"silent\")\n\nDefault Attributes:\n - audio\n - stream: 0 (int | \"all\")\n - threshold: args.silent_threshold (float_type)\n - motion\n - threshold: 2% (float_type)\n - blur: 9 (int)\n - width: 400 (int)\n - pixeldiff\n - threshold: 1 (int)\n - random\n - cutchance: 0.5 (float_type)\n - seed: RANDOMLY-GENERATED (int)\n\nLogical Operators:\n - and\n - or\n - xor\n\nExamples:\n --edit audio\n --edit audio:stream=1\n --edit audio:threshold=4%\n --edit audio:threshold=0.03\n --edit motion\n --edit motion:threshold=2%,blur=3\n --edit audio:threshold=4% or motion:threshold=2%,blur=3\n --edit none\n --edit all", - "--export": "Instead of exporting a video, export as one of these options instead.\n\ndefault : Export as usual\npremiere : Export as an XML timeline file for Adobe Premiere Pro\nfinal-cut-pro : Export as an XML timeline file for Final Cut Pro\nshotcut : Export as an XML timeline file for Shotcut\njson : Export as an auto-editor JSON timeline file\naudio : Export as a WAV audio file\nclip-sequence : Export as 
multiple numbered media files", - "--player": "This option uses shell-like syntax to support using a specific player:\n\n auto-editor in.mp4 --player mpv\n\nArgs for the player program can be added as well:\n\n auto-editor in.mp4 --player 'mpv --keep-open'\n\nAbsolute or relative paths can also be used in the event the player's executable can not be resolved:\n\n auto-editor in.mp4 --player '/path/to/mpv'\n auto-editor in.mp4 --player './my-relative-path/mpv'\n\nIf --player is not set, auto-editor will use the system default.\nIf --no-open is used, --player will always be ignored.\n\nIf on MacOS, you can use QuickTime using this command:\n\n auto-editor in.mp4 --player 'open -a \"quicktime player\"'", - "--resolution": "By default, global resolution is set to the first input's resolution.\nIf the first input does not have a resolution (audio files), the\nresolution will be set to 1920 by 1080.", - "--temp-dir": "If not set, tempdir will be set with Python's tempfile module\nThe directory doesn't have to exist beforehand, however, the root path must be valid.\nThe temp file can get quite big if you're generating a huge video, so make sure your location has enough space.", - "--ffmpeg-location": "This takes precedence over `--my-ffmpeg`.", - "--my-ffmpeg": "This is equivalent to `--ffmpeg-location ffmpeg`.", - "--silent-threshold": "Silent threshold is a percentage where 0% represents absolute silence and 100% represents the highest volume in the media file.\nSetting the threshold to `0%` will cut only out areas where area is absolutely silence.", - "--frame-margin": "Margin is measured in frames if no units are specified. Seconds can be used. e.g. `0.3secs`\nThe starting and ending margins can be set separately with the use of a comma. e.g. 
`2sec,3sec` `7,10` `-1,6`", - "--silent-speed": "99999 is the 'cut speed' and values over that or <=0 are considered 'cut speeds' as well", - "--video-speed": "99999 is the 'cut speed' and values over that or <=0 are considered 'cut speeds' as well", - "--min-clip-length": "Range: 0 to Infinity", - "--min-cut-length": "Range: 0 to Infinity" - }, - "info": { - "_": "Retrieve information and properties about media files", - "--include-vfr": "A typical output looks like this:\n - VFR:0.583394 (3204/2288) min: 41 max: 42 avg: 41\n\nThe first number is the ratio of how many VFR frames are there.\nThe second number is the total number of VFR frames, and the third is the number of non-VFR frames. Adding the second and third number will result in how many frames the video has in total." - }, - "levels": { - "_": "Display loudness over time" - }, - "subdump": { - "_": "Dump text-based subtitles to stdout with formatting stripped out" - }, - "grep": { - "_": "Read and match text-based subtitle tracks" - }, - "desc": { - "_": "Display a media's description metadata" - }, - "test": { - "_": "Self-Hosted Unit and End-to-End tests" - } -} diff -Nru auto-editor-22w28a+ds/auto_editor/help.py auto-editor-22w52a+ds/auto_editor/help.py --- auto-editor-22w28a+ds/auto_editor/help.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/help.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,178 @@ +data = { + "Auto-Editor": { + "_": """ +Auto-Editor is an automatic video/audio creator and editor. By default, it will detect silence and create a new video with those sections cut out. By changing some of the options, you can export to a traditional editor like Premiere Pro and adjust the edits there, adjust the pacing of the cuts, and change the method of editing like using audio loudness and video motion to judge making cuts. + +Run: + auto-editor --help + +To get the list of options. 
+""".strip(), + "--set-speed-for-range": """ +This option takes 3 arguments delimited with commas and they are as follows: +Speed: + - How fast to play the media (number) +Start: + - The time when speed first gets applied (time) +End: + - The time when speed stops being applied (time) + +example: + +--set-speed-for-range 2.5,400,800 + +will set the speed from 400 ticks to 800 ticks to 2.5x +If timebase is 30, 400 ticks to 800 ticks means 13.33 to 26.66 seconds +""".strip(), + "--edit-based-on": """ +Editing Methods: + - audio: General audio detection + - motion: Motion detection specialized for real life noisy video + - pixeldiff: Detect when a certain amount of pixels have changed between frames + - none: Do not modify the media in any way (Mark all sections as "loud") + - all: Cut everything out (Mark all sections as "silent") + +Attribute Defaults: + - audio + - threshold: 4% (number) + - stream: 0 (natural | "all") + - motion + - threshold: 2% (number) + - stream: 0 (natural | "all") + - blur: 9 (natural) + - width: 400 (natural) + - pixeldiff + - threshold: 1 (natural) + - stream: 0 (natural | "all") + +Examples: + --edit audio + --edit audio:stream=1 + --edit audio:threshold=4% + --edit audio:threshold=0.03 + --edit motion + --edit motion:threshold=2%,blur=3 + --edit (or audio:threshold=4% motion:threshold=2%,blur=3) + --edit none + --edit all +""".strip(), + "--export": """ +Instead of exporting a video, export as one of these options instead.
+ +default : Export as usual +premiere : Export as an XML timeline file for Adobe Premiere Pro +final-cut-pro : Export as an XML timeline file for Final Cut Pro +shotcut : Export as an XML timeline file for Shotcut +json : Export as an auto-editor JSON timeline file +audio : Export as a WAV audio file +clip-sequence : Export as multiple numbered media files +""".strip(), + "--player": """ +This option uses shell-like syntax to support using a specific player: + + auto-editor in.mp4 --player mpv + +Args for the player program can be added as well: + + auto-editor in.mp4 --player 'mpv --keep-open' + +Absolute or relative paths can also be used in the event the player's +executable cannot be resolved: + + auto-editor in.mp4 --player '/path/to/mpv' + auto-editor in.mp4 --player './my-relative-path/mpv' + +If --player is not set, auto-editor will use the system default. +If --no-open is used, --player will always be ignored. + +On MacOS, QuickTime can be used as the default player this way: + + auto-editor in.mp4 --player 'open -a "quicktime player"' +""".strip(), + "--resolution": """ + +When working with media files, resolution will be based on the first input with a +fallback value of 1920x1080. +""".strip(), + "--frame-rate": """ +Set the timeline's timebase and the output media's frame rate. + +When working with media files, frame-rate will be the first input's frame rate +with a fallback value of 30. + +The format must be a string in the form: + - frame_rate_num/frame_rate_den + - an integer + - a floating point number + - a valid frame rate label + +The following labels are recognized: + - ntsc -> 30000/1001 + - ntsc_film -> 24000/1001 + - pal -> 25 + - film -> 24 +""".strip(), + "--temp-dir": """ +If not set, tempdir will be set with Python's tempfile module. +The directory doesn't have to exist beforehand, however, the root path must be valid. +Beware that the temp directory can get quite big.
+""".strip(), + "--ffmpeg-location": "This takes precedence over `--my-ffmpeg`.", + "--my-ffmpeg": "This is equivalent to `--ffmpeg-location ffmpeg`.", + "--silent-threshold": """ +Silent threshold is a percentage where 0% represents absolute silence and 100% represents the highest volume in the media file. +Setting the threshold to `0%` will only cut out areas that are absolutely silent. +""".strip(), + "--margin": """ +Default value: 0.2sec,0.2sec + +Setting margin examples: + - `--margin 6` + - `--margin 4,10` + - `--margin 0.3s,0.5s` + +Behind the scenes, margin is a function that operates on boolean arrays +(where usually 1 represents "loud" and 0 represents "silence") + +Here are examples of how margin mutates boolean arrays: + +(margin 0 0 (bool-array 0 0 0 1 0 0 0)) +> (array 'bool 0 0 0 1 0 0 0) + +(margin 1 0 (bool-array 0 0 0 1 0 0 0)) +> (array 'bool 0 0 1 1 0 0 0) + +(margin 1 1 (bool-array 0 0 0 1 0 0 0)) +> (array 'bool 0 0 1 1 1 0 0) + +(margin 1 2 (bool-array 0 0 1 1 0 0 0 0 1 0)) +> (array 'bool 0 1 1 1 1 1 0 1 1 1) + +(margin -2 2 (bool-array 0 0 1 1 0 0 0)) +> (array 'bool 0 0 0 0 1 1 0) +""".strip(), + "--silent-speed": "99999 is the 'cut speed'; values above it or <= 0 are treated as cut speeds as well", + "--video-speed": "99999 is the 'cut speed'; values above it or <= 0 are treated as cut speeds as well", + "--min-clip-length": "Type: nonnegative-integer?", + "--min-cut-length": "Type: nonnegative-integer?", + }, + "info": { + "_": "Retrieve information and properties about media files", + "--include-vfr": """ +A typical output will look like this: + +- VFR:0.583394 (3204/2288) min: 41 max: 42 avg: 41 + +'0.583394' is the ratio of VFR frames to total frames. +'3204' is the number of VFR frames, and '2288' is the number of non-VFR frames. + Adding '3204' and '2288' gives the total number of frames in the video.
+""".strip(), + }, + "levels": {"_": "Display loudness over time"}, + "subdump": { + "_": "Dump text-based subtitles to stdout with formatting stripped out" + }, + "grep": {"_": "Read and match text-based subtitle tracks"}, + "desc": {"_": "Display a media's description metadata"}, + "test": {"_": "Self-Hosted Unit and End-to-End tests"}, +} diff -Nru auto-editor-22w28a+ds/auto_editor/__init__.py auto-editor-22w52a+ds/auto_editor/__init__.py --- auto-editor-22w28a+ds/auto_editor/__init__.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/__init__.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,2 +1,2 @@ -__version__ = "22.28.1" -version = "22w28a" +__version__ = "22.52.1" +version = "22w52a" diff -Nru auto-editor-22w28a+ds/auto_editor/interpreter.py auto-editor-22w52a+ds/auto_editor/interpreter.py --- auto-editor-22w28a+ds/auto_editor/interpreter.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/interpreter.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,1252 @@ +from __future__ import annotations + +import cmath +import math +import random +import sys +from dataclasses import dataclass +from fractions import Fraction +from functools import reduce +from typing import TYPE_CHECKING + +import numpy as np + +from auto_editor.analyze import edit_method +from auto_editor.utils.func import boolop, mut_margin + +if TYPE_CHECKING: + from typing import Any, Callable, Union + + from numpy.typing import NDArray + + from auto_editor.ffwrapper import FileInfo + from auto_editor.output import Ensure + from auto_editor.utils.bar import Bar + from auto_editor.utils.log import Log + + Number = Union[int, float, complex, Fraction] + Real = Union[int, float, Fraction] + BoolList = NDArray[np.bool_] + + +class MyError(Exception): + pass + + +def display_dtype(dtype: np.dtype) -> str: + if dtype.kind == "b": + return "bool" + + if dtype.kind == "i": + return f"int{dtype.itemsize * 8}" + + if dtype.kind == "u": + return 
f"uint{dtype.itemsize * 8}"
+
+    return f"float{dtype.itemsize * 8}"
+
+
+def print_val(val: object) -> str:
+    if val is True:
+        return "#t"
+    if val is False:
+        return "#f"
+    if isinstance(val, Symbol):
+        return f"{val}"
+    if isinstance(val, list):
+        if not val:
+            return "#()"
+        result = f"#({print_val(val[0])}"
+        for item in val[1:]:
+            result += f" {print_val(item)}"
+        return result + ")"
+    if isinstance(val, range):
+        return "#<range>"
+    if isinstance(val, np.ndarray):
+        kind = val.dtype.kind
+        result = f"(array '{display_dtype(val.dtype)}"
+        if kind == "b":
+            for item in val:
+                result += " 1" if item else " 0"
+        else:
+            for item in val:
+                result += f" {item}"
+        return result + ")"
+    if isinstance(val, complex):
+        join = "" if val.imag < 0 else "+"
+        return f"{val.real}{join}{val.imag}i"
+
+    if isinstance(val, str):
+        return f'"{val}"'
+
+    return f"{val!r}"
+
+
+class Null:
+    def __init__(self) -> None:
+        pass
+
+    def __eq__(self, obj: object) -> bool:
+        return isinstance(obj, Null)
+
+    def __len__(self) -> int:
+        return 0
+
+    def __next__(self) -> StopIteration:
+        raise StopIteration
+
+    def __getitem__(self, ref: int | slice) -> None:
+        raise IndexError
+
+    def __str__(self) -> str:
+        return "'()"
+
+    __repr__ = __str__
+
+
+class Cons:
+    __slots__ = ("a", "d")
+
+    def __init__(self, a: Any, d: Any):
+        self.a = a
+        self.d = d
+
+    def __repr__(self) -> str:
+        result = f"({print_val(self.a)}"
+        tail = self.d
+        while isinstance(tail, Cons):
+            result += f" {print_val(tail.a)}"
+            tail = tail.d
+
+        if isinstance(tail, Null):
+            return f"{result})"
+        return f"{result} . {print_val(tail)})"
+
+    def __eq__(self, obj: object) -> bool:
+        return isinstance(obj, Cons) and self.a == obj.a and self.d == obj.d
+
+    def __len__(self) -> int:
+        count = 0
+        while isinstance(self, Cons):
+            self = self.d
+            count += 1
+        if not isinstance(self, Null):
+            raise MyError("length expects: list?")
+        return count
+
+    def __next__(self) -> Any:
+        if isinstance(self.d, Cons):
+            return self.d
+        raise StopIteration
+
+    def __getitem__(self, ref: int | slice) -> Any:
+        if isinstance(ref, int):
+            if ref < 0:
+                raise MyError("ref: negative index not allowed")
+            pos = ref
+            while pos > 0:
+                pos -= 1
+                self = self.d
+                if not isinstance(self, Cons):
+                    raise MyError(f"ref: Index {ref} out of range")
+
+            return self.a
+
+        lst: Cons | Null = Null()
+        steps: int = -1
+        i: int = 0
+
+        do_reverse = True
+        start, stop, step = ref.start, ref.stop, ref.step
+        if start is None:
+            start = 0
+        if step < 0:
+            do_reverse = False
+            step = -step
+
+            if stop is None:
+                stop = float("inf")
+            else:
+                start, stop = stop + 1, start
+
+        while isinstance(self, Cons):
+            if i > stop - 1:
+                break
+            if i >= start:
+                steps = (steps + 1) % step
+                if steps == 0:
+                    lst = Cons(self.a, lst)
+
+            self = self.d
+            i += 1
+
+        if not do_reverse:
+            return lst
+
+        result: Cons | Null = Null()
+        while isinstance(lst, Cons):
+            result = Cons(lst.a, result)
+            lst = lst.d
+        return result
+
+
+class Char:
+    __slots__ = "val"
+
+    def __init__(self, val: str | int):
+        if isinstance(val, int):
+            self.val: str = chr(val)
+        else:
+            assert isinstance(val, str) and len(val) == 1
+            self.val = val
+
+    __str__: Callable[[Char], str] = lambda self: self.val
+
+    def __repr__(self) -> str:
+        names = {" ": "space", "\n": "newline", "\t": "tab"}
+        return f"#\\{self.val}" if self.val not in names else f"#\\{names[self.val]}"
+
+    def __eq__(self, obj: object) -> bool:
+        return isinstance(obj, Char) and self.val == obj.val
+
+    def __radd__(self, obj2: str) -> str:
+        return obj2 + self.val
+
+
+class Symbol:
+    __slots__ = ("val", "hash")
+
+    def __init__(self, val: str):
+        self.val = val
+        self.hash = hash(val)
+
+    __str__: Callable[[Symbol], str] = lambda self: self.val
+    __repr__ = __str__
+
+    def __hash__(self) -> int:
+        return self.hash
+
+    def __eq__(self, obj: object) -> bool:
+        return isinstance(obj, Symbol) and self.hash == obj.hash
+
+
+###############################################################################
+#                                                                             #
+#                                    LEXER                                    #
+#                                                                             #
+###############################################################################
+
+METHODS = {"audio", "motion", "pixeldiff", "none", "all"}
+SEC_UNITS = {"s", "sec", "secs", "second", "seconds"}
+ID, QUOTE, NUM, BOOL, STR, CHAR = "ID", "QUOTE", "NUM", "BOOL", "STR", "CHAR"
+ARR, SEC, DB, PER = "ARR", "SEC", "DB", "PER"
+LPAREN, RPAREN, LBRAC, RBRAC, LCUR, RCUR, EOF = "(", ")", "[", "]", "{", "}", "EOF"
+
+
+class Token:
+    __slots__ = ("type", "value")
+
+    def __init__(self, type: str, value: Any):
+        self.type = type
+        self.value = value
+
+    __str__: Callable[[Token], str] = lambda self: f"(Token {self.type} {self.value})"
+
+
+class Lexer:
+    __slots__ = ("text", "pos", "char")
+
+    def __init__(self, text: str):
+        self.text = text
+        self.pos: int = 0
+        self.char: str | None = self.text[self.pos] if text else None
+
+    def char_is_norm(self) -> bool:
+        return self.char is not None and self.char not in '()[]{}"; \t\n\r\x0b\x0c'
+
+    def advance(self) -> None:
+        self.pos += 1
+        self.char = None if self.pos > len(self.text) - 1 else self.text[self.pos]
+
+    def peek(self) -> str | None:
+        peek_pos = self.pos + 1
+        return None if peek_pos > len(self.text) - 1 else self.text[peek_pos]
+
+    def skip_whitespace(self) -> None:
+        while self.char is not None and self.char in " \t\n\r\x0b\x0c":
+            self.advance()
+
+    def string(self) -> str:
+        result = ""
+        while self.char is not None and self.char != '"':
+            if self.char == "\\":
+                self.advance()
+                if self.char in 'nt"\\':
+                    if self.char == "n":
+                        result += "\n"
+                    if self.char == "t":
+                        result += "\t"
+                    if self.char == '"':
+                        result += '"'
+                    if self.char == "\\":
+                        result += "\\"
+                    self.advance()
+                    continue
+
+                if self.char is None:
+                    raise MyError("Unexpected EOF while parsing")
+                raise MyError(
+                    f"Unexpected character {self.char} during escape sequence"
+                )
+            else:
+                result += self.char
+            self.advance()
+
+        self.advance()
+        return result
+
+    def number(self) -> Token:
+        result = ""
+        token = NUM
+
+        while self.char is not None and self.char in "+-0123456789./":
+            result += self.char
+            self.advance()
+
+        unit = ""
+        if self.char_is_norm():
+            while self.char_is_norm():
+                assert self.char is not None
+                unit += self.char
+                self.advance()
+
+            if unit in SEC_UNITS:
+                token = SEC
+            elif unit == "dB":
+                token = DB
+            elif unit == "%":
+                token = PER
+            elif unit != "i":
+                return Token(ID, result + unit)
+
+        try:
+            if unit == "i":
+                return Token(NUM, complex(result + "j"))
+            elif "/" in result:
+                val = Fraction(result)
+                if val.denominator == 1:
+                    return Token(token, val.numerator)
+                return Token(token, val)
+            elif "." in result:
+                return Token(token, float(result))
+            else:
+                return Token(token, int(result))
+        except ValueError:
+            return Token(ID, result + unit)
+
+    def hash_literal(self) -> Token:
+        if self.char == "\\":
+            self.advance()
+            if self.char is None:
+                raise MyError("Expected a character after #\\")
+
+            char = self.char
+            self.advance()
+            return Token(CHAR, Char(char))
+
+        result = ""
+        while self.char_is_norm():
+            assert self.char is not None
+            result += self.char
+            self.advance()
+
+        if result in ("t", "true"):
+            return Token(BOOL, True)
+
+        if result in ("f", "false"):
+            return Token(BOOL, False)
+
+        raise MyError(f"Unknown hash literal: {result}")
+
+    def get_next_token(self) -> Token:
+        while self.char is not None:
+            self.skip_whitespace()
+            if self.char is None:
+                continue
+
+            if self.char == ";":
+                while self.char is not None and self.char != "\n":
+                    self.advance()
+                continue
+
+            if self.char == '"':
+                self.advance()
+                return Token(STR, self.string())
+
+            if self.char == "'":
+                self.advance()
+                return Token(QUOTE, "'")
+
+            if self.char in "(){}[]":
+                _par = self.char
+                self.advance()
+                return Token(_par, _par)
+
+            if self.char in "+-":
+                _peek = self.peek()
+                if _peek is not None and _peek in "0123456789.":
+                    return self.number()
+
+            if self.char in "0123456789.":
+                return self.number()
+
+            if self.char == "#":
+                self.advance()
+                return self.hash_literal()
+
+            result = ""
+            has_illegal = False
+            while self.char_is_norm():
+                result += self.char
+                if self.char in "'`|\\":
+                    has_illegal = True
+                self.advance()
+
+            if has_illegal:
+                raise MyError(f"Symbol has illegal character(s): {result}")
+
+            for method in METHODS:
+                if result == method or result.startswith(method + ":"):
+                    return Token(ARR, result)
+
+            return Token(ID, result)
+
+        return Token(EOF, "EOF")
+
+
+###############################################################################
+#                                                                             #
+#                                   PARSER                                    #
+#                                                                             #
+###############################################################################
+
+
+class Compound:
+    __slots__ = "children"
+
+    def __init__(self, children: list):
+        self.children = children
+
+    def __str__(self) -> str:
+        s = "{Compound"
+        for child in self.children:
+            s += f" {child}"
+        s += "}"
+        return s
+
+
+class BoolArr:
+    __slots__ = "val"
+
+    def __init__(self, val: str):
+        self.val = val
+
+    __str__: Callable[[BoolArr], str] = lambda self: f"(boolarr {self.val})"
+
+
+class Parser:
+    def __init__(self, lexer: Lexer):
+        self.lexer = lexer
+        self.current_token = self.lexer.get_next_token()
+
+    def eat(self, token_type: str) -> None:
+        if self.current_token.type != token_type:
+            raise MyError(f"Expected {token_type}, got {self.current_token.type}")
+
+        self.current_token = self.lexer.get_next_token()
+
+    def comp(self) -> Compound:
+        comp_kids = []
+        while self.current_token.type not in (EOF, RPAREN, RBRAC, RCUR):
+            comp_kids.append(self.expr())
+        return Compound(comp_kids)
+
+    def expr(self) -> Any:
+        token = self.current_token
+
+        if token.type in {CHAR, NUM, STR, BOOL}:
+            self.eat(token.type)
+            return token.value
+
+        matches = {ID: Symbol, ARR: BoolArr}
+        if token.type in matches:
+            self.eat(token.type)
+            return matches[token.type](token.value)
+
+        if token.type == SEC:
+            self.eat(SEC)
+            return [Symbol("round"), [Symbol("*"), token.value, Symbol("timebase")]]
+
+        if token.type == DB:
+            self.eat(DB)
+            return [Symbol("expt"), 10, [Symbol("/"), token.value, 20]]
+
+        if token.type == PER:
+            self.eat(PER)
+            return [Symbol("/"), token.value, 100.0]
+
+        if token.type == QUOTE:
+            self.eat(QUOTE)
+            return [Symbol("quote"), self.expr()]
+
+        pars = {LPAREN: RPAREN, LBRAC: RBRAC, LCUR: RCUR}
+        if token.type in pars:
+            self.eat(token.type)
+            closing = pars[token.type]
+            childs = []
+            while self.current_token.type != closing:
+                if self.current_token.type == EOF:
+                    raise MyError(f"Expected closing '{closing}' before end")
+                childs.append(self.expr())
+
+            self.eat(closing)
+            return childs
+
+        self.eat(token.type)
+        childs = []
+        while self.current_token.type not in (RPAREN, RBRAC, RCUR, EOF):
+            childs.append(self.expr())
+        return childs
+
+    def __str__(self) -> str:
+        result = str(self.comp())
+
+        self.lexer.pos = 0
+        self.lexer.char = self.lexer.text[0]
+        self.current_token = self.lexer.get_next_token()
+
+        return result
+
+
+###############################################################################
+#                                                                             #
+#                              STANDARD LIBRARY                               #
+#                                                                             #
+###############################################################################
+
+
+class Contract:
+    # Convenient flat contract class
+    __slots__ = ("name", "c")
+
+    def __init__(self, name: str, c: Callable[[object], bool]):
+        self.name = name
+        self.c = c
+
+    def __call__(self, v: object) -> bool:
+        return self.c(v)
+
+
+def check_args(
+    o: str,
+    values: list | tuple,
+    arity: tuple[int, int | None],
+    types: list[Contract] | None,
+) -> None:
+    lower, upper = arity
+    amount = len(values)
+    if upper is not None and lower > upper:
+        raise ValueError("lower must be less than upper")
+    if lower == upper:
+        if len(values) != lower:
+            raise MyError(f"{o}: Arity mismatch. Expected {lower}, got {amount}")
+
+    if upper is None and amount < lower:
+        raise MyError(f"{o}: Arity mismatch. Expected at least {lower}, got {amount}")
+    if upper is not None and (amount > upper or amount < lower):
+        raise MyError(
+            f"{o}: Arity mismatch. Expected between {lower} and {upper}, got {amount}"
+        )
+
+    if types is None:
+        return
+
+    for i, val in enumerate(values):
+        check = types[-1] if i >= len(types) else types[i]
+        if not check(val):
+            raise MyError(f"{o} expects: {' '.join([c.name for c in types])}")
+
+
+any_c = Contract("any/c", lambda v: True)
+is_proc = Contract("procedure?", lambda v: isinstance(v, (Proc, Contract)))
+is_bool = Contract("boolean?", lambda v: isinstance(v, bool))
+is_pair = Contract("pair?", lambda v: isinstance(v, Cons))
+is_null = Contract("null?", lambda v: isinstance(v, Null))
+is_symbol = Contract("symbol?", lambda v: isinstance(v, Symbol))
+is_str = Contract("string?", lambda v: isinstance(v, str))
+is_char = Contract("char?", lambda v: isinstance(v, Char))
+is_iterable = Contract(
+    "iterable?",
+    lambda v: isinstance(v, (str, list, range, np.ndarray, Cons, Null)),
+)
+is_range = Contract("range?", lambda v: isinstance(v, range))
+is_vector = Contract("vector?", lambda v: isinstance(v, list))
+is_array = Contract("array?", lambda v: isinstance(v, np.ndarray))
+is_boolarr = Contract(
+    "bool-array?",
+    lambda v: isinstance(v, np.ndarray) and v.dtype.kind == "b",
+)
+is_num = Contract(
+    "number?",
+    lambda v: not isinstance(v, bool)
+    and isinstance(v, (int, float, Fraction, complex)),
+)
+is_real = Contract(
+    "real?", lambda v: not isinstance(v, bool) and isinstance(v, (int, float, Fraction))
+)
+is_int = Contract(
+    "integer?",
+    lambda v: not isinstance(v, bool) and isinstance(v, int),
+)
+is_frac = Contract("fraction?", lambda v: isinstance(v, Fraction))
+is_float = Contract("float?", lambda v: isinstance(v, float))
+us_int = Contract("nonnegative-integer?", lambda v: isinstance(v, int) and v > -1)
+
+
+def raise_(msg: str) -> None:
+    raise MyError(msg)
+
+
+def display(val: Any) -> None:
+    if val is None:
+        return
+    if isinstance(val, str):
+        sys.stdout.write(val)
+    else:
+        sys.stdout.write(print_val(val))
+
+
+def is_equal(a: object, b: object) -> bool:
+    if isinstance(a, np.ndarray) and isinstance(b, np.ndarray):
+        return np.array_equal(a, b)
+    return type(a) == type(b) and a == b
+
+
+def equal_num(*values: object) -> bool:
+    return all(values[0] == val for val in values[1:])
+
+
+def mul(*vals: Any) -> Number:
+    return reduce(lambda a, b: a * b, vals, 1)
+
+
+def minus(*vals: Number) -> Number:
+    if len(vals) == 1:
+        return -vals[0]
+    return reduce(lambda a, b: a - b, vals)
+
+
+def div(*vals: Any) -> Number:
+    if len(vals) == 1:
+        vals = (1, vals[0])
+
+    if not {float, complex}.intersection({type(val) for val in vals}):
+        result = reduce(lambda a, b: Fraction(a, b), vals)
+        if result.denominator == 1:
+            return result.numerator
+        return result
+    return reduce(lambda a, b: a / b, vals)
+
+
+def _sqrt(v: Number) -> Number:
+    r = cmath.sqrt(v)
+    if r.imag == 0:
+        if int(r.real) == r.real:
+            return int(r.real)
+        return r.real
+    return r
+
+
+def _not(val: Any) -> bool | BoolList:
+    if is_boolarr(val):
+        return np.logical_not(val)
+    if is_bool(val):
+        return not val
+    raise MyError("not expects: boolean? or bool-array?")
+
+
+def _and(*vals: Any) -> bool | BoolList:
+    if is_boolarr(vals[0]):
+        check_args("and", vals, (2, None), [is_boolarr])
+        return reduce(lambda a, b: boolop(a, b, np.logical_and), vals)
+    check_args("and", vals, (1, None), [is_bool])
+    return reduce(lambda a, b: a and b, vals)
+
+
+def _or(*vals: Any) -> bool | BoolList:
+    if is_boolarr(vals[0]):
+        check_args("or", vals, (2, None), [is_boolarr])
+        return reduce(lambda a, b: boolop(a, b, np.logical_or), vals)
+    check_args("or", vals, (1, None), [is_bool])
+    return reduce(lambda a, b: a or b, vals)
+
+
+def _xor(*vals: Any) -> bool | BoolList:
+    if is_boolarr(vals[0]):
+        check_args("xor", vals, (2, None), [is_boolarr])
+        return reduce(lambda a, b: boolop(a, b, np.logical_xor), vals)
+    check_args("xor", vals, (2, None), [is_bool])
+    return reduce(lambda a, b: a ^ b, vals)
+
+
+def string_append(*vals: str | Char) -> str:
+    return reduce(lambda a, b: a + b, vals, "")
+
+
+def vector_append(*vals: list) -> list:
+    return reduce(lambda a, b: a + b, vals, [])
+
+
+def string_ref(s: str, ref: int) -> Char:
+    try:
+        return Char(s[ref])
+    except IndexError:
+        raise MyError(f"string index {ref} is out of range")
+
+
+def number_to_string(val: Number) -> str:
+    if isinstance(val, complex):
+        join = "" if val.imag < 0 else "+"
+        return f"{val.real}{join}{val.imag}i"
+    return f"{val}"
+
+
+def _dtype_to_np(dtype: Symbol) -> type[np.generic]:
+    dtype_map = {
+        Symbol("bool"): np.bool_,
+        Symbol("int8"): np.int8,
+        Symbol("int16"): np.int16,
+        Symbol("int32"): np.int32,
+        Symbol("int64"): np.int64,
+        Symbol("uint8"): np.uint8,
+        Symbol("uint16"): np.uint16,
+        Symbol("uint32"): np.uint32,
+        Symbol("uint64"): np.uint64,
+        Symbol("float32"): np.float32,
+        Symbol("float64"): np.float64,
+    }
+    np_dtype = dtype_map.get(dtype)
+    if np_dtype is None:
+        raise MyError(f"Invalid array dtype: {dtype}")
+    return np_dtype
+
+
+def array_proc(dtype: Symbol, *vals: Any) -> np.ndarray:
+    try:
+        return np.array(vals, dtype=_dtype_to_np(dtype))
+    except OverflowError:
+        raise MyError(f"number too large to be converted to {dtype}")
+
+
+def make_array(dtype: Symbol, size: int, v: int = 0) -> np.ndarray:
+    try:
+        return np.array([v] * size, dtype=_dtype_to_np(dtype))
+    except OverflowError:
+        raise MyError(f"number too large to be converted to {dtype}")
+
+
+def mut_remove_small(arr: BoolList, lim: int, replace: int, with_: int) -> None:
+    start_p = 0
+    active = False
+    for j, item in enumerate(arr):
+        if item == replace:
+            if not active:
+                start_p = j
+                active = True
+            # Special case for end.
+            if j == len(arr) - 1:
+                if j - start_p < lim:
+                    arr[start_p : j + 1] = with_
+        else:
+            if active:
+                if j - start_p < lim:
+                    arr[start_p:j] = with_
+                active = False
+
+
+def minclip(oarr: BoolList, _min: int) -> BoolList:
+    arr = np.copy(oarr)
+    mut_remove_small(arr, _min, replace=1, with_=0)
+    return arr
+
+
+def mincut(oarr: BoolList, _min: int) -> BoolList:
+    arr = np.copy(oarr)
+    mut_remove_small(arr, _min, replace=0, with_=1)
+    return arr
+
+
+def margin(a: int, b: Any, c: Any = None) -> BoolList:
+    if c is None:
+        check_args("margin", [a, b], (2, 2), [is_int, is_boolarr])
+        oarr = b
+        start, end = a, a
+    else:
+        check_args("margin", [a, b, c], (3, 3), [is_int, is_int, is_boolarr])
+        oarr = c
+        start, end = a, b
+
+    arr = np.copy(oarr)
+    mut_margin(arr, start, end)
+    return arr
+
+
+def cook(min_clip: int, min_cut: int, oarr: BoolList) -> BoolList:
+    arr = np.copy(oarr)
+    mut_remove_small(arr, min_clip, replace=1, with_=0)
+    mut_remove_small(arr, min_cut, replace=0, with_=1)
+    return arr
+
+
+def _list(*values: Any) -> Cons | Null:
+    result: Cons | Null = Null()
+    for val in reversed(values):
+        result = Cons(val, result)
+    return result
+
+
+# convert nested vectors to nested lists
+def deep_list(vec: list) -> Cons | Null:
+    result: Cons | Null = Null()
+    for val in reversed(vec):
+        if isinstance(val, list):
+            val = deep_list(val)
+        result = Cons(val, result)
+    return result
+
+
+def list_to_vector(val: Cons | Null) -> list:
+    result = []
+    while isinstance(val, Cons):
+        result.append(val.a)
+        val = val.d
+    return result
+
+
+def vector_to_list(values: list) -> Cons | Null:
+    result: Cons | Null = Null()
+    for val in reversed(values):
+        result = Cons(val, result)
+    return result
+
+
+def vector_add(vec: list, val: Any) -> None:
+    vec.append(val)
+
+
+def vector_set(vec: list, pos: int, v: Any) -> None:
+    try:
+        vec[pos] = v
+    except IndexError:
+        raise MyError(f"vector-set: Invalid index {pos}")
+
+
+def vector_extend(vec: list, *more_vecs: list) -> None:
+    for more in more_vecs:
+        vec.extend(more)
+
+
+def string_to_list(s: str) -> Cons | Null:
+    return vector_to_list([Char(s) for s in s])
+
+
+def is_list(val: Any) -> bool:
+    while isinstance(val, Cons):
+        val = val.d
+    return isinstance(val, Null)
+
+
+def palet_random(*args: int) -> int | float:
+    if not args:
+        return random.random()
+
+    if args[0] < 1:
+        raise MyError(f"random: arg1 ({args[0]}) must be greater than zero")
+
+    if len(args) == 1:
+        return random.randrange(0, args[0])
+
+    if args[0] >= args[1]:
+        raise MyError(f"random: arg2 ({args[1]}) must be greater than arg1")
+    return random.randrange(args[0], args[1])
+
+
+def palet_map(proc: Proc, seq: str | list | range | NDArray | Cons | Null) -> Any:
+    if isinstance(seq, (list, range)):
+        return list(map(proc.proc, seq))
+    if isinstance(seq, str):
+        return str(map(proc.proc, seq))
+
+    if isinstance(seq, np.ndarray):
+        if proc.arity[0] != 0:
+            raise MyError("map: procedure must take at least one arg")
+        check_args(proc.name, [0], (1, 1), None)
+        return proc.proc(seq)
+
+    result: Cons | Null = Null()
+    while isinstance(seq, Cons):
+        result = Cons(proc.proc(seq.a), result)
+        seq = seq.d
+    return result[::-1]
+
+
+def apply(proc: Proc, seq: str | list | range | Cons | Null) -> Any:
+    if isinstance(seq, (Cons, Null)):
+        return reduce(proc.proc, list_to_vector(seq))
+    return reduce(proc.proc, seq)
+
+
+def ref(seq: str | list | range | Cons | Null | NDArray, ref: int) -> Any:
+    try:
+        return Char(seq[ref]) if isinstance(seq, str) else seq[ref]
+    except IndexError:
+        raise MyError(f"ref: Invalid index {ref}")
+
+
+def p_slice(
+    seq: str | list | range | NDArray | Cons | Null,
+    start: int = 0,
+    end: int | None = None,
+    step: int = 1,
+) -> Any:
+    if end is None:
+        end = len(seq)
+
+    return seq[start:end:step]
+
+
+def splice(
+    arr: NDArray, v: int, start: int | None = None, end: int | None = None
+) -> None:
+    arr[start:end] = v
+
+
+def stream_to_list(s: range) -> Cons | Null:
+    result: Cons | Null = Null()
+    for item in reversed(s):
+        result = Cons(item, result)
+    return result
+
+
+###############################################################################
+#                                                                             #
+#                                 INTERPRETER                                 #
+#                                                                             #
+###############################################################################
+
+
+@dataclass
+class FileSetup:
+    src: FileInfo
+    ensure: Ensure
+    strict: bool
+    tb: Fraction
+    bar: Bar
+    temp: str
+    log: Log
+
+
+@dataclass
+class Proc:
+    name: str
+    proc: Callable
+    arity: tuple[int, int | None] = (1, None)
+    contracts: list[Any] | None = None
+
+    def __str__(self) -> str:
+        return f"#<procedure:{self.name}>"
+
+    __repr__ = __str__
+
+    def __call__(self, *vals: Any) -> Any:
+        return self.proc(*vals)
+
+
+class Interpreter:
+    GLOBAL_SCOPE: dict[str, Any] = {
+        # constants
+        "true": True,
+        "false": False,
+        "null": Null(),
+        "pi": math.pi,
+        # actions
+        "begin": Proc("begin", lambda *x: x[-1] if x else None, (0, None)),
+        "display": Proc("display", display, (1, 1)),
+        "exit": Proc("exit", sys.exit, (0, None)),
+        "error": Proc("error", raise_, (1, 1), [is_str]),
+        # booleans
+        "boolean?": is_bool,
+        ">": Proc(">", lambda a, b: a > b, (2, 2), [is_real, is_real]),
+        ">=": Proc(">=", lambda a, b: a >= b, (2, 2), [is_real, is_real]),
+        "<": Proc("<", lambda a, b: a < b, (2, 2), [is_real, is_real]),
+        "<=": Proc("<=", lambda a, b: a <= b, (2, 2), [is_real, is_real]),
+        "=": Proc("=", equal_num, (1, None), [is_num]),
+        "eq?": Proc("eq?", lambda a, b: a is b, (2, 2)),
+        "equal?": Proc("equal?", is_equal, (2, 2)),
+        "not": Proc("not", _not, (1, 1)),
+        "and": Proc("and", _and, (1, None)),
+        "or": Proc("or", _or, (1, None)),
+        "xor": Proc("xor", _xor, (2, None)),
+        # number questions
+        "number?": is_num,
+        "real?": is_real,
+        "integer?": is_int,
+        "nonnegative-integer?": us_int,
+        "float?": is_float,
+        "fraction?": is_frac,
+        "positive?": Proc("positive?", lambda v: v > 0, (1, 1), [is_real]),
+        "negative?": Proc("negative?", lambda v: v < 0, (1, 1), [is_real]),
+        "zero?": Proc("zero?", lambda v: v == 0, (1, 1), [is_num]),
+        # numbers
+        "+": Proc("+", lambda *v: sum(v), (0, None), [is_num]),
+        "-": Proc("-", minus, (1, None), [is_num]),
+        "*": Proc("*", mul, (0, None), [is_num]),
+        "/": Proc("/", div, (1, None), [is_num]),
+        "add1": Proc("add1", lambda v: v + 1, (1, 1), [is_num]),
+        "sub1": Proc("sub1", lambda v: v - 1, (1, 1), [is_num]),
+        "sqrt": Proc("sqrt", _sqrt, (1, 1), [is_num]),
+        "real-part": Proc("real-part", lambda v: v.real, (1, 1), [is_num]),
+        "imag-part": Proc("imag-part", lambda v: v.imag, (1, 1), [is_num]),
+        # reals
+        "expt": Proc("expt", pow, (2, 2), [is_real]),
+        "exp": Proc("exp", math.exp, (1, 1), [is_real]),
+        "abs": Proc("abs", abs, (1, 1), [is_real]),
+        "ceil": Proc("ceil", math.ceil, (1, 1), [is_real]),
+        "floor": Proc("floor", math.floor, (1, 1), [is_real]),
+        "round": Proc("round", round, (1, 1), [is_real]),
+        "max": Proc("max", lambda *v: max(v), (1, None), [is_real]),
+        "min": Proc("min", lambda *v: min(v), (1, None), [is_real]),
+        "sin": Proc("sin", math.sin, (1, 1), [is_real]),
+        "cos": Proc("cos", math.cos, (1, 1), [is_real]),
+        "log": Proc("log", math.log, (1, 2), [is_real, is_real]),
+        "tan": Proc("tan", math.tan, (1, 1), [is_real]),
+        "mod": Proc("mod", lambda a, b: a % b, (2, 2), [is_int, is_int]),
+        "modulo": Proc("mod", lambda a, b: a % b, (2, 2), [is_int, is_int]),
+        "random": Proc("random", palet_random, (0, 2), [is_int]),
+        # symbols
+        "symbol?": is_symbol,
+        "symbol->string": Proc("symbol->string", str, (1, 1), [is_symbol]),
+        "string->symbol": Proc("string->symbol", Symbol, (1, 1), [is_str]),
+        # strings
+        "string?": is_str,
+        "char?": is_char,
+        "string": Proc("string", string_append, (0, None), [is_char]),
+        "string-append": Proc("string-append", string_append, (0, None), [is_str]),
+        "string-upcase": Proc("string-upcase", str.upper, (1, 1), [is_str]),
+        "string-downcase": Proc("string-downcase", str.lower, (1, 1), [is_str]),
+        "string-titlecase": Proc("string-titlecase", str.title, (1, 1), [is_str]),
+        "char->integer": Proc("char->integer", lambda c: ord(c.val), (1, 1), [is_char]),
+        "integer->char": Proc("integer->char", Char, (1, 1), [is_int]),
+        # vectors
+        "vector?": is_vector,
+        "vector": Proc("vector", lambda *a: list(a), (0, None)),
+        "make-vector": Proc(
+            "make-vector", lambda size, a=0: [a] * size, (1, 2), [us_int, any_c]
+        ),
+        "vector-append": Proc("vector-append", vector_append, (0, None), [is_vector]),
+        "vector-pop!": Proc("vector-pop!", list.pop, (1, 1), [is_vector]),
+        "vector-add!": Proc("vector-add!", vector_add, (2, 2), [is_vector, any_c]),
+        "vector-set!": Proc(
+            "vector-set!", vector_set, (3, 3), [is_vector, is_int, any_c]
+        ),
+        "vector-extend!": Proc("vector-extend!", vector_extend, (2, None), [is_vector]),
+        # cons/list
+        "pair?": is_pair,
+        "null?": is_null,
+        "cons": Proc("cons", Cons, (2, 2)),
+        "car": Proc("car", lambda val: val.a, (1, 1), [is_pair]),
+        "cdr": Proc("cdr", lambda val: val.d, (1, 1), [is_pair]),
+        "list?": is_list,
+        "list": Proc("list", _list, (0, None)),
+        "list-ref": Proc("list-ref", ref, (2, 2), [is_pair, us_int]),
+        # arrays
+        "array?": is_array,
+        "array": Proc("array", array_proc, (2, None), [is_symbol, is_real]),
+        "make-array": Proc(
+            "make-array", make_array, (2, 3), [is_symbol, us_int, is_real]
+        ),
+        "array-splice!": Proc(
+            "array-splice!", splice, (2, 4), [is_array, is_real, is_int, is_int]
+        ),
+        "count-nonzero": Proc("count-nonzero", np.count_nonzero, (1, 1), [is_array]),
+        # bool arrays
+        "bool-array?": is_boolarr,
+        "bool-array": Proc(
+            "bool-array", lambda *a: np.array(a, dtype=np.bool_), (1, None), [us_int]
+        ),
+        "margin": Proc("margin", margin, (2, 3), None),
+        "mincut": Proc("mincut", mincut, (2, 2), [is_int, is_boolarr]),
+        "minclip": Proc("minclip", minclip, (2, 2), [is_int, is_boolarr]),
+        "cook": Proc("cook", cook, (3, 3), [is_int, is_int, is_boolarr]),
+        # ranges
+        "range?": is_range,
+        "in-range": Proc("in-range", range, (1, 3), [is_real, is_real, is_real]),
+        # generic iterables
+        "iterable?": is_iterable,
+        "length": Proc("length", len, (1, 1), [is_iterable]),
+        "reverse": Proc("reverse", lambda v: v[::-1], (1, 1), [is_iterable]),
+        "ref": Proc("ref", ref, (2, 2), [is_iterable, is_int]),
+        "slice": Proc("slice", p_slice, (2, 4), [is_iterable, is_int]),
+        # procedures
+        "procedure?": is_proc,
+        "map": Proc("map", palet_map, (2, 2), [is_proc, is_iterable]),
+        "apply": Proc("apply", apply, (2, 2), [is_proc, is_iterable]),
+        # conversions
+        "number->string": Proc("number->string", number_to_string, (1, 1), [is_num]),
+        "string->list": Proc("string->list", string_to_list, (1, 1), [is_str]),
+        "string->vector": Proc(
+            "string->vector", lambda s: [Char(c) for c in s], (1, 1), [is_str]
+        ),
+        "list->vector": Proc("list->vector", list_to_vector, (1, 1), [is_pair]),
+        "vector->list": Proc("vector->list", vector_to_list, (1, 1), [is_vector]),
+        "range->list": Proc("range->list", stream_to_list, (1, 1), [is_range]),
+        "range->vector": Proc("range->vector", list, (1, 1), [is_range]),
+        # contracts
+        "any/c": any_c,
+    }
+
+    def __init__(self, parser: Parser, filesetup: FileSetup | None):
+        self.parser = parser
+        self.filesetup = filesetup
+
+        if filesetup is not None:
+            self.GLOBAL_SCOPE["timebase"] = filesetup.tb
+
+    def visit(self, node: Any) -> Any:
+        if isinstance(node, Symbol):
+            val = self.GLOBAL_SCOPE.get(node.val)
+            if val is None:
+                raise MyError(f"{node.val} is undefined")
+            return val
+
+        if isinstance(node, BoolArr):
+            if self.filesetup is None:
+                raise MyError("Can't use edit methods if there's no input files")
+            return edit_method(node.val, self.filesetup)
+
+        if isinstance(node, Compound):
+            return [self.visit(c) for c in node.children]
+
+        if isinstance(node, list):
+            if not node:
+                raise MyError("(): Missing procedure expression")
+
+            name = node[0].val if isinstance(node[0], Symbol) else ""
+
+            def check_for_syntax(name: str, node: list) -> Any:
+                if len(node) < 2:
+                    raise MyError(f"{name}: bad syntax")
+
+                if len(node) == 2:
+                    raise MyError(f"{name}: missing body")
+
+                assert isinstance(node[1], list)
+                assert isinstance(node[1][0], list)
+
+                var = node[1][0][0]
+                if not isinstance(var, Symbol):
+                    raise MyError(f"{name}: binding must be an identifier")
+                my_iter = self.visit(node[1][0][1])
+
+                if not is_iterable(my_iter):
+                    if isinstance(my_iter, int):
+                        return var, range(my_iter)
+                    raise MyError(f"{name}: got non-iterable in iter slot")
+
+                return var, my_iter
+
+            if name == "for":
+                var, my_iter = check_for_syntax("for", node)
+                for item in my_iter:
+                    self.GLOBAL_SCOPE[var.val] = item
+                    for c in node[2:]:
+                        self.visit(c)
+                return None
+
+            if name == "for/vector":
+                results = []
+                var, my_iter = check_for_syntax("for", node)
+                for item in my_iter:
+                    self.GLOBAL_SCOPE[var.val] = item
+                    results.append([self.visit(c) for c in node[2:]][-1])
+
+                del self.GLOBAL_SCOPE[var.val]
+                return results
+
+            if name == "if":
+                if len(node) != 4:
+                    raise MyError("if: bad syntax")
+                test_expr = self.visit(node[1])
+                if not isinstance(test_expr, bool):
+                    raise MyError("if: test-expr arg must be: boolean?")
+                if test_expr:
+                    return self.visit(node[2])
+                return self.visit(node[3])
+
+            if name == "when":
+                if len(node) != 3:
+                    raise MyError("when: bad syntax")
+                test_expr = self.visit(node[1])
+                if not isinstance(test_expr, bool):
+                    raise MyError("when: test-expr arg must be: boolean?")
+                if test_expr:
+                    return self.visit(node[2])
+                return None
+
+            if name == "quote":
+                if len(node) != 2:
+                    raise MyError("quote: bad syntax")
+
+                if isinstance(node[1], list):
+                    return deep_list(node[1])
+                return node[1]
+
+            if name == "define":
+                if len(node) != 3:
+                    raise MyError("define: bad syntax")
+
+                if not isinstance(node[1], Symbol):
+                    raise MyError("define: Must be an identifier")
+
+                symbol = node[1].val
+                self.GLOBAL_SCOPE[symbol] = self.visit(node[2])
+                return None
+
+            if name == "set!":
+                if len(node) != 3:
+                    raise MyError("set!: bad syntax")
+                if not isinstance(node[1], Symbol):
+                    raise MyError("set!: Must be an identifier")
+
+                symbol = node[1].val
+                if symbol not in self.GLOBAL_SCOPE:
+                    raise MyError(f"Cannot set variable {symbol} before definition")
+                self.GLOBAL_SCOPE[symbol] = self.visit(node[2])
+                return None
+
+            oper = self.visit(node[0])
+
+            if not callable(oper):
+                raise MyError(f"{oper}, expected procedure")
+
+            values = [self.visit(c) for c in node[1:]]
+            if isinstance(oper, Contract):
+                check_args(oper.name, values, (1, 1), None)
+            else:
+                check_args(oper.name, values, oper.arity, oper.contracts)
+            return oper(*values)
+
+        return node
+
+    def interpret(self) -> Any:
+        return self.visit(self.parser.comp())
diff -Nru auto-editor-22w28a+ds/auto_editor/__main__.py auto-editor-22w52a+ds/auto_editor/__main__.py
--- auto-editor-22w28a+ds/auto_editor/__main__.py	2022-07-14 04:19:35.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/__main__.py	2022-12-31 17:05:14.000000000 +0000
@@ -1,21 +1,21 @@
 #!/usr/bin/env python3
 
-import os
 import sys
-import tempfile
 
 import auto_editor
-import auto_editor.utils.func as usefulfunctions
 from auto_editor.edit import edit_media
 from auto_editor.ffwrapper import FFmpeg
-from auto_editor.utils.log import Log, Timer
+from auto_editor.utils.func import setup_tempdir
+from auto_editor.utils.log import Log
 from auto_editor.utils.types import (
-    MainArgs,
+    Args,
     color,
+    frame_rate,
     margin,
     number,
     resolution,
     sample_rate,
+    speed,
     speed_range,
     time,
     time_range,
@@ -25,267 +25,278 @@
 def main_options(parser: ArgumentParser) -> ArgumentParser:
-    parser.add_text("Object Options")
+    parser.add_required("input", nargs="*", metavar="[file | url ...] [options]")
+    parser.add_text("Editing Options:")
     parser.add_argument(
-        "--add-text",
-        nargs="*",
-        pool=True,
-        help="Add a text object to the timeline.",
-    )
-    parser.add_argument(
-        "--add-rectangle",
-        nargs="*",
-        pool=True,
-        help="Add a rectangle object to the timeline.",
-    )
-    parser.add_argument(
-        "--add-ellipse",
-        nargs="*",
-        pool=True,
-        help="Add an ellipse object to the timeline.",
-    )
-    parser.add_argument(
-        "--add-image",
-        nargs="*",
-        pool=True,
-        help="Add an image object onto the timeline.",
-    )
-    parser.add_text("URL Download Options")
-    parser.add_argument("--yt-dlp-location", help="Set a custom path to yt-dlp.")
-    parser.add_argument(
-        "--download-format", help="Set the yt-dlp download format. (--format, -f)"
-    )
-    parser.add_argument(
-        "--output-format", help="Set the yt-dlp output file template. (--output, -o)"
-    )
-    parser.add_argument(
-        "--yt-dlp-extras", help="Add extra options for yt-dlp. Must be in quotes"
-    )
-    parser.add_text("Exporting as Media Options")
-    parser.add_argument(
-        "--video-codec",
-        "-vcodec",
-        "-c:v",
-        help="Set video codec for output media.",
-    )
-    parser.add_argument(
-        "--audio-codec",
-        "-acodec",
-        "-c:a",
-        help="Set audio codec for output media.",
-    )
-    parser.add_argument(
-        "--video-bitrate",
-        "-b:v",
-        help="Set the number of bits per second for video.",
+        "--margin",
+        "-m",
+        type=margin,
+        metavar="LENGTH",
+        help='Set sections near "loud" as "loud" too if section is less than LENGTH away.',
     )
     parser.add_argument(
-        "--audio-bitrate",
-        "-b:a",
-        help="Set the number of bits per second for audio.",
+        "--min-clip-length",
+        "--min-clip",
+        "-minclip",
+        "-mclip",
+        type=time,
+        metavar="LENGTH",
+        help="Set the minimum length a clip can be. If a clip is too short, cut it",
     )
     parser.add_argument(
-        "--video-quality-scale",
-        "-qscale:v",
-        "-q:v",
-        help="Set a value to the ffmpeg option -qscale:v",
+        "--min-cut-length",
+        "--min-cut",
+        "-mincut",
+        "-mcut",
+        type=time,
+        metavar="LENGTH",
+        help="Set the minimum length a cut can be. If a cut is too short, don't cut",
     )
     parser.add_argument(
-        "--scale",
-        type=number,
-        help="Scale the input video's resolution by the given factor.",
+        "--edit-based-on",
+        "--edit",
+        metavar="[METHOD:[ATTRS?] OPERAND? ...]",
+        help="Decide which method to use when making edits",
    )
     parser.add_argument(
-        "--extras",
-        help="Add extra options for ffmpeg for video rendering. Must be in quotes.",
+        "--silent-speed",
+        "-s",
+        type=speed,
+        metavar="NUM",
+        help='Set speed of sections marked "silent" to NUM',
     )
     parser.add_argument(
-        "--no-seek",
-        flag=True,
-        help="Disable file seeking when rendering video. Helpful for debugging desync issues.",
+        "--video-speed",
+        "--sounded-speed",
+        "-v",
+        type=speed,
+        metavar="NUM",
+        help='Set speed of sections marked "loud" to NUM',
     )
-    parser.add_text("Manual Editing Options")
     parser.add_argument(
         "--cut-out",
         type=time_range,
         nargs="*",
+        metavar="[START,STOP ...]",
         help="The range of media that will be removed completely, regardless of the "
-        "value of silent speed.",
+        "value of silent speed",
     )
     parser.add_argument(
         "--add-in",
         type=time_range,
         nargs="*",
+        metavar="[START,STOP ...]",
         help="The range of media that will be added in, opposite of --cut-out",
     )
-    parser.add_blank()
     parser.add_argument(
         "--mark-as-loud",
         type=time_range,
         nargs="*",
-        help='The range that will be marked as "loud".',
+        metavar="[START,STOP ...]",
+        help='The range that will be marked as "loud"',
     )
     parser.add_argument(
         "--mark-as-silent",
         type=time_range,
        nargs="*",
-        help='The range that will be marked as "silent".',
+        metavar="[START,STOP ...]",
+        help='The range that will be marked as "silent"',
     )
     parser.add_argument(
         "--set-speed-for-range",
         "--set-speed",
         type=speed_range,
         nargs="*",
-        help="SPEED,START,STOP - Set an arbitrary speed for a given range.",
+        metavar="[SPEED,START,STOP ...]",
+        help="Set an arbitrary speed for a given range",
     )
-    parser.add_text("Timeline Options")
+    parser.add_text("Timeline Options:")
     parser.add_argument(
         "--frame-rate",
         "-fps",
         "-r",
-        type=number,
-        help="Set the frame rate for the timeline and output media.",
+        "--time-base",
+        "-tb",
+        type=frame_rate,
+        metavar="NUM",
+        help="Set timeline frame rate",
     )
     parser.add_argument(
         "--sample-rate",
         "-ar",
         type=sample_rate,
-        help="Set the sample rate for the timeline and output media.",
+        metavar="NAT",
+        help="Set timeline sample rate",
     )
     parser.add_argument(
-        "--resolution", "-res", type=resolution, help="Set timeline width and height."
+        "--resolution",
+        "-res",
+        type=resolution,
+        metavar="WIDTH,HEIGHT",
+        help="Set timeline width and height",
     )
     parser.add_argument(
         "--background",
         "-b",
         type=color,
-        help="Set the color of the background that is visible when the video is moved.",
+        metavar="COLOR",
+        help="Set the background as a solid RGB color",
     )
-    parser.add_text("Select Editing Source Options")
     parser.add_argument(
-        "--edit-based-on",
-        "--edit",
-        help="Decide which method to use when making edits.",
+        "--add",
+        nargs="*",
+        metavar="OBJ:START,DUR,ATTRS?",
+        help="Insert an timeline object to the timeline",
     )
     parser.add_argument(
-        "--keep-tracks-separate",
-        flag=True,
-        help="Don't combine audio tracks when exporting.",
+        "--source",
+        "-src",
+        nargs="*",
+        metavar="LABEL:PATH",
+        help="Add a source and associate it with a label",
     )
+    parser.add_text("URL Download Options:")
+    parser.add_argument(
+        "--yt-dlp-location",
+        metavar="PATH",
+        help="Set a custom path to yt-dlp",
+    )
+    parser.add_argument(
+        "--download-format",
+        metavar="FORMAT",
+        help="Set the yt-dlp download format (--format, -f)",
+    )
+    parser.add_argument(
+        "--output-format",
+        metavar="TEMPLATE",
+        help="Set the yt-dlp output file template (--output, -o)",
+    )
+    parser.add_argument(
+        "--yt-dlp-extras",
+        metavar="CMD",
+        help="Add extra options for yt-dlp. Must be in quotes",
+    )
+    parser.add_text("Utility Options:")
     parser.add_argument(
         "--export",
         "-ex",
-        choices=[
-            "default",
-            "premiere",
-            "final-cut-pro",
-            "shotcut",
-            "json",
-            "audio",
-            "clip-sequence",
-        ],
-        help="Choose the export mode.",
+        metavar="EXPORT:ATTRS?",
+        help="Choose the export mode",
+    )
+    parser.add_argument(
+        "--output-file",
+        "--output",
+        "-o",
+        metavar="FILE",
+        help="Set the name/path of the new output file.",
+    )
+    parser.add_argument(
+        "--player",
+        "-p",
+        metavar="CMD",
+        help="Set player to open output media files",
     )
-    parser.add_text("Utility Options")
-    parser.add_argument("--player", "-p", help="Set player to open output media files.")
     parser.add_argument(
         "--no-open",
         flag=True,
-        help="Do not open the output file after editing is done.",
+        help="Do not open the output file after editing is done",
     )
     parser.add_argument(
         "--temp-dir",
-        help="Set where the temporary directory is located.",
+        metavar="PATH",
+        help="Set where the temporary directory is located",
     )
     parser.add_argument(
         "--ffmpeg-location",
-        help="Set a custom path to the ffmpeg location.",
+        metavar="PATH",
+        help="Set a custom path to the ffmpeg location",
     )
     parser.add_argument(
         "--my-ffmpeg",
         flag=True,
-        help="Use the ffmpeg on your PATH instead of the one packaged.",
+        help="Use the ffmpeg on your PATH instead of the one packaged",
     )
-    parser.add_text("Display Options")
+    parser.add_text("Display Options:")
     parser.add_argument(
         "--progress",
-        choices=["modern", "classic", "ascii", "machine", "none"],
-        help="Set what type of progress bar to use.",
+        metavar="PROGRESS",
+        choices=("modern", "classic", "ascii", "machine", "none"),
+        help="Set what type of progress bar to use",
     )
+    parser.add_argument("--debug", flag=True, help="Show debugging messages and values")
     parser.add_argument(
-        "--version", flag=True, help="Display 
the program's version and halt." + "--show-ffmpeg-debug", flag=True, help="Show ffmpeg progress and output" ) + parser.add_argument("--quiet", "-q", flag=True, help="Display less output") parser.add_argument( - "--debug", flag=True, help="Show debugging messages and values." + "--preview", + "--stats", + flag=True, + help="Show stats on how the input will be cut and halt", ) + parser.add_text("Video Rendering:") parser.add_argument( - "--show-ffmpeg-debug", flag=True, help="Show ffmpeg progress and output." + "--video-codec", + "-vcodec", + "-c:v", + metavar="ENCODER", + help="Set video codec for output media", ) - parser.add_argument("--quiet", "-q", flag=True, help="Display less output.") parser.add_argument( - "--preview", flag=True, help="Show stats on how the input will be cut and halt." + "--video-bitrate", + "-b:v", + metavar="BITRATE", + help="Set the number of bits per second for video", ) parser.add_argument( - "--timeline", flag=True, help="Show auto-editor JSON timeline file and halt." + "--video-quality-scale", + "-qscale:v", + "-q:v", + metavar="SCALE", + help="Set a value to the ffmpeg option -qscale:v", ) - parser.add_argument("--api", help="Set what version of the timeline to use.") - parser.add_text("Global Editing Options") parser.add_argument( - "--silent-threshold", - "-t", + "--scale", type=number, - help="Set the volume that frames audio needs to surpass to be marked loud.", - ) - parser.add_argument( - "--frame-margin", - "--margin", - "-m", - type=margin, - help='Set how many "silent" frames on either side of the "loud" sections to include.', + metavar="NUM", + help="Scale the output video's resolution by NUM factor", ) parser.add_argument( - "--silent-speed", - "-s", - type=number, - help='Set the speed that "silent" sections should be played at.', + "--no-seek", + flag=True, + help="Disable file seeking when rendering video. 
Helpful for debugging desync issues", ) + parser.add_text("Audio Rendering:") parser.add_argument( - "--video-speed", - "--sounded-speed", - "-v", - type=number, - help='Set the speed that "loud" sections should be played at.', + "--audio-codec", + "-acodec", + "-c:a", + metavar="ENCODER", + help="Set audio codec for output media", ) parser.add_argument( - "--min-clip-length", - "-minclip", - "-mclip", - type=time, - help="Set the minimum length a clip can be. If a clip is too short, cut it.", + "--audio-bitrate", + "-b:a", + metavar="BITRATE", + help="Set the number of bits per second for audio", ) parser.add_argument( - "--min-cut-length", - "-mincut", - "-mcut", - type=time, - help="Set the minimum length a cut can be. If a cut is too short, don't cut.", + "--keep-tracks-separate", + flag=True, + help="Don't mix all audio tracks into one when exporting", ) - parser.add_blank() + parser.add_text("Miscellaneous:") parser.add_argument( - "--output-file", - "--output", - "-o", - help="Set the name/path of the new output file.", - ) - parser.add_required( - "input", nargs="*", help="File(s) or URL(s) that will be edited." + "--extras", + metavar="CMD", + help="Add extra options for ffmpeg. 
Must be in quotes", ) - + parser.add_argument("--version", "-V", flag=True, help="Display version and halt") return parser def main() -> None: - subcommands = ("test", "info", "levels", "grep", "subdump", "desc") + subcommands = ("test", "info", "levels", "grep", "subdump", "desc", "repl") if len(sys.argv) > 1 and sys.argv[1] in subcommands: obj = __import__( @@ -293,26 +304,26 @@ ) obj.main(sys.argv[2:]) sys.exit() - else: - args = main_options(ArgumentParser("Auto-Editor")).parse_args( - MainArgs, - sys.argv[1:], - macros=[ - ({"--export-to-premiere", "-exp"}, ["--export", "premiere"]), - ({"--export-to-final-cut-pro", "-exf"}, ["--export", "final-cut-pro"]), - ({"--export-to-shotcut", "-exs"}, ["--export", "shotcut"]), - ({"--export-as-json"}, ["--export", "json"]), - ({"--export-as-clip-sequence", "-excs"}, ["--export", "clip-sequence"]), - ({"--combine-files"}, []), - ({"--keep-tracks-seperate"}, ["--keep-tracks-separate"]), - ], - ) - log = Log(args.debug, args.quiet) - timer = Timer(args.quiet) + args = main_options(ArgumentParser("Auto-Editor")).parse_args( + Args, + sys.argv[1:], + macros=[ + ({"--frame-margin"}, ["--margin"]), + ({"--export-to-premiere", "-exp"}, ["--export", "premiere"]), + ({"--export-to-final-cut-pro", "-exf"}, ["--export", "final-cut-pro"]), + ({"--export-to-shotcut", "-exs"}, ["--export", "shotcut"]), + ({"--export-as-json"}, ["--export", "json"]), + ({"--export-as-clip-sequence", "-excs"}, ["--export", "clip-sequence"]), + ({"--keep-tracks-seperate"}, ["--keep-tracks-separate"]), + ], + ) - exporting_to_editor = args.export in ("premiere", "final-cut-pro", "shotcut") + if args.version: + print(f"{auto_editor.version} ({auto_editor.__version__})") + sys.exit() + log = Log(args.debug, args.quiet) ffmpeg = FFmpeg(args.ffmpeg_location, args.my_ffmpeg, args.show_ffmpeg_debug) if args.debug and args.input == []: @@ -325,77 +336,17 @@ print(f"Auto-Editor Version: {auto_editor.version}") sys.exit() - if args.version: - 
print(f"{auto_editor.version} ({auto_editor.__version__})") - sys.exit() - - if args.timeline: - args.quiet = True - if args.input == []: log.error("You need to give auto-editor an input file.") - if args.temp_dir is None: - temp = tempfile.mkdtemp() - else: - temp = args.temp_dir - if os.path.isfile(temp): - log.error("Temp directory cannot be an already existing file.") - if os.path.isdir(temp): - if len(os.listdir(temp)) != 0: - log.error("Temp directory should be empty!") - else: - os.mkdir(temp) - + temp = setup_tempdir(args.temp_dir, Log()) log = Log(args.debug, args.quiet, temp=temp) log.debug(f"Temp Directory: {temp}") - log.conwrite("Starting") - - if args.preview or args.export not in ("audio", "default"): - args.no_open = True - - if args.silent_speed <= 0 or args.silent_speed > 99999: - args.silent_speed = 99999 - - if args.video_speed <= 0 or args.video_speed > 99999: - args.video_speed = 99999 - paths = valid_input(args.input, ffmpeg, args, log) - if exporting_to_editor and len(paths) > 1: - cmd = [] - for path in paths: - cmd.extend(["-i", path]) - cmd.extend( - [ - "-filter_complex", - f"[0:v]concat=n={len(paths)}:v=1:a=1", - "-codec:v", - "h264", - "-pix_fmt", - "yuv420p", - "-strict", - "-2", - "combined.mp4", - ] - ) - ffmpeg.run(cmd) - paths = ["combined.mp4"] try: - output = edit_media(paths, ffmpeg, args, temp, log) - - if not args.preview and not args.timeline: - timer.stop() - - if not args.no_open and output is not None: - if args.player is None: - usefulfunctions.open_with_system_default(output, log) - else: - import subprocess - from shlex import split - - subprocess.run(split(args.player) + [output]) + edit_media(paths, ffmpeg, args, temp, log) except KeyboardInterrupt: log.error("Keyboard Interrupt") log.cleanup() diff -Nru auto-editor-22w28a+ds/auto_editor/make_layers.py auto-editor-22w52a+ds/auto_editor/make_layers.py --- auto-editor-22w28a+ds/auto_editor/make_layers.py 1970-01-01 00:00:00.000000000 +0000 +++ 
auto-editor-22w52a+ds/auto_editor/make_layers.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,341 @@ +from __future__ import annotations + +import os +from fractions import Fraction +from typing import TYPE_CHECKING, Any, NamedTuple + +import numpy as np + +from auto_editor.ffwrapper import FFmpeg, FileInfo +from auto_editor.interpreter import ( + FileSetup, + Interpreter, + Lexer, + MyError, + Parser, + cook, + is_boolarr, +) +from auto_editor.objs.util import _Vars, parse_dataclass +from auto_editor.timeline import ( + ASpace, + Timeline, + TlAudio, + TlVideo, + Visual, + VSpace, + audio_objects, + visual_objects, +) +from auto_editor.utils.chunks import Chunks, chunkify, chunks_len, merge_chunks +from auto_editor.utils.func import mut_margin +from auto_editor.utils.types import Args, time + +if TYPE_CHECKING: + from numpy.typing import NDArray + + from auto_editor.ffwrapper import FileInfo + from auto_editor.output import Ensure + from auto_editor.utils.bar import Bar + from auto_editor.utils.log import Log + from auto_editor.utils.types import Margin + + BoolList = NDArray[np.bool_] + + +class Clip(NamedTuple): + start: int + dur: int + offset: int + speed: float + src: str + + +def clipify(chunks: Chunks, src: str, start: Fraction = Fraction(0)) -> list[Clip]: + clips: list[Clip] = [] + i = 0 + for chunk in chunks: + if chunk[2] != 99999: + if i == 0: + dur = chunk[1] - chunk[0] + offset = chunk[0] + else: + dur = chunk[1] - chunk[0] + offset = chunk[0] + 1 + + if not (clips and clips[-1].start == round(start)): + clips.append(Clip(round(start), dur, offset, chunk[2], src)) + start += Fraction(dur, Fraction(chunk[2])) + i += 1 + + return clips + + +def make_av( + all_clips: list[list[Clip]], sources: dict[str, FileInfo], _inputs: list[int] +) -> tuple[VSpace, ASpace]: + + if len(_inputs) > 1000: + raise ValueError("Number of file inputs can't be greater than 1000") + + inputs = [str(i) for i in _inputs] + vtl: VSpace = [] + atl: ASpace = [[] for _ in 
range(max(len(sources[i].audios) for i in inputs))] + + for clips, inp in zip(all_clips, inputs): + src = sources[inp] + if src.videos: + vtl.append( + [TlVideo(c.start, c.dur, c.src, c.offset, c.speed, 0) for c in clips] + ) + + for c in clips: + for a in range(len(src.audios)): + atl[a].append(TlAudio(c.start, c.dur, c.src, c.offset, c.speed, 1, a)) + + return vtl, atl + + +def run_interpreter( + text: str, + filesetup: FileSetup, + log: Log, +) -> NDArray[np.bool_]: + + try: + lexer = Lexer(text) + parser = Parser(lexer) + if log.is_debug: + log.debug(f"edit: {parser}") + + interpreter = Interpreter(parser, filesetup) + results = interpreter.interpret() + except (MyError, ZeroDivisionError) as e: + log.error(e) + + if len(results) == 0: + log.error("Expression in --edit must return a bool-array") + + result = results[-1] + if not is_boolarr(result): + log.error("Expression in --edit must return a bool-array") + + assert isinstance(result, np.ndarray) + return result + + +def make_timeline( + sources: dict[str, FileInfo], + inputs: list[int], + ffmpeg: FFmpeg, + ensure: Ensure, + args: Args, + sr: int, + bar: Bar, + temp: str, + log: Log, +) -> Timeline: + + inp = None if not inputs else sources[str(inputs[0])] + + if inp is None: + tb, res = Fraction(30), (1920, 1080) + else: + tb = inp.get_fps() if args.frame_rate is None else args.frame_rate + res = inp.get_res() if args.resolution is None else args.resolution + del inp + + chunks, vclips, aclips = make_layers( + sources, + inputs, + ensure, + tb, + args.edit_based_on, + args.margin, + args.min_cut_length, + args.min_clip_length, + args.cut_out, + args.add_in, + args.mark_as_silent, + args.mark_as_loud, + args.set_speed_for_range, + args.silent_speed, + args.video_speed, + bar, + temp, + log, + ) + + for raw in args.source: + exploded = raw.split(":") + if len(exploded) != 2: + log.error("source label:path must have one :") + label, path = exploded + if len(label) > 55: + log.error("Label must not exceed 55 
characters.") + + for ill_char in ",.;()/\\[]}{'\"|#&<>^%$=@ ": + if ill_char in label: + log.error(f"Label '{label}' contains illegal character: {ill_char}") + + if label[0] in "0123456789": + log.error(f"Label '{label}' must not start with a digit") + if label[0] == "-": + log.error(f"Label '{label}' must not start with a dash") + + if not os.path.isfile(path): + log.error(f"Path '{path}' is not a file") + + sources[label] = FileInfo(path, ffmpeg, log, label) + + timeline = Timeline(sources, tb, sr, res, args.background, vclips, aclips, chunks) + + w, h = res + _vars: _Vars = { + "width": w, + "height": h, + "end": timeline.end, + "tb": timeline.timebase, + } + + OBJ_ATTRS_SEP = ":" + + pool: list[Visual] = [] + apool: list[TlAudio] = [] + + for obj_attrs_str in args.add: + exploded = obj_attrs_str.split(OBJ_ATTRS_SEP) + if len(exploded) > 2 or len(exploded) == 0: + log.error("Invalid object syntax") + + obj_s = exploded[0] + attrs = "" if len(exploded) == 1 else exploded[1] + + try: + if obj_s in visual_objects: + pool.append( + parse_dataclass(attrs, visual_objects[obj_s], log, _vars, True) + ) + elif obj_s in audio_objects: + apool.append( + parse_dataclass(attrs, audio_objects[obj_s], log, _vars, True) + ) + else: + log.error(f"Unknown timeline object: '{obj_s}'") + except TypeError as e: + log.error(e) + + for vobj in pool: + timeline.v.append([vobj]) + + for aobj in apool: + timeline.a.append([aobj]) + + return timeline + + +def make_layers( + sources: dict[str, FileInfo], + inputs: list[int], + ensure: Ensure, + tb: Fraction, + method: str, + margin: Margin, + _min_cut: str | int, + _min_clip: str | int, + cut_out: list[list[str]], + add_in: list[list[str]], + mark_silent: list[list[str]], + mark_loud: list[list[str]], + speed_ranges: list[tuple[float, str, str]], + silent_speed: float, + loud_speed: float, + bar: Bar, + temp: str, + log: Log, +) -> tuple[Chunks, VSpace, ASpace]: + start = Fraction(0) + all_clips: list[list[Clip]] = [] + all_chunks: 
list[Chunks] = [] + + def seconds_to_ticks(val: int | str) -> int: + if isinstance(val, str): + return round(float(val) * tb) + return val + + start_margin = seconds_to_ticks(margin[0]) + end_margin = seconds_to_ticks(margin[1]) + min_clip = seconds_to_ticks(_min_clip) + min_cut = seconds_to_ticks(_min_cut) + + speed_map = [silent_speed, loud_speed] + speed_hash = { + 0: silent_speed, + 1: loud_speed, + } + + def get_speed_index(speed: float) -> int: + if speed in speed_map: + return speed_map.index(speed) + speed_map.append(speed) + speed_hash[len(speed_map) - 1] = speed + return len(speed_map) - 1 + + def parse_time(val: str, arr: NDArray) -> int: + if val == "start": + return 0 + if val == "end": + return len(arr) + try: + num = seconds_to_ticks(time(val)) + return num if num >= 0 else num + len(arr) + except TypeError as e: + log.error(e) + + def mut_set_range(arr: NDArray, _ranges: list[list[str]], index: Any) -> None: + for _range in _ranges: + assert len(_range) == 2 + pair = [parse_time(val, arr) for val in _range] + arr[pair[0] : pair[1]] = index + + for i in map(str, inputs): + filesetup = FileSetup(sources[i], ensure, len(inputs) < 2, tb, bar, temp, log) + has_loud = run_interpreter(method, filesetup, log) + + if len(mark_loud) > 0: + mut_set_range(has_loud, mark_loud, loud_speed) + + if len(mark_silent) > 0: + mut_set_range(has_loud, mark_silent, silent_speed) + + has_loud = cook(min_clip, min_cut, has_loud) + mut_margin(has_loud, start_margin, end_margin) + + # Remove small clips/cuts created by applying other rules. 
+ has_loud = cook(min_clip, min_cut, has_loud) + + # Setup for handling custom speeds + has_loud = has_loud.astype(np.uint) + + if len(cut_out) > 0: + # always cut out even if 'silent_speed' is not 99,999 + mut_set_range(has_loud, cut_out, get_speed_index(99_999)) + + if len(add_in) > 0: + # set to 'video_speed' index + mut_set_range(has_loud, add_in, 1) + + for speed_range in speed_ranges: + speed = speed_range[0] + _range = list(speed_range[1:]) + mut_set_range(has_loud, [_range], get_speed_index(speed)) + + chunks = chunkify(has_loud, speed_hash) + + all_chunks.append(chunks) + all_clips.append(clipify(chunks, i, start)) + start += round(chunks_len(chunks)) + + vclips, aclips = make_av(all_clips, sources, inputs) + + return merge_chunks(all_chunks), vclips, aclips diff -Nru auto-editor-22w28a+ds/auto_editor/method.py auto-editor-22w52a+ds/auto_editor/method.py --- auto-editor-22w28a+ds/auto_editor/method.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/method.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,340 +0,0 @@ -import os -import random -import re -from dataclasses import asdict, dataclass, fields -from typing import Any, Callable, Dict, List, Optional, TypeVar, Union - -import numpy as np -from numpy.typing import NDArray - -from auto_editor.analyze.audio import audio_detection -from auto_editor.analyze.motion import motion_detection -from auto_editor.analyze.pixeldiff import pixel_difference -from auto_editor.ffwrapper import FileInfo -from auto_editor.utils.func import ( - apply_margin, - apply_mark_as, - cook, - parse_dataclass, - seconds_to_frames, - set_range, - to_speed_list, -) -from auto_editor.utils.log import Log -from auto_editor.utils.progressbar import ProgressBar -from auto_editor.utils.types import Args, Stream, number, stream -from auto_editor.wavfile import read - -T = TypeVar("T", bound=type) -BoolList = NDArray[np.bool_] -BoolOperand = Callable[[BoolList, BoolList], BoolList] - - -@dataclass -class Audio: - 
stream: Stream = 0 - threshold: float = -1 - - -@dataclass -class Motion: - threshold: float = 0.02 - blur: int = 9 - width: int = 400 - - -@dataclass -class Pixeldiff: - threshold: int = 1 - - -@dataclass -class Random: - cutchance: float = 0.5 - seed: int = -1 - - -def get_attributes(attrs_str: str, dataclass: T, log: Log) -> T: - attrs = parse_dataclass(attrs_str, dataclass, log) - - dic_value = asdict(attrs) - dic_type: Dict[str, Union[type, Callable[[Any], Any]]] = {} - for field in fields(attrs): - dic_type[field.name] = field.type - - # Convert to the correct types - for k, _type in dic_type.items(): - - if _type == float: - _type = number - elif _type == Stream: - _type = stream - - try: - attrs.__setattr__(k, _type(dic_value[k])) - except (ValueError, TypeError) as e: - log.error(e) - - return attrs - - -def get_media_duration(path: str, i: int, fps: float, temp: str, log: Log) -> int: - audio_path = os.path.join(temp, f"{i}-0.wav") - if os.path.isfile(audio_path): - sample_rate, audio_samples = read(audio_path) - sample_rate_per_frame = sample_rate / fps - - dur = round(audio_samples.shape[0] / sample_rate_per_frame) - log.debug(f"Dur (audio): {dur}") - return dur - - import av - - cn = av.open(path, "r") - - if len(cn.streams.video) < 1: - log.error("Could not get media duration") - - video = cn.streams.video[0] - dur = int(float(video.duration * video.time_base) * fps) - log.debug(f"Dur (video): {dur}") - return dur - - -def get_audio_list( - i: int, - stream: int, - threshold: float, - fps: float, - progress: ProgressBar, - temp: str, - log: Log, -) -> BoolList: - if os.path.isfile(path := os.path.join(temp, f"{i}-{stream}.wav")): - sample_rate, audio_samples = read(path) - else: - raise TypeError(f"Audio stream '{stream}' does not exist.") - - audio_list = audio_detection(audio_samples, sample_rate, fps, progress, log) - return np.fromiter((x > threshold for x in audio_list), dtype=np.bool_) - - -def operand_combine(a: BoolList, b: BoolList, call: 
BoolOperand) -> BoolList: - if len(a) > len(b): - b = np.resize(b, len(a)) - if len(b) > len(a): - a = np.resize(a, len(b)) - - return call(a, b) - - -def get_all_list(path: str, i: int, fps: float, temp: str, log: Log) -> BoolList: - return np.zeros(get_media_duration(path, i, fps, temp, log) - 1, dtype=np.bool_) - - -def get_stream_data( - method: str, - attrs_str: str, - args: Args, - i: int, - inputs: List[FileInfo], - fps: float, - progress: ProgressBar, - temp: str, - log: Log, -) -> BoolList: - - inp = inputs[0] - strict = len(inputs) < 2 - - if method == "none": - return np.ones( - get_media_duration(inp.path, i, fps, temp, log) - 1, dtype=np.bool_ - ) - if method == "all": - return get_all_list(inp.path, i, fps, temp, log) - if method == "random": - robj = get_attributes(attrs_str, Random, log) - if robj.cutchance > 1 or robj.cutchance < 0: - log.error(f"random:cutchance must be between 0 and 1") - if robj.seed == -1: - robj.seed = random.randint(0, 2147483647) - l = get_media_duration(inp.path, i, fps, temp, log) - 1 - - random.seed(robj.seed) - log.debug(f"Seed: {robj.seed}") - - a = random.choices((0, 1), weights=(robj.cutchance, 1 - robj.cutchance), k=l) - - return np.asarray(a, dtype=np.bool_) - if method == "audio": - audio = get_attributes(attrs_str, Audio, log) - if audio.threshold == -1: - audio.threshold = args.silent_threshold - if audio.stream == "all": - total_list: Optional[NDArray[np.bool_]] = None - for s in range(len(inp.audios)): - try: - audio_list = get_audio_list( - i, s, audio.threshold, fps, progress, temp, log - ) - if total_list is None: - total_list = audio_list - else: - total_list = operand_combine( - total_list, audio_list, np.logical_or - ) - except TypeError as e: - if not strict: - return get_all_list(inp.path, i, fps, temp, log) - log.error(e) - - if total_list is None: - if not strict: - return get_all_list(inp.path, i, fps, temp, log) - log.error("Input has no audio streams.") - return total_list - else: - try: - return 
get_audio_list( - i, audio.stream, audio.threshold, fps, progress, temp, log - ) - except TypeError as e: - if not strict: - return get_all_list(inp.path, i, fps, temp, log) - log.error(e) - if method == "motion": - if len(inp.videos) == 0: - if not strict: - return get_all_list(inp.path, i, fps, temp, log) - log.error("Video stream '0' does not exist.") - - mobj = get_attributes(attrs_str, Motion, log) - motion_list = motion_detection(inp.path, fps, progress, mobj.width, mobj.blur) - return np.fromiter((x >= mobj.threshold for x in motion_list), dtype=np.bool_) - - if method == "pixeldiff": - if len(inp.videos) == 0: - if not strict: - return get_all_list(inp.path, i, fps, temp, log) - log.error("Video stream '0' does not exist.") - - pobj = get_attributes(attrs_str, Pixeldiff, log) - pixel_list = pixel_difference(inp.path, fps, progress) - return np.fromiter((x >= pobj.threshold for x in pixel_list), dtype=np.bool_) - - raise ValueError(f"Unreachable. {method=}") - - -def get_has_loud( - token_str: str, - i: int, - inputs: List[FileInfo], - fps: float, - progress: ProgressBar, - temp: str, - log: Log, - args: Args, -) -> NDArray[np.bool_]: - - METHOD_ATTRS_SEP = ":" - METHODS = ("audio", "motion", "pixeldiff", "random", "none", "all") - - result_array: Optional[NDArray[np.bool_]] = None - operand: Optional[str] = None - - logic_funcs: Dict[str, BoolOperand] = { - "and": np.logical_and, - "or": np.logical_or, - "xor": np.logical_xor, - } - - # See: https://stackoverflow.com/questions/1059559/ - for token in filter(None, re.split(r"[ _]+", token_str)): - if METHOD_ATTRS_SEP in token: - token, attrs_str = token.split(METHOD_ATTRS_SEP) - if token not in METHODS: - log.error(f"'{token}': Token not allowed to have attributes.") - else: - attrs_str = "" - - if token in METHODS: - if result_array is not None and operand is None: - log.error("Logic operator must be between two editing methods.") - - stream_data = get_stream_data( - token, attrs_str, args, i, inputs, fps, 
progress, temp, log - ) - - if operand == "not": - result_array = np.logical_not(stream_data) - operand = None - elif result_array is None: - result_array = stream_data - elif operand is not None and operand in ("and", "or", "xor"): - result_array = operand_combine( - result_array, stream_data, logic_funcs[operand] - ) - operand = None - - elif token in ("and", "or", "xor"): - if operand is not None: - log.error("Invalid Editing Syntax.") - if result_array is None: - log.error(f"'{token}' operand needs two arguments.") - operand = token - elif token == "not": - if operand is not None: - log.error("Invalid Editing Syntax.") - operand = token - else: - log.error(f"Unknown method/operator: '{token}'") - - if operand is not None: - log.error(f"Dangling operand: '{operand}'") - - assert result_array is not None - return result_array - - -def get_speed_list( - i: int, - inputs: List[FileInfo], - fps: float, - args: Args, - progress: ProgressBar, - temp: str, - log: Log, -) -> NDArray[np.float_]: - - start_margin, end_margin = args.frame_margin - - start_margin = seconds_to_frames(start_margin, fps) - end_margin = seconds_to_frames(end_margin, fps) - min_clip = seconds_to_frames(args.min_clip_length, fps) - min_cut = seconds_to_frames(args.min_cut_length, fps) - - has_loud = get_has_loud( - args.edit_based_on, i, inputs, fps, progress, temp, log, args - ) - has_loud_length = len(has_loud) - - has_loud = apply_mark_as(has_loud, has_loud_length, fps, args, log) - has_loud = cook(has_loud, min_clip, min_cut) - has_loud = apply_margin(has_loud, has_loud_length, start_margin, end_margin) - - # Remove small clips/cuts created by applying other rules. 
- has_loud = cook(has_loud, min_clip, min_cut) - - speed_list = to_speed_list(has_loud, args.video_speed, args.silent_speed) - - if len(args.cut_out) > 0: - speed_list = set_range(speed_list, args.cut_out, fps, 99999, log) - - if len(args.add_in) > 0: - speed_list = set_range(speed_list, args.add_in, fps, args.video_speed, log) - - for item in args.set_speed_for_range: - speed_list = set_range(speed_list, [list(item[1:])], fps, item[0], log) - - return speed_list diff -Nru auto-editor-22w28a+ds/auto_editor/objects.py auto-editor-22w52a+ds/auto_editor/objects.py --- auto-editor-22w28a+ds/auto_editor/objects.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/objects.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,82 +0,0 @@ -from dataclasses import dataclass - -from auto_editor.utils.types import Align - -# start - When the clip starts in the timeline -# dur - The duration of the clip in the timeline before speed is applied -# offset - When from the source to start playing the media at - - -@dataclass -class VideoObj: - start: int - dur: int - offset: int - speed: float - src: int - stream: int = 0 - - -@dataclass -class AudioObj: - start: int - dur: int - offset: int - speed: float - src: int - stream: int = 0 - - -@dataclass -class TextObj: - start: int - dur: int - content: str - x: int = "50%" # type: ignore - y: int = "50%" # type: ignore - font: str = "Arial" - size: int = 55 - fill: str = "#FFF" - align: Align = "left" - stroke: int = 0 - strokecolor: str = "#000" - - -@dataclass -class ImageObj: - start: int - dur: int - src: str - x: int = "50%" # type: ignore - y: int = "50%" # type: ignore - opacity: float = 1 - anchor: str = "ce" - rotate: float = 0 # in degrees - - -@dataclass -class RectangleObj: - start: int - dur: int - x: int - y: int - width: int - height: int - anchor: str = "ce" - fill: str = "#c4c4c4" - stroke: int = 0 - strokecolor: str = "#000" - - -@dataclass -class EllipseObj: - start: int - dur: int - x: int - y: int - 
width: int - height: int - anchor: str = "ce" - fill: str = "#c4c4c4" - stroke: int = 0 - strokecolor: str = "#000" diff -Nru auto-editor-22w28a+ds/auto_editor/objs/edit.py auto-editor-22w52a+ds/auto_editor/objs/edit.py --- auto-editor-22w28a+ds/auto_editor/objs/edit.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/objs/edit.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,43 @@ +from __future__ import annotations + +from dataclasses import dataclass + +from auto_editor.utils.types import Stream, db_threshold, natural, stream, threshold + +from .util import Attr + + +@dataclass +class Audio: + threshold: float + stream: Stream + + +@dataclass +class Motion: + threshold: float + stream: int + blur: int + width: int + + +@dataclass +class Pixeldiff: + threshold: int + stream: int + + +audio_builder = [ + Attr(("threshold",), db_threshold, 0.04), + Attr(("stream", "track"), stream, 0), +] +motion_builder = [ + Attr(("threshold",), threshold, 0.02), + Attr(("stream", "track"), natural, 0), + Attr(("blur",), natural, 9), + Attr(("width",), natural, 400), +] +pixeldiff_builder = [ + Attr(("threshold",), natural, 1), + Attr(("stream", "track"), natural, 0), +] diff -Nru auto-editor-22w28a+ds/auto_editor/objs/export.py auto-editor-22w52a+ds/auto_editor/objs/export.py --- auto-editor-22w28a+ds/auto_editor/objs/export.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/objs/export.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,56 @@ +from __future__ import annotations + +from dataclasses import dataclass +from typing import Union + + +@dataclass +class ExDefault: + pass + + +@dataclass +class ExPremiere: + pass + + +@dataclass +class ExFinalCutPro: + pass + + +@dataclass +class ExShotCut: + pass + + +@dataclass +class ExJson: + api: str + + +@dataclass +class ExTimeline: + api: str + + +@dataclass +class ExAudio: + pass + + +@dataclass +class ExClipSequence: + pass + + +Exports = Union[ + ExDefault, + ExPremiere, + 
ExFinalCutPro, + ExShotCut, + ExJson, + ExTimeline, + ExAudio, + ExClipSequence, +] diff -Nru auto-editor-22w28a+ds/auto_editor/objs/util.py auto-editor-22w52a+ds/auto_editor/objs/util.py --- auto-editor-22w28a+ds/auto_editor/objs/util.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/objs/util.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,146 @@ +from __future__ import annotations + +from fractions import Fraction +from typing import Any, NamedTuple, TypedDict, TypeVar + +from auto_editor.utils.log import Log +from auto_editor.utils.types import pos, time + +T = TypeVar("T") + + +class _Vars(TypedDict, total=False): + width: int + height: int + end: int + tb: Fraction + + +class Attr(NamedTuple): + names: tuple[str, ...] + coerce: Any + default: Any + + +def parse_dataclass( + attrs_str: str, + definition: tuple[type[T], list[Attr]], + log: Log, + _vars: _Vars | None = {}, + coerce_default: bool = False, +) -> T: + + dataclass, builder = definition + + KEYWORD_SEP = "=" + + # Positional Arguments + # --rectangle 0,end,10,20,20,30,#000, ... + # Keyword Arguments + # --rectangle start=0,dur=end,x1=10, ... 
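The comment block above documents `parse_dataclass`'s attribute syntax: values are positional in declaration order (`--rectangle 0,end,10,...`) until the first `key=value` pair, after which only keyword form is accepted. A minimal standalone sketch of that convention (hypothetical names; the real function also coerces types and reports errors through `Log`):

```python
# Sketch of the positional/keyword attribute syntax documented above.
# `names` plays the role of the builder's attribute names; values are
# left as strings here, whereas the real parse_dataclass coerces them.

def parse_attrs(attrs_str: str, names: list[str], defaults: dict) -> dict:
    kwargs = dict(defaults)
    allow_positional = True
    for i, arg in enumerate(attrs_str.split(",")):
        if i >= len(names):
            raise ValueError(f"too many arguments, starting with '{arg}'")
        if "=" in arg:
            allow_positional = False  # once a keyword appears, stay keyword-only
            key, val = arg.split("=", 1)
            if key not in names:
                raise ValueError(f"unexpected keyword '{key}'")
            kwargs[key] = val
        elif allow_positional:
            kwargs[names[i]] = arg  # positional: filled in declaration order
        else:
            raise ValueError("positional argument follows keyword argument")
    return kwargs
```

For example, `parse_attrs("0,end,10", ["start", "dur", "x"], {"x": "50%"})` and `parse_attrs("start=0,dur=end,x=10", ...)` produce the same mapping, while a positional value after a keyword raises, mirroring the diff's error message.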
+ + def _values(name: str, val: Any, _type: Any, _vars: _Vars | None, log: Log) -> Any: + if val is None: + return None + + if _vars is not None: + if name in ("start", "dur", "offset"): + assert "tb" in _vars and "end" in _vars + if isinstance(val, int): + return val + + assert isinstance(val, str) + + if val == "start": + return 0 + if val == "end": + return _vars["end"] + + try: + _val = time(val) + except TypeError as e: + log.error(e) + + if isinstance(_val, str): + return round(float(_val) * _vars["tb"]) + return _val + + if name in ("x", "width"): + assert "width" in _vars + return pos((val, _vars["width"])) + + if name in ("y", "height"): + assert "height" in _vars + return pos((val, _vars["height"])) + + try: + _type(val) + except TypeError as e: + log.error(e) + except Exception: + log.error(f"{name}: variable '{val}' is not defined.") + + return _type(val) + + kwargs: dict[str, Any] = {} + for attr in builder: + key = attr.names[0] + if coerce_default: + kwargs[key] = _values(key, attr.default, attr.coerce, _vars, log) + else: + kwargs[key] = attr.default + + if attrs_str == "": + for k, v in kwargs.items(): + if v is None: + log.error(f"'{k}' must be specified.") + return dataclass(**kwargs) + + d_name = dataclass.__name__ + allow_positional_args = True + + for i, arg in enumerate(attrs_str.split(",")): + if i + 1 > len(builder): + log.error(f"{d_name} has too many arguments, starting with '{arg}'.") + + if KEYWORD_SEP in arg: + allow_positional_args = False + + parameters = arg.split(KEYWORD_SEP) + if len(parameters) > 2: + log.error(f"{d_name} invalid syntax: '{arg}'.") + + key, val = parameters + found = False + for attr in builder: + if key in attr.names: + kwargs[attr.names[0]] = _values( + attr.names[0], val, attr.coerce, _vars, log + ) + found = True + break + + if not found: + from difflib import get_close_matches + + keys = set() + for attr in builder: + for name in attr.names: + keys.add(name) + + more = "" + if matches := 
get_close_matches(key, keys): + more = f"\n Did you mean:\n {', '.join(matches)}" + + log.error(f"{d_name} got an unexpected keyword '{key}'\n{more}") + + elif allow_positional_args: + key = builder[i].names[0] + kwargs[key] = _values(key, arg, builder[i].coerce, _vars, log) + else: + log.error(f"{d_name} positional argument follows keyword argument.") + + for k, v in kwargs.items(): + if v is None: + log.error(f"'{k}' must be specified.") + return dataclass(**kwargs) diff -Nru auto-editor-22w28a+ds/auto_editor/output.py auto-editor-22w52a+ds/auto_editor/output.py --- auto-editor-22w28a+ds/auto_editor/output.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/output.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,5 +1,7 @@ +from __future__ import annotations + import os.path -from typing import List, Optional, Tuple +from fractions import Fraction from auto_editor.ffwrapper import FFmpeg, FileInfo from auto_editor.utils.container import Container @@ -7,46 +9,65 @@ from auto_editor.utils.types import Args -def fset(cmd: List[str], option: str, value: Optional[str]) -> List[str]: - if value is None or value == "unset": - return cmd - return cmd + [option] + [value] - - -def video_quality( - cmd: List[str], args: Args, inp: FileInfo, ctr: Container -) -> List[str]: - cmd = fset(cmd, "-b:v", args.video_bitrate) - - qscale = args.video_quality_scale - if args.video_codec == "mpeg4" and qscale == "unset": - qscale = "1" - - cmd.extend(["-c:v", args.video_codec]) - cmd = fset(cmd, "-qscale:v", qscale) - cmd.extend(["-movflags", "faststart"]) - return cmd +class Ensure: + __slots__ = ("_ffmpeg", "_sr", "temp", "log") + + def __init__(self, ffmpeg: FFmpeg, sr: int, temp: str, log: Log): + self._ffmpeg = ffmpeg + self._sr = sr + self.temp = temp + self.log = log + + def audio(self, path: str, label: str, stream: int) -> str: + out_path = os.path.join(self.temp, f"{label}-{stream}.wav") + + if not os.path.isfile(out_path): + self.log.conwrite("Extracting 
audio") + + cmd = ["-i", path, "-map", f"0:a:{stream}"] + cmd += ["-ac", "2", "-ar", f"{self._sr}", "-rf64", "always", out_path] + self._ffmpeg.run(cmd) + + return out_path + + +def _ffset(option: str, value: str | None) -> list[str]: + if value is None or value == "unset" or value == "reserved": + return [] + return [option] + [value] + + +def video_quality(args: Args, ctr: Container) -> list[str]: + return ( + _ffset("-b:v", args.video_bitrate) + + ["-c:v", args.video_codec] + + _ffset("-qscale:v", args.video_quality_scale) + + ["-movflags", "faststart"] + ) def mux_quality_media( ffmpeg: FFmpeg, - visual_output: List[Tuple[bool, str]], - audio_output: List[str], + visual_output: list[tuple[bool, str]], + audio_output: list[str], + sub_output: list[str], apply_v: bool, ctr: Container, output_path: str, + tb: Fraction, args: Args, - inp: FileInfo, + src: FileInfo, temp: str, log: Log, ) -> None: - s_tracks = 0 if not ctr.allow_subtitle else len(inp.subtitles) - a_tracks = 0 if not ctr.allow_audio else len(audio_output) - v_tracks = 0 if not ctr.allow_video else len(visual_output) - cmd = ["-hide_banner", "-y", "-i", inp.path] + v_tracks = len(visual_output) + a_tracks = len(audio_output) + s_tracks = len(sub_output) - same_container = inp.ext == os.path.splitext(output_path)[1] + cmd = ["-hide_banner", "-y", "-i", f"{src.path}"] + + same_container = src.path.suffix == os.path.splitext(output_path)[1] for is_video, path in visual_output: if is_video or ctr.allow_image: @@ -80,13 +101,10 @@ new_a_file = audio_output[0] cmd.extend(["-i", new_a_file]) - if s_tracks > 0: - for s, sub in enumerate(inp.subtitles): - cmd.extend(["-i", os.path.join(temp, f"new{s}s.{sub.ext}")]) - - total_streams = v_tracks + s_tracks + a_tracks + for subfile in sub_output: + cmd.extend(["-i", subfile]) - for i in range(total_streams): + for i in range(v_tracks + s_tracks + a_tracks): cmd.extend(["-map", f"{i+1}:0"]) cmd.extend(["-map_metadata", "0"]) @@ -95,35 +113,39 @@ for is_video, path 
in visual_output: if is_video: if apply_v: - cmd = video_quality(cmd, args, inp, ctr) + cmd += video_quality(args, ctr) else: - cmd.extend([f"-c:v:{track}", "copy"]) + # Real video is only allowed on track 0 + cmd += ["-c:v:0", "copy"] + + if float(tb).is_integer(): + cmd += ["-video_track_timescale", f"{tb}"] + elif ctr.allow_image: ext = os.path.splitext(path)[1][1:] - cmd.extend( - [f"-c:v:{track}", ext, f"-disposition:v:{track}", "attached_pic"] - ) + cmd += [f"-c:v:{track}", ext, f"-disposition:v:{track}", "attached_pic"] + track += 1 del track - for i, vstream in enumerate(inp.videos): + for i, vstream in enumerate(src.videos): if i > v_tracks: break if vstream.lang is not None: cmd.extend([f"-metadata:s:v:{i}", f"language={vstream.lang}"]) - for i, astream in enumerate(inp.audios): + for i, astream in enumerate(src.audios): if i > a_tracks: break if astream.lang is not None: cmd.extend([f"-metadata:s:a:{i}", f"language={astream.lang}"]) - for i, sstream in enumerate(inp.subtitles): + for i, sstream in enumerate(src.subtitles): if i > s_tracks: break if sstream.lang is not None: cmd.extend([f"-metadata:s:s:{i}", f"language={sstream.lang}"]) if s_tracks > 0: - scodec = inp.subtitles[0].codec + scodec = src.subtitles[0].codec if same_container: cmd.extend(["-c:s", scodec]) elif ctr.scodecs is not None: @@ -132,14 +154,15 @@ cmd.extend(["-c:s", scodec]) if a_tracks > 0: - cmd = fset(cmd, "-c:a", args.audio_codec) - cmd = fset(cmd, "-b:a", args.audio_bitrate) + cmd += _ffset("-c:a", args.audio_codec) + _ffset("-b:a", args.audio_bitrate) if same_container and v_tracks > 0: - cmd = fset(cmd, "-color_range", inp.videos[0].color_range) - cmd = fset(cmd, "-colorspace", inp.videos[0].color_space) - cmd = fset(cmd, "-color_primaries", inp.videos[0].color_primaries) - cmd = fset(cmd, "-color_trc", inp.videos[0].color_transfer) + cmd += ( + _ffset("-color_range", src.videos[0].color_range) + + _ffset("-colorspace", src.videos[0].color_space) + + _ffset("-color_primaries", 
src.videos[0].color_primaries) + + _ffset("-color_trc", src.videos[0].color_transfer) + ) if args.extras is not None: cmd.extend(args.extras.split(" ")) diff -Nru auto-editor-22w28a+ds/auto_editor/preview.py auto-editor-22w52a+ds/auto_editor/preview.py --- auto-editor-22w28a+ds/auto_editor/preview.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/preview.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,49 +1,32 @@ +from __future__ import annotations + +from fractions import Fraction from statistics import fmean, median -from typing import List, Optional, Tuple -from auto_editor.method import get_media_duration +from auto_editor.analyze import get_media_length +from auto_editor.output import Ensure from auto_editor.timeline import Timeline from auto_editor.utils.func import to_timecode from auto_editor.utils.log import Log -def time_frame( - title: str, frames: float, fps: float, per: Optional[str] = None -) -> None: - tc = to_timecode(frames / fps, "ass") +def time_frame(title: str, ticks: float, tb: Fraction, per: str | None = None) -> None: + tc = to_timecode(ticks / tb, "ass") tp = 9 if tc.startswith("-") else 10 tcp = 12 if tc.startswith("-") else 11 - preci = 0 if int(frames) == frames else 2 + preci = 0 if int(ticks) == ticks else 2 end = "" if per is None else f" {per:>7}" - print(f" - {f'{title}:':<{tp}} {tc:<{tcp}} {f'({frames:.{preci}f})':<6}{end}") - - -def preview(timeline: Timeline, temp: str, log: Log) -> None: - log.conwrite("") - fps = timeline.fps - - # Calculate input videos length - in_len = 0 - for i, inp in enumerate(timeline.inputs): - in_len += get_media_duration(inp.path, i, inp.get_fps(), temp, log) - - out_len = timeline.out_len() - - diff = out_len - in_len - - print("\nlength:") - time_frame("input", in_len, fps, per="100.0%") - time_frame("output", out_len, fps, per=f"{round((out_len / in_len) * 100, 2)}%") - time_frame("diff", diff, fps, per=f"{round((diff / in_len) * 100, 2)}%") + print(f" - {f'{title}:':<{tp}} 
{tc:<{tcp}} {f'({ticks:.{preci}f})':<6}{end}") - clip_lens = [clip.dur / clip.speed for clip in timeline.a[0]] +def all_cuts(tl: Timeline, in_len: int) -> list[int]: # Calculate cuts - oe: List[Tuple[int, int]] = [] + tb = tl.timebase + oe: list[tuple[int, int]] = [] # TODO: Make offset_end_pairs work on overlapping clips. - for clip in timeline.a[0]: + for clip in tl.a[0]: oe.append((clip.offset, clip.offset + clip.dur)) cut_lens = [] @@ -55,26 +38,47 @@ cut_lens.append(oe[i + 1][0] - oe[i][1]) i += 1 - if len(oe) > 0 and oe[-1][1] < round(in_len * fps): + if len(oe) > 0 and oe[-1][1] < round(in_len * tb): cut_lens.append(in_len - oe[-1][1]) + return cut_lens + +def preview(ensure: Ensure, tl: Timeline, temp: str, log: Log) -> None: + log.conwrite("") + tb = tl.timebase + + # Calculate input videos length + in_len = 0 + for src in tl.sources.values(): + in_len += get_media_length(ensure, src, tl.timebase, temp, log) + + out_len = tl.out_len() + + diff = out_len - in_len + + print("\nlength:") + time_frame("input", in_len, tb, per="100.0%") + time_frame("output", out_len, tb, per=f"{round((out_len / in_len) * 100, 2)}%") + time_frame("diff", diff, tb, per=f"{round((diff / in_len) * 100, 2)}%") + + clip_lens = [clip.dur / clip.speed for clip in tl.a[0]] log.debug(clip_lens) - if len(clip_lens) == 0: - clip_lens = [0] + print(f"clips:\n - amount: {len(clip_lens)}") - time_frame("smallest", min(clip_lens), fps) - time_frame("largest", max(clip_lens), fps) + if len(clip_lens) > 0: + time_frame("smallest", min(clip_lens), tb) + time_frame("largest", max(clip_lens), tb) if len(clip_lens) > 1: - time_frame("median", median(clip_lens), fps) - time_frame("average", fmean(clip_lens), fps) + time_frame("median", median(clip_lens), tb) + time_frame("average", fmean(clip_lens), tb) + cut_lens = all_cuts(tl, in_len) log.debug(cut_lens) - if len(cut_lens) == 0: - cut_lens = [0] print(f"cuts:\n - amount: {len(clip_lens)}") - time_frame("smallest", min(cut_lens), fps) - 
time_frame("largest", max(cut_lens), fps) + if len(cut_lens) > 0: + time_frame("smallest", min(cut_lens), tb) + time_frame("largest", max(cut_lens), tb) if len(cut_lens) > 1: - time_frame("median", median(cut_lens), fps) - time_frame("average", fmean(cut_lens), fps) + time_frame("median", median(cut_lens), tb) + time_frame("average", fmean(cut_lens), tb) print("") diff -Nru auto-editor-22w28a+ds/auto_editor/render/audio.py auto-editor-22w52a+ds/auto_editor/render/audio.py --- auto-editor-22w28a+ds/auto_editor/render/audio.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/audio.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,56 +1,128 @@ +from __future__ import annotations + import os -import wave -from typing import List -from auto_editor.render.tsm.phasevocoder import phasevocoder +import numpy as np + +from auto_editor.ffwrapper import FFmpeg +from auto_editor.output import Ensure from auto_editor.timeline import Timeline +from auto_editor.utils.bar import Bar from auto_editor.utils.log import Log -from auto_editor.utils.progressbar import ProgressBar -from auto_editor.wavfile import read +from auto_editor.wavfile import AudioData, read, write def make_new_audio( - timeline: Timeline, progress: ProgressBar, temp: str, log: Log -) -> List[str]: - samplerate = timeline.samplerate - fps = timeline.fps + timeline: Timeline, ensure: Ensure, ffmpeg: FFmpeg, bar: Bar, temp: str, log: Log +) -> list[str]: + sr = timeline.samplerate + tb = timeline.timebase output = [] samples = {} + af_tick = 0 + + if len(timeline.a) == 0 or len(timeline.a[0]) == 0: + log.error("Trying to render empty audio timeline") + for l, layer in enumerate(timeline.a): - progress.start(len(layer), "Creating new audio") + bar.start(len(layer), "Creating new audio") - # See: https://github.com/python/cpython/blob/3.10/Lib/wave.py path = os.path.join(temp, f"new{l}.wav") output.append(path) - writer = wave.open(path, "wb") - writer.setnchannels(2) - 
writer.setframerate(samplerate) - writer.setsampwidth(2) + arr: AudioData | None = None for c, clip in enumerate(layer): if f"{clip.src}-{clip.stream}" not in samples: - audio_path = os.path.join(temp, f"{clip.src}-{clip.stream}.wav") - assert os.path.exists(audio_path), f"{audio_path} Not found" + audio_path = ensure.audio( + f"{timeline.sources[clip.src].path.resolve()}", + clip.src, + clip.stream, + ) samples[f"{clip.src}-{clip.stream}"] = read(audio_path)[1] + if arr is None: + leng = max( + round( + (layer[-1].start + (layer[-1].dur / layer[-1].speed)) * sr / tb + ), + sr // tb, + ) + + dtype = np.int32 + for _samp_arr in samples.values(): + dtype = _samp_arr.dtype + break + + arr = np.memmap( + os.path.join(temp, "asdf.map"), + mode="w+", + dtype=dtype, + shape=(leng, 2), + ) + del leng + samp_list = samples[f"{clip.src}-{clip.stream}"] - samp_start = int(clip.offset / fps * samplerate) - samp_end = int((clip.offset + clip.dur) / fps * samplerate) + samp_start = clip.offset * sr // tb + samp_end = (clip.offset + clip.dur) * sr // tb if samp_end > len(samp_list): samp_end = len(samp_list) - if clip.speed == 1: - writer.writeframesraw(samp_list[samp_start:samp_end]) # type: ignore + filters: list[str] = [] + + if clip.speed != 1: + if clip.speed > 10_000: + filters.extend([f"atempo={clip.speed}^.33333"] * 3) + elif clip.speed > 100: + filters.extend( + [f"atempo=sqrt({clip.speed})", f"atempo=sqrt({clip.speed})"] + ) + elif clip.speed >= 0.5: + filters.append(f"atempo={clip.speed}") + else: + start = 0.5 + while start * 0.5 > clip.speed: + start *= 0.5 + filters.append("atempo=0.5") + filters.append(f"atempo={clip.speed / start}") + + if clip.volume != 1: + filters.append(f"volume={clip.volume}") + + if not filters: + clip_arr = samp_list[samp_start:samp_end] + else: + af = os.path.join(temp, f"af{af_tick}.wav") + af_out = os.path.join(temp, f"af{af_tick}_out.wav") + + # Windows can't replace a file that's already in use, so we have to + # cycle through file 
names. + af_tick = (af_tick + 1) % 3 + + write(af, sr, samp_list[samp_start:samp_end]) + ffmpeg.run(["-i", af, "-af", ",".join(filters), af_out]) + clip_arr = read(af_out)[1] + + # Mix numpy arrays + start = clip.start * sr // tb + car_len = clip_arr.shape[0] + + if start + car_len > len(arr): + # Clip 'clip_arr' if bigger than expected. + arr[start:] += clip_arr[: len(arr) - start] else: - data = phasevocoder(2, clip.speed, samp_list[samp_start:samp_end]) - if data.shape[0] != 0: - writer.writeframesraw(data) # type: ignore + arr[start : start + car_len] += clip_arr - progress.tick(c) + bar.tick(c) - writer.close() - progress.end() + if arr is not None: + write(path, sr, arr) + bar.end() + + try: + os.remove(os.path.join(temp, "asdf.map")) + except Exception: + pass return output diff -Nru auto-editor-22w28a+ds/auto_editor/render/image.py auto-editor-22w52a+ds/auto_editor/render/image.py --- auto-editor-22w28a+ds/auto_editor/render/image.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/image.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,130 @@ +from __future__ import annotations + +from typing import Union + +import av +from PIL import Image, ImageChops, ImageDraw, ImageFont, ImageOps + +from auto_editor.ffwrapper import FileInfo +from auto_editor.timeline import TlEllipse, TlImage, TlRect, TlText, Visual, VSpace +from auto_editor.utils.log import Log + +av.logging.set_level(av.logging.PANIC) + + +def apply_anchor(x: int, y: int, w: int, h: int, anchor: str) -> tuple[int, int]: + if anchor == "ce": + x = (x * 2 - w) // 2 + y = (y * 2 - h) // 2 + if anchor == "tr": + x -= w + if anchor == "bl": + y -= h + if anchor == "br": + x -= w + y -= h + # Pillow uses 'tl' by default + return x, y + + +FontCache = dict[tuple[str, int], Union[ImageFont.FreeTypeFont, ImageFont.ImageFont]] +ImgCache = dict[str, Image.Image] + + +def make_caches( + vtl: VSpace, sources: dict[str, FileInfo], log: Log +) -> tuple[FontCache, ImgCache]: + 
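The rewritten `make_new_audio` above drops the bundled phase vocoder and hands speed changes to ffmpeg's `atempo` filter, chaining several instances because a single `atempo` only accepts a bounded tempo factor (0.5-100 in recent ffmpeg; older builds cap at 2.0). One way to sketch the decomposition, a standalone illustration rather than auto-editor's exact factoring (the diff uses cube and square roots for very large speeds so the stages are equal):

```python
# Split a speed factor into a chain of atempo-sized stages whose product
# is the requested speed. The per-filter range 0.5..100 is an assumption
# matching recent ffmpeg; adjust `lo`/`hi` for older builds.

def atempo_chain(speed: float, lo: float = 0.5, hi: float = 100.0) -> list[float]:
    if lo <= speed <= hi:
        return [speed]  # fits in a single filter
    factors = []
    remaining = speed
    if speed > hi:
        while remaining > hi:
            factors.append(hi)   # max out each stage going up
            remaining /= hi
    else:
        while remaining < lo:
            factors.append(lo)   # stack 0.5x stages going down
            remaining /= lo
    factors.append(remaining)    # final stage lands inside [lo, hi]
    return factors
```

ffmpeg would then be invoked with `-af atempo=f1,atempo=f2,...`; using equal root factors, as the diff does for extreme speeds, is a design choice that keeps every stage well away from the range limits.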
font_cache: FontCache = {} + img_cache: ImgCache = {} + for layer in vtl: + for obj in layer: + if isinstance(obj, TlText) and (obj.font, obj.size) not in font_cache: + try: + if obj.font == "default": + font_cache[(obj.font, obj.size)] = ImageFont.load_default() + else: + font_cache[(obj.font, obj.size)] = ImageFont.truetype( + obj.font, obj.size + ) + except OSError: + log.error(f"Font '{obj.font}' not found.") + + if isinstance(obj, TlImage) and obj.src not in img_cache: + img_cache[obj.src] = Image.open(f"{sources[obj.src].path}").convert( + "RGBA" + ) + + return font_cache, img_cache + + +def render_image( + frame: av.VideoFrame, obj: Visual, font_cache: FontCache, img_cache: ImgCache +) -> av.VideoFrame: + img = frame.to_image().convert("RGBA") + + if isinstance(obj, TlEllipse): + # Adding +1 to width makes Ellipse look better. + obj_img = Image.new("RGBA", (obj.width + 1, obj.height), (255, 255, 255, 0)) + if isinstance(obj, TlRect): + obj_img = Image.new("RGBA", (obj.width, obj.height), (255, 255, 255, 0)) + if isinstance(obj, TlImage): + obj_img = img_cache[obj.src] + if obj.stroke > 0: + obj_img = ImageOps.expand(obj_img, border=obj.stroke, fill=obj.strokecolor) + + if isinstance(obj, TlText): + obj_img = Image.new("RGBA", img.size) + _draw = ImageDraw.Draw(obj_img) + text_w, text_h = _draw.textsize( + obj.content, font=font_cache[(obj.font, obj.size)], stroke_width=obj.stroke + ) + obj_img = Image.new("RGBA", (text_w, text_h), (255, 255, 255, 0)) + + draw = ImageDraw.Draw(obj_img) + + if isinstance(obj, TlText): + draw.text( + (0, 0), + obj.content, + font=font_cache[(obj.font, obj.size)], + fill=obj.fill, + align=obj.align, + stroke_width=obj.stroke, + stroke_fill=obj.strokecolor, + ) + + if isinstance(obj, TlRect): + draw.rectangle( + (0, 0, obj.width, obj.height), + fill=obj.fill, + width=obj.stroke, + outline=obj.strokecolor, + ) + + if isinstance(obj, TlEllipse): + draw.ellipse( + (0, 0, obj.width, obj.height), + fill=obj.fill, + width=obj.stroke, + 
outline=obj.strokecolor, + ) + + # Do Anti-Aliasing + obj_img = obj_img.resize((obj_img.size[0] * 3, obj_img.size[1] * 3)) + obj_img = obj_img.resize( + (obj_img.size[0] // 3, obj_img.size[1] // 3), resample=Image.BICUBIC + ) + + obj_img = obj_img.rotate( + obj.rotate, expand=True, resample=Image.BICUBIC, fillcolor=(255, 255, 255, 0) + ) + obj_img = ImageChops.multiply( + obj_img, + Image.new("RGBA", obj_img.size, (255, 255, 255, int(obj.opacity * 255))), + ) + img.paste( + obj_img, + apply_anchor(obj.x, obj.y, obj_img.size[0], obj_img.size[1], obj.anchor), + obj_img, + ) + return frame.from_image(img) diff -Nru auto-editor-22w28a+ds/auto_editor/render/subtitle.py auto-editor-22w52a+ds/auto_editor/render/subtitle.py --- auto-editor-22w28a+ds/auto_editor/render/subtitle.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/subtitle.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,10 +1,13 @@ +from __future__ import annotations + import os import re from dataclasses import dataclass -from typing import List, Tuple +from fractions import Fraction from auto_editor.ffwrapper import FFmpeg from auto_editor.timeline import Timeline +from auto_editor.utils.chunks import Chunks from auto_editor.utils.func import to_timecode from auto_editor.utils.log import Log @@ -22,14 +25,13 @@ def __init__(self) -> None: self.supported_codecs = ("ass", "webvtt", "mov_text") - def parse(self, text, fps: float, codec: str) -> None: - + def parse(self, text: str, timebase: Fraction, codec: str) -> None: if codec not in self.supported_codecs: raise ValueError(f"codec {codec} not supported.") - self.fps = fps + self.timebase = timebase self.codec = codec - self.contents: List[SerialSub] = [] + self.contents: list[SerialSub] = [] if codec == "ass": time_code = re.compile(r"(.*)(\d+:\d+:[\d.]+)(.*)(\d+:\d+:[\d.]+)(.*)") @@ -60,7 +62,7 @@ else: self.footer = text[reg.span()[1] :] - def edit(self, chunks: List[Tuple[int, int, float]]) -> None: + def edit(self, chunks: 
Chunks) -> None: for cut in reversed(chunks): the_speed = cut[2] speed_factor = 1 if the_speed == 99999 else 1 - (1 / the_speed) @@ -95,8 +97,8 @@ file.write(self.header) for c in self.contents: file.write( - f"{c.before}{to_timecode(c.start / self.fps, self.codec)}" - f"{c.middle}{to_timecode(c.end / self.fps, self.codec)}" + f"{c.before}{to_timecode(c.start / self.timebase, self.codec)}" + f"{c.middle}{to_timecode(c.end / self.timebase, self.codec)}" f"{c.after}" ) file.write(self.footer) @@ -113,23 +115,17 @@ hours, minutes, seconds = nums.groups() seconds = seconds.replace(",", ".", 1) return round( - (int(hours) * 3600 + int(minutes) * 60 + float(seconds)) * self.fps + (int(hours) * 3600 + int(minutes) * 60 + float(seconds)) * self.timebase ) -def cut_subtitles( - ffmpeg: FFmpeg, - timeline: Timeline, - temp: str, - log: Log, -) -> None: - inp = timeline.inp - chunks = timeline.chunks - - for s, sub in enumerate(inp.subtitles): - if chunks is None: - log.error("Timeline too complex for subtitles") +def make_new_subtitles(tl: Timeline, ffmpeg: FFmpeg, temp: str, log: Log) -> list[str]: + if tl.chunks is None: + return [] + new_paths = [] + + for s, sub in enumerate(tl.sources["0"].subtitles): file_path = os.path.join(temp, f"{s}s.{sub.ext}") new_path = os.path.join(temp, f"new{s}s.{sub.ext}") @@ -137,12 +133,15 @@ if sub.codec in parser.supported_codecs: with open(file_path) as file: - parser.parse(file.read(), timeline.fps, sub.codec) + parser.parse(file.read(), tl.timebase, sub.codec) else: convert_path = os.path.join(temp, f"{s}s_convert.vtt") ffmpeg.run(["-i", file_path, convert_path]) with open(convert_path) as file: - parser.parse(file.read(), timeline.fps, "webvtt") + parser.parse(file.read(), tl.timebase, "webvtt") - parser.edit(chunks) + parser.edit(tl.chunks) parser.write(new_path) + new_paths.append(new_path) + + return new_paths diff -Nru auto-editor-22w28a+ds/auto_editor/render/tsm/analysis_synthesis.py 
auto-editor-22w52a+ds/auto_editor/render/tsm/analysis_synthesis.py --- auto-editor-22w28a+ds/auto_editor/render/tsm/analysis_synthesis.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/tsm/analysis_synthesis.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,293 +0,0 @@ -from typing import Tuple - -import numpy as np -from numpy.typing import NDArray - -from .array import ArrReader, ArrWriter -from .cbuffer import CBuffer -from .normalizebuffer import NormalizeBuffer - -EPSILON = 0.0001 - - -def find_peaks(amplitude: NDArray[np.float_]) -> NDArray[np.bool_]: - # To avoid overflows - padded = np.concatenate((-np.ones(2), amplitude, -np.ones(2))) - - # Shift the array by one/two values to the left/right - shifted_l2 = padded[:-4] - shifted_l1 = padded[1:-3] - shifted_r1 = padded[3:-1] - shifted_r2 = padded[4:] - - # Compare the original array with the shifted versions. - peaks = ( - (amplitude >= shifted_l2) - & (amplitude >= shifted_l1) - & (amplitude >= shifted_r1) - & (amplitude >= shifted_r2) - ) - - return peaks - - -def get_closest_peaks(peaks: NDArray[np.bool_]) -> NDArray[np.int_]: - """ - Returns an array containing the index of the closest peak of each index. 
- """ - closest_peak = np.empty_like(peaks, dtype=int) - previous = -1 - for i, is_peak in enumerate(peaks): - if is_peak: - if previous >= 0: - closest_peak[previous : (previous + i) // 2 + 1] = previous - closest_peak[(previous + i) // 2 + 1 : i] = i - else: - closest_peak[:i] = i - previous = i - closest_peak[previous:] = previous - - return closest_peak - - -class PhaseVocoderConverter: - def __init__( - self, channels: int, frame_length: int, analysis_hop: int, synthesis_hop: int - ) -> None: - self.channels = channels - self._frame_length = frame_length - self._synthesis_hop = synthesis_hop - self._analysis_hop = analysis_hop - - self._center_frequency = np.fft.rfftfreq(frame_length) * 2 * np.pi # type: ignore - fft_length = len(self._center_frequency) - - self._first = True - - self._previous_phase = np.empty((channels, fft_length)) - self._output_phase = np.empty((channels, fft_length)) - - # Buffer used to compute the phase increment and the instantaneous frequency - self._buffer = np.empty(fft_length) - - def clear(self) -> None: - self._first = True - - def convert_frame(self, frame: np.ndarray) -> np.ndarray: - for k in range(self.channels): - # Compute the FFT of the analysis frame - stft = np.fft.rfft(frame[k]) - amplitude = np.abs(stft) - - phase: NDArray[np.float_] - phase = np.angle(stft) # type: ignore - del stft - - peaks = find_peaks(amplitude) - closest_peak = get_closest_peaks(peaks) - - if self._first: - # Leave the first frame unchanged - self._output_phase[k, :] = phase - else: - # Compute the phase increment - self._buffer[peaks] = ( - phase[peaks] - - self._previous_phase[k, peaks] - - self._analysis_hop * self._center_frequency[peaks] - ) - - # Unwrap the phase increment - self._buffer[peaks] += np.pi - self._buffer[peaks] %= 2 * np.pi - self._buffer[peaks] -= np.pi - - # Compute the instantaneous frequency (in the same buffer, - # since the phase increment wont be required after that) - self._buffer[peaks] /= self._analysis_hop - 
self._buffer[peaks] += self._center_frequency[peaks] - - self._buffer[peaks] *= self._synthesis_hop - self._output_phase[k][peaks] += self._buffer[peaks] - - # Phase locking - self._output_phase[k] = ( - self._output_phase[k][closest_peak] + phase - phase[closest_peak] - ) - - # Compute the new stft - output_stft = amplitude * np.exp(1j * self._output_phase[k]) - - frame[k, :] = np.fft.irfft(output_stft).real - - # Save the phase for the next analysis frame - self._previous_phase[k, :] = phase - del phase - del amplitude - - self._first = False - - return frame - - -class AnalysisSynthesisTSM: - def run(self, reader: ArrReader, writer: ArrWriter, flush: bool = True) -> None: - finished = False - while not (finished and reader.empty): - self.read_from(reader) - _, finished = self.write_to(writer) - - if flush: - finished = False - while not finished: - _, finished = self.flush_to(writer) - - self.clear() - - def __init__( - self, - channels: int, - frame_length: int, - analysis_hop: int, - synthesis_hop: int, - analysis_window: np.ndarray, - synthesis_window: np.ndarray, - ) -> None: - - self._converter = PhaseVocoderConverter( - channels, frame_length, analysis_hop, synthesis_hop - ) - - self._channels = channels - self._frame_length = frame_length - self._analysis_hop = analysis_hop - self._synthesis_hop = synthesis_hop - - self._analysis_window = analysis_window - self._synthesis_window = synthesis_window - - # When the analysis hop is larger than the frame length, some samples - # from the input need to be skipped. 
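The phase-vocoder code being removed above wraps each phase increment into [-π, π) before converting it to an instantaneous frequency (the add-π, mod-2π, subtract-π sequence). The same wrap as a tiny scalar helper, for illustration only; the removed class applied it to whole numpy arrays of peak bins:

```python
import math

# Wrap any phase difference into [-pi, pi), exactly as the removed
# PhaseVocoderConverter's add/mod/subtract sequence does on arrays.
def wrap_phase(delta: float) -> float:
    return (delta + math.pi) % (2 * math.pi) - math.pi
```

A measured inter-frame jump of 3π is thus read as a jump of -π, which keeps the instantaneous-frequency estimate near the FFT bin's center frequency instead of drifting by whole cycles.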
- self._skip_input_samples = 0 - - # Used to start the output signal in the middle of a frame, which should - # be the peek of the window function - self._skip_output_samples = 0 - - self._normalize_window = self._analysis_window * self._synthesis_window - - # Initialize the buffers - self._in_buffer = CBuffer(self._channels, self._frame_length) - self._analysis_frame = np.empty((self._channels, self._frame_length)) - self._out_buffer = CBuffer(self._channels, self._frame_length) - self._normalize_buffer = NormalizeBuffer(self._frame_length) - - self.clear() - - def clear(self) -> None: - self._in_buffer.remove(self._in_buffer.length) - self._out_buffer.remove(self._out_buffer.length) - self._out_buffer.right_pad(self._frame_length) - self._normalize_buffer.remove(self._normalize_buffer.length) - - # Left pad the input with half a frame of zeros, and ignore that half - # frame in the output. This makes the output signal start in the middle - # of a frame, which should be the peak of the window function. - self._in_buffer.write(np.zeros((self._channels, self._frame_length // 2))) - self._skip_output_samples = self._frame_length // 2 - - self._converter.clear() - - def flush_to(self, writer: ArrWriter) -> Tuple[int, bool]: - if self._in_buffer.remaining_length == 0: - raise RuntimeError( - "There is still data to process in the input buffer, flush_to method " - "should only be called when write_to returns True." 
- ) - - n = self._out_buffer.write_to(writer) - if self._out_buffer.ready == 0: - # The output buffer is empty - self.clear() - return n, True - - return n, False - - def get_max_output_length(self, input_length: int) -> int: - input_length -= self._skip_input_samples - if input_length <= 0: - return 0 - - n_frames = input_length // self._analysis_hop + 1 - return n_frames * self._synthesis_hop - - def _process_frame(self) -> None: - """Read an analysis frame from the input buffer, process it, and write - the result to the output buffer.""" - # Generate the analysis frame and discard the input samples that will - # not be needed anymore - self._in_buffer.peek(self._analysis_frame) - self._in_buffer.remove(self._analysis_hop) - - for channel in self._analysis_frame: - channel *= self._analysis_window - - synthesis_frame = self._converter.convert_frame(self._analysis_frame) - - for channel in synthesis_frame: - channel *= self._synthesis_window - - # Overlap and add the synthesis frame in the output buffer - self._out_buffer.add(synthesis_frame) - - # The overlap and add step changes the volume of the signal. The - # normalize_buffer is used to keep track of "how much of the input - # signal was added" to each part of the output buffer, allowing to - # normalize it. 
- self._normalize_buffer.add(self._normalize_window) - - # Normalize the samples that are ready to be written to the output - normalize = self._normalize_buffer.to_array(end=self._synthesis_hop) - normalize[normalize < EPSILON] = 1 - self._out_buffer.divide(normalize) - self._out_buffer.set_ready(self._synthesis_hop) - self._normalize_buffer.remove(self._synthesis_hop) - - def read_from(self, reader: ArrReader) -> int: - n = reader.skip(self._skip_input_samples) - self._skip_input_samples -= n - if self._skip_input_samples > 0: - return n - - n += self._in_buffer.read_from(reader) - - if ( - self._in_buffer.remaining_length == 0 - and self._out_buffer.remaining_length >= self._synthesis_hop - ): - # The input buffer has enough data to process, and there is enough - # space in the output buffer to store the output - self._process_frame() - - # Skip output samples if necessary - skipped = self._out_buffer.remove(self._skip_output_samples) - self._out_buffer.right_pad(skipped) - self._skip_output_samples -= skipped - - # Set the number of input samples to be skipped - self._skip_input_samples = self._analysis_hop - self._frame_length - if self._skip_input_samples < 0: - self._skip_input_samples = 0 - - return n - - def write_to(self, writer: ArrWriter) -> Tuple[int, bool]: - n = self._out_buffer.write_to(writer) - self._out_buffer.right_pad(n) - - if self._in_buffer.remaining_length > 0 and self._out_buffer.ready == 0: - # There is not enough data to process in the input buffer, and the - # output buffer is empty - return n, True - - return n, False diff -Nru auto-editor-22w28a+ds/auto_editor/render/tsm/array.py auto-editor-22w52a+ds/auto_editor/render/tsm/array.py --- auto-editor-22w28a+ds/auto_editor/render/tsm/array.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/tsm/array.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,43 +0,0 @@ -import numpy as np -from numpy.typing import NDArray - - -class ArrReader: - __slots__ = ("samples", 
"pointer") - - def __init__(self, arr: np.ndarray) -> None: - self.samples = arr - self.pointer = 0 - - @property - def empty(self) -> bool: - return self.samples.shape[0] <= self.pointer - - def read(self, buffer: np.ndarray) -> int: - end = self.pointer + buffer.shape[1] - frames = self.samples[self.pointer : end].T.astype(np.float32) - n = frames.shape[1] - np.copyto(buffer[:, :n], frames) - del frames - self.pointer = end - return n - - def skip(self, n: int) -> int: - self.pointer += n - return n - - -class ArrWriter: - __slots__ = ("output", "pointer") - - def __init__(self, arr: NDArray[np.int16]) -> None: - self.output = arr - self.pointer = 0 - - def write(self, buffer: np.ndarray) -> int: - end = self.pointer + buffer.shape[1] - changed_buffer: NDArray[np.int16] = buffer.T.astype(np.int16) - self.output = np.concatenate((self.output, changed_buffer)) - self.pointer = end - - return buffer.shape[1] diff -Nru auto-editor-22w28a+ds/auto_editor/render/tsm/cbuffer.py auto-editor-22w52a+ds/auto_editor/render/tsm/cbuffer.py --- auto-editor-22w28a+ds/auto_editor/render/tsm/cbuffer.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/tsm/cbuffer.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,187 +0,0 @@ -import numpy as np - -from .array import ArrReader, ArrWriter - - -class CBuffer: - def __init__(self, channels: int, max_length: int) -> None: - self._data = np.zeros((channels, max_length), dtype=np.float32) - self._channels = channels - self._max_length = max_length - - self._offset = 0 - self._ready = 0 - self.length = 0 - - def add(self, buffer: np.ndarray) -> None: - """Adds a buffer element-wise to the CBuffer.""" - if buffer.shape[0] != self._data.shape[0]: - raise ValueError("the two buffers should have the same number of channels") - - n = buffer.shape[1] - if n > self.length: - raise ValueError("not enough space remaining in CBuffer") - - # Compute the slice of data where the values will be added - start = self._offset - end 
= self._offset + n - - if end <= self._max_length: - self._data[:, start:end] += buffer[:, :n] - else: - end -= self._max_length - self._data[:, start:] += buffer[:, : self._max_length - start] - self._data[:, :end] += buffer[:, self._max_length - start : n] - - def divide(self, array: np.ndarray) -> None: - n = len(array) - if n > self.length: - raise ValueError("not enough space remaining in the CBuffer") - - start = self._offset - end = self._offset + n - - if end <= self._max_length: - self._data[:, start:end] /= array[:n] - else: - end -= self._max_length - self._data[:, start:] /= array[: self._max_length - start] - self._data[:, :end] /= array[self._max_length - start : n] - - def peek(self, buffer: np.ndarray) -> int: - if buffer.shape[0] != self._data.shape[0]: - raise ValueError("the two buffers should have the same number of channels") - - n = min(buffer.shape[1], self._ready) - - start = self._offset - end = self._offset + n - - if end <= self._max_length: - np.copyto(buffer[:, :n], self._data[:, start:end]) - else: - end -= self._max_length - np.copyto(buffer[:, : self._max_length - start], self._data[:, start:]) - np.copyto(buffer[:, self._max_length - start : n], self._data[:, :end]) - - return n - - def read(self, buffer: np.ndarray) -> int: - n = self.peek(buffer) - self.remove(n) - return n - - def read_from(self, reader: ArrReader) -> int: - # Compute the slice of data that will be written to - start = (self._offset + self.length) % self._max_length - end = start + self._max_length - self.length - - if end <= self._max_length: - n = reader.read(self._data[:, start:end]) - else: - # There is not enough space to copy the whole buffer, it has to be - # split into two parts, one of which will be copied at the end of - # _data, and the other at the beginning. 
- end -= self._max_length - - n = reader.read(self._data[:, start:]) - n += reader.read(self._data[:, :end]) - - self.length += n - self._ready = self.length - return n - - @property - def ready(self): - return self._ready - - @property - def remaining_length(self): - return self._max_length - self._ready - - def remove(self, n: int) -> int: - """ - Removes the first n samples of the CBuffer, preventing - them to be read again, and leaving more space for new samples to be - written. - """ - if n >= self.length: - n = self.length - - # Compute the slice of data that will be reset to 0 - start = self._offset - end = self._offset + n - - if end <= self._max_length: - self._data[:, start:end] = 0 - else: - end -= self._max_length - self._data[:, start:] = 0 - self._data[:, :end] = 0 - - self._offset += n - self._offset %= self._max_length - self.length -= n - - self._ready -= n - if self._ready < 0: - self._ready = 0 - - return n - - def right_pad(self, n: int) -> None: - if n > self._max_length - self.length: - raise ValueError("not enough space remaining in CBuffer") - - self.length += n - - def set_ready(self, n: int) -> None: - """Mark the next n samples as ready to be read.""" - if self._ready + n > self.length: - raise ValueError("not enough samples to be marked as ready") - - self._ready += n - - def to_array(self): - out = np.empty((self._channels, self._ready)) - self.peek(out) - return out - - def write(self, buffer: np.ndarray) -> int: - if buffer.shape[0] != self._data.shape[0]: - raise ValueError("the two buffers should have the same number of channels") - - n = min(buffer.shape[1], self._max_length - self.length) - - # Compute the slice of data that will be written to - start = (self._offset + self.length) % self._max_length - end = start + n - - if end <= self._max_length: - np.copyto(self._data[:, start:end], buffer[:, :n]) - else: - # There is not enough space to copy the whole buffer, it has to be - # split into two parts, one of which will be copied 
at the end of - # _data, and the other at the beginning. - end -= self._max_length - - np.copyto(self._data[:, start:], buffer[:, : self._max_length - start]) - np.copyto(self._data[:, :end], buffer[:, self._max_length - start : n]) - - self.length += n - self._ready = self.length - return n - - def write_to(self, writer: ArrWriter) -> int: - start = self._offset - end = self._offset + self._ready - - if end <= self._max_length: - n = writer.write(self._data[:, start:end]) - else: - end -= self._max_length - n = writer.write(self._data[:, start:]) - n += writer.write(self._data[:, :end]) - - self.remove(n) - return n diff -Nru auto-editor-22w28a+ds/auto_editor/render/tsm/normalizebuffer.py auto-editor-22w52a+ds/auto_editor/render/tsm/normalizebuffer.py --- auto-editor-22w28a+ds/auto_editor/render/tsm/normalizebuffer.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/tsm/normalizebuffer.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,66 +0,0 @@ -from typing import Optional - -import numpy as np - -# A NormalizeBuffer is a mono-channel circular buffer, used to normalize audio buffers. - - -class NormalizeBuffer: - def __init__(self, length: int) -> None: - self._data = np.zeros(length) - self._offset = 0 - self.length = length - - def add(self, window: np.ndarray) -> None: - # Adds a window element-wise to the NormalizeBuffer. 
- n = len(window) - if n > self.length: - raise ValueError("the window should be smaller than the NormalizeBuffer") - - # Compute the slice of data where the values will be added - start = self._offset - end = self._offset + n - - if end <= self.length: - self._data[start:end] += window - else: - end -= self.length - self._data[start:] += window[: self.length - start] - self._data[:end] += window[self.length - start :] - - def remove(self, n: int) -> None: - if n >= self.length: - n = self.length - if n == 0: - return - - # Compute the slice of data to reset - start = self._offset - end = self._offset + n - - if end <= self.length: - self._data[start:end] = 0 - else: - end -= self.length - self._data[start:] = 0 - self._data[:end] = 0 - - self._offset += n - self._offset %= self.length - - def to_array(self, start: int = 0, end: Optional[int] = None) -> np.ndarray: - if end is None: - end = self.length - - start += self._offset - end += self._offset - - if end <= self.length: - return np.copy(self._data[start:end]) - - end -= self.length - if start < self.length: - return np.concatenate((self._data[start:], self._data[:end])) - - start -= self.length - return np.copy(self._data[start:end]) diff -Nru auto-editor-22w28a+ds/auto_editor/render/tsm/phasevocoder.py auto-editor-22w52a+ds/auto_editor/render/tsm/phasevocoder.py --- auto-editor-22w28a+ds/auto_editor/render/tsm/phasevocoder.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/tsm/phasevocoder.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,37 +0,0 @@ -import numpy as np -from numpy.typing import NDArray - -from .analysis_synthesis import AnalysisSynthesisTSM -from .array import ArrReader, ArrWriter - - -def hanning(length: int) -> np.ndarray: - time = np.arange(length) - return 0.5 * (1 - np.cos(2 * np.pi * time / length)) - - -def phasevocoder( - channels: int, speed: float, arr: np.ndarray, frame_length: int = 2048 -) -> NDArray[np.int16]: - - # Frame length should be a power of 
two for maximum performance. - - synthesis_hop = frame_length // 4 - analysis_hop = int(synthesis_hop * speed) - - analysis_window = hanning(frame_length) - synthesis_window = hanning(frame_length) - - writer = ArrWriter(np.zeros((0, channels), dtype=np.int16)) - reader = ArrReader(arr) - - AnalysisSynthesisTSM( - channels, - frame_length, - analysis_hop, - synthesis_hop, - analysis_window, - synthesis_window, - ).run(reader, writer) - - return writer.output diff -Nru auto-editor-22w28a+ds/auto_editor/render/video.py auto-editor-22w52a+ds/auto_editor/render/video.py --- auto-editor-22w28a+ds/auto_editor/render/video.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/render/video.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,21 +1,28 @@ +from __future__ import annotations + import os.path from dataclasses import dataclass from math import ceil from subprocess import DEVNULL, PIPE -from typing import Dict, List, Tuple, Union +from typing import TYPE_CHECKING import av -from PIL import Image, ImageChops, ImageDraw, ImageFont, ImageOps +from PIL import Image, ImageOps -from auto_editor.ffwrapper import FFmpeg -from auto_editor.objects import EllipseObj, ImageObj, RectangleObj, TextObj, VideoObj from auto_editor.output import video_quality -from auto_editor.timeline import Timeline, Visual -from auto_editor.utils.container import Container +from auto_editor.render.image import make_caches, render_image +from auto_editor.timeline import TlVideo, Visual from auto_editor.utils.encoder import encoders -from auto_editor.utils.log import Log -from auto_editor.utils.progressbar import ProgressBar -from auto_editor.utils.types import Args + +if TYPE_CHECKING: + from typing import Any + + from auto_editor.ffwrapper import FFmpeg, FileInfo + from auto_editor.timeline import Timeline + from auto_editor.utils.bar import Bar + from auto_editor.utils.container import Container + from auto_editor.utils.log import Log + from auto_editor.utils.types import Args 
av.logging.set_level(av.logging.PANIC) @@ -23,7 +30,7 @@ @dataclass class VideoFrame: index: int - src: int + src: str # From: github.com/PyAV-Org/PyAV/blob/main/av/video/frame.pyx @@ -50,125 +57,64 @@ } -def apply_anchor( - x: int, y: int, width: int, height: int, anchor: str -) -> Tuple[int, int]: - if anchor == "ce": - x = int((x * 2 - width) / 2) - y = int((y * 2 - height) / 2) - if anchor == "tr": - x -= width - if anchor == "bl": - y -= height - if anchor == "br": - x -= width - y -= height - # Pillow uses 'tl' by default - return x, y - - -def one_pos_two_pos( - x: int, y: int, width: int, height: int, anchor: str -) -> Tuple[int, int, int, int]: - """Convert: x, y, width, height -> x1, y1, x2, y2""" - - if anchor == "ce": - x1 = x - int(width / 2) - x2 = x + int(width / 2) - y1 = y - int(height / 2) - y2 = y + int(height / 2) - - return x1, y1, x2, y2 - - if anchor in ("tr", "br"): - x1 = x - width - x2 = x - else: - x1 = x - x2 = x + width - - if anchor in ("tl", "tr"): - y1 = y - y2 = y + height - else: - y1 = y - y2 = y - height - - return x1, y1, x2, y2 - - def render_av( ffmpeg: FFmpeg, timeline: Timeline, args: Args, - progress: ProgressBar, + bar: Bar, ctr: Container, temp: str, log: Log, -) -> Tuple[str, bool]: +) -> tuple[str, bool]: - FontCache = Dict[ - str, Tuple[Union[ImageFont.FreeTypeFont, ImageFont.ImageFont], float] - ] + if not timeline.sources: + if "0" in timeline.sources: + src: FileInfo | None = timeline.sources["0"] + else: + src = next(iter(timeline.sources.items()))[1] + else: + src = None + + font_cache, img_cache = make_caches(timeline.v, timeline.sources, log) + + cns: dict[str, Any] = {} + decoders: dict[str, Any] = {} + seek_cost: dict[str, int] = {} + tous: dict[str, int] = {} + + target_pix_fmt = "yuv420p" # Reasonable default + + for key, src in timeline.sources.items(): + cns[key] = av.open(f"{src.path}") - font_cache: FontCache = {} - img_cache: Dict[str, Image.Image] = {} - for layer in timeline.v: - for vobj in layer: - 
if isinstance(vobj, TextObj) and (vobj.font, vobj.size) not in font_cache: - try: - if vobj.font == "default": - font_cache[(vobj.font, vobj.size)] = ImageFont.load_default() - else: - font_cache[(vobj.font, vobj.size)] = ImageFont.truetype( - vobj.font, vobj.size - ) - except OSError: - log.error(f"Font '{vobj.font}' not found.") - - if isinstance(vobj, ImageObj) and vobj.src not in img_cache: - source = Image.open(vobj.src) - source = source.convert("RGBA") - source = source.rotate(vobj.rotate, expand=True) - source = ImageChops.multiply( - source, - Image.new( - "RGBA", source.size, (255, 255, 255, int(vobj.opacity * 255)) - ), - ) - img_cache[vobj.src] = source - - inp = timeline.inp - cns = [av.open(inp.path, "r") for inp in timeline.inputs] - - decoders = [] - tous = [] - pix_fmts = [] - seek_cost = [] - for cn in cns: - if len(cn.streams.video) == 0: - decoders.append(None) - tous.append(None) - pix_fmts.append(None) - seek_cost.append(None) + if len(cns[key].streams.video) == 0: + decoders[key] = None + tous[key] = 0 + seek_cost[key] = 0 else: - stream = cn.streams.video[0] + stream = cns[key].streams.video[0] stream.thread_type = "AUTO" - # Keyframes are usually spread out every 5 seconds or less. - seek_cost.append( - 4294967295 - if args.no_seek - else int(cn.streams.video[0].average_rate * 5) - ) - tous.append(int(stream.time_base.denominator / stream.average_rate)) - pix_fmts.append(stream.pix_fmt) - decoders.append(cn.decode(stream)) + if args.no_seek or stream.average_rate is None: + sc_val = 4294967295 # 2 ** 32 - 1 + tou = 0 + else: + # Keyframes are usually spread out every 5 seconds or less. 
+ sc_val = int(stream.average_rate * 5) + tou = int(stream.time_base.denominator / stream.average_rate) + + seek_cost[key] = sc_val + tous[key] = tou + decoders[key] = cns[key].decode(stream) + + if key == "0": + target_pix_fmt = stream.pix_fmt log.debug(f"Tous: {tous}") log.debug(f"Clips: {timeline.v}") - target_pix_fmt = pix_fmts[0] if pix_fmts[0] in allowed_pix_fmt else "yuv420p" + target_pix_fmt = target_pix_fmt if target_pix_fmt in allowed_pix_fmt else "yuv420p" + log.debug(f"Target pix_fmt: {target_pix_fmt}") apply_video_later = True @@ -197,7 +143,7 @@ "-s", f"{width}*{height}", "-framerate", - f"{timeline.fps}", + f"{timeline.timebase}", "-i", "-", "-pix_fmt", @@ -205,9 +151,15 @@ ] if apply_video_later: - cmd.extend(["-c:v", "mpeg4", "-qscale:v", "1"]) + cmd += ["-c:v", "mpeg4", "-qscale:v", "1"] else: - cmd = video_quality(cmd, args, inp, ctr) + cmd += video_quality(args, ctr) + + # Setting SAR requires re-encoding so we do it here. + if src is not None and src.videos: + if (sar := src.videos[0].sar) is not None: + cmd.extend(["-vf", f"setsar={sar.replace(':', '/')}"]) + cmd.append(spedup) process2 = ffmpeg.Popen(cmd, stdin=PIPE, stdout=DEVNULL, stderr=DEVNULL) @@ -219,19 +171,20 @@ seek_frame = None frames_saved = 0 - progress.start(timeline.end, "Creating new video") + bar.start(timeline.end, "Creating new video") null_img = Image.new("RGB", (width, height), args.background) null_frame = av.VideoFrame.from_image(null_img).reformat(format=target_pix_fmt) frame_index = -1 + frame = null_frame try: for index in range(timeline.end): # Add objects to obj_list - obj_list: List[Union[VideoFrame, Visual]] = [] + obj_list: list[VideoFrame | Visual] = [] for layer in timeline.v: for lobj in layer: - if isinstance(lobj, VideoObj): + if isinstance(lobj, TlVideo): if index >= lobj.start and index < lobj.start + ceil( lobj.dur / lobj.speed ): @@ -246,15 +199,16 @@ obj_list.append(lobj) # Render obj_list - frame = null_frame for obj in obj_list: if isinstance(obj, 
VideoFrame): + my_stream = cns[obj.src].streams.video[0] if frame_index > obj.index: log.debug(f"Seek: {frame_index} -> 0") cns[obj.src].seek(0) try: + assert decoders[obj.src] is not None frame = next(decoders[obj.src]) - frame_index = round(frame.time * timeline.fps) + frame_index = round(frame.time * timeline.timebase) except StopIteration: pass @@ -269,12 +223,12 @@ log.debug(f"Seek: {frame_index} -> {obj.index}") cns[obj.src].seek( obj.index * tous[obj.src], - stream=cns[obj.src].streams.video[0], + stream=my_stream, ) try: frame = next(decoders[obj.src]) - frame_index = round(frame.time * timeline.fps) + frame_index = round(frame.time * timeline.timebase) except StopIteration: log.debug(f"No source frame at {index=}. Using null frame") frame = null_frame @@ -297,65 +251,25 @@ img = ImageOps.pad(img, (width, height), color=args.background) frame = frame.from_image(img).reformat(format=target_pix_fmt) - # Render visual objects else: - img = frame.to_image().convert("RGBA") - obj_img = Image.new("RGBA", img.size, (255, 255, 255, 0)) - draw = ImageDraw.Draw(obj_img) - - if isinstance(obj, TextObj): - text_w, text_h = draw.textsize( - obj.content, font=font_cache[(obj.font, obj.size)] - ) - pos = apply_anchor(obj.x, obj.y, text_w, text_h, "ce") - draw.text( - pos, - obj.content, - font=font_cache[(obj.font, obj.size)], - fill=obj.fill, - align=obj.align, - stroke_width=obj.stroke, - stroke_fill=obj.strokecolor, - ) - - if isinstance(obj, RectangleObj): - draw.rectangle( - one_pos_two_pos( - obj.x, obj.y, obj.width, obj.height, obj.anchor - ), - fill=obj.fill, - width=obj.stroke, - outline=obj.strokecolor, - ) - - if isinstance(obj, EllipseObj): - draw.ellipse( - one_pos_two_pos( - obj.x, obj.y, obj.width, obj.height, obj.anchor - ), - fill=obj.fill, - width=obj.stroke, - outline=obj.strokecolor, - ) - - if isinstance(obj, ImageObj): - img_w, img_h = img_cache[obj.src].size - pos = apply_anchor(obj.x, obj.y, img_w, img_h, obj.anchor) - 
obj_img.paste(img_cache[obj.src], pos) + frame = render_image(frame, obj, font_cache, img_cache) - img = Image.alpha_composite(img, obj_img) - frame = frame.from_image(img).reformat(format=target_pix_fmt) + if frame.format.name != target_pix_fmt: + frame = frame.reformat(format=target_pix_fmt) + bar.tick(index) + elif index % 3 == 0: + bar.tick(index) - process2.stdin.write(frame.to_ndarray().tobytes()) + # if frame == null_frame: + # raise ValueError("no null frame allowed") - if index % 3 == 0: - progress.tick(index) + process2.stdin.write(frame.to_ndarray().tobytes()) - progress.end() + bar.end() process2.stdin.close() process2.wait() except (OSError, BrokenPipeError): - progress.end() + bar.end() ffmpeg.run_check_errors(cmd, log, True) log.error("FFmpeg Error!") diff -Nru auto-editor-22w28a+ds/auto_editor/subcommands/desc.py auto-editor-22w52a+ds/auto_editor/subcommands/desc.py --- auto-editor-22w28a+ds/auto_editor/subcommands/desc.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/subcommands/desc.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,6 +1,7 @@ +from __future__ import annotations + import sys from dataclasses import dataclass, field -from typing import List, Optional from auto_editor.ffwrapper import FFmpeg, FileInfo from auto_editor.utils.log import Log @@ -9,23 +10,23 @@ @dataclass class DescArgs: - ffmpeg_location: Optional[str] = None + ffmpeg_location: str | None = None help: bool = False - input: List[str] = field(default_factory=list) + input: list[str] = field(default_factory=list) def desc_options(parser: ArgumentParser) -> ArgumentParser: - parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file.") - parser.add_required("input", nargs="*", help="Path to file(s)") + parser.add_required("input", nargs="*") + parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file") return parser -def main(sys_args=sys.argv[1:]) -> None: +def main(sys_args: list[str] = sys.argv[1:]) -> 
None: args = desc_options(ArgumentParser("desc")).parse_args(DescArgs, sys_args) - for input_file in args.input: - inp = FileInfo(input_file, FFmpeg(args.ffmpeg_location, debug=False), Log()) - if inp.description is not None: - sys.stdout.write(f"\n{inp.description}\n\n") + for path in args.input: + src = FileInfo(path, FFmpeg(args.ffmpeg_location, debug=False), Log()) + if src.description is not None: + sys.stdout.write(f"\n{src.description}\n\n") else: sys.stdout.write("\nNo description.\n\n") diff -Nru auto-editor-22w28a+ds/auto_editor/subcommands/grep.py auto-editor-22w52a+ds/auto_editor/subcommands/grep.py --- auto-editor-22w28a+ds/auto_editor/subcommands/grep.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/subcommands/grep.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,9 +1,10 @@ +from __future__ import annotations + import os import re import sys import tempfile from dataclasses import dataclass, field -from typing import List, Optional, Type from auto_editor.ffwrapper import FFmpeg from auto_editor.utils.log import Log @@ -13,51 +14,49 @@ @dataclass class GrepArgs: no_filename: bool = False - max_count: Optional[int] = None + max_count: int | None = None count: bool = False ignore_case: bool = False timecode: bool = False time: bool = False - ffmpeg_location: Optional[str] = None + ffmpeg_location: str | None = None my_ffmpeg: bool = False help: bool = False - input: List[str] = field(default_factory=list) + input: list[str] = field(default_factory=list) def grep_options(parser: ArgumentParser) -> ArgumentParser: + parser.add_required("input", nargs="*", metavar="pattern [file ...]") parser.add_argument( - "--no-filename", flag=True, help="Never print filenames with output lines." 
+ "--no-filename", flag=True, help="Never print filenames with output lines" ) parser.add_argument( "--max-count", "-m", type=int, - help="Stop reading a file after NUM matching lines.", + help="Stop reading a file after NUM matching lines", ) parser.add_argument( "--count", "-c", flag=True, - help="Suppress normal output; instead print count of matching lines for each file.", + help="Suppress normal output; instead print count of matching lines for each file", ) parser.add_argument( "--ignore-case", "-i", flag=True, - help="Ignore case distinctions for the PATTERN.", + help="Ignore case distinctions for the PATTERN", ) - parser.add_argument("--timecode", flag=True, help="Print the match's timecode.") + parser.add_argument("--timecode", flag=True, help="Print the match's timecode") parser.add_argument( - "--time", flag=True, help="Print when the match happens. (Ignore ending)." + "--time", flag=True, help="Print when the match happens. (Ignore ending)" ) - parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file.") + parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file") parser.add_argument( "--my-ffmpeg", flag=True, - help="Use the ffmpeg on your PATH instead of the one packaged.", - ) - parser.add_required( - "input", nargs="*", help="The path to a file you want inspected." 
+ help="Use the ffmpeg on your PATH instead of the one packaged", ) return parser @@ -73,7 +72,7 @@ media_file: str, add_prefix: bool, ffmpeg: FFmpeg, - args: Type[GrepArgs], + args: GrepArgs, log: Log, TEMP: str, ) -> None: @@ -84,21 +83,24 @@ (hh:mm:ss.sss) instead of (dd:hh:mm:ss,sss) """ + try: + flags = re.IGNORECASE if args.ignore_case else 0 + pattern = re.compile(args.input[0], flags) + except re.error as e: + log.error(e) + out_file = os.path.join(TEMP, "media.vtt") ffmpeg.run(["-i", media_file, out_file]) count = 0 - flags = 0 - if args.ignore_case: - flags = re.IGNORECASE - prefix = "" if add_prefix: prefix = f"{os.path.splitext(os.path.basename(media_file))[0]}:" timecode = "" line_number = -1 + with open(out_file) as file: while True: line = file.readline() @@ -120,22 +122,20 @@ continue line = cleanhtml(line) - match = re.search(args.input[0], line, flags) - line = line.strip() - if match: + if re.search(pattern, line): count += 1 if not args.count: if args.timecode or args.time: - print(prefix + timecode + line) + print(prefix + timecode + line.strip()) else: - print(prefix + line) + print(prefix + line.strip()) if args.count: print(prefix + str(count)) -def main(sys_args=sys.argv[1:]) -> None: +def main(sys_args: list[str] = sys.argv[1:]) -> None: args = grep_options(ArgumentParser("grep")).parse_args(GrepArgs, sys_args) ffmpeg = FFmpeg(args.ffmpeg_location, args.my_ffmpeg, debug=False) @@ -161,7 +161,7 @@ elif os.path.isfile(path): grep_file(path, add_prefix, ffmpeg, args, log, TEMP) else: - log.error(f"{path}: File does not exist.") + log.nofile(path) log.cleanup() diff -Nru auto-editor-22w28a+ds/auto_editor/subcommands/info.py auto-editor-22w52a+ds/auto_editor/subcommands/info.py --- auto-editor-22w28a+ds/auto_editor/subcommands/info.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/subcommands/info.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,8 +1,10 @@ +from __future__ import annotations + import json import os.path 
import sys from dataclasses import dataclass, field -from typing import Any, Dict, List, Literal, Optional, TypedDict +from typing import Any, Literal, TypedDict from auto_editor.ffwrapper import FFmpeg, FileInfo from auto_editor.utils.func import aspect_ratio @@ -14,108 +16,133 @@ class InfoArgs: json: bool = False include_vfr: bool = False - ffmpeg_location: Optional[str] = None + ffmpeg_location: str | None = None my_ffmpeg: bool = False help: bool = False - input: List[str] = field(default_factory=list) + input: list[str] = field(default_factory=list) def info_options(parser: ArgumentParser) -> ArgumentParser: - parser.add_argument("--json", flag=True, help="Export info in JSON format.") + parser.add_required("input", nargs="*") + parser.add_argument("--json", flag=True, help="Export info in JSON format") parser.add_argument( "--include-vfr", "--has-vfr", flag=True, - help="Display the number of Variable Frame Rate (VFR) frames.", + help="Display the number of Variable Frame Rate (VFR) frames", ) - parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file.") + parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file") parser.add_argument( "--my-ffmpeg", flag=True, - help="Use the ffmpeg on your PATH instead of the one packaged.", - ) - parser.add_required( - "input", nargs="*", help="The path to a file you want inspected." 
+ help="Use the ffmpeg on your PATH instead of the one packaged", ) return parser class VideoJson(TypedDict): codec: str - fps: float - resolution: List[int] - aspect_ratio: List[int] + fps: str + resolution: list[int] + aspect_ratio: list[int] + pixel_aspect_ratio: str | None + duration: str | None pix_fmt: str - color_range: Optional[str] - color_space: Optional[str] - color_primaries: Optional[str] - color_transfer: Optional[str] + color_range: str | None + color_space: str | None + color_primaries: str | None + color_transfer: str | None timebase: str - bitrate: Optional[str] - lang: Optional[str] + bitrate: str | None + lang: str | None class AudioJson(TypedDict): codec: str samplerate: int - bitrate: Optional[str] - lang: Optional[str] + duration: str | None + bitrate: str | None + lang: str | None class SubtitleJson(TypedDict): codec: str - lang: Optional[str] + lang: str | None class ContainerJson(TypedDict): - bitrate: Optional[str] - fps_mode: Optional[str] + duration: str + bitrate: str | None + fps_mode: str | None class MediaJson(TypedDict, total=False): - video: List[VideoJson] - audio: List[AudioJson] - subtitle: List[SubtitleJson] + video: list[VideoJson] + audio: list[AudioJson] + subtitle: list[SubtitleJson] container: ContainerJson - media: Literal["invalid"] + type: Literal["media", "timeline", "unknown"] + version: Literal["v1", "v2"] + clips: int -def main(sys_args=sys.argv[1:]) -> None: +def main(sys_args: list[str] = sys.argv[1:]) -> None: args = info_options(ArgumentParser("info")).parse_args(InfoArgs, sys_args) ffmpeg = FFmpeg(args.ffmpeg_location, args.my_ffmpeg, False) log = Log(quiet=not args.json) - file_info: Dict[str, MediaJson] = {} + file_info: dict[str, MediaJson] = {} for file in args.input: if not os.path.isfile(file): - Log().error(f"Could not find file: {file}") + log.nofile(file) + + ext = os.path.splitext(file)[1] + if ext == ".json": + from auto_editor.formats.json import read_json + + tl = read_json(file, ffmpeg, log) + 
file_info[file] = {"type": "timeline"} + file_info[file]["version"] = "v2" if tl.chunks is None else "v1" - inp = FileInfo(file, ffmpeg, log) + clip_lens = [clip.dur / clip.speed for clip in tl.a[0]] + file_info[file]["clips"] = len(clip_lens) - if len(inp.videos) + len(inp.audios) + len(inp.subtitles) == 0: - file_info[file] = {"media": "invalid"} + continue + + if ext in (".xml", ".fcpxml", ".mlt"): + file_info[file] = {"type": "timeline"} + continue + + src = FileInfo(file, ffmpeg, log) + + if len(src.videos) + len(src.audios) + len(src.subtitles) == 0: + file_info[file] = {"type": "unknown"} continue file_info[file] = { + "type": "media", "video": [], "audio": [], "subtitle": [], - "container": {"bitrate": inp.bitrate, "fps_mode": None}, + "container": { + "duration": src.duration, + "bitrate": src.bitrate, + "fps_mode": None, + }, } - for track, v in enumerate(inp.videos): + for track, v in enumerate(src.videos): w, h = v.width, v.height - fps = v.fps - if fps is not None and int(fps) == float(fps): - fps = int(fps) vid: VideoJson = { "codec": v.codec, - "fps": fps, + "fps": str(v.fps), "resolution": [w, h], "aspect_ratio": list(aspect_ratio(w, h)), + "pixel_aspect_ratio": v.sar, + "duration": v.duration, "pix_fmt": v.pix_fmt, "color_range": v.color_range, "color_space": v.color_space, @@ -127,16 +154,17 @@ } file_info[file]["video"].append(vid) - for track, a in enumerate(inp.audios): + for track, a in enumerate(src.audios): aud: AudioJson = { "codec": a.codec, "samplerate": a.samplerate, + "duration": a.duration, "bitrate": a.bitrate, "lang": a.lang, } file_info[file]["audio"].append(aud) - for track, s_stream in enumerate(inp.subtitles): + for track, s_stream in enumerate(src.subtitles): sub: SubtitleJson = {"codec": s_stream.codec, "lang": s_stream.lang} file_info[file]["subtitle"].append(sub) @@ -163,7 +191,7 @@ print(json.dumps(file_info, indent=4)) return - def stream_to_text(text: str, label: str, streams: List[Dict[str, Any]]) -> str: + def 
stream_to_text(text: str, label: str, streams: list[dict[str, Any]]) -> str: if len(streams) > 0: text += f" - {label}:\n" @@ -182,19 +210,18 @@ text = "" for name, info in file_info.items(): text += f"{name}:\n" - if "media" in info: - text += " - invalid media\n\n" - continue for label, streams in info.items(): - if isinstance(streams, dict): + if isinstance(streams, list): + text = stream_to_text(text, label, streams) + continue + elif isinstance(streams, dict): text += " - container:\n" for key, value in streams.items(): if value is not None: text += f" - {key}: {value}\n" - else: - assert isinstance(streams, list) - text = stream_to_text(text, label, streams) + elif label != "type" or streams != "media": + text += f" - {label}: {streams}\n" text += "\n" sys.stdout.write(text) diff -Nru auto-editor-22w28a+ds/auto_editor/subcommands/levels.py auto-editor-22w52a+ds/auto_editor/subcommands/levels.py --- auto-editor-22w28a+ds/auto_editor/subcommands/levels.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/subcommands/levels.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,112 +1,133 @@ -import os +from __future__ import annotations + import sys -import tempfile from dataclasses import dataclass, field -from typing import List, Optional +from fractions import Fraction import numpy as np from numpy.typing import NDArray +from auto_editor.analyze import audio_levels, motion_levels, pixeldiff_levels from auto_editor.ffwrapper import FFmpeg, FileInfo +from auto_editor.objs.edit import audio_builder, motion_builder, pixeldiff_builder +from auto_editor.objs.util import _Vars, parse_dataclass +from auto_editor.output import Ensure +from auto_editor.utils.bar import Bar +from auto_editor.utils.func import setup_tempdir from auto_editor.utils.log import Log -from auto_editor.utils.progressbar import ProgressBar +from auto_editor.utils.types import frame_rate from auto_editor.vanparse import ArgumentParser @dataclass +class Audio: + stream: int + + 
+@dataclass +class Motion: + stream: int + blur: int + width: int + + +@dataclass +class Pixeldiff: + stream: int + + +@dataclass class LevelArgs: - kind: str = "audio" - track: int = 0 - ffmpeg_location: Optional[str] = None + input: list[str] = field(default_factory=list) + edit: str = "audio" + timebase: Fraction | None = None + ffmpeg_location: str | None = None my_ffmpeg: bool = False help: bool = False - input: List[str] = field(default_factory=list) def levels_options(parser: ArgumentParser) -> ArgumentParser: + parser.add_required("input", nargs="*") parser.add_argument( - "--kind", - choices=["audio", "motion", "pixeldiff"], - help="Select the kind of detection to analyze.", + "--edit", + metavar="METHOD:[ATTRS?]", + help="Select the kind of detection to analyze with attributes", ) parser.add_argument( - "--track", - type=int, - help="Select the track to get. If `--kind` is set to motion, track will look " - "at video tracks instead of audio.", + "--timebase", + "-tb", + metavar="NUM", + type=frame_rate, + help="Set custom timebase", ) - parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file.") + parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file") parser.add_argument( "--my-ffmpeg", flag=True, - help="Use the ffmpeg on your PATH instead of the one packaged.", - ) - parser.add_required( - "input", nargs="*", help="Path to the file to have its levels dumped." 
+ help="Use the ffmpeg on your PATH instead of the one packaged", ) return parser -def print_float_list(arr: NDArray[np.float_]) -> None: +def print_floats(arr: NDArray[np.float_]) -> None: for a in arr: sys.stdout.write(f"{a:.20f}\n") -def print_int_list(arr: NDArray[np.uint64]) -> None: +def print_ints(arr: NDArray[np.uint64]) -> None: for a in arr: sys.stdout.write(f"{a}\n") -def main(sys_args=sys.argv[1:]) -> None: +def main(sys_args: list[str] = sys.argv[1:]) -> None: parser = levels_options(ArgumentParser("levels")) args = parser.parse_args(LevelArgs, sys_args) ffmpeg = FFmpeg(args.ffmpeg_location, args.my_ffmpeg, False) - progress = ProgressBar("none") - temp = tempfile.mkdtemp() + bar = Bar("none") + temp = setup_tempdir(None, Log()) log = Log(temp=temp) - inp = FileInfo(args.input[0], ffmpeg, log) - fps = inp.get_fps() - - if args.kind == "audio": - from auto_editor.analyze.audio import audio_detection - from auto_editor.wavfile import read - - if args.track >= len(inp.audios): - log.error(f"Audio track '{args.track}' does not exist.") - - read_track = os.path.join(temp, f"{args.track}.wav") - - ffmpeg.run( - ["-i", inp.path, "-ac", "2", "-map", f"0:a:{args.track}", read_track] - ) - - if not os.path.isfile(read_track): - log.error("Audio track file not found!") - - sample_rate, audio_samples = read(read_track) - - print_float_list( - audio_detection(audio_samples, sample_rate, fps, progress, log) - ) - - if args.kind == "motion": - if args.track >= len(inp.videos): - log.error(f"Video track '{args.track}' does not exist.") - - from auto_editor.analyze.motion import motion_detection - - print_float_list(motion_detection(inp.path, fps, progress, width=400, blur=9)) - - if args.kind == "pixeldiff": - if args.track >= len(inp.videos): - log.error(f"Video track '{args.track}' does not exist.") - - from auto_editor.analyze.pixeldiff import pixel_difference - - print_int_list(pixel_difference(inp.path, fps, progress)) + sources = {} + for i, path in 
enumerate(args.input): + sources[str(i)] = FileInfo(path, ffmpeg, log, str(i)) + + assert "0" in sources + src = sources["0"] + + tb = src.get_fps() if args.timebase is None else args.timebase + ensure = Ensure(ffmpeg, src.get_samplerate(), temp, log) + + strict = True + METHODS = ("audio", "motion", "pixeldiff") + + if ":" in args.edit: + method, attrs = args.edit.split(":") + else: + method, attrs = args.edit, "" + + if method not in METHODS: + log.error(f"Method: {method} not supported") + + for src in sources.values(): + if method == "audio": + aobj = parse_dataclass(attrs, (Audio, audio_builder[1:]), log) + print_floats( + audio_levels(ensure, src, aobj.stream, tb, bar, strict, temp, log) + ) + + if method == "motion": + if src.videos: + _vars: _Vars = {"width": src.videos[0].width} + else: + _vars = {"width": 1} + mobj = parse_dataclass(attrs, (Motion, motion_builder[1:]), log, _vars) + print_floats(motion_levels(ensure, src, mobj, tb, bar, strict, temp, log)) + + if method == "pixeldiff": + pobj = parse_dataclass(attrs, (Pixeldiff, pixeldiff_builder[1:]), log) + print_ints(pixeldiff_levels(ensure, src, pobj, tb, bar, strict, temp, log)) log.cleanup() diff -Nru auto-editor-22w28a+ds/auto_editor/subcommands/repl.py auto-editor-22w52a+ds/auto_editor/subcommands/repl.py --- auto-editor-22w28a+ds/auto_editor/subcommands/repl.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/subcommands/repl.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,124 @@ +# type: ignore + +from __future__ import annotations + +import sys +from dataclasses import dataclass, field +from fractions import Fraction + +import auto_editor +from auto_editor.ffwrapper import FFmpeg, FileInfo +from auto_editor.interpreter import ( + Cons, + FileSetup, + Interpreter, + Lexer, + MyError, + Parser, + Symbol, + print_val, +) +from auto_editor.output import Ensure +from auto_editor.utils.bar import Bar +from auto_editor.utils.func import setup_tempdir +from 
auto_editor.utils.log import Log +from auto_editor.utils.types import frame_rate +from auto_editor.vanparse import ArgumentParser + +try: + import readline # noqa +except ImportError: + pass + + +@dataclass +class REPL_Args: + input: list[str] = field(default_factory=list) + timebase: Fraction | None = None + ffmpeg_location: str | None = None + my_ffmpeg: bool = False + temp_dir: str | None = None + help: bool = False + + +def repl_options(parser: ArgumentParser) -> ArgumentParser: + parser.add_required("input", nargs="*") + parser.add_argument( + "--timebase", + "-tb", + metavar="NUM", + type=frame_rate, + help="Set custom timebase", + ) + parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file") + parser.add_argument( + "--my-ffmpeg", + flag=True, + help="Use the ffmpeg on your PATH instead of the one packaged", + ) + parser.add_argument( + "--temp-dir", + metavar="PATH", + help="Set where the temporary directory is located", + ) + return parser + + +def display_val(val: Any) -> str: + if val is None: + return "" + if isinstance(val, (list, Cons, Symbol)): + return f"'{print_val(val)}\n" + if isinstance(val, Fraction): + return f"{val.numerator}/{val.denominator}\n" + + return f"{print_val(val)}\n" + + +def main(sys_args: list[str] = sys.argv[1:]) -> None: + parser = repl_options(ArgumentParser(None)) + args = parser.parse_args(REPL_Args, sys_args) + + if len(args.input) == 0: + filesetup = None + log = Log(quiet=True) + else: + temp = setup_tempdir(args.temp_dir, Log()) + log = Log(quiet=True, temp=temp) + ffmpeg = FFmpeg(args.ffmpeg_location, args.my_ffmpeg, False) + strict = len(args.input) < 2 + sources = {} + for i, path in enumerate(args.input): + sources[str(i)] = FileInfo(path, ffmpeg, log, str(i)) + + src = sources["0"] + tb = src.get_fps() if args.timebase is None else args.timebase + ensure = Ensure(ffmpeg, src.get_samplerate(), temp, log) + filesetup = FileSetup(src, ensure, strict, tb, Bar("none"), temp, log) + + 
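The REPL loop that follows catches lexing, parsing, and evaluation errors per line so one bad expression never kills the session. The same structure can be sketched in isolation; the `Fraction`-based `eval_line` below is a toy evaluator invented for illustration (auto-editor's real one is the `Lexer`/`Parser`/`Interpreter` trio imported above), but the display rule mirrors the `display_val` in this patch:

```python
from fractions import Fraction


def display_val(val) -> str:
    # Same display rule as the REPL: None prints nothing,
    # Fractions print as "numerator/denominator".
    if val is None:
        return ""
    if isinstance(val, Fraction):
        return f"{val.numerator}/{val.denominator}\n"
    return f"{val}\n"


def eval_line(text: str) -> str:
    """Evaluate one REPL line, turning errors into messages instead of crashes."""
    try:
        result = Fraction(text)  # toy stand-in for the real Interpreter
    except (ValueError, ZeroDivisionError) as e:
        return f"error: {e}\n"
    return display_val(result)


print(eval_line("3/4"), end="")   # a valid expression prints its value
print(eval_line("oops"), end="")  # a bad one prints an error and the loop goes on
```

Keeping the two `try` blocks separate (parse vs. evaluate), as the real loop does, lets the REPL report syntax errors before any evaluation side effects happen.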
print(f"Auto-Editor {auto_editor.version} ({auto_editor.__version__})") + + try: + while True: + text = input("> ") + + try: + lexer = Lexer(text) + parser = Parser(lexer) + except MyError as e: + print(f"error: {e}") + continue + + try: + interpreter = Interpreter(parser, filesetup) + for result in interpreter.interpret(): + sys.stdout.write(display_val(result)) + except (MyError, ZeroDivisionError) as e: + print(f"error: {e}") + + except (KeyboardInterrupt, EOFError): + print("") + + +if __name__ == "__main__": + main() diff -Nru auto-editor-22w28a+ds/auto_editor/subcommands/subdump.py auto-editor-22w52a+ds/auto_editor/subcommands/subdump.py --- auto-editor-22w28a+ds/auto_editor/subcommands/subdump.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/subcommands/subdump.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,8 +1,9 @@ +from __future__ import annotations + import os import sys import tempfile from dataclasses import dataclass, field -from typing import List, Optional from auto_editor.ffwrapper import FFmpeg, FileInfo from auto_editor.utils.log import Log @@ -11,26 +12,24 @@ @dataclass class SubArgs: - ffmpeg_location: Optional[str] = None + ffmpeg_location: str | None = None my_ffmpeg: bool = False help: bool = False - input: List[str] = field(default_factory=list) + input: list[str] = field(default_factory=list) def subdump_options(parser: ArgumentParser) -> ArgumentParser: - parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file.") + parser.add_required("input", nargs="*") + parser.add_argument("--ffmpeg-location", help="Point to your custom ffmpeg file") parser.add_argument( "--my-ffmpeg", flag=True, - help="Use the ffmpeg on your PATH instead of the one packaged.", - ) - parser.add_required( - "input", nargs="*", help="Path to the file to have its subtitles dumped." 
+ help="Use the ffmpeg on your PATH instead of the one packaged", ) return parser -def main(sys_args=sys.argv[1:]) -> None: +def main(sys_args: list[str] = sys.argv[1:]) -> None: args = subdump_options(ArgumentParser("subdump")).parse_args(SubArgs, sys_args) ffmpeg = FFmpeg(args.ffmpeg_location, args.my_ffmpeg, debug=False) @@ -39,14 +38,14 @@ log = Log(temp=temp) for i, input_file in enumerate(args.input): - inp = FileInfo(input_file, ffmpeg, log) + src = FileInfo(input_file, ffmpeg, log) cmd = ["-i", input_file] - for s, sub in enumerate(inp.subtitles): + for s, sub in enumerate(src.subtitles): cmd.extend(["-map", f"0:s:{s}", os.path.join(temp, f"{i}s{s}.{sub.ext}")]) ffmpeg.run(cmd) - for s, sub in enumerate(inp.subtitles): + for s, sub in enumerate(src.subtitles): print(f"file: {input_file} ({s}:{sub.lang}:{sub.ext})") with open(os.path.join(temp, f"{i}s{s}.{sub.ext}")) as file: print(file.read()) diff -Nru auto-editor-22w28a+ds/auto_editor/subcommands/test.py auto-editor-22w52a+ds/auto_editor/subcommands/test.py --- auto-editor-22w28a+ds/auto_editor/subcommands/test.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/subcommands/test.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,26 +1,36 @@ # type: ignore +from __future__ import annotations -import logging import os import platform import shutil import subprocess import sys from dataclasses import dataclass, field +from fractions import Fraction from time import perf_counter -from typing import Callable, List, NoReturn, Optional, Tuple +from typing import Callable -import av import numpy as np +from auto_editor.ffwrapper import FFmpeg, FileInfo +from auto_editor.interpreter import ( + Char, + Cons, + Interpreter, + Lexer, + MyError, + Null, + Parser, + Symbol, +) +from auto_editor.utils.log import Log from auto_editor.vanparse import ArgumentParser -av.logging.set_level(av.logging.PANIC) - @dataclass class TestArgs: - only: List[str] = field(default_factory=list) + only: list[str] = 
field(default_factory=list) help: bool = False category: str = "cli" @@ -30,261 +40,247 @@ parser.add_required( "category", nargs=1, - choices=["cli", "sub", "api", "unit", "all"], - help="Set what category of tests to run.", + choices=("unit", "cli", "sub", "all"), + metavar="category [options]", ) return parser -def pipe_to_console(cmd: List[str]) -> Tuple[int, str, str]: +def pipe_to_console(cmd: list[str]) -> tuple[int, str, str]: process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = process.communicate() return process.returncode, stdout.decode("utf-8"), stderr.decode("utf-8") -def cleanup(the_dir: str) -> None: - for item in os.listdir(the_dir): - item = os.path.join(the_dir, item) - if ( - "_ALTERED" in item - or item.endswith(".xml") - or item.endswith(".fcpxml") - or item.endswith(".mlt") - ): - os.remove(item) - if item.endswith("_tracks"): - shutil.rmtree(item) - - -def clean_all() -> None: - cleanup("resources") - cleanup(os.getcwd()) - - -def get_runner() -> List[str]: - if platform.system() == "Windows": - return ["py", "-m", "auto_editor"] - return ["python3", "-m", "auto_editor"] - - -def run_program(cmd: List[str]) -> None: - no_open = "." in cmd[0] - cmd = get_runner() + cmd - - if no_open: - cmd += ["--no_open"] - - returncode, stdout, stderr = pipe_to_console(cmd) - if returncode > 0: - raise Exception(f"{stdout}\n{stderr}\n") - - -def check_for_error(cmd: List[str], match=None) -> None: - returncode, stdout, stderr = pipe_to_console(get_runner() + cmd) - if returncode > 0: - if "Error!" 
in stderr: - if match is not None and match not in stderr: - raise Exception(f'Could\'t find "{match}"') - else: - raise Exception(f"Program crashed.\n{stdout}\n{stderr}") - else: - raise Exception("Program should not respond with a code 0.") - - -def make_np_list(in_file: str, compare_file: str, the_speed: float) -> None: - from auto_editor.render.tsm.phasevocoder import phasevocoder - from auto_editor.wavfile import read +class Checker: + def __init__(self, ffmpeg: FFmpeg, log: Log): + self.ffmpeg = ffmpeg + self.log = log + + def check(self, path: str) -> FileInfo: + return FileInfo(path, self.ffmpeg, self.log) - _, sped_chunk = read(in_file) - channels = 2 - spedup_audio = phasevocoder(channels, the_speed, sped_chunk) - loaded = np.load(compare_file) - - if not np.array_equal(spedup_audio, loaded["a"]): - if spedup_audio.shape == loaded["a"].shape: - print(f"Both shapes ({spedup_audio.shape}) are same") +class Runner: + def __init__(self) -> None: + if platform.system() == "Windows": + self.program = ["py", "-m", "auto_editor"] else: - print(spedup_audio.shape) - print(loaded["a"].shape) - - result = np.subtract(spedup_audio, loaded["a"]) - - print(f"result non-zero: {np.count_nonzero(result)}") - print(f"len of spedup_audio: {len(spedup_audio)}") + self.program = ["python3", "-m", "auto_editor"] - print( - np.count_nonzero(result) / spedup_audio.shape[0], - "difference between arrays", - ) + def main( + self, inputs: list[str], cmd: list[str], output: str | None = None + ) -> str | None: + cmd = self.program + inputs + cmd + ["--no-open"] + + if output is not None: + root, ext = os.path.splitext(output) + if inputs and ext == "": + output = root + os.path.splitext(inputs[0])[1] + cmd += ["--output", output] + + if output is None and inputs: + root, ext = os.path.splitext(inputs[0]) + + if "--export_as_json" in cmd: + ext = ".json" + elif "-exp" in cmd: + ext = ".xml" + elif "-exf" in cmd: + ext = ".fcpxml" + elif "-exs" in cmd: + ext = ".mlt" + + output = 
f"{root}_ALTERED{ext}" + + returncode, stdout, stderr = pipe_to_console(cmd) + if returncode > 0: + raise Exception(f"{stdout}\n{stderr}\n") + + return output + + def raw(self, cmd: list[str]) -> None: + returncode, stdout, stderr = pipe_to_console(self.program + cmd) + if returncode > 0: + raise Exception(f"{stdout}\n{stderr}\n") + + def check(self, cmd: list[str], match=None) -> None: + returncode, stdout, stderr = pipe_to_console(self.program + cmd) + if returncode > 0: + if "Error!" in stderr: + if match is not None and match not in stderr: + raise Exception(f'Could\'t find "{match}"') + else: + raise Exception(f"Program crashed.\n{stdout}\n{stderr}") + else: + raise Exception("Program should not respond with code 0 but did!") - raise Exception(f"file {compare_file} doesn't match array.") - # np.savez_compressed(out_file, a=spedup_audio) +def run_tests(tests: list[Callable], args: TestArgs) -> None: + def clean_all() -> None: + def clean(the_dir: str) -> None: + for item in os.listdir(the_dir): + if "_ALTERED" in item: + os.remove(os.path.join(the_dir, item)) + if item.endswith("_tracks"): + shutil.rmtree(os.path.join(the_dir, item)) + clean("resources") + clean(os.getcwd()) -class Tester: - def __init__(self, args: TestArgs) -> None: - self.passed_tests = 0 - self.failed_tests = 0 - self.args = args + if args.only != []: + tests = filter(lambda t: t.__name__ in args.only, tests) - def run(self, func: Callable, cleanup=None, allow_fail=False) -> None: - if self.args.only != [] and func.__name__ not in self.args.only: - return + total_time = 0 + for passed, test in enumerate(tests): start = perf_counter() + try: - func() - end = perf_counter() - start + outputs = test() + dur = perf_counter() - start + total_time += dur except KeyboardInterrupt: - print(f"Testing Interrupted by User.") + print("Testing Interrupted by User.") clean_all() sys.exit(1) except Exception as e: - self.failed_tests += 1 - print(f"Test '{func.__name__}' failed.\n{e}") - if not 
allow_fail: - logging.error("", exc_info=True) - clean_all() - sys.exit(1) - else: - self.passed_tests += 1 - print(f"Test '{func.__name__}' passed: {round(end, 2)} secs") - if cleanup is not None: - cleanup() - - def end(self) -> NoReturn: - print(f"{self.passed_tests}/{self.passed_tests + self.failed_tests}") - clean_all() - sys.exit(0) + print(f"Test '{test.__name__}' ({passed}/{len(tests)}) failed.\n{e}") + clean_all() + raise e + + print(f"{test.__name__:<25} {round(dur, 2)} secs") + if outputs is not None: + if isinstance(outputs, str): + outputs = [outputs] -def main(sys_args: Optional[List[str]] = None): + for out in outputs: + try: + os.remove(out) + except FileNotFoundError: + pass + + print(f"\nCompleted\n{passed+1}/{len(tests)}\n{round(total_time, 2)} secs") + + +def main(sys_args: list[str] | None = None): if sys_args is None: sys_args = sys.argv[1:] args = test_options(ArgumentParser("test")).parse_args(TestArgs, sys_args) + run = Runner() + checker = Checker(FFmpeg(), Log()) + ### Tests ### ## API Tests ## - def read_api_0_1(): - check_for_error( - ["resources/json/0.1-non-zero-start.json"], - "Error! First chunk must start with 0", - ) - check_for_error( - ["resources/json/0.1-disjoint.json"], "Error! 
Chunk disjointed at" - ) - def help_tests(): """check the help option, its short, and help on options and groups.""" - run_program(["--help"]) - run_program(["-h"]) - run_program(["--frame_margin", "--help"]) - run_program(["--frame_margin", "-h"]) - run_program(["--help", "--help"]) - run_program(["-h", "--help"]) - run_program(["--help", "-h"]) - run_program(["-h", "--help"]) + run.raw(["--help"]) + run.raw(["-h"]) + run.raw(["--margin", "--help"]) + run.raw(["--edit", "-h"]) + run.raw(["--help", "--help"]) + run.raw(["-h", "--help"]) + run.raw(["--help", "-h"]) def version_test(): """Test version flags and debug by itself.""" - run_program(["--version"]) - run_program(["-v"]) - run_program(["-V"]) - run_program(["--debug"]) + run.raw(["--version"]) + run.raw(["-V"]) def parser_test(): - check_for_error(["example.mp4", "--video-speed"], "needs argument") - - def tsm_1a5_test(): - make_np_list( - "resources/wav/example-cut-s16le.wav", - "resources/data/example_1.5_speed.npz", - 1.5, - ) - - def tsm_2a0_test(): - make_np_list( - "resources/wav/example-cut-s16le.wav", - "resources/data/example_2.0_speed.npz", - 2, - ) + run.check(["example.mp4", "--video-speed"], "needs argument") def info(): - run_program(["info", "example.mp4"]) - run_program(["info", "resources/only-video/man-on-green-screen.mp4"]) - run_program(["info", "resources/multi-track.mov"]) - run_program(["info", "resources/new-commentary.mp3"]) - run_program(["info", "resources/testsrc.mkv"]) + run.raw(["info", "example.mp4"]) + run.raw(["info", "resources/only-video/man-on-green-screen.mp4"]) + run.raw(["info", "resources/multi-track.mov"]) + run.raw(["info", "resources/new-commentary.mp3"]) + run.raw(["info", "resources/testsrc.mkv"]) def levels(): - run_program(["levels", "resources/multi-track.mov"]) - run_program(["levels", "resources/new-commentary.mp3"]) + run.raw(["levels", "resources/multi-track.mov"]) + run.raw(["levels", "resources/new-commentary.mp3"]) def subdump(): - 
run_program(["subdump", "resources/subtitle.mp4"]) + run.raw(["subdump", "resources/subtitle.mp4"]) def grep(): - run_program(["grep", "boop", "resources/subtitle.mp4"]) + run.raw(["grep", "boop", "resources/subtitle.mp4"]) def desc(): - run_program(["desc", "example.mp4"]) + run.raw(["desc", "example.mp4"]) - def example_tests(): - run_program(["example.mp4", "--video_codec", "uncompressed"]) - with av.open("example_ALTERED.mp4") as cn: - video = cn.streams.video[0] - assert video.average_rate == 30 - assert video.width == 1280 - assert video.height == 720 - assert video.codec.name == "mpeg4" - assert cn.streams.audio[0].codec.name == "aac" - assert cn.streams.audio[0].rate == 48000 - - run_program(["example.mp4"]) - with av.open("example_ALTERED.mp4") as cn: - video = cn.streams.video[0] - assert video.average_rate == 30 - assert video.width == 1280 - assert video.height == 720 - assert video.codec.name == "h264" - assert cn.streams.audio[0].codec.name == "aac" - assert cn.streams.audio[0].rate == 48000 - assert video.language == "eng" - assert cn.streams.audio[0].language == "eng" + def example(): + out = run.main(inputs=["example.mp4"], cmd=[]) + cn = checker.check(out) + video = cn.videos[0] + + assert video.fps == 30 + assert video.time_base == Fraction(1, 30) + assert video.width == 1280 + assert video.height == 720 + assert video.codec == "h264" + assert video.lang == "eng" + assert cn.audios[0].codec == "aac" + assert cn.audios[0].samplerate == 48000 + assert cn.audios[0].lang == "eng" + + return out + + def add_audio(): + run.main( + ["example.mp4"], + [ + "--source", + "snd:resources/wav/pcm-f32le.wav", + "--add", + "audio:0.3sec,end,snd,volume=0.3", + ], + ) + return run.main( + ["example.mp4"], + [ + "--source", + "snd:resources/wav/pcm-f32le.wav", + "--add", + "audio:2,40,snd,3sec", + ], + ) # PR #260 def high_speed_test(): - run_program(["example.mp4", "--video-speed", "99998"]) + return run.main(inputs=["example.mp4"], cmd=["--video-speed", 
"99998"]) # Issue #184 - def unit_tests(): - run_program( - ["example.mp4", "--mark_as_loud", "20s,22sec", "25secs,26.5seconds"] - ) - run_program(["example.mp4", "--edit", "all", "--set-speed", "125%,-30,end"]) - run_program(["example.mp4", "--sample_rate", "44_100"]) - run_program(["example.mp4", "--margin", "3_0"]) - run_program(["example.mp4", "--sample_rate", "44100 Hz"]) - run_program(["example.mp4", "--sample_rate", "44.1 kHz"]) - run_program(["example.mp4", "--silent_threshold", "4%"]) + def units(): + run.main(["example.mp4"], ["--mark_as_loud", "20s,22sec", "25secs,26.5seconds"]) + run.main(["example.mp4"], ["--edit", "all", "--set-speed", "125%,-30,end"]) + return run.main(["example.mp4"], ["--edit", "audio:threshold=4%"]) + + def sr_units(): + run.main(["example.mp4"], ["--sample_rate", "44100 Hz"]) + return run.main(["example.mp4"], ["--sample_rate", "44.1 kHz"]) + + def video_speed(): + return run.main(["example.mp4"], ["--video-speed", "1.5"]) - def backwards_range_test(): + def backwards_range(): """ Cut out the last 5 seconds of a media file by using negative number in the range. 
""" - run_program(["example.mp4", "--edit", "none", "--cut_out", "-5secs,end"]) - run_program(["example.mp4", "--edit", "all", "--add_in", "-5secs,end"]) + run.main(["example.mp4"], ["--edit", "none", "--cut_out", "-5secs,end"]) + return run.main(["example.mp4"], ["--edit", "all", "--add_in", "-5secs,end"]) - def cut_out_test(): - run_program( + def cut_out(): + run.main( + ["example.mp4"], [ - "example.mp4", "--edit", "none", "--video_speed", @@ -293,170 +289,167 @@ "3", "--cut_out", "2secs,10secs", - ] + ], ) - run_program( - [ - "example.mp4", - "--edit", - "all", - "--video_speed", - "2", - "--add_in", - "2secs,10secs", - ] + return run.main( + ["example.mp4"], + ["--edit", "all", "--video_speed", "2", "--add_in", "2secs,10secs"], ) - def gif_test(): + def gif(): """ - Feed auto-editor a gif file and make sure it can spit out a correctly formated + Feed auto-editor a gif file and make sure it can spit out a correctly formatted gif. No editing is requested. """ - run_program(["resources/only-video/man-on-green-screen.gif", "--edit", "none"]) - with av.open("resources/only-video/man-on-green-screen_ALTERED.gif") as cn: - assert cn.streams.video[0].codec.name == "gif" + out = run.main( + ["resources/only-video/man-on-green-screen.gif"], ["--edit", "none"] + ) + assert checker.check(out).videos[0].codec == "gif" + + return out def margin_tests(): - run_program(["example.mp4", "-m", "3"]) - run_program(["example.mp4", "--margin", "3"]) - run_program(["example.mp4", "-m", "0.3sec"]) - run_program(["example.mp4", "-m", "6f,-3secs"]) - run_program(["example.mp4", "-m", "3,5 frames"]) - run_program(["example.mp4", "-m", "0.4 seconds"]) + run.main(["example.mp4"], ["-m", "3"]) + run.main(["example.mp4"], ["--margin", "3"]) + run.main(["example.mp4"], ["-m", "0.3sec"]) + run.main(["example.mp4"], ["-m", "6,-3secs"]) + return run.main(["example.mp4"], ["-m", "0.4 seconds", "--stats"]) def input_extension(): """Input file must have an extension. 
Throw error if none is given.""" shutil.copy("example.mp4", "example") - check_for_error(["example", "--no_open"], "must have an extension.") - os.remove("example") + run.check(["example", "--no-open"], "must have an extension.") + + return "example" def output_extension(): # Add input extension to output name if no output extension is given. - run_program(["example.mp4", "-o", "out"]) - with av.open("out.mp4") as cn: - assert cn.streams.video[0].codec.name == "h264" - - os.remove("out.mp4") - - run_program(["resources/testsrc.mkv", "-o", "out"]) - with av.open("out.mkv") as cn: - assert cn.streams.video[0].codec.name == "h264" - - os.remove("out.mkv") - - def progress_ops_test(): - run_program(["example.mp4", "--progress", "machine"]) - run_program(["example.mp4", "--progress", "none"]) - run_program(["example.mp4", "--progress", "ascii"]) + out = run.main(inputs=["example.mp4"], cmd=[], output="out") + + assert out == "out.mp4" + assert checker.check(out).videos[0].codec == "h264" + + out = run.main(inputs=["resources/testsrc.mkv"], cmd=[], output="out") + assert out == "out.mkv" + assert checker.check(out).videos[0].codec == "h264" + + return "out.mp4", "out.mkv" + + def progress(): + run.main(["example.mp4"], ["--progress", "machine"]) + run.main(["example.mp4"], ["--progress", "none"]) + return run.main(["example.mp4"], ["--progress", "ascii"]) def silent_threshold(): - run_program(["resources/new-commentary.mp3", "--silent_threshold", "0.1"]) + return run.main( + ["resources/new-commentary.mp3"], ["--edit", "audio:threshold=0.1"] + ) def track_tests(): - run_program(["resources/multi-track.mov", "--keep_tracks_seperate"]) + return run.main(["resources/multi-track.mov"], ["--keep_tracks_seperate"]) def json_tests(): - run_program(["example.mp4", "--export_as_json"]) - run_program(["example.json"]) + out = run.main(["example.mp4"], ["--export_as_json"]) + out2 = run.main([out], []) + return out, out2 def resolution_and_scale(): - run_program(["example.mp4", 
"--scale", "1.5"]) - with av.open("example_ALTERED.mp4") as cn: - assert cn.streams.video[0].average_rate == 30 - assert cn.streams.video[0].width == 1920 - assert cn.streams.video[0].height == 1080 - assert cn.streams.audio[0].rate == 48000 - - run_program(["example.mp4", "--scale", "0.2"]) - with av.open("example_ALTERED.mp4") as cn: - assert cn.streams.video[0].average_rate == 30 - assert cn.streams.video[0].width == 256 - assert cn.streams.video[0].height == 144 - assert cn.streams.audio[0].rate == 48000 - - run_program(["example.mp4", "-res", "700,380", "-b", "darkgreen"]) - with av.open("example_ALTERED.mp4") as cn: - assert cn.streams.video[0].average_rate == 30 - assert cn.streams.video[0].width == 700 - assert cn.streams.video[0].height == 380 - assert cn.streams.audio[0].rate == 48000 + cn = checker.check(run.main(["example.mp4"], ["--scale", "1.5"])) + + assert cn.videos[0].fps == 30 + assert cn.videos[0].width == 1920 + assert cn.videos[0].height == 1080 + assert cn.audios[0].samplerate == 48000 + + cn = checker.check(run.main(["example.mp4"], ["--scale", "0.2"])) + + assert cn.videos[0].fps == 30 + assert cn.videos[0].width == 256 + assert cn.videos[0].height == 144 + assert cn.audios[0].samplerate == 48000 + + out = run.main(["example.mp4"], ["-res", "700,380", "-b", "darkgreen"]) + cn = checker.check(out) + + assert cn.videos[0].fps == 30 + assert cn.videos[0].width == 700 + assert cn.videos[0].height == 380 + assert cn.audios[0].samplerate == 48000 + + return out def obj_makes_video(): - run_program( - [ - "resources/new-commentary.mp3", - "--add-rectangle", - "0,30,0,0,300,300,fill=blue", - "-o", - "out.mp4", - ] - ) - with av.open("out.mp4") as cn: - assert len(cn.streams.video) == 1 - assert len(cn.streams.audio) == 1 - assert cn.streams.video[0].width == 1920 - assert cn.streams.video[0].height == 1080 - assert cn.streams.video[0].average_rate == 30 - - def various_errors_test(): - check_for_error( - ["example.mp4", "--add_rectangle", "0,60", 
"--cut_out", "60,end"] - ) + out = run.main( + ["resources/new-commentary.mp3"], + ["--add", "rectangle:0,30,0,0,300,300,fill=blue"], + "out.mp4", + ) + cn = checker.check(out) + assert len(cn.videos) == 1 + assert len(cn.audios) == 1 + assert cn.videos[0].width == 1920 + assert cn.videos[0].height == 1080 + assert cn.videos[0].fps == 30 + + return out + + def various_errors(): + run.check(["example.mp4", "--add", "rectangle:0,60", "--cut-out", "60,end"]) def render_video_objs(): - run_program( + out = run.main( + ["resources/testsrc.mp4"], [ - "resources/testsrc.mp4", "--mark_as_loud", "start,end", - "--add_rectangle", - "0,30,0,200,100,300,fill=#43FA56,stroke=10", - ] + "--add", + "rectangle:0,30,0,200,100,300,fill=#43FA56,stroke=10", + ], ) - os.remove("resources/testsrc_ALTERED.mp4") # Every element should be visible, order should be preserved. - run_program( + run.main( + ["example.mp4"], [ - "example.mp4", - "--add-ellipse", - "0,30,50%,50%,300,300,fill=red", - "--add-rectangle", - "0,30,500,440,400,200,fill=skyblue", - "--add-ellipse", - "0,30,50%,50%,100,100,fill=darkgreen", + "--add", + "ellipse:0,30,50%,50%,300,300,fill=red", + "rectangle:0,30,500,440,400,200,fill=skyblue", + "ellipse:0,30,50%,50%,100,100,fill=darkgreen", "--edit", "none", "--cut-out", "30,end", - ] + ], ) # Both ellipses should be visible - run_program( + out2 = run.main( + ["example.mp4"], [ - "example.mp4", - "--add-ellipse", - "0,60,50%,50%,300,300,fill=darkgreen", - "0,30,50%,50%,200,200,fill=green", + "--add", + "ellipse:0,60,50%,50%,300,300,fill=darkgreen", + "ellipse:0,30,50%,50%,200,200,fill=green", "--edit", "none", "--cut-out", "60,end", - ] + ], ) + return out, out2 + def render_text(): - run_program(["example.mp4", "--add-text", "0,30,This is my text,font=default"]) + return run.main( + ["example.mp4"], ["--add", "text:0,30,This is my text,font=default"] + ) def check_font_error(): - check_for_error( - ["example.mp4", "--add-text", "0,30,text,0,0,notafont"], "not found" - ) + 
run.check(["example.mp4", "--add", "text:0,30,text,0,0,notafont"], "not found") - def export_tests(): - for test_name in ( + def export(): + results = set() + all_files = ( "aac.m4a", "alac.m4a", "wav/pcm-f32le.wav", @@ -464,194 +457,307 @@ "multi-track.mov", "subtitle.mp4", "testsrc.mkv", - ): + ) + for test_name in all_files: test_file = f"resources/{test_name}" - run_program([test_file]) - run_program([test_file, "--edit", "none"]) - run_program([test_file, "-exp"]) - run_program([test_file, "-exf"]) - run_program([test_file, "-exs"]) - run_program([test_file, "--export_as_clip_sequence"]) - run_program([test_file, "--preview"]) - cleanup("resources") + results.add(run.main([test_file], [])) + run.main([test_file], ["--edit", "none"]) - def codec_tests(): - run_program(["example.mp4", "--video_codec", "h264"]) - run_program(["example.mp4", "--audio_codec", "ac3"]) + p_xml = run.main([test_file], ["-exp"]) + run.main([p_xml], []) + + run.main([test_file], ["-exf"]) + run.main([test_file], ["-exs"]) + run.main([test_file], ["--export_as_clip_sequence"]) + run.main([test_file], ["--stats"]) - def combine(): - run_program(["example.mp4", "--mark_as_silent", "0,171", "-o", "hmm.mp4"]) - run_program(["example.mp4", "hmm.mp4", "--combine-files", "--debug"]) - os.remove("hmm.mp4") + return tuple(results) + + def codec_tests(): + run.main(["example.mp4"], ["--video_codec", "h264"]) + return run.main(["example.mp4"], ["--audio_codec", "ac3"]) # Issue #241 def multi_track_edit(): - run_program( - [ - "example.mp4", - "resources/multi-track.mov", - "--edit", - "audio:stream=1", - "-o", - "out.mov", - ] - ) - with av.open("out.mov", "r") as cn: - assert len(cn.streams.audio) == 1 + out = run.main( + ["example.mp4", "resources/multi-track.mov"], + ["--edit", "audio:stream=1"], + "out.mov", + ) + assert len(checker.check(out).audios) == 1 + + return out + + def concat(): + out = run.main(["example.mp4"], ["--mark_as_silent", "0,171"], "hmm.mp4") + out2 = 
run.main(["example.mp4", "hmm.mp4"], ["--debug"]) + return out, out2 def concat_mux_tracks(): - run_program(["example.mp4", "resources/multi-track.mov", "-o", "out.mov"]) - with av.open("out.mov", "r") as cn: - assert len(cn.streams.audio) == 1 + out = run.main(["example.mp4", "resources/multi-track.mov"], [], "out.mov") + assert len(checker.check(out).audios) == 1 + + return out def concat_multiple_tracks(): - run_program( - [ - "resources/multi-track.mov", - "resources/multi-track.mov", - "--keep-tracks-separate", - "-o", - "out.mov", - ] + out = run.main( + ["resources/multi-track.mov", "resources/multi-track.mov"], + ["--keep-tracks-separate"], + "out.mov", + ) + assert len(checker.check(out).audios) == 2 + out = run.main( + ["example.mp4", "resources/multi-track.mov"], + ["--keep-tracks-separate"], + "out.mov", ) - with av.open("out.mov", "r") as cn: - assert len(cn.streams.audio) == 2, f"audio streams: {len(cn.streams.audio)}" + assert len(checker.check(out).audios) == 2 - run_program( - [ - "example.mp4", - "resources/multi-track.mov", - "--keep-tracks-separate", - "-o", - "out.mov", - ] - ) - with av.open("out.mov", "r") as cn: - assert len(cn.streams.audio) == 2 - os.remove("out.mov") + return out def frame_rate(): - run_program(["example.mp4", "-r", "15", "--no-seek"]) - with av.open("example_ALTERED.mp4", "r") as cn: - video = cn.streams.video[0] - assert video.average_rate == 15 - dur = float(video.duration * video.time_base) - assert dur - 17.33333333333333333333333 < 3 - - run_program(["example.mp4", "-r", "20"]) - with av.open("example_ALTERED.mp4", "r") as cn: - video = cn.streams.video[0] - assert video.average_rate == 20 - dur = float(video.duration * video.time_base) - assert dur - 17.33333333333333333333333 < 2 - - run_program(["example.mp4", "-r", "60"]) - with av.open("example_ALTERED.mp4", "r") as cn: - video = cn.streams.video[0] - assert video.average_rate == 60 - dur = float(video.duration * video.time_base) - assert dur - 
17.33333333333333333333333 < 0.3 - - def image_test(): - run_program(["resources/embedded-image/h264-png.mp4"]) - with av.open("resources/embedded-image/h264-png_ALTERED.mp4", "r") as cn: - assert cn.streams.video[0].codec.name == "h264" - assert cn.streams.video[1].codec.name == "png" - - run_program(["resources/embedded-image/h264-mjpeg.mp4"]) - with av.open("resources/embedded-image/h264-mjpeg_ALTERED.mp4", "r") as cn: - assert cn.streams.video[0].codec.name == "h264" - assert cn.streams.video[1].codec.name == "mjpeg" - - run_program(["resources/embedded-image/h264-png.mkv"]) - with av.open("resources/embedded-image/h264-png_ALTERED.mkv", "r") as cn: - assert cn.streams.video[0].codec.name == "h264" - assert cn.streams.video[1].codec.name == "png" - - run_program(["resources/embedded-image/h264-mjpeg.mkv"]) - with av.open("resources/embedded-image/h264-mjpeg_ALTERED.mkv", "r") as cn: - assert cn.streams.video[0].codec.name == "h264" - assert cn.streams.video[1].codec.name == "mjpeg" - - def motion_tests(): - run_program( - [ - "resources/only-video/man-on-green-screen.mp4", - "--edit", - "motion", - "--debug", - "--frame_margin", - "0", - "-mcut", - "0", - "-mclip", - "0", - ] - ) - run_program( - [ - "resources/only-video/man-on-green-screen.mp4", - "--edit", - "motion:threshold=0", - ] + cn = checker.check(run.main(["example.mp4"], ["-r", "15", "--no-seek"])) + video = cn.videos[0] + assert video.fps == 15 + assert video.time_base == Fraction(1, 15) + assert float(video.duration) - 17.33333333333333333333333 < 3 + + cn = checker.check(run.main(["example.mp4"], ["-r", "20"])) + video = cn.videos[0] + assert video.fps == 20 + assert video.time_base == Fraction(1, 20) + assert float(video.duration) - 17.33333333333333333333333 < 2 + + cn = checker.check(out := run.main(["example.mp4"], ["-r", "60"])) + video = cn.videos[0] + + assert video.fps == 60 + assert video.time_base == Fraction(1, 60) + assert float(video.duration) - 17.33333333333333333333333 < 0.3 + + 
return out + + def embedded_image(): + out1 = run.main(["resources/embedded-image/h264-png.mp4"], []) + cn = checker.check(out1) + assert cn.videos[0].codec == "h264" + assert cn.videos[1].codec == "png" + + out2 = run.main(["resources/embedded-image/h264-mjpeg.mp4"], []) + cn = checker.check(out2) + assert cn.videos[0].codec == "h264" + assert cn.videos[1].codec == "mjpeg" + + out3 = run.main(["resources/embedded-image/h264-png.mkv"], []) + cn = checker.check(out3) + assert cn.videos[0].codec == "h264" + assert cn.videos[1].codec == "png" + + out4 = run.main(["resources/embedded-image/h264-mjpeg.mkv"], []) + cn = checker.check(out4) + assert cn.videos[0].codec == "h264" + assert cn.videos[1].codec == "mjpeg" + + return out1, out2, out3, out4 + + def motion(): + out = run.main( + ["resources/only-video/man-on-green-screen.mp4"], + ["--edit", "motion", "--margin", "0", "-mcut", "0", "-mclip", "0"], + ) + out2 = run.main( + ["resources/only-video/man-on-green-screen.mp4"], + ["--edit", "motion:threshold=0,width=200"], ) + return out, out2 def edit_positive_tests(): - run_program(["resources/multi-track.mov", "--edit", "audio:stream=all"]) - run_program(["resources/multi-track.mov", "--edit", "not audio:stream=all"]) - run_program( - [ - "resources/multi-track.mov", - "--edit", - "not audio:threshold=4% or audio:stream=1", - ] + run.main(["resources/multi-track.mov"], ["--edit", "audio:stream=all"]) + run.main(["resources/multi-track.mov"], ["--edit", "not audio:stream=all"]) + run.main( + ["resources/multi-track.mov"], + ["--edit", "(or (not audio:threshold=4%) audio:stream=1)"], + ) + out = run.main( + ["resources/multi-track.mov"], + ["--edit", "(or (not audio:threshold=4%) (not audio:stream=1))"], ) - # run_program(['resources/multi-track.mov', '--edit', 'not audio:threshold=4% or not audio:stream=1']) + return out def edit_negative_tests(): - check_for_error( + run.check( ["resources/wav/example-cut-s16le.wav", "--edit", "motion"], "Video stream '0' does not 
exist", ) - check_for_error( + run.check( ["resources/only-video/man-on-green-screen.gif", "--edit", "audio"], "Audio stream '0' does not exist", ) - check_for_error( - ["example.mp4", "--edit", "not"], "Error! Dangling operand: 'not'" - ) - check_for_error( - ["example.mp4", "--edit", "audio and"], "Error! Dangling operand: 'and'" - ) - check_for_error( - ["example.mp4", "--edit", "and"], - "Error! 'and' operand needs two arguments.", - ) - check_for_error( - ["example.mp4", "--edit", "and audio"], - "Error! 'and' operand needs two arguments.", - ) - check_for_error( - ["example.mp4", "--edit", "or audio"], - "Error! 'or' operand needs two arguments.", - ) - check_for_error( - ["example.mp4", "--edit", "audio four audio"], - "Error! Unknown method/operator: 'four'", - ) - check_for_error( - ["example.mp4", "--edit", "audio audio"], - "Logic operator must be between two editing methods", + + def yuv442p(): + return run.main(["resources/test_yuv422p.mp4"], []) + + # Issue 280 + def SAR(): + out = run.main(["resources/SAR-2by3.mp4"], []) + assert checker.check(out).videos[0].sar == "2:3" + + return out + + def palet(): + def cases(*cases: tuple[str, Any]) -> None: + for text, expected in cases: + try: + parser = Parser(Lexer(text)) + interpreter = Interpreter(parser, None) + interpreter.GLOBAL_SCOPE["timebase"] = Fraction(30) + results = interpreter.interpret() + except MyError as e: + raise ValueError(f"{text}\nMyError: {e}") + + if isinstance(expected, np.ndarray): + if not np.array_equal(expected, results[-1]): + raise ValueError(f"{text}: Numpy arrays don't match") + elif expected != results[-1]: + raise ValueError(f"{text}: Expected: {expected}, got {results[-1]}") + + cases( + ("345", 345), + ("238.5", 238.5), + ("-34", -34), + ("-98.3", -98.3), + ("+3i", 3j), + ("3sec", 90), + ("-3sec", -90), + ("0.2sec", 6), + ("(+ 4 3)", 7), + ("(+ 4 3 2)", 9), + ("(+ 10.5 3)", 13.5), + ("(+ 3+4i -2-2i)", 1 + 2j), + ("(+ 3+4i -2-2i 5)", 6 + 2j), + ("(- 4 3)", 1), + ("(- 3)", 
-3), + ("(- 10.5 3)", 7.5), + ("(* 11.5 3)", 34.5), + ("(/ 3/4 4)", Fraction(3, 16)), + ("(/ 5)", Fraction(1, 5)), + ("(/ 6 1)", 6), + ("30/1", Fraction(30)), + ("(sqrt -4)", 2j), + ("(expt 2 3)", 8), + ("(expt 4 0.5)", 2.0), + ("(abs 1.0)", 1.0), + ("(abs -1)", 1), + ("(round 3.5)", 4), + ("(round 2.5)", 2), + ("(ceil 2.1)", 3), + ("(ceil 2.9)", 3), + ("(floor 2.1)", 2), + ("(floor 2.9)", 2), + ("(boolean? #t)", True), + ("(boolean? #f)", True), + ("(boolean? 0)", False), + ("(boolean? 1)", False), + ("(boolean? false)", True), + ("(integer? 2)", True), + ("(integer? 3.0)", False), + ("(integer? #t)", False), + ("(integer? #f)", False), + ("(integer? 4/5)", False), + ("(integer? 0+2i)", False), + ('(integer? "hello")', False), + ('(integer? "3")', False), + ("(float? -23.4)", True), + ("(float? 3.0)", True), + ("(float? #f)", False), + ("(float? 4/5)", False), + ("(float? 21)", False), + ('(string-append "Hello" " World")', "Hello World"), + ('(define apple "Red Wood") apple', "Red Wood"), + ("(= 1 1.0)", True), + ("(= 1 2)", False), + ("(= 2+3i 2+3i 2+3i)", True), + ("(= 1)", True), + ("(+)", 0), + ("(*)", 1), + ('(define num 13) ; Set number to 13\n"Hello"', "Hello"), + ('(if #t "Hello" apple)', "Hello"), + ('(if #f mango "Hi")', "Hi"), + ('{if (= [+ 3 4] 7) "yes" "no"}', "yes"), + ("((if #t + -) 3 4)", 7), + ("((if #t + oops) 3+3i 4-2i)", 7 + 1j), + ("((if #f + -) 3 4)", -1), + ("(when (positive? 3) 17)", 17), + ("(string)", ""), + ("(string #\\a)", "a"), + ("(string #\\a #\\b)", "ab"), + ("(string #\\a #\\b #\\c)", "abc"), + ( + "(margin 0 (bool-array 0 0 0 1 0 0 0))", + np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.bool_), + ), + ( + "(margin -2 2 (bool-array 0 0 1 1 0 0 0))", + np.array([0, 0, 0, 0, 1, 1, 0], dtype=np.bool_), + ), + ("(count-nonzero (bool-array 0 0 1 1 0 1))", 3), + ("(equal? 3 3)", True), + ("(equal? 3 3.0)", False), + ('(equal? 16.3 "Editor")', False), + ("(equal? (bool-array 1 1 0) (bool-array 1 1 0))", True), + ("(equal? 
(bool-array 0 1 0) (bool-array 1 1 0))", False), + ("(equal? (bool-array 0 1 0) (bool-array 0 1 0 0))", False), + ("(equal? #\\a #\\a)", True), + ('(equal? "a" #\\a)', False), + ("(equal? (list 1 2 3) (vector 1 2 3))", False), + ("(equal? (vector 1 2 3) (vector 1 2 3))", True), + ("(equal? (list 1 2 3) (list 1 2 3))", True), + ("(equal? (list 1 2 3) (cons 1 (cons 2 (cons 3 '()))))", True), + ("(equal? (list 1 2 3) (list 1 2 4))", False), + ( + "(or (bool-array 1 0 0) (bool-array 0 0 0 1))", + np.array([1, 0, 0, 1], dtype=np.bool_), + ), + ("(quote ())", Null()), + ("'()", Null()), + ("(quote hello)", Symbol("hello")), + ("'hello", Symbol("hello")), + ("(quote (3))", Cons(3, Null())), + ("'(3)", Cons(3, Null())), + ('(quote (3 2 "apple"))', Cons(3, Cons(2, Cons("apple", Null())))), + ('\'(3 2 "apple")', Cons(3, Cons(2, Cons("apple", Null())))), + ("(quote +3i)", 3j), + ("(quote 23.4)", 23.4), + ('(quote "hello")', "hello"), + ("(quote (1 (2 3)))", Cons(1, Cons(Cons(2, Cons(3, Null())), Null()))), + ("(list 1 2 3)", Cons(1, Cons(2, Cons(3, Null())))), + ("(list-ref '(0 10 20) 2)", 20), + ("(list-ref '(0 10 20) 0)", 0), + ("(car (cons 3 4))", 3), + ("(car (list 3 4))", 3), + ("(cdr (cons 3 4))", 4), + ("(cdr (list 3 4))", Cons(4, Null())), + ("(length (list 1 2 3))", 3), + ("(length '(1 2 3))", 3), + ("(length (vector 1 2 4))", 3), + ("(length (list))", 0), + ("(length '())", 0), + ("(length (bool-array 0 1 0))", 3), + ("(equal? (reverse '(0 1 2)) '(2 1 0))", True), + ("(equal? 
(reverse (vector 0 1 2)) (vector 2 1 0))", True), + ('(ref "Zyx" 1)', Char("y")), + ("(ref (vector 0.3 #\\a 2) 2)", 2), + ("(ref (in-range 0 10) 2)", 2), + ("(begin)", None), + ("(begin (define r 10) (* pi (* r r)))", 314.1592653589793), + ("(for/vector ([i (vector 0 1 2)]) i)", [0, 1, 2]), ) tests = [] if args.category in ("unit", "all"): - tests.extend([tsm_1a5_test, tsm_2a0_test]) - - if args.category in ("api", "all"): - tests.append(read_api_0_1) + tests.append(palet) if args.category in ("sub", "all"): tests.extend([info, levels, subdump, grep, desc]) @@ -659,48 +765,48 @@ if args.category in ("cli", "all"): tests.extend( [ - obj_makes_video, - edit_positive_tests, + SAR, + yuv442p, + check_font_error, edit_negative_tests, + edit_positive_tests, + json_tests, + high_speed_test, + video_speed, + obj_makes_video, multi_track_edit, concat_mux_tracks, concat_multiple_tracks, render_video_objs, - resolution_and_scale, - various_errors_test, + various_errors, render_text, - check_font_error, + add_audio, frame_rate, help_tests, version_test, parser_test, - combine, - example_tests, - export_tests, - high_speed_test, - unit_tests, - backwards_range_test, - cut_out_test, - image_test, - gif_test, + concat, + example, + units, + sr_units, + backwards_range, + cut_out, + embedded_image, + gif, margin_tests, input_extension, output_extension, - progress_ops_test, + progress, silent_threshold, track_tests, - json_tests, codec_tests, - motion_tests, + motion, + export, + resolution_and_scale, ] ) - tester = Tester(args) - - for test in tests: - tester.run(test) - - tester.end() + run_tests(tests, args) if __name__ == "__main__": diff -Nru auto-editor-22w28a+ds/auto_editor/timeline.py auto-editor-22w52a+ds/auto_editor/timeline.py --- auto-editor-22w28a+ds/auto_editor/timeline.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/timeline.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,111 +1,225 @@ -from dataclasses import asdict, dataclass, fields 
-from typing import Any, Callable, Dict, List, NamedTuple, Optional, Tuple, Type, Union
+from __future__ import annotations
+
+from dataclasses import dataclass
+from fractions import Fraction
+from typing import Union

 from auto_editor.ffwrapper import FileInfo
-from auto_editor.method import get_speed_list
-from auto_editor.objects import (
-    AudioObj,
-    EllipseObj,
-    ImageObj,
-    RectangleObj,
-    TextObj,
-    VideoObj,
-)
-from auto_editor.utils.func import chunkify, chunks_len, parse_dataclass
-from auto_editor.utils.log import Log
-from auto_editor.utils.progressbar import ProgressBar
+from auto_editor.objs.util import Attr
+from auto_editor.utils.chunks import Chunks, v2Chunks
 from auto_editor.utils.types import (
     Align,
-    Args,
-    Chunks,
     align,
     anchor,
     color,
+    db_number,
+    natural,
     number,
-    pos,
+    src,
+    threshold,
 )


-class Clip(NamedTuple):
+@dataclass
+class v1:
+    """
+    v1 timeline constructor
+    timebase is always the source's average fps
+
+    """
+
+    source: FileInfo
+    chunks: Chunks
+
+    def as_dict(self) -> dict:
+        return {
+            "version": "1.0",
+            "source": self.source.path.resolve(),
+            "chunks": self.chunks,
+        }
+
+
+@dataclass
+class v2:
+    """
+    v2 timeline constructor
+
+    Like v1 but allows for (nameless) multiple inputs and a custom timebase
+    """
+
+    sources: list[FileInfo]
+    tb: Fraction
+    chunks: v2Chunks
+
+    def as_dict(self) -> dict:
+        return {
+            "version": "2.0",
+            "timebase": f"{self.tb.numerator}/{self.tb.denominator}",
+            "sources": [s.path.resolve() for s in self.sources],
+            "chunks": self.chunks,
+        }
+
+
+"""
+timeline v3 classes
+"""
+
+
+class Tl:
+    pass
+
+
+@dataclass
+class TlVideo(Tl):
+    start: int
+    dur: int
+    src: str
+    offset: int
+    speed: float
+    stream: int
+    name: str = "video"
+
+
+@dataclass
+class TlAudio(Tl):
     start: int
     dur: int
+    src: str
     offset: int
     speed: float
-    src: int
+    volume: float
+    stream: int
+    name: str = "audio"
+
+
+@dataclass
+class _Visual(Tl):
+    start: int
+    dur: int
+    x: int
+    y: int
+    anchor: str
+    opacity: float
+    rotate: float
+    stroke: int
+    strokecolor: str
+
+
+@dataclass
+class TlText(_Visual):
+    content: str
+    font: str
+    size: int
+    align: Align
+    fill: str
+    name: str = "text"
+
+
+@dataclass
+class TlImage(_Visual):
+    src: str
+    name: str = "image"
+
+
+@dataclass
+class TlRect(_Visual):
+    width: int
+    height: int
+    fill: str
+    name: str = "rectangle"


-Visual = Type[Union[TextObj, ImageObj, RectangleObj, EllipseObj]]
-VLayer = List[Union[VideoObj, Visual]]
-VSpace = List[VLayer]
-
-ALayer = List[AudioObj]
-ASpace = List[ALayer]
-
-
-def merge_chunks(all_chunks: List[Chunks]) -> Chunks:
-    chunks = []
-    start = 0
-    for _chunks in all_chunks:
-        for chunk in _chunks:
-            chunks.append((chunk[0] + start, chunk[1] + start, chunk[2]))
-        if _chunks:
-            start += _chunks[-1][1]
-
-    return chunks
-
-
-def _values(
-    name: str,
-    val: Union[float, str],
-    _type: Union[type, Callable[[Any], Any]],
-    _vars: Dict[str, int],
-    log: Log,
-) -> Any:
-    if name in ("x", "width"):
-        return pos((val, _vars["width"]))
-    elif name in ("y", "height"):
-        return pos((val, _vars["height"]))
-    elif name == "content":
-        assert isinstance(val, str)
-        return val.replace("\\n", "\n").replace("\\;", ",")
-    elif _type is float:
-        _type = number
-    elif _type == Align:
-        _type = align
-    elif name == "anchor":
-        _type = anchor
-    elif name in ("fill", "strokecolor"):
-        _type = color
-
-    if _type is int:
-        for key, item in _vars.items():
-            if val == key:
-                return item
-
-    try:
-        _type(val)
-    except TypeError as e:
-        log.error(e)
-    except Exception:
-        log.error(f"{name}: variable '{val}' is not defined.")
+@dataclass
+class TlEllipse(_Visual):
+    width: int
+    height: int
+    fill: str
+    name: str = "ellipse"
+
+
+video_builder = [
+    Attr(("start",), natural, None),
+    Attr(("dur",), natural, None),
+    Attr(("src",), src, None),
+    Attr(("offset",), natural, 0),
+    Attr(("speed",), number, 1),
+    Attr(("stream", "track"), natural, 0),
+]
+audio_builder = [
+    Attr(("start",), natural, None),
+    Attr(("dur",), natural, None),
+    Attr(("src",), src, None),
+    Attr(("offset",), natural, 0),
+    Attr(("speed",), number, 1),
+    Attr(("volume",), db_number, 1),
+    Attr(("stream", "track"), natural, 0),
+]
+text_builder = [
+    Attr(("start",), natural, None),
+    Attr(("dur",), natural, None),
+    Attr(("content",), lambda val: val.replace("\\n", "\n").replace("\\;", ","), None),
+    Attr(("x",), int, "50%"),
+    Attr(("y",), int, "50%"),
+    Attr(("font",), str, "Arial"),
+    Attr(("size",), natural, 55),
+    Attr(("align",), align, "left"),
+    Attr(("opacity",), threshold, 1),
+    Attr(("anchor",), anchor, "ce"),
+    Attr(("rotate",), number, 0),
+    Attr(("fill", "color"), str, "#FFF"),
+    Attr(("stroke",), natural, 0),
+    Attr(("strokecolor",), color, "#000"),
+]
+
+img_builder = [
+    Attr(("start",), natural, None),
+    Attr(("dur",), natural, None),
+    Attr(("src",), src, None),
+    Attr(("x",), int, "50%"),
+    Attr(("y",), int, "50%"),
+    Attr(("opacity",), threshold, 1),
+    Attr(("anchor",), anchor, "ce"),
+    Attr(("rotate",), number, 0),
+    Attr(("stroke",), natural, 0),
+    Attr(("strokecolor",), color, "#000"),
+]
+
+rect_builder = [
+    Attr(("start",), natural, None),
+    Attr(("dur",), natural, None),
+    Attr(("x",), int, None),
+    Attr(("y",), int, None),
+    Attr(("width",), int, None),
+    Attr(("height",), int, None),
+    Attr(("opacity",), threshold, 1),
+    Attr(("anchor",), anchor, "ce"),
+    Attr(("rotate",), number, 0),
+    Attr(("fill", "color"), color, "#c4c4c4"),
+    Attr(("stroke",), natural, 0),
+    Attr(("strokecolor",), color, "#000"),
+]
+ellipse_builder = rect_builder
+
+timeline_builder = [Attr(("api",), str, "3.0.0")]
+
+Visual = Union[TlText, TlImage, TlRect, TlEllipse]
+VLayer = list[Union[TlVideo, Visual]]
+VSpace = list[VLayer]

-    return _type(val)
+ALayer = list[TlAudio]
+ASpace = list[ALayer]


 @dataclass
 class Timeline:
-    inputs: List[FileInfo]
-    fps: float
+    sources: dict[str, FileInfo]
+    timebase: Fraction
     samplerate: int
-    res: Tuple[int, int]
+    res: tuple[int, int]
     background: str
     v: VSpace
     a: ASpace
-    chunks: Optional[Chunks] = None
-
-    @property
-    def inp(self) -> FileInfo:
-        return self.inputs[0]
+    chunks: Chunks | None = None

     @property
     def end(self) -> int:
@@ -113,7 +227,7 @@
         for vclips in self.v:
             if len(vclips) > 0:
                 v = vclips[-1]
-                if isinstance(v, VideoObj):
+                if isinstance(v, TlVideo):
                     end = max(end, max(1, round(v.start + (v.dur / v.speed))))
                 else:
                     end = max(end, v.start + v.dur)
@@ -129,7 +243,7 @@
         for vclips in self.v:
             dur: float = 0
             for v_obj in vclips:
-                if isinstance(v_obj, VideoObj):
+                if isinstance(v_obj, TlVideo):
                     dur += v_obj.dur / v_obj.speed
                 else:
                     dur += v_obj.dur
@@ -141,128 +255,47 @@
             out_len = max(out_len, dur)
         return out_len

-
-def clipify(chunks: Chunks, src: int, start: float) -> List[Clip]:
-    clips: List[Clip] = []
-    # Add "+1" to match how chunks are rendered in 22w18a
-    i = 0
-    for chunk in chunks:
-        if chunk[2] != 99999:
-            if i == 0:
-                dur = chunk[1] - chunk[0] + 1
-                offset = chunk[0]
-            else:
-                dur = chunk[1] - chunk[0]
-                offset = chunk[0] + 1
-
-            if not (len(clips) > 0 and clips[-1].start == round(start)):
-                clips.append(Clip(round(start), dur, offset, chunk[2], src))
-            start += dur / chunk[2]
-        i += 1
-
-    return clips
-
-
-def make_av(
-    all_clips: List[List[Clip]], inputs: List[FileInfo]
-) -> Tuple[VSpace, ASpace]:
-    vclips: VSpace = []
-
-    max_a = 0
-    for inp in inputs:
-        max_a = max(max_a, len(inp.audios))
-
-    aclips: ASpace = [[] for a in range(max_a)]
-
-    for clips, inp in zip(all_clips, inputs):
-        if len(inp.videos) > 0:
-            for clip in clips:
-                vclip_ = VideoObj(
-                    clip.start, clip.dur, clip.offset, clip.speed, clip.src
-                )
-                if len(vclips) == 0:
-                    vclips = [[vclip_]]
-                vclips[0].append(vclip_)
-        if len(inp.audios) > 0:
-            for clip in clips:
-                for a, _ in enumerate(inp.audios):
-                    aclips[a].append(
-                        AudioObj(
-                            clip.start, clip.dur, clip.offset, clip.speed, clip.src, a
-                        )
-                    )
-
-    return vclips, aclips
-
-
-def make_timeline(
-    inputs: List[FileInfo],
-    args: Args,
-    sr: int,
-    progress: ProgressBar,
-    temp: str,
-    log: Log,
-) -> Timeline:
-
-    if inputs:
-        fps = inputs[0].get_fps() if args.frame_rate is None else args.frame_rate
-        res = inputs[0].get_res() if args.resolution is None else args.resolution
-    else:
-        fps, res = 30.0, (1920, 1080)
-
-    def make_layers(inputs: List[FileInfo]) -> Tuple[Chunks, VSpace, ASpace]:
-        start = 0.0
-        all_clips: List[List[Clip]] = []
-        all_chunks: List[Chunks] = []
-
-        for i in range(len(inputs)):
-            _chunks = chunkify(
-                get_speed_list(i, inputs, fps, args, progress, temp, log)
-            )
-            all_chunks.append(_chunks)
-            all_clips.append(clipify(_chunks, i, start))
-            start += chunks_len(_chunks)
-
-        vclips, aclips = make_av(all_clips, inputs)
-        return merge_chunks(all_chunks), vclips, aclips
-
-    chunks, vclips, aclips = make_layers(inputs)
-
-    timeline = Timeline(inputs, fps, sr, res, args.background, vclips, aclips, chunks)
-
-    w, h = res
-    _vars: Dict[str, int] = {
-        "width": w,
-        "height": h,
-        "start": 0,
-        "end": timeline.end,
-    }
-
-    pool: List[Visual] = []
-    for key, obj_str in args.pool:
-        if key == "add_text":
-            pool.append(parse_dataclass(obj_str, TextObj, log))
-        if key == "add_rectangle":
-            pool.append(parse_dataclass(obj_str, RectangleObj, log))
-        if key == "add_ellipse":
-            pool.append(parse_dataclass(obj_str, EllipseObj, log))
-        if key == "add_image":
-            pool.append(parse_dataclass(obj_str, ImageObj, log))
-
-    for obj in pool:
-        dic_value = asdict(obj)
-        dic_type: Dict[str, Callable[[Any], Any]] = {}
-        for field in fields(obj):
-            dic_type[field.name] = field.type
-
-        # Convert to the correct types
-        for k, _type in dic_type.items():
-            setattr(obj, k, _values(k, dic_value[k], _type, _vars, log))
-
-        if obj.dur < 1:
-            log.error(f"dur's value must be greater than 0. Was '{obj.dur}'.")
-
-        # Higher layers are visually on top
-        timeline.v.append([obj])
-
-    return timeline
+    def as_dict(self) -> dict:
+        sources = {}
+        for key, src in self.sources.items():
+            sources[key] = f"{src.path.resolve()}"
+
+        v = []
+        for i, vlayer in enumerate(self.v):
+            vb = [vobj.__dict__ for vobj in vlayer]
+            if vb:
+                v.append(vb)
+
+        a = []
+        for i, alayer in enumerate(self.a):
+            ab = [aobj.__dict__ for aobj in alayer]
+            if ab:
+                a.append(ab)
+
+        tb = self.timebase
+
+        return {
+            "version": "unstable:3.0",
+            "timeline": {
+                "resolution": self.res,
+                "timebase": f"{tb.numerator}/{tb.denominator}",
+                "samplerate": self.samplerate,
+                "sources": sources,
+                "background": self.background,
+                "v": v,
+                "a": a,
+            },
+        }
+
+
+visual_objects = {
+    "rectangle": (TlRect, rect_builder),
+    "ellipse": (TlEllipse, ellipse_builder),
+    "text": (TlText, text_builder),
+    "image": (TlImage, img_builder),
+    "video": (TlVideo, video_builder),
+}
+
+audio_objects = {
+    "audio": (TlAudio, audio_builder),
+}
diff -Nru auto-editor-22w28a+ds/auto_editor/utils/bar.py auto-editor-22w52a+ds/auto_editor/utils/bar.py
--- auto-editor-22w28a+ds/auto_editor/utils/bar.py	1970-01-01 00:00:00.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/utils/bar.py	2022-12-31 17:05:14.000000000 +0000
@@ -0,0 +1,130 @@
+from __future__ import annotations
+
+import sys
+from math import floor
+from platform import system
+from shutil import get_terminal_size
+from time import localtime, time
+
+from .func import get_stdout
+
+
+class Bar:
+    def __init__(self, bar_type: str) -> None:
+        self.machine = False
+        self.hide = False
+
+        self.icon = "⏳"
+        self.chars: tuple[str, ...] = (" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉", "█")
+        self.brackets = ("|", "|")
+
+        if bar_type == "classic":
+            self.icon = "⏳"
+            self.chars = ("░", "█")
+            self.brackets = ("[", "]")
+        if bar_type == "ascii":
+            self.icon = "& "
+            self.chars = ("-", "#")
+            self.brackets = ("[", "]")
+        if bar_type == "machine":
+            self.machine = True
+        if bar_type == "none":
+            self.hide = True
+
+        self.part_width = len(self.chars) - 1
+
+        self.ampm = True
+        if system() == "Darwin" and bar_type in ("default", "classic"):
+            try:
+                date_format = get_stdout(
+                    ["defaults", "read", "com.apple.menuextra.clock", "DateFormat"]
+                )
+                self.ampm = "a" in date_format
+            except FileNotFoundError:
+                pass
+
+    @staticmethod
+    def pretty_time(my_time: float, ampm: bool) -> str:
+        new_time = localtime(my_time)
+
+        hours = new_time.tm_hour
+        minutes = new_time.tm_min
+
+        if ampm:
+            if hours == 0:
+                hours = 12
+            if hours > 12:
+                hours -= 12
+            ampm_marker = "PM" if new_time.tm_hour >= 12 else "AM"
+            return f"{hours:02}:{minutes:02} {ampm_marker}"
+        return f"{hours:02}:{minutes:02}"
+
+    def tick(self, index: float) -> None:
+        if self.hide:
+            return
+
+        progress = 0.0 if self.total == 0 else min(1, max(0, index / self.total))
+        rate = 0.0 if progress == 0 else (time() - self.begin_time) / progress
+
+        if self.machine:
+            index = min(index, self.total)
+            raw = int(self.begin_time + rate)
+            print(
+                f"{self.title}~{index}~{self.total}~{self.begin_time}~{raw}",
+                end="\r",
+                flush=True,
+            )
+            return
+
+        new_time = self.pretty_time(self.begin_time + rate, self.ampm)
+
+        percent = round(progress * 100, 1)
+        p_pad = " " * (4 - len(str(percent)))
+        columns = get_terminal_size().columns
+        bar_len = max(1, columns - (self.len_title + 32))
+        bar_str = self._bar_str(progress, bar_len)
+
+        bar = f" {self.icon}{self.title} {bar_str} {p_pad}{percent}% ETA {new_time}"
+
+        if len(bar) > columns - 2:
+            bar = bar[: columns - 2]
+        else:
+            bar += " " * (columns - len(bar) - 4)
+
+        sys.stdout.write(bar + "\r")
+
+    def start(self, total: float, title: str = "Please wait") -> None:
+        self.title = title
+        self.len_title = len(title)
+        self.total = total
+        self.begin_time = time()
+
+        try:
+            self.tick(0)
+        except UnicodeEncodeError:
+            self.icon = "& "
+            self.chars = ("-", "#")
+            self.brackets = ("[", "]")
+            self.part_width = 1
+
+    def _bar_str(self, progress: float, width: int) -> str:
+        whole_width = floor(progress * width)
+        remainder_width = (progress * width) % 1
+        part_width = floor(remainder_width * self.part_width)
+        part_char = self.chars[part_width]
+
+        if width - whole_width - 1 < 0:
+            part_char = ""
+
+        line = (
+            self.brackets[0]
+            + self.chars[-1] * whole_width
+            + part_char
+            + self.chars[0] * (width - whole_width - 1)
+            + self.brackets[1]
+        )
+        return line
+
+    @staticmethod
+    def end() -> None:
+        sys.stdout.write(" " * (get_terminal_size().columns - 2) + "\r")
diff -Nru auto-editor-22w28a+ds/auto_editor/utils/chunks.py auto-editor-22w52a+ds/auto_editor/utils/chunks.py
--- auto-editor-22w28a+ds/auto_editor/utils/chunks.py	1970-01-01 00:00:00.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/utils/chunks.py	2022-12-31 17:05:14.000000000 +0000
@@ -0,0 +1,48 @@
+from __future__ import annotations
+
+from fractions import Fraction
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+    from numpy.typing import NDArray
+
+Chunk = tuple[int, int, float]
+Chunks = list[Chunk]
+
+v2Chunk = tuple[int, int, float, int]
+v2Chunks = list[v2Chunk]
+
+# Turn long silent/loud array to formatted chunk list.
+# Example: [1, 1, 1, 2, 2], {1: 1.0, 2: 1.5} => [(0, 3, 1.0), (3, 5, 1.5)]
+def chunkify(arr: NDArray, smap: dict[int, float]) -> Chunks:
+    arr_length = len(arr)
+
+    chunks = []
+    start = 0
+    for j in range(1, arr_length):
+        if arr[j] != arr[j - 1]:
+            chunks.append((start, j, smap[arr[j - 1]]))
+            start = j
+    chunks.append((start, arr_length, smap[arr[j]]))
+    return chunks
+
+
+def chunks_len(chunks: Chunks) -> Fraction:
+    _len = Fraction(0)
+    for chunk in chunks:
+        if chunk[2] != 99999:
+            speed = Fraction(chunk[2])
+            _len += Fraction(chunk[1] - chunk[0], speed)
+    return _len
+
+
+def merge_chunks(all_chunks: list[Chunks]) -> Chunks:
+    chunks = []
+    start = 0
+    for _chunks in all_chunks:
+        for chunk in _chunks:
+            chunks.append((chunk[0] + start, chunk[1] + start, chunk[2]))
+        if _chunks:
+            start += _chunks[-1][1]
+
+    return chunks
diff -Nru auto-editor-22w28a+ds/auto_editor/utils/container.py auto-editor-22w52a+ds/auto_editor/utils/container.py
--- auto-editor-22w28a+ds/auto_editor/utils/container.py	2022-07-14 04:19:35.000000000 +0000
+++ auto-editor-22w52a+ds/auto_editor/utils/container.py	2022-12-31 17:05:14.000000000 +0000
@@ -1,5 +1,49 @@
+from __future__ import annotations
+
 from dataclasses import dataclass, field
-from typing import Any, Dict, List, Optional
+from typing import TypedDict
+
+
+class DictContainer(TypedDict, total=False):
+    name: str | None
+    allow_video: bool
+    allow_audio: bool
+    allow_subtitle: bool
+    allow_image: bool
+    max_videos: int | None
+    max_audios: int | None
+    max_subtitles: int | None
+    vcodecs: list[str] | None
+    acodecs: list[str] | None
+    scodecs: list[str] | None
+    vstrict: bool
+    astrict: bool
+    sstrict: bool
+    disallow_v: list[str]
+    disallow_a: list[str]
+    samplerate: list[int] | None
+
+
+@dataclass
+class Container:
+    name: str | None = None
+    allow_video: bool = False
+    allow_audio: bool = False
+    allow_subtitle: bool = False
+    allow_image: bool = False
+    max_videos: int | None = None
+    max_audios: int | None = None
+    max_subtitles: int | None = None
+    vcodecs: list[str] | None = None
+    acodecs: list[str] | None = None
+    scodecs: list[str] | None = None
+    vstrict: bool = False
+    astrict: bool = False
+    sstrict: bool = False
+    disallow_v: list[str] = field(default_factory=list)
+    disallow_a: list[str] = field(default_factory=list)
+    samplerate: list[int] | None = None  # Any samplerate is allowed
+

 pcm_formats = [
     "pcm_s16le",  # default format
@@ -26,31 +70,31 @@
 ]

 # Define aliases
-h265 = {
+h265: DictContainer = {
     "name": "H.265 / High Efficiency Video Coding (HEVC) / MPEG-H Part 2",
     "allow_video": True,
     "vcodecs": ["hevc", "mpeg4", "h264"],
 }
-h264 = {
+h264: DictContainer = {
     "name": "H.264 / Advanced Video Coding (AVC) / MPEG-4 Part 10",
     "allow_video": True,
     "vcodecs": ["h264", "mpeg4", "hevc"],
 }
-aac = {
+aac: DictContainer = {
     "name": "Advanced Audio Coding",
     "allow_audio": True,
     "max_audios": 1,
     "acodecs": ["aac"],
     "astrict": True,
 }
-ass = {
+ass: DictContainer = {
     "name": "SubStation Alpha",
     "allow_subtitle": True,
     "scodecs": ["ass", "ssa"],
     "max_subtitles": 1,
     "sstrict": True,
 }
-mp4 = {
+mp4: DictContainer = {
     "name": "MP4 / MPEG-4 Part 14",
     "allow_video": True,
     "allow_audio": True,
@@ -61,7 +105,7 @@
     "disallow_v": ["prores", "apng", "gif", "msmpeg4v3", "flv1", "vp8", "rawvideo"],
     "disallow_a": pcm_formats,
 }
-ogg = {
+ogg: DictContainer = {
     "allow_video": True,
     "allow_audio": True,
     "allow_subtitle": True,
@@ -71,7 +115,7 @@
     "astrict": True,
 }

-containers: Dict[str, Dict[str, Any]] = {
+containers: dict[str, DictContainer] = {
     # Aliases section
     "aac": aac,
     "adts": aac,
@@ -110,6 +154,7 @@
     "ast": {
         "name": "AST / Audio Stream",
         "allow_audio": True,
+        "max_audios": 1,
         "acodecs": ["pcm_s16be_planar"],
     },
     "mp3": {
@@ -244,27 +289,6 @@
 }


-@dataclass
-class Container:
-    name: Optional[str] = None
-    allow_video: bool = False
-    allow_audio: bool = False
-    allow_subtitle: bool = False
-    allow_image: bool = False
-    max_videos: Optional[int] = None
-    max_audios: Optional[int] = 
None - max_subtitles: Optional[int] = None - vcodecs: Optional[List[str]] = None - acodecs: Optional[List[str]] = None - scodecs: Optional[List[str]] = None - vstrict: bool = False - astrict: bool = False - sstrict: bool = False - disallow_v: List[str] = field(default_factory=list) - disallow_a: List[str] = field(default_factory=list) - samplerate: Optional[List[int]] = None # Any samplerate is allowed - - def container_constructor(key: str) -> Container: if key in containers: return Container(**containers[key]) diff -Nru auto-editor-22w28a+ds/auto_editor/utils/func.py auto-editor-22w52a+ds/auto_editor/utils/func.py --- auto-editor-22w28a+ds/auto_editor/utils/func.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/utils/func.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,42 +1,51 @@ -from typing import List, Tuple, TypeVar, Union, overload +from __future__ import annotations + +from fractions import Fraction +from typing import Callable import numpy as np from numpy.typing import NDArray from auto_editor.utils.log import Log -from auto_editor.utils.types import Args, Chunks, time -""" -To prevent duplicate code being pasted between scripts, common functions should be -put here. Every function should be pure with no side effects. -""" - -T = TypeVar("T", bound=type) - -# Turn long silent/loud array to formatted chunk list. 
-# Example: [1, 1, 1, 2, 2] => [(0, 3, 1), (3, 5, 2)] -def chunkify(arr: Union[np.ndarray, List[int]]) -> Chunks: - arr_length = len(arr) - - chunks = [] - start = 0 - for j in range(1, arr_length): - if arr[j] != arr[j - 1]: - chunks.append((start, j, arr[j - 1])) - start = j - chunks.append((start, arr_length, arr[j])) - return chunks +BoolList = NDArray[np.bool_] +BoolOperand = Callable[[BoolList, BoolList], BoolList] + + +def boolop(a: BoolList, b: BoolList, call: BoolOperand) -> BoolList: + if len(a) > len(b): + k = np.copy(b) + k.resize(len(a)) + b = k + if len(b) > len(a): + k = np.copy(a) + k.resize(len(b)) + a = k + + return call(a, b) + + +def setup_tempdir(temp: str | None, log: Log) -> str: + if temp is None: + import tempfile + + return tempfile.mkdtemp() + + import os.path + from os import listdir, mkdir + if os.path.isfile(temp): + log.error("Temp directory cannot be an already existing file.") + if os.path.isdir(temp): + if len(listdir(temp)) != 0: + log.error("Temp directory should be empty!") + else: + mkdir(temp) -def chunks_len(chunks: Chunks) -> int: - _len = 0.0 - for chunk in chunks: - if chunk[2] != 99999: - _len += (chunk[1] - chunk[0]) / chunk[2] - return round(_len) + return temp -def to_timecode(secs: float, fmt: str) -> str: +def to_timecode(secs: float | Fraction, fmt: str) -> str: sign = "" if secs < 0: sign = "-" @@ -62,98 +71,14 @@ raise ValueError("to_timecode: Unreachable") -def remove_small( - has_loud: NDArray[np.bool_], lim: int, replace: int, with_: int -) -> NDArray[np.bool_]: - start_p = 0 - active = False - for j, item in enumerate(has_loud): - if item == replace: - if not active: - start_p = j - active = True - # Special case for end. 
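The new `boolop` helper introduced in this hunk zero-pads the shorter of two boolean arrays before applying the operator, so edit tracks of unequal length can be combined. A minimal sketch of that behaviour, reimplemented standalone for illustration (not part of the diff):

```python
import numpy as np


def boolop(a, b, call):
    # Pad the shorter operand with False (zero) so both share a length.
    # ndarray.resize on a fresh copy zero-fills the new tail elements.
    if len(a) > len(b):
        k = np.copy(b)
        k.resize(len(a))
        b = k
    if len(b) > len(a):
        k = np.copy(a)
        k.resize(len(b))
        a = k
    return call(a, b)


a = np.array([True, True, False])
b = np.array([True, False, True, True])
# a is padded to [True, True, False, False] before the OR is applied.
print(boolop(a, b, np.logical_or))   # [ True  True  True  True]
print(boolop(a, b, np.logical_and))  # [ True False False False]
```

The padding relies on `np.copy` returning an array with no outside references, since `ndarray.resize` refuses to grow an array that is referenced elsewhere.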
- if j == len(has_loud) - 1: - if j - start_p < lim: - has_loud[start_p : j + 1] = with_ - else: - if active: - if j - start_p < lim: - has_loud[start_p:j] = with_ - active = False - return has_loud - - -@overload -def set_range( - arr: NDArray[np.float_], - range_syntax: List[List[str]], - fps: float, - with_: float, - log: Log, -) -> NDArray[np.float_]: - pass - - -@overload -def set_range( - arr: NDArray[np.bool_], - range_syntax: List[List[str]], - fps: float, - with_: float, - log: Log, -) -> NDArray[np.bool_]: - pass - - -def set_range(arr, range_syntax, fps, with_, log): - def replace_variables_to_values(val: str, fps: float, log: Log) -> int: - if val == "start": - return 0 - if val == "end": - return len(arr) - - try: - value = time(val) - except TypeError as e: - log.error(e) - if isinstance(value, int): - return value - return round(float(value) * fps) - - for _range in range_syntax: - pair = [] - for val in _range: - num = replace_variables_to_values(val, fps, log) - if num < 0: - num += len(arr) - pair.append(num) - arr[pair[0] : pair[1]] = with_ - return arr - - -def seconds_to_frames(value: Union[int, str], fps: float) -> int: - if isinstance(value, str): - return int(float(value) * fps) - return value - - -def cook(has_loud: NDArray[np.bool_], min_clip: int, min_cut: int) -> NDArray[np.bool_]: - has_loud = remove_small(has_loud, min_clip, replace=1, with_=0) - has_loud = remove_small(has_loud, min_cut, replace=0, with_=1) - return has_loud - - -def apply_margin( - has_loud: NDArray[np.bool_], has_loud_length: int, start_m: int, end_m: int -) -> NDArray[np.bool_]: - - # Find start and end indexes. 
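As the header comment on the new `chunkify` notes, the function now maps run labels through a speed map, and `chunks_len` returns an exact `Fraction` instead of a rounded int. A small round-trip of the two, mirroring the new definitions (which, like the originals, assume the input array has at least two elements):

```python
from fractions import Fraction


def chunkify(arr, smap):
    # Collapse runs of equal labels into (start, end, speed) tuples.
    chunks, start = [], 0
    for j in range(1, len(arr)):
        if arr[j] != arr[j - 1]:
            chunks.append((start, j, smap[arr[j - 1]]))
            start = j
    chunks.append((start, len(arr), smap[arr[j]]))
    return chunks


def chunks_len(chunks):
    # 99999 is the sentinel speed meaning "section is cut out entirely".
    _len = Fraction(0)
    for start, end, speed in chunks:
        if speed != 99999:
            _len += Fraction(end - start, Fraction(speed))
    return _len


chunks = chunkify([1, 1, 1, 2, 2], {1: 1.0, 2: 1.5})
print(chunks)              # [(0, 3, 1.0), (3, 5, 1.5)]
print(chunks_len(chunks))  # 3/1 + 2/(3/2) = 13/3
```

Returning a `Fraction` here is what lets the new `to_timecode` accept `float | Fraction` without accumulating rounding error across chunks.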
+def mut_margin(arr: BoolList, start_m: int, end_m: int) -> None: + # Find start and end indexes start_index = [] end_index = [] - for j in range(1, has_loud_length): - if has_loud[j] != has_loud[j - 1]: - if has_loud[j]: + arrlen = len(arr) + for j in range(1, arrlen): + if arr[j] != arr[j - 1]: + if arr[j]: start_index.append(j) else: end_index.append(j) @@ -161,47 +86,20 @@ # Apply margin if start_m > 0: for i in start_index: - has_loud[max(i - start_m, 0) : i] = True + arr[max(i - start_m, 0) : i] = True if start_m < 0: for i in start_index: - has_loud[i : min(i - start_m, has_loud_length)] = False + arr[i : min(i - start_m, arrlen)] = False if end_m > 0: for i in end_index: - has_loud[i : min(i + end_m, has_loud_length)] = True + arr[i : min(i + end_m, arrlen)] = True if end_m < 0: for i in end_index: - has_loud[max(i + end_m, 0) : i] = False - - return has_loud - - -def apply_mark_as( - has_loud: NDArray[np.bool_], has_loud_length: int, fps: float, args: Args, log: Log -) -> NDArray[np.bool_]: - - if len(args.mark_as_loud) > 0: - has_loud = set_range(has_loud, args.mark_as_loud, fps, args.video_speed, log) - - if len(args.mark_as_silent) > 0: - has_loud = set_range(has_loud, args.mark_as_silent, fps, args.silent_speed, log) - return has_loud - - -def to_speed_list( - has_loud: NDArray[np.bool_], video_speed: float, silent_speed: float -) -> NDArray[np.float_]: + arr[max(i + end_m, 0) : i] = False - speed_list = has_loud.astype(float) - # WARN: This breaks if speed is allowed to be 0 - speed_list[speed_list == 1] = video_speed - speed_list[speed_list == 0] = silent_speed - - return speed_list - - -def merge(start_list: np.ndarray, end_list: np.ndarray) -> NDArray[np.bool_]: +def merge(start_list: np.ndarray, end_list: np.ndarray) -> BoolList: result = np.zeros((len(start_list)), dtype=np.bool_) for i, item in enumerate(start_list): @@ -212,66 +110,14 @@ return result -def parse_dataclass(unsplit_arguments: str, dataclass: T, log: Log) -> T: - from dataclasses 
import fields - - # Positional Arguments - # --rectangle 0,end,10,20,20,30,#000, ... - # Keyword Arguments - # --rectangle start=0,end=end,x1=10, ... - - ARG_SEP = "," - KEYWORD_SEP = "=" - - d_name = dataclass.__name__ - - keys = [field.name for field in fields(dataclass)] - kwargs = {} - args = [] - - allow_positional_args = True - - if unsplit_arguments == "": - dataclass_instance: T = dataclass() - return dataclass_instance - - for i, arg in enumerate(unsplit_arguments.split(ARG_SEP)): - if i + 1 > len(keys): - log.error(f"{d_name} has too many arguments, starting with '{arg}'.") - - if KEYWORD_SEP in arg: - allow_positional_args = False - - parameters = arg.split(KEYWORD_SEP) - if len(parameters) > 2: - log.error(f"{d_name} invalid syntax: '{arg}'.") - key, val = parameters - if key not in keys: - log.error(f"{d_name} got an unexpected keyword '{key}'") - - kwargs[key] = val - elif allow_positional_args: - args.append(arg) - else: - log.error(f"{d_name} positional argument follows keyword argument.") - - try: - dataclass_instance = dataclass(*args, **kwargs) - except TypeError as err: - err_list = [d_name] + str(err).split(" ")[1:] - log.error(f"'{unsplit_arguments}' : " + " ".join(err_list)) - - return dataclass_instance - - -def get_stdout(cmd: List[str]) -> str: - from subprocess import PIPE, STDOUT, Popen +def get_stdout(cmd: list[str]) -> str: + from subprocess import PIPE, Popen - stdout, _ = Popen(cmd, stdout=PIPE, stderr=STDOUT).communicate() + stdout, _ = Popen(cmd, stdout=PIPE, stderr=PIPE).communicate() return stdout.decode("utf-8", "replace") -def aspect_ratio(width: int, height: int) -> Tuple[int, int]: +def aspect_ratio(width: int, height: int) -> tuple[int, int]: if height == 0: return (0, 0) diff -Nru auto-editor-22w28a+ds/auto_editor/utils/log.py auto-editor-22w52a+ds/auto_editor/utils/log.py --- auto-editor-22w28a+ds/auto_editor/utils/log.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/utils/log.py 2022-12-31 
17:05:14.000000000 +0000 @@ -1,14 +1,17 @@ +from __future__ import annotations + import sys from datetime import timedelta +from pathlib import Path from shutil import get_terminal_size, rmtree from time import perf_counter, sleep -from typing import NoReturn, Optional, Union +from typing import NoReturn class Timer: __slots__ = ("start_time", "quiet") - def __init__(self, quiet: bool = False) -> None: + def __init__(self, quiet: bool = False): self.start_time = perf_counter() self.quiet = quiet @@ -24,8 +27,8 @@ __slots__ = ("is_debug", "quiet", "temp") def __init__( - self, show_debug: bool = False, quiet: bool = False, temp: Optional[str] = None - ) -> None: + self, show_debug: bool = False, quiet: bool = False, temp: str | None = None + ): self.is_debug = show_debug self.quiet = quiet self.temp = temp @@ -56,7 +59,7 @@ buffer = " " * (get_terminal_size().columns - len(message) - 3) sys.stdout.write(f" {message}{buffer}\r") - def error(self, message: Union[str, Exception]) -> NoReturn: + def error(self, message: str | Exception) -> NoReturn: self.conwrite("") # if isinstance(message, Exception): # raise message @@ -74,8 +77,8 @@ os._exit(1) - def import_error(self, lib: str) -> NoReturn: - self.error(f"Python module '{lib}' not installed. 
Run: pip install {lib}") + def nofile(self, path: str | Path) -> NoReturn: + self.error(f"Could not find '{path}'") def warning(self, message: str) -> None: if not self.quiet: diff -Nru auto-editor-22w28a+ds/auto_editor/utils/progressbar.py auto-editor-22w52a+ds/auto_editor/utils/progressbar.py --- auto-editor-22w28a+ds/auto_editor/utils/progressbar.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/utils/progressbar.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,129 +0,0 @@ -import sys -from math import floor -from platform import system -from shutil import get_terminal_size -from time import localtime, time - -from .func import get_stdout - - -class ProgressBar: - def __init__(self, bar_type: str) -> None: - - self.machine = False - self.hide = False - - self.icon = "⏳" - self.chars = [" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉", "█"] - self.brackets = ("|", "|") - - if bar_type == "classic": - self.icon = "⏳" - self.chars = ["░", "█"] - self.brackets = ("[", "]") - if bar_type == "ascii": - self.icon = "& " - self.chars = ["-", "#"] - self.brackets = ("[", "]") - if bar_type == "machine": - self.machine = True - if bar_type == "none": - self.hide = True - - self.part_width = len(self.chars) - 1 - - self.ampm = True - if system() == "Darwin" and bar_type in ("default", "classic"): - try: - date_format = get_stdout( - ["defaults", "read", "com.apple.menuextra.clock", "DateFormat"] - ) - self.ampm = "a" in date_format - except FileNotFoundError: - pass - - @staticmethod - def pretty_time(my_time: float, ampm: bool) -> str: - new_time = localtime(my_time) - - hours = new_time.tm_hour - minutes = new_time.tm_min - - if ampm: - if hours == 0: - hours = 12 - if hours > 12: - hours -= 12 - ampm_marker = "PM" if new_time.tm_hour >= 12 else "AM" - return f"{hours:02}:{minutes:02} {ampm_marker}" - return f"{hours:02}:{minutes:02}" - - def tick(self, index: float) -> None: - if self.hide: - return - - progress = min(1, max(0, index / self.total)) - rate = 
0.0 if progress == 0 else (time() - self.begin_time) / progress - - if self.machine: - index = min(index, self.total) - raw = int(self.begin_time + rate) - print( - f"{self.title}~{index}~{self.total}~{self.begin_time}~{raw}", - end="\r", - flush=True, - ) - return - - new_time = self.pretty_time(self.begin_time + rate, self.ampm) - - percent = round(progress * 100, 1) - p_pad = " " * (4 - len(str(percent))) - columns = get_terminal_size().columns - bar_len = max(1, columns - (self.len_title + 32)) - bar_str = self.progress_bar_str(progress, bar_len) - - bar = f" {self.icon}{self.title} {bar_str} {p_pad}{percent}% ETA {new_time}" - - if len(bar) > columns - 2: - bar = bar[: columns - 2] - else: - bar += " " * (columns - len(bar) - 4) - - sys.stdout.write(bar + "\r") - - def start(self, total: float, title: str = "Please wait") -> None: - self.title = title - self.len_title = len(title) - self.total = total - self.begin_time = time() - - try: - self.tick(0) - except UnicodeEncodeError: - self.icon = "& " - self.chars = ["-", "#"] - self.brackets = ("[", "]") - self.part_width = 1 - - def progress_bar_str(self, progress: float, width: int) -> str: - whole_width = floor(progress * width) - remainder_width = (progress * width) % 1 - part_width = floor(remainder_width * self.part_width) - part_char = self.chars[part_width] - - if width - whole_width - 1 < 0: - part_char = "" - - line = ( - self.brackets[0] - + self.chars[-1] * whole_width - + part_char - + self.chars[0] * (width - whole_width - 1) - + self.brackets[1] - ) - return line - - @staticmethod - def end() -> None: - sys.stdout.write(" " * (get_terminal_size().columns - 2) + "\r") diff -Nru auto-editor-22w28a+ds/auto_editor/utils/types.py auto-editor-22w52a+ds/auto_editor/utils/types.py --- auto-editor-22w28a+ds/auto_editor/utils/types.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/utils/types.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,9 +1,12 @@ +from __future__ import 
annotations + import re from dataclasses import dataclass, field -from typing import List, Literal, Optional, Tuple, Type, Union +from fractions import Fraction +from typing import Literal, Union -def _comma_coerce(name: str, val: str, num_args: int) -> List[str]: +def _comma_coerce(name: str, val: str, num_args: int) -> list[str]: vals = val.strip().split(",") if num_args > len(vals): raise TypeError(f"Too few arguments for {name}.") @@ -12,7 +15,7 @@ return vals -def _split_num_str(val: Union[str, float]) -> Tuple[float, str]: +def _split_num_str(val: str | float) -> tuple[float, str]: if isinstance(val, (float, int)): return val, "" @@ -30,17 +33,13 @@ return float(num), unit -def _unit_check(unit: str, allowed_units: Tuple[str, ...]) -> None: +def _unit_check(unit: str, allowed_units: tuple[str, ...]) -> None: if unit not in allowed_units: raise TypeError(f"Unknown unit: '{unit}'") -Chunk = Tuple[int, int, float] -Chunks = List[Chunk] - - # Numbers: 0, 1, 2, 3, ... -def natural(val: Union[str, float]) -> int: +def natural(val: str | float) -> int: num, unit = _split_num_str(val) if unit != "": raise TypeError(f"'{val}': Natural does not allow units.") @@ -51,7 +50,7 @@ return int(num) -def number(val: Union[str, float]) -> float: +def number(val: str | float) -> float: if isinstance(val, str) and "/" in val: nd = val.split("/") if len(nd) != 2: @@ -62,15 +61,71 @@ vs.append(int(v)) except ValueError: raise TypeError(f"'{val}': Numerator and Denominator must be integers.") + if vs[1] == 0: + raise TypeError(f"'{val}': Denominator must not be zero.") return vs[0] / vs[1] num, unit = _split_num_str(val) - _unit_check(unit, ("", "%")) if unit == "%": return num / 100 + _unit_check(unit, ("",)) + return num + + +def speed(val: str) -> float: + _s = number(val) + if _s <= 0 or _s > 99999: + return 99999.0 + return _s + + +def db_number(val: str) -> float | str: + num, unit = _split_num_str(val) + if unit == "dB": + return val + + return number(val) + + +def src(val: 
str) -> int | str: + try: + if int(val) > 0: + return int(val) + except ValueError: + pass + + return val + + +def threshold(val: str | float) -> float: + num = number(val) + if num > 1 or num < 0: + raise TypeError(f"'{val}': Threshold must be between 0 and 1 (0%-100%)") return num +def db_threshold(val: str) -> str | float: + num, unit = _split_num_str(val) + if unit == "dB": + if num > 0: + raise TypeError("dB only goes up to 0") + return 10 ** (num / 20) + + return threshold(val) + + +def frame_rate(val: str) -> Fraction: + if val == "ntsc": + return Fraction(30000, 1001) + if val == "ntsc_film": + return Fraction(24000, 1001) + if val == "pal": + return Fraction(25) + if val == "film": + return Fraction(24) + return Fraction(val) + + def sample_rate(val: str) -> int: num, unit = _split_num_str(val) if unit in ("kHz", "KHz"): @@ -79,7 +134,15 @@ return natural(num) -def time(val: str) -> Union[int, str]: +def time(val: str) -> int | str: + if ":" in val: + boxes = val.split(":") + if len(boxes) == 2: + return str(int(boxes[0]) * 60 + float(boxes[1])) + if len(boxes) == 3: + return str(int(boxes[0]) * 3600 + int(boxes[1]) * 60 + float(boxes[2])) + raise TypeError(f"'{val}': Invalid time format") + num, unit = _split_num_str(val) if unit in ("s", "sec", "secs", "second", "seconds"): return str(num) @@ -88,9 +151,11 @@ if unit in ("h", "hour", "hours"): return str(num * 3600) - _unit_check(unit, ("", "f", "frame", "frames")) + _unit_check(unit, ("",)) if not isinstance(num, int) and not num.is_integer(): - raise TypeError(f"'{val}': Frame unit doesn't accept non-ints.") + raise TypeError( + f"'{val}': Time uses ticks by default and ticks only accept ints." 
+ ) return int(num) @@ -101,7 +166,7 @@ return val -Margin = Tuple[Union[int, str], Union[int, str]] +Margin = tuple[Union[int, str], Union[int, str]] def margin(val: str) -> Margin: @@ -113,11 +178,11 @@ return time(vals[0]), time(vals[1]) -def time_range(val: str) -> List[str]: +def time_range(val: str) -> list[str]: return _comma_coerce("time_range", val, 2) -def speed_range(val: str) -> Tuple[float, str, str]: +def speed_range(val: str) -> tuple[float, str, str]: a = _comma_coerce("speed_range", val, 3) return number(a[0]), a[1], a[2] @@ -168,7 +233,7 @@ raise ValueError(f"Invalid Color: '{color}'") -def resolution(val: Optional[str]) -> Optional[Tuple[int, int]]: +def resolution(val: str | None) -> tuple[int, int] | None: if val is None: return None vals = val.strip().split(",") @@ -178,7 +243,7 @@ return natural(vals[0]), natural(vals[1]) -def pos(val: Tuple[Union[float, str], int]) -> int: +def pos(val: tuple[float | str, int]) -> int: num, unit = _split_num_str(val[0]) if unit == "%": return round((num / 100) * val[1]) @@ -187,36 +252,37 @@ @dataclass -class MainArgs: - pool: List[Tuple[str, str]] = field(default_factory=list) +class Args: + add: list[str] = field(default_factory=list) + source: list[str] = field(default_factory=list) yt_dlp_location: str = "yt-dlp" - download_format: Optional[str] = None - output_format: Optional[str] = None - yt_dlp_extras: Optional[str] = None + download_format: str | None = None + output_format: str | None = None + yt_dlp_extras: str | None = None video_codec: str = "auto" audio_codec: str = "auto" video_bitrate: str = "10m" audio_bitrate: str = "unset" video_quality_scale: str = "unset" scale: float = 1.0 - extras: Optional[str] = None + extras: str | None = None no_seek: bool = False - cut_out: List[List[str]] = field(default_factory=list) - add_in: List[List[str]] = field(default_factory=list) - mark_as_loud: List[List[str]] = field(default_factory=list) - mark_as_silent: List[List[str]] = field(default_factory=list) 
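The reworked coercers in this hunk accept several spellings: `time` gains `mm:ss` / `hh:mm:ss` forms and treats bare numbers as ticks, `frame_rate` resolves FFmpeg-style named rates, and `db_threshold` maps decibel values onto the 0..1 amplitude scale. A condensed sketch of those parsing rules (simplified: the unit handling of `_split_num_str` is omitted, and `time_` is renamed to avoid shadowing the builtin module):

```python
from fractions import Fraction


def frame_rate(val):
    # Named rates mirror FFmpeg's abbreviations.
    named = {
        "ntsc": Fraction(30000, 1001),
        "ntsc_film": Fraction(24000, 1001),
        "pal": Fraction(25),
        "film": Fraction(24),
    }
    return named[val] if val in named else Fraction(val)


def time_(val):
    # Colon forms become seconds (kept as a string); bare ints are ticks.
    if ":" in val:
        parts = val.split(":")
        if len(parts) == 2:
            return str(int(parts[0]) * 60 + float(parts[1]))
        if len(parts) == 3:
            return str(int(parts[0]) * 3600 + int(parts[1]) * 60 + float(parts[2]))
        raise TypeError(f"'{val}': Invalid time format")
    return int(val)


def db_threshold(db):
    # 0 dB is full scale, so positive values are rejected.
    if db > 0:
        raise TypeError("dB only goes up to 0")
    return 10 ** (db / 20)


print(frame_rate("ntsc"))  # 30000/1001
print(time_("1:30"))       # '90.0'
print(time_("42"))         # 42
print(db_threshold(-20))   # 0.1
```

Note that `time` deliberately returns a *string* for second-based forms and an *int* for ticks; downstream code distinguishes the two by type, which is why the signature is `int | str`.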
- set_speed_for_range: List[Tuple[float, str, str]] = field(default_factory=list) - frame_rate: Optional[float] = None - sample_rate: Optional[int] = None - resolution: Optional[Tuple[int, int]] = None + cut_out: list[list[str]] = field(default_factory=list) + add_in: list[list[str]] = field(default_factory=list) + mark_as_loud: list[list[str]] = field(default_factory=list) + mark_as_silent: list[list[str]] = field(default_factory=list) + set_speed_for_range: list[tuple[float, str, str]] = field(default_factory=list) + frame_rate: Fraction | None = None + sample_rate: int | None = None + resolution: tuple[int, int] | None = None background: str = "#000" edit_based_on: str = "audio" keep_tracks_separate: bool = False - export: str = "default" - player: Optional[str] = None + export: str | None = None + player: str | None = None no_open: bool = False - temp_dir: Optional[str] = None - ffmpeg_location: Optional[str] = None + temp_dir: str | None = None + ffmpeg_location: str | None = None my_ffmpeg: bool = False progress: str = "modern" version: bool = False @@ -224,20 +290,14 @@ show_ffmpeg_debug: bool = False quiet: bool = False preview: bool = False - timeline: bool = False - api: str = "1.0.0" - silent_threshold: float = 0.04 - frame_margin: Margin = (6, 6) + margin: Margin = ("0.2", "0.2") silent_speed: float = 99999.0 video_speed: float = 1.0 - min_clip_length: Union[int, str] = 3 - min_cut_length: Union[int, str] = 6 - output_file: Optional[str] = None + min_clip_length: int | str = 3 + min_cut_length: int | str = 6 + output_file: str | None = None help: bool = False - input: List[str] = field(default_factory=list) - - -Args = Type[MainArgs] + input: list[str] = field(default_factory=list) colormap = { diff -Nru auto-editor-22w28a+ds/auto_editor/validate_input.py auto-editor-22w52a+ds/auto_editor/validate_input.py --- auto-editor-22w28a+ds/auto_editor/validate_input.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/validate_input.py 
2022-12-31 17:05:14.000000000 +0000 @@ -1,8 +1,9 @@ +from __future__ import annotations + import os import re import subprocess from platform import system -from typing import List from auto_editor.ffwrapper import FFmpeg from auto_editor.utils.func import get_stdout @@ -65,7 +66,7 @@ return location -def valid_input(inputs: List[str], ffmpeg: FFmpeg, args: Args, log: Log) -> List[str]: +def valid_input(inputs: list[str], ffmpeg: FFmpeg, args: Args, log: Log) -> list[str]: new_inputs = [] for my_input in inputs: @@ -80,6 +81,6 @@ else: if os.path.isdir(my_input): log.error("Input must be a file or a URL, not a directory.") - log.error(f"Could not find file: '{my_input}'") + log.nofile(my_input) return new_inputs diff -Nru auto-editor-22w28a+ds/auto_editor/vanparse.py auto-editor-22w52a+ds/auto_editor/vanparse.py --- auto-editor-22w28a+ds/auto_editor/vanparse.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/vanparse.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,57 +1,43 @@ +from __future__ import annotations + import difflib import re import sys import textwrap +from collections.abc import Iterator from dataclasses import dataclass from shutil import get_terminal_size -from typing import ( - Any, - Callable, - Dict, - Iterator, - List, - Literal, - Optional, - Sequence, - Set, - Tuple, - TypeVar, - Union, -) +from typing import Any, Callable, Literal, TypeVar, Union -import auto_editor from auto_editor.utils.log import Log -T = TypeVar("T", bound=type) +T = TypeVar("T") Nargs = Union[int, Literal["*"]] @dataclass class Required: - names: Sequence[str] + names: tuple[str, ...] nargs: Nargs = "*" type: type = str - choices: Optional[Sequence[str]] = None - help: str = "" - _type: str = "required" + choices: tuple[str, ...] | None = None + metavar: str = "[file ...] [options]" @dataclass class Options: - names: Sequence[str] + names: tuple[str, ...] 
nargs: Nargs = 1 type: type = str flag: bool = False - choices: Optional[Sequence[str]] = None - pool: bool = False + choices: tuple[str, ...] | None = None + metavar: str | None = None help: str = "" - _type: str = "option" @dataclass class OptionText: text: str - _type: str def indent(text: str, prefix: str) -> str: @@ -82,32 +68,47 @@ print("\n".join(wrapped_lines)) -def print_program_help( - reqs: List[Required], args: List[Union[Options, OptionText]] -) -> None: - text = "" - for arg in args: - if isinstance(arg, OptionText): - text += f"\n {arg.text}\n" if arg._type == "text" else "\n" - else: - text += " " + ", ".join(arg.names) + f": {arg.help}\n" - text += "\n" - for req in reqs: - text += " " + ", ".join(req.names) + f": {req.help}\n" - out(text) - - -def get_help_data() -> Dict[str, Dict[str, str]]: - import json - import os.path +def print_program_help(reqs: list[Required], args: list[Options | OptionText]) -> None: + text = f"Usage: {' '.join([req.metavar for req in reqs])}\n\nOptions:" - dirpath = os.path.dirname(os.path.realpath(__file__)) - - with open(os.path.join(dirpath, "help.json")) as fileobj: - data = json.load(fileobj) + width = get_terminal_size().columns - 3 + split = int(width * 0.44) + 3 + indent = " " - assert isinstance(data, dict) - return data + for i, arg in enumerate(args): + if isinstance(arg, OptionText): + if i == 0: + text += f"\n {arg.text}" + indent = " " + else: + text += f"\n\n {arg.text}" + else: + text += "\n" + line = f"{indent}{', '.join(reversed(arg.names))}" + if arg.metavar is not None: + line += f" {arg.metavar}" + + if arg.help == "": + pass + elif len(line) < split: + line = textwrap.fill( + arg.help, + width=width, + initial_indent=f"{line}{' ' * (split - len(line))}", + subsequent_indent=split * " ", + ) + else: + line += "\n" + line += textwrap.fill( + arg.help, + width=width, + initial_indent=split * " ", + subsequent_indent=split * " ", + ) + + text += line + text += "\n\n" + sys.stdout.write(text) def 
to_underscore(name: str) -> str: @@ -115,62 +116,50 @@ return name[:2] + name[2:].replace("-", "_") -def to_key(op: Union[Options, Required]) -> str: +def to_key(op: Options | Required) -> str: """Convert option name to arg key. e.g. --hello-world -> hello_world""" return op.names[0][:2].replace("-", "") + op.names[0][2:].replace("-", "_") -def print_option_help(program_name: str, ns_obj: T, option: Options) -> None: - text = f" {', '.join(option.names)}\n\n" +def print_option_help(program_name: str | None, ns_obj: T, option: Options) -> None: + text = f" {', '.join(option.names)} {'' if option.metavar is None else option.metavar}\n\n" - bar_len = 11 if option.flag: text += " type: flag\n" - bar_len = 15 else: if option.nargs != 1: - _add = f" nargs: {option.nargs}\n" - bar_len = len(_add) - text += _add - - _add = f" type: {option.type.__name__}\n" - bar_len = len(_add) - text += _add + text += f" nargs: {option.nargs}\n" - default: Optional[str] = None + default: str | float | int | tuple | None = None try: default = getattr(ns_obj, to_key(option)) except AttributeError: pass - if default is not None: - if isinstance(default, tuple): - _add = f" default: {','.join(map(str, default))}\n" - else: - _add = f" default: {default}\n" - bar_len = len(_add) - text += _add + if default is not None and isinstance(default, (int, float, str)): + text += f" default: {default}\n" if option.choices is not None: - text += " choices: " + ", ".join(option.choices) + "\n" + text += f" choices: {', '.join(option.choices)}\n" - text += f" {'-' * (bar_len - 5)}\n\n {option.help}\n\n" - data = get_help_data() + from auto_editor.help import data - if option.names[0] in data[program_name]: + if program_name is not None and option.names[0] in data[program_name]: text += indent(data[program_name][option.names[0]], " ") + "\n" + else: + text += f" {option.help}\n\n" out(text) -def get_option(name: str, options: List[Options]) -> Optional[Options]: +def get_option(name: str, options: 
list[Options]) -> Options | None: for option in options: if name in option.names or name in map(to_underscore, option.names): return option return None -def parse_value(option: Union[Options, Required], val: Optional[str]) -> Any: +def parse_value(option: Options | Required, val: str | None) -> Any: if val is None and option.nargs == 1: Log().error(f"{option.names[0]} needs argument.") @@ -180,48 +169,43 @@ Log().error(e) if option.choices is not None and value not in option.choices: - my_choices = ", ".join(option.choices) + choices = ", ".join(option.choices) Log().error( - f"{value} is not a choice for {option.names[0]}\nchoices are:\n {my_choices}" + f"{value} is not a choice for {option.names[0]}\nchoices are:\n {choices}" ) return value class ArgumentParser: - def __init__(self, program_name: str) -> None: + def __init__(self, program_name: str | None): self.program_name = program_name - self.requireds: List[Required] = [] - self.options: List[Options] = [] - self.args: List[Union[Options, OptionText]] = [] + self.requireds: list[Required] = [] + self.options: list[Options] = [] + self.args: list[Options | OptionText] = [] - def add_argument(self, *args: str, **kwargs) -> None: + def add_argument(self, *args: str, **kwargs: Any) -> None: x = Options(args, **kwargs) self.options.append(x) self.args.append(x) - def add_required(self, *args: str, **kwargs) -> None: + def add_required(self, *args: str, **kwargs: Any) -> None: self.requireds.append(Required(args, **kwargs)) def add_text(self, text: str) -> None: - self.args.append(OptionText(text, "text")) - - def add_blank(self) -> None: - self.args.append(OptionText("", "blank")) + self.args.append(OptionText(text)) def parse_args( self, - ns_obj: T, - sys_args: List[str], - macros: Optional[List[Tuple[Set[str], List[str]]]] = None, + ns_obj: type[T], + sys_args: list[str], + macros: list[tuple[set[str], list[str]]] | None = None, ) -> T: - if len(sys_args) == 0: - out(get_help_data()[self.program_name]["_"]) - 
sys.exit() + if len(sys_args) == 0 and self.program_name is not None: + from auto_editor.help import data - if len(sys_args) == 1 and sys_args[0] in ("-v", "-V"): - sys.stdout.write(f"{auto_editor.version} ({auto_editor.__version__})\n") + out(data[self.program_name]["_"]) sys.exit() if macros is not None: @@ -235,7 +219,7 @@ del macros ns = ns_obj() - option_names: List[str] = [] + option_names: list[str] = [] program_name = self.program_name requireds = self.requireds @@ -245,21 +229,19 @@ builtin_help = Options( ("--help", "-h"), flag=True, - help="Show info about this program or option then exit.", + help="Show info about this program or option then exit", ) options.append(builtin_help) args.append(builtin_help) # Figure out command line options changed by user. - used_options: List[Options] = [] + used_options: list[Options] = [] - req_list: List[str] = [] + req_list: list[str] = [] req_list_name = requireds[0].names[0] setting_req_list = requireds[0].nargs != 1 - option_list: Any = [] - op_key: str = "" - oplist_name: Optional[str] = None + oplist_name: str | None = None oplist_coerce: Callable[[str], str] = str i = 0 @@ -268,10 +250,12 @@ option = get_option(arg, options) if option is None: - if oplist_name == "pool": - option_list.append((op_key, arg)) - elif oplist_name is not None: - option_list.append(oplist_coerce(arg)) + if oplist_name is not None: + try: + val = oplist_coerce(arg) + ns.__setattr__(oplist_name, getattr(ns, oplist_name) + [val]) + except (TypeError, ValueError) as e: + Log().error(e) elif requireds and not arg.startswith("--"): if requireds[0].nargs == 1: ns.__setattr__(req_list_name, parse_value(requireds[0], arg)) @@ -293,15 +277,13 @@ ) Log().error(f"Unknown {label}: {arg}") else: - if oplist_name is not None and oplist_name != "pool": - ns.__setattr__(oplist_name, option_list) - - if not option.pool: + if option.nargs != "*": if option in used_options: - Log().error(f"Cannot repeat option {option.names[0]} twice.") + Log().error( + 
f"Option {option.names[0]} may not be used more than once." + ) used_options.append(option) - option_list = [] oplist_name = None oplist_coerce = option.type @@ -312,30 +294,20 @@ print_option_help(program_name, ns_obj, option) sys.exit() - if option.nargs != 1: - if option.pool: - oplist_name = "pool" - op_key = key - ns.pool.append((key, next_arg)) - else: - oplist_name = key - ns.__setattr__(oplist_name, parse_value(option, next_arg)) - elif option.flag: + if option.flag: ns.__setattr__(key, True) - else: + elif option.nargs == 1: ns.__setattr__(key, parse_value(option, next_arg)) i += 1 - i += 1 + else: + oplist_name = key - if oplist_name == "pool": - ns.pool += option_list - elif oplist_name is not None: - ns.__setattr__(oplist_name, option_list) + i += 1 if setting_req_list: ns.__setattr__(req_list_name, req_list) - if ns.help: + if getattr(ns, "help"): print_program_help(requireds, args) sys.exit() diff -Nru auto-editor-22w28a+ds/auto_editor/wavfile.py auto-editor-22w52a+ds/auto_editor/wavfile.py --- auto-editor-22w28a+ds/auto_editor/wavfile.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/auto_editor/wavfile.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,6 +1,9 @@ +from __future__ import annotations + import io import struct -from typing import Literal, Optional, Tuple, Union +import sys +from typing import Literal, Union import numpy as np @@ -14,7 +17,7 @@ def _read_fmt_chunk( fid: io.BufferedReader, en: Endian -) -> Tuple[int, int, int, int, int]: +) -> tuple[int, int, int, int, int]: size: int = struct.unpack(f"{en}I", fid.read(4))[0] if size < 16: @@ -23,7 +26,7 @@ res = struct.unpack(f"{en}HHIIHH", fid.read(16)) bytes_read = 16 - format_tag, channels, fs, _, block_align, bit_depth = res + format_tag, channels, sr, _, block_align, bit_depth = res # underscore is "bitrate" if format_tag == EXTENSIBLE and size >= 18: @@ -55,7 +58,7 @@ # fmt should always be 16, 18 or 40, but handle it just in case _handle_pad_byte(fid, size) - return 
format_tag, channels, fs, block_align, bit_depth + return format_tag, channels, sr, block_align, bit_depth def _read_data_chunk( @@ -65,7 +68,7 @@ bit_depth: int, en: Endian, block_align: int, - data_size: Optional[int], + data_size: int | None, ) -> AudioData: size: int = struct.unpack(f"{en}I", fid.read(4))[0] @@ -122,14 +125,14 @@ _handle_pad_byte(fid, size) -def _read_rf64_chunk(fid: io.BufferedReader) -> Tuple[int, int, Endian]: +def _read_rf64_chunk(fid: io.BufferedReader) -> tuple[int, int, Endian]: # https://tech.ebu.ch/docs/tech/tech3306v1_0.pdf # https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.2088-1-201910-I!!PDF-E.pdf heading = fid.read(12) if heading != b"\xff\xff\xff\xffWAVEds64": - raise ValueError(f"Wrong heading: {repr(heading)}") + raise ValueError(f"Wrong heading: {heading!r}") chunk_size = fid.read(4) @@ -155,13 +158,13 @@ return data_size, file_size, en -def _read_riff_chunk(sig: bytes, fid: io.BufferedReader) -> Tuple[None, int, Endian]: +def _read_riff_chunk(sig: bytes, fid: io.BufferedReader) -> tuple[None, int, Endian]: en: Endian = "<" if sig == b"RIFF" else ">" file_size: int = struct.unpack(f"{en}I", fid.read(4))[0] + 8 form = fid.read(4) if form != b"WAVE": - raise ValueError(f"Not a WAV file. RIFF form type is {repr(form)}.") + raise ValueError(f"Not a WAV file. 
RIFF form type is {form!r}.") return None, file_size, en @@ -171,7 +174,7 @@ fid.seek(1, 1) -def read(filename: str) -> Tuple[int, AudioData]: +def read(filename: str) -> tuple[int, AudioData]: fid = open(filename, "rb") try: @@ -181,7 +184,7 @@ elif file_sig == b"RF64": data_size, file_size, en = _read_rf64_chunk(fid) else: - raise ValueError(f"File format {repr(file_sig)} not supported.") + raise ValueError(f"File format {file_sig!r} not supported.") fmt_chunk_received = False data_chunk_received = False @@ -190,19 +193,15 @@ if not chunk_id: if data_chunk_received: - # EOF but data successfully read - break - else: - raise ValueError("Unexpected end of file.") - elif len(chunk_id) < 4: - if fmt_chunk_received and data_chunk_received: - pass - else: - raise ValueError(f"Incomplete chunk ID: {repr(chunk_id)}") + break # EOF but data successfully read + raise ValueError("Unexpected end of file.") + + elif len(chunk_id) < 4 and not (fmt_chunk_received and data_chunk_received): + raise ValueError(f"Incomplete chunk ID: {chunk_id!r}") if chunk_id == b"fmt ": fmt_chunk_received = True - format_tag, channels, fs, block_align, bit_depth = _read_fmt_chunk( + format_tag, channels, sr, block_align, bit_depth = _read_fmt_chunk( fid, en ) elif chunk_id == b"data": @@ -225,4 +224,73 @@ finally: fid.seek(0) - return fs, data + return sr, data + + +def write(filename: str, sr: int, arr: AudioData) -> None: + # Write empty samples to 'filename' with properties of given arr + fid = open(filename, "wb") + + # Write RIFF WAV Header + fid.write(b"RIFF\x00\x00\x00\x00WAVE") + + # Write RF64 WAV Header + # fid.write(b"RF64\xff\xff\xff\xffWAVEds64") + + # # - chunk_size + # fid.write(b'\x1c\x00\x00\x00') # Value based on bit-depth + '# of samples' + + # # - bw_size, Declaring Little Endian + # fid.write(b'j@|\x00') + # fid.write(b'\x00\x00\x00\x00') + + # Write 'fmt' Header + dkind = arr.dtype.kind + + header_data = b"fmt " + + format_tag = IEEE_FLOAT if dkind == "f" else PCM + channels 
= 1 if arr.ndim == 1 else arr.shape[1] + bit_depth = arr.dtype.itemsize * 8 + bit_rate = sr * (bit_depth // 8) * channels + block_align = channels * (bit_depth // 8) + + fmt_chunk_data = struct.pack( + " 0xFFFFFFFF: + raise ValueError("Data exceeds wave file size limit") + + fid.write(header_data) + + # Write Data Chunk + fid.write(b"data") + fid.write(struct.pack("" or ( + arr.dtype.byteorder == "=" and sys.byteorder == "big" + ): + arr = arr.byteswap() + + # Write the actual data + fid.write(arr.ravel().view("b").data) + + # Write size info + size = fid.tell() + fid.seek(4) + fid.write(struct.pack(" Wed, 04 Jan 2023 08:47:12 +0100 + auto-editor (22w28a+ds-1) unstable; urgency=medium * New upstream version. diff -Nru auto-editor-22w28a+ds/debian/copyright auto-editor-22w52a+ds/debian/copyright --- auto-editor-22w28a+ds/debian/copyright 2022-07-25 13:19:57.000000000 +0000 +++ auto-editor-22w52a+ds/debian/copyright 2023-01-04 07:47:12.000000000 +0000 @@ -5,7 +5,7 @@ Files-Excluded: ae-ffmpeg/ffmpeg resources example.mp4 Files: * -Copyright: 2020-2022 Wyatt Blue +Copyright: 2020-2023 Wyatt Blue License: Unlicense This is free and unencumbered software released into the public domain. . @@ -61,7 +61,7 @@ THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE Files: debian/* -Copyright: 2022 Gürkan Myczko +Copyright: 2022-2023 Gürkan Myczko License: GPL-2+ This package is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by diff -Nru auto-editor-22w28a+ds/.github/ISSUE_TEMPLATE/feature_request.md auto-editor-22w52a+ds/.github/ISSUE_TEMPLATE/feature_request.md --- auto-editor-22w28a+ds/.github/ISSUE_TEMPLATE/feature_request.md 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/.github/ISSUE_TEMPLATE/feature_request.md 2022-12-31 17:05:14.000000000 +0000 @@ -6,12 +6,10 @@ --- -Descript what feature/improvement you would like here. + -Descript how you would use it. 
+ + -Make sure that it is something that YOU need, not that just seems useful. - - -(Make sure you're using the latest version) \ No newline at end of file + diff -Nru auto-editor-22w28a+ds/.github/workflows/build.yml auto-editor-22w52a+ds/.github/workflows/build.yml --- auto-editor-22w28a+ds/.github/workflows/build.yml 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/.github/workflows/build.yml 2022-12-31 17:05:14.000000000 +0000 @@ -19,7 +19,7 @@ build1: strategy: matrix: - python-version: ['3.8'] + python-version: ['3.9'] runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 diff -Nru auto-editor-22w28a+ds/.github/workflows/ff-publish.yaml auto-editor-22w52a+ds/.github/workflows/ff-publish.yaml --- auto-editor-22w28a+ds/.github/workflows/ff-publish.yaml 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/.github/workflows/ff-publish.yaml 2022-12-31 17:05:14.000000000 +0000 @@ -23,13 +23,21 @@ run: | cd ae-ffmpeg mv ae_ffmpeg/Windows ./ + mv ae_ffmpeg/Darwin-arm64 ./ + python setup.py bdist_wheel --plat-name=macosx_10_9_x86_64 + twine upload dist/* + rm -rf dist build + + mv Darwin-arm64 ae_ffmpeg + mv ae_ffmpeg/Darwin-x86_64 ./ + python setup.py bdist_wheel --plat-name=macosx_11_0_arm64 twine upload dist/* rm -rf dist build mv Windows ae_ffmpeg - mv ae_ffmpeg/Darwin ./ + mv ae_ffmpeg/Darwin-arm64 ./ python setup.py bdist_wheel --plat-name=win_amd64 twine upload dist/* diff -Nru auto-editor-22w28a+ds/.gitignore auto-editor-22w52a+ds/.gitignore --- auto-editor-22w28a+ds/.gitignore 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/.gitignore 2022-12-31 17:05:14.000000000 +0000 @@ -2,14 +2,11 @@ *_ALTERED* *_tracks *example -*.xml -*.mlt -*.fcpxml *https* -example.json # Website Files site/src/app/ +site/src/ref site/binaries/ # Python Generated Files diff -Nru auto-editor-22w28a+ds/README.md auto-editor-22w52a+ds/README.md --- auto-editor-22w28a+ds/README.md 2022-07-14 04:19:35.000000000 +0000 +++ 
auto-editor-22w52a+ds/README.md 2022-12-31 17:05:14.000000000 +0000 @@ -25,12 +25,12 @@

Cutting

-Change the **pace** of the edited video by using `--frame-margin`. +Change the **pace** of the edited video by using `--margin`. -`--frame-margin` will including small sections that are next to loud parts. A frame margin of 8 will add up to 8 frames before and 8 frames after the loud part. +`--margin` adds in some "silent" sections to make the editing feel nicer. Setting `--margin` to `0.2sec` will add up to 0.2 seconds in front of and 0.2 seconds behind the original clip. ``` -auto-editor example.mp4 --frame-margin 8 +auto-editor example.mp4 --margin 0.2sec ```

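The margin behavior described above (and the `margin` procedure documented later in the Palet reference) can be sketched on a boolean per-frame array. `apply_margin` below is a hypothetical helper written for illustration, not auto-editor's actual implementation:

```python
import numpy as np

def apply_margin(loud: np.ndarray, left: int, right: int) -> np.ndarray:
    """Extend every True ("loud") run by `left` frames before it and
    `right` frames after it, mirroring what --margin does to clips."""
    out = loud.copy()
    for i in np.flatnonzero(loud):
        out[max(0, i - left) : i] = True      # pad before the loud frame
        out[i + 1 : i + 1 + right] = True     # pad after the loud frame
    return out

# Frames 3-4 are loud; a margin of (1, 2) also keeps frames 2, 5, and 6.
loud = np.array([0, 0, 0, 1, 1, 0, 0, 0], dtype=bool)
kept = apply_margin(loud, 1, 2)
```

The loop is O(n x margin), which is fine for a sketch; a real implementation would shift the whole array instead.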
Set how cuts are made

@@ -39,16 +39,18 @@ For example, edit out motionlessness in a video by setting `--edit motion`. - ``` # cut out sections where percentage of motion is less than 2. auto-editor example.mp4 --edit motion:threshold=2% -# --edit is set to "audio" by default -auto-editor example.mp4 --silent-threshold 4% +# --edit is set to "audio:threshold=4%" by default. +auto-editor example.mp4 + +# Different tracks can be set with different attributes. +auto-editor multi-track.mov --edit "(or audio:stream=0 audio:threshold=10%,stream=1)" -# audio and motion thresholds are toggled independently -auto-editor example.mp4 --edit 'audio:threshold=3% or motion:threshold=6%' +# Different editing methods can be used together. +auto-editor example.mp4 --edit "(or audio:threshold=3% motion:threshold=6%)" ```

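Conceptually, each editing method yields a boolean array with one entry per frame, and `(or ...)` combines those arrays element-wise: a frame is kept if either method keeps it. A minimal sketch, where the two arrays are made-up sample data rather than output from auto-editor:

```python
import numpy as np

# Hypothetical per-frame verdicts from two editing methods:
audio_loud  = np.array([1, 1, 0, 0, 1], dtype=bool)  # e.g. audio:threshold=3%
motion_high = np.array([0, 1, 1, 0, 0], dtype=bool)  # e.g. motion:threshold=6%

# "(or audio:... motion:...)" keeps any frame either method marks as active.
keep = np.logical_or(audio_loud, motion_high)
```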
See what auto-editor cuts out

@@ -67,10 +69,10 @@ auto-editor example.mp4 --export premiere ``` -Similar commands exist for: +Auto-Editor can also export to: -- `--export final-cut-pro` for Final Cut Pro. -- `--export shotcut` for ShotCut. +- Final Cut Pro with `--export final-cut-pro` +- ShotCut with `--export shotcut` Other editors, like Sony Vegas, can understand the `premiere` format. If your favorite editor doesn't, you can use ` --export clip-sequence` which creates many video clips that can be imported and manipulated like normal. @@ -131,10 +133,8 @@ ## Articles - [How to Install Auto-Editor](https://auto-editor.com/installing) - [All the Options (And What They Do)](https://auto-editor.com/options) - - [Supported Media](https://auto-editor.com/supported_media) - - [What is Range Syntax](https://auto-editor.com/range_syntax) - - [Subcommands](https://auto-editor.com/subcommands) - - [Note on GPU Acceleration](https://auto-editor.com/gpu) + - [Docs](https://auto-editor.com/docs) + - [Blog](https://auto-editor.com/blog) ## Copyright Auto-Editor is under the [Public Domain](https://github.com/WyattBlue/auto-editor/blob/master/LICENSE) and includes all directories besides the ones listed below. 
Auto-Editor was created by [these people.](https://github.com/WyattBlue/auto-editor/blob/master/AUTHORS.md) diff -Nru auto-editor-22w28a+ds/setup.py auto-editor-22w52a+ds/setup.py --- auto-editor-22w28a+ds/setup.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/setup.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,5 +1,6 @@ import re -from setuptools import setup, find_packages + +from setuptools import find_packages, setup def pip_version(): @@ -16,7 +17,7 @@ raise ValueError("Unable to find version string.") -with open("README.md", "r") as f: +with open("README.md") as f: long_description = f.read() setup( @@ -36,16 +37,14 @@ keywords="video audio media editor editing processing nonlinear automatic " "silence-detect silence-removal silence-speedup motion-detection", packages=find_packages(), - package_data={"auto_editor": ["help.json"]}, - include_package_data=True, - zip_safe=False, + zip_safe=True, install_requires=[ "numpy>=1.22.0", - "pillow==9.2.0", - "av==9.2.0", - "ae-ffmpeg==1.0.0", + "pillow==9.3.0", + "av==10.0.0", + "ae-ffmpeg==1.1.1", ], - python_requires=">=3.8", + python_requires=">=3.9", classifiers=[ "Topic :: Multimedia :: Sound/Audio", "Topic :: Multimedia :: Video", @@ -58,9 +57,9 @@ "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", - "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", ], diff -Nru auto-editor-22w28a+ds/site/basswood.py auto-editor-22w52a+ds/site/basswood.py --- auto-editor-22w28a+ds/site/basswood.py 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/basswood.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,10 +1,12 @@ +from __future__ import annotations + import os import re import shlex import 
shutil import sys from re import Match -from typing import Any, Callable, NoReturn, Optional +from typing import Any, Callable, NoReturn def error(msg: str) -> NoReturn: @@ -12,15 +14,14 @@ sys.exit(1) -def regex_match(regex: str, text: str) -> Optional[str]: - match = re.search(regex, text) - if match: +def regex_match(regex: str, text: str) -> str | None: + if match := re.search(regex, text): return match.groupdict()["match"] return None def match_liquid(text: str, hook: Callable[[list[str]], str]) -> str: - def search(text: str) -> Optional[Match[str]]: + def search(text: str) -> Match[str] | None: return re.search(r"{{\s[^\}]+\s}}", text) liquid_syntax = search(text) @@ -66,10 +67,7 @@ error(f"Cannot use variable: {var}") return " ".join(args) - new_lines = [] - for line in lines: - new_lines.append(match_liquid(line, comp_hook)) - return new_lines + return [match_liquid(l, comp_hook) for l in lines] def safe_rm_dir(path: str) -> None: @@ -81,35 +79,26 @@ class Site: - def __init__(self, source: str, output_dir: str) -> None: + def __init__(self, prod: bool, source: str, output_dir: str): self.source = source self.output_dir = output_dir self.components = os.path.join(source, "components") - self.production = False + self.production = prod if not os.path.isdir(self.components): error(f"components dir: '{self.components}' not found") - @staticmethod - def _get_filename(path: str) -> str: - return os.path.splitext(os.path.basename(path))[0] - - def _get_components(self) -> dict[str, str]: - components = {} - if os.path.exists(self.components): - for item in os.listdir(self.components): - if item.startswith("."): - continue - comp_name = self._get_filename(item) - with open(os.path.join(self.components, item), "r") as file: - components[comp_name] = file.read() - return components - def make(self) -> None: join = os.path.join - components = self._get_components() + components = {} + for item in os.listdir(self.components): + if item.startswith("."): + continue + 
comp_name = os.path.splitext(os.path.basename(item))[0] + with open(os.path.join(self.components, item)) as file: + components[comp_name] = file.read() - def fix_files(path: str, OUT: str, root: str) -> None: + def fix_files(path: str, OUT: str) -> None: safe_rm_dir(OUT) for item in os.listdir(path): the_file = join(path, item) @@ -118,7 +107,7 @@ if os.path.isdir(the_file): if the_file != self.components: shutil.copytree(the_file, new_file) - fix_files(the_file, new_file, root) + fix_files(the_file, new_file) continue ext = os.path.splitext(the_file)[1] @@ -128,31 +117,22 @@ shutil.copy(the_file, new_file) continue - with open(the_file, "r") as file: + with open(the_file) as file: contents = file.read().splitlines(True) contents = add_components(contents, components) - def remove_html_links(n): - return n.replace(".html", "") - - if ext == ".html": - if self.production: - if "index" not in item: - new_file = os.path.splitext(new_file)[0] - - # remove bad html files with liquid syntax. - if os.path.exists(new_file + ".html"): - os.remove(new_file + ".html") - - # remove .html links - contents = list(map(remove_html_links, contents)) + if self.production and ext == ".html": + # remove .html links + contents = list(map(lambda n: n.replace(".html", ""), contents)) + if "index" not in new_file: + # rename file so new pages don't have .html ext + new_file = new_file.replace(".html", "") with open(new_file, "w") as file: file.writelines(contents) - root = os.path.abspath(self.source) - fix_files(self.source, self.output_dir, root) + fix_files(self.source, self.output_dir) def serve(self, port: int) -> None: import http.server @@ -171,7 +151,7 @@ httpd.serve_forever() except KeyboardInterrupt: pass - print("\nClosing server.") + print("\nclosing server") httpd.server_close() try: diff -Nru auto-editor-22w28a+ds/site/build.py auto-editor-22w52a+ds/site/build.py --- auto-editor-22w28a+ds/site/build.py 2022-07-14 04:19:35.000000000 +0000 +++ 
auto-editor-22w52a+ds/site/build.py 2022-12-31 17:05:14.000000000 +0000 @@ -1,44 +1,54 @@ #!/usr/bin/env python3 -import argparse -import os import re -import shutil import subprocess +import sys +from argparse import ArgumentParser +from os import getenv +from pathlib import Path +from shutil import rmtree import basswood +import paletdoc +from paletdoc import code, proc, syntax, text, value, var +sys.path.insert(0, "/Users/wyattblue/projects/auto-editor") + +from auto_editor import version import auto_editor.vanparse as vanparse from auto_editor.__main__ import main_options -from auto_editor.vanparse import Options, OptionText, Required - -parser = argparse.ArgumentParser() -parser.add_argument("--production", "-p", action="store_true") -args = parser.parse_args() +from auto_editor.vanparse import OptionText -SECRET = "secret" if os.getenv("AE_SECRET") is None else os.getenv("AE_SECRET") +argp = ArgumentParser() +argp.add_argument("--production", "-p", action="store_true") +args = argp.parse_args() +parser = vanparse.ArgumentParser("Auto-Editor") +parser = main_options(parser) -def get_link_name(item: str) -> str: - root, ext = os.path.splitext(item) - - _os = "Windows" - if ext == ".dmg": - _os = "MacOS" - if ext == ".7z": - _os = "Arch-Based" - if ext == ".deb": - _os = "Debian" +secret = Path("src") / getenv("AE_SECRET", "secret") +ref = Path("src") / "ref" / version +ref.mkdir(parents=True, exist_ok=True) + + +def get_link_name(item: Path) -> str: + extname = { + ".dmg": "MacOS", + ".7z": "Arch-Based", + ".deb": "Debian", + ".exe": "Windows", + ".zip": "Windows", + } + _os = extname.get(item.suffix) + if _os is None: + raise ValueError(f"Unknown suffix: {item.suffix}") - version = re.search(r"[0-9]\.[0-9]\.[0-9]", root).group() + version = re.search(r"[0-9]\.[0-9]\.[0-9]", item.stem).group() return f"{version} {_os} Download" -parser = vanparse.ArgumentParser("Auto-Editor") -parser = main_options(parser) - -with open("src/options.html", "w") as file: +with 
open(ref / "options.html", "w") as file: file.write( '{{ comp.header "Options" }}\n' "\n" @@ -46,13 +56,9 @@ '
\n' '
\n' ) - for op in parser.args: if isinstance(op, OptionText): - if op._type == "text": - file.write(f"

{op.text}

\n") - else: - file.write("
\n") + file.write(f"

{op.text}

\n") else: file.write(f"

{op.names[0]}

\n") if len(op.names) > 1: @@ -66,8 +72,89 @@ file.write("
\n
\n\n\n\n") -if os.path.exists("binaries"): - with open(f"./src/{SECRET}/more.html", "w") as file: + +with open(ref / "palet.html", "w") as file: + file.write( + '{{ comp.header "Palet Scripting Reference" }}\n' + "\n" + "{{ comp.nav }}\n" + '
\n' + '
\n' + "

Palet Scripting Reference

\n" + "

This manual describes the complete overview of the palet scripting language.

" + "

Palet is an anagram-acronym of the words: " + "(A)uto-(E)ditor's (T)iny (P)rocedural (L)anguage

" + ) + + def text_to_str(t: text) -> str: + s = "" + for c in t.children: + if isinstance(c, code): + s += f"{c.val}" + elif isinstance(c, var): + s += f'{c.val}' + else: + s += c + return s + + def build_sig(vec: list[str]) -> str: + results = [ + v if v == "..." else f'{v}' for v in vec + ] + return " ".join(results) + + for category, somethings in paletdoc.doc.items(): + file.write(f'

{category}

\n') + for some in somethings: + if isinstance(some, syntax): + _body = build_sig(some.body.split(" ")) + file.write( + f'
\n' + f'

Syntax

\n' + f'

({some.name} {_body})

\n
\n' + f"

{text_to_str(some.summary)}

\n" + ) + if isinstance(some, proc): + rname = some.sig[1] + file.write( + f'
\n' + f'

Procedure

\n' + f'

({some.name} {build_sig(some.sig[0])})' + + ( + "

\n" + if rname == "none" + else f' → {rname}

\n' + ) + ) + for argsig in some.argsig: + name = argsig[0] + sig = argsig[1] + default = argsig[2] if len(argsig) > 2 else None + file.write( + f'

 {name} : {sig}' + + ( + "

\n" + if default is None + else f" = {default}

\n" + ) + ) + + file.write("
\n" f"

{text_to_str(some.summary)}

\n") + if isinstance(some, value): + file.write( + f'
\n' + f'

Value

\n' + f'

{some.name}\n' + f' : {some.sig}

\n
\n' + f"

{text_to_str(some.summary)}

\n" + ) + if isinstance(some, text): + file.write(f"

{text_to_str(some)}

\n") + + +binaries = Path("binaries") +if binaries.exists(): + with open(secret / "more.html", "w") as file: file.write( '{{ comp.header "More Downloads" }}\n' "\n" @@ -76,18 +163,20 @@ '
\n' '

More Downloads

\n' ) - for item in sorted(os.listdir("binaries")): - if os.path.splitext(item)[1] in (".dmg", ".zip", ".7z", ".deb"): - file.write(f'

{get_link_name(item)}

\n') + for item in sorted([x for x in binaries.iterdir() if x.is_file()]): + if item.suffix in (".dmg", ".zip", ".7z", ".deb"): + file.write( + f'

{get_link_name(item)}

\n' + ) file.write("
\n
\n\n\n\n") -site = basswood.Site(source="src", output_dir="auto-editor") -site.production = args.production +site = basswood.Site(args.production, source="src", output_dir="auto-editor") site.make() if args.production: - SECRET2 = os.getenv("AE_SECRET2") + SECRET2 = getenv("AE_SECRET2") + assert SECRET2 is not None subprocess.run(["rsync", "-rtvzP", "binaries/", SECRET2]) subprocess.run( ["rsync", "-rtvzP", "auto-editor/", "root@auto-editor.com:/var/www/auto-editor"] @@ -95,10 +184,11 @@ else: site.serve(port=8080) -print("Removing temporary files.") +print("removing auto-generated files") try: - shutil.rmtree("./auto-editor") + rmtree("./auto-editor") except FileNotFoundError: pass -os.remove("./src/options.html") -os.remove(f"./src/{SECRET}/more.html") + +(secret / "more.html").unlink(missing_ok=True) +print("done") diff -Nru auto-editor-22w28a+ds/site/paletdoc.py auto-editor-22w52a+ds/site/paletdoc.py --- auto-editor-22w28a+ds/site/paletdoc.py 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/paletdoc.py 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,623 @@ +from __future__ import annotations +from dataclasses import dataclass + + +@dataclass +class code: + val: str + + +@dataclass +class var: + val: str + + +@dataclass +class text: + children: list[str | code] + + +@dataclass +class proc: + name: str + sig: tuple[list[str], str] + argsig: list[tuple[str, str] | tuple[str, str, str]] + summary: text + + +@dataclass +class syntax: + name: str + body: str + summary: text + + +@dataclass +class value: + name: str + sig: str + summary: text + + +doc: dict[str, list[proc | value | syntax | text]] = { + "Basic Syntax": [ + syntax( + "define", + "id expr", + text(["Set ", var("id"), " to the result of ", var("expr"), "."]), + ), + syntax( + "set!", + "id expr", + text([ + "Set the result of ", var("expr"), " to ", var("id"), " if ", + var("id"), " is already defined. 
If ", var("id"), + " is not defined, raise an error.", + ]), + ), + syntax( + "if", + "test-expr then-expr else-expr", + text([ + "Evaluates ", var("test-expr"), ". If ", code("#t"), " then evaluate ", + var("then-expr"), " else evaluate ", var("else-expr"), + ". An error will be raised if evaluated ", + var("test-expr"), " is not a ", code("boolean?"), ".", + ]), + ), + syntax( + "when", + "test-expr body", + text([ + "Evaluates ", var("test-expr"), ". If ", code("#t"), " then evaluate ", + var("body"), " else do nothing. An error will be raised if evaluated ", + var("test-expr"), " is not a ", code("boolean?"), ".", + ]), + ), + ], + "Loops": [ + syntax( + "for", + "([id seq-expr] ...) body", + text([ + "Loop over ", var("seq-expr"), " by setting the variable ", var("id"), + " to the nth item of ", var("seq-expr"), " and evaluating ", + var("body"), ".", + ]), + ), + syntax( + "for/vector", + "([id seq-expr] ...) body", + text(["Like ", code("for"), + " but returns a vector with the last evaluated elements of ", + code("body"), ".", + ]), + ), + ], + "Equality": [ + proc( + "equal?", + (["v1", "v2"], "boolean?"), + [("v1", "any/c"), ("v2", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v1"), " and ", var("v2"), + " are the same type and have the same value, ", code("#f"), " otherwise." + ]), + ), + proc( + "eq?", + (["v1", "v2"], "boolean?"), + [("v1", "any/c"), ("v2", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v1"), " and ", var("v2"), + " refer to the same object in memory, ", code("#f"), " otherwise." + ]), + ), + ], + "Booleans": [ + proc( + "boolean?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is ", code("#t"), " or ", + code("#f"), ", ", code("#f"), " otherwise." 
+ ]), + ), + value("true", "boolean?", text(["An alias for ", code("#t"), "."])), + value("false", "boolean?", text(["An alias for ", code("#f"), "."])), + ], + "Number Types": [ + proc( + "number?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is a number, ", code("#f"), + " otherwise.", + ]), + ), + proc( + "real?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is a real number, ", + code("#f"), " otherwise.", + ]), + ), + proc( + "integer?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is an integer, ", + code("#f"), " otherwise.", + ]), + ), + proc( + "nonnegative-integer?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is an integer and ", + var("v"), " is greater than ", code("-1"), ", ", code("#f"), + " otherwise.", + ]), + ), + proc( + "zero?", + (["v"], "boolean?"), + [("v", "real?")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is equal to ", code("0"), + ", ", code("#f"), " otherwise." + ]), + ), + proc( + "positive?", + (["v"], "boolean?"), + [("v", "real?")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is greater than ", + code("0"), ", ", code("#f"), " otherwise." + ]), + ), + proc( + "negative?", + (["v"], "boolean?"), + [("v", "real?")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is less than ", code("0"), + ", ", code("#f"), " otherwise." + ]), + ), + ], + "Numbers": [ + proc( + "+", + (["z", "..."], "number?"), + [("z", "number?")], + text( + [ + "Return the sum of ", + var("z"), + "s. Add from left to right. If no arguments are provided, the result is ", + code("0"), + ".", + ] + ), + ), + proc( + "-", + (["z", "w", "..."], "number?"), + [("z", "number?"), ("w", "number?")], + text( + [ + "When no ", + var("w"), + "s are applied, return ", + code("(- 0 z)"), + ". 
Otherwise, return the subtraction of ", + var("w"), + "s of ", + var("z"), + ".", + ] + ), + ), + proc( + "*", + (["z", "..."], "number?"), + [("z", "number?")], + text( + [ + "Return the product of ", + var("z"), + "s. If no ", + var("z"), + "s are supplied, the result is ", + code("1"), + ".", + ] + ), + ), + proc( + "/", + (["z", "w", "..."], "number?"), + [("z", "number?"), ("w", "number?")], + text( + [ + "When no ", + var("w"), + "s are applied, return ", + code("(/ 1 z)"), + ". Otherwise, return the division of ", + var("w"), + "s of ", + var("z"), + ".", + ] + ), + ), + proc( + "mod", + (["n", "m"], "integer?"), + [("n", "integer?"), ("m", "integer?")], + text(["Return the modulo of ", var("n"), " and ", var("m"), "."]), + ), + proc( + "modulo", + (["n", "m"], "real?"), + [("n", "real?"), ("m", "real?")], + text(["Clone of ", code("mod"), "."]), + ), + proc( + "add1", + (["z"], "number?"), + [("z", "number?")], + text(["Returns ", code("(+ z 1)"), "."]), + ), + proc( + "sub1", + (["z"], "number?"), + [("z", "number?")], + text(["Returns ", code("(- z 1)"), "."]), + ), + proc( + "=", + (["z", "w", "..."], "boolean?"), + [("z", "number?"), ("w", "number?")], + text( + [ + "Returns ", + code("#t"), + " if all arguments are numerically equal, ", + code("#f"), + " otherwise.", + ] + ), + ), + proc( + "<", + (["x", "y"], "boolean?"), + [("x", "real?"), ("y", "real?")], + text( + [ + "Returns ", + code("#t"), + " if ", + var("x"), + " is less than ", + var("y"), + ", ", + code("#f"), + " otherwise.", + ] + ), + ), + proc( + "<=", + (["x", "y"], "boolean?"), + [("x", "real?"), ("y", "real?")], + text([ + "Returns ", code("#t"), " if ", var("x"), " is less than or equal to ", + var("y"), ", ", code("#f"), " otherwise.", + ]), + ), + proc( + ">", + (["x", "y"], "boolean?"), + [("x", "real?"), ("y", "real?")], + text([ + "Returns ", code("#t"), " if ", var("x"), " is greater than ", + code("y"), ", ", code("#f"), " otherwise.", + ]), + ), + proc( + ">=", + (["x", 
"y"], "boolean?"), + [("x", "real?"), ("y", "real?")], + text([ + "Returns ", code("#t"), " if ", var("x"), + " is greater than or equal to ", var("y"), ", ", code("#f"), + " otherwise.", + ]), + ), + proc( + "abs", + (["x"], "real?"), + [("x", "real?")], + text(["Returns the absolute value of ", var("x"), "."]), + ), + proc( + "max", + (["x", "..."], "real?"), + [("x", "real?")], + text(["Returns largest value of the ", var("x"), "s."]), + ), + proc( + "min", + (["x", "..."], "real?"), + [("x", "real?")], + text(["Returns smallest value of the ", var("x"), "s."]), + ), + ], + "Vectors": [ + proc( + "vector?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is a vector, ", code("#f"), + " otherwise.", + ]), + ), + proc( + "vector", + (["v", "..."], "vector?"), + [("v", "any/c")], + text([ + "Returns a new vector with the ", var("v"), + " args filled with its slots in order.", + ]), + ), + proc( + "make-vector", + (["size", "[v]"], "vector?"), + [("size", "nonnegative-integer?"), ("v", "any/c", "0")], + text([ + "Returns a new vector with ", var("size"), " slots, all filled with ", + var("v"), "s.", + ]), + ), + proc( + "vector-pop!", + (["vec"], "any/c"), + [("vec", "vector?")], + text(["Remove the last element of ", var("vec"), " and return it."]), + ), + proc( + "vector-add!", + (["vec", "v"], "none"), + [("vec", "vector?"), ("v", "any/c")], + text(["Append ", var("v"), " to the end of ", var("vec"), "."]), + ), + proc( + "vector-set!", + (["vec", "pos", "v"], "none"), + [("vec", "vector?"), ("pos", "integer?"), ("v", "any/c")], + text(["Set slot ", var("pos"), " of ", var("vec"), " to ", var("v"), "."]), + ), + proc( + "vector-extend!", + (["vec", "vec2", "..."], "none"), + [("vec", "vector?"), ("vec2", "vector?")], + text([ + "Append all elements of ", var("vec2"), " to the end of ", var("vec"), + " in order.", + ]), + ), + ], + "Arrays": [ + proc( + "array?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ 
+ "Returns ", code("#t"), " if ", var("v"), " is an array, ", + code("#f"), " otherwise.", + ]), + ), + proc( + "array", + (["dtype", "v", "..."], "array?"), + [("dtype", "symbol?"), ("v", "any/c")], + text( + [ + "Returns a freshly allocated array with ", + var("dtype"), + " as its datatype and the ", + var("v"), + " args as its values filled in order.", + ] + ), + ), + proc( + "array-splice!", + (["arr", "v", "[start]", "[stop]"], "array?"), + [ + ("arr", "array?"), + ("v", "real?"), + ("start", "integer?", "0"), + ("stop", "integer?", "(length arr)"), + ], + text([ + "Modify ", var("arr"), " by setting ", var("start"), " to ", + var("stop"), "to ", var("v"), ".", + ]), + ), + proc( + "margin", + (["left", "[right]", "arr"], "bool-array?"), + [ + ("left", "integer?"), + ("right", "integer?", "left"), + ("arr", "bool-array?"), + ], + text([ + "Returns a new ", code("bool-array?"), " with ", var("left"), " and", + var("right"), " margin applied." + ]), + ), + ], + "Pairs and Lists": [ + proc( + "pair?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is a pair, ", + code("#f"), " otherwise.", + ]), + ), + proc( + "null?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), " is an empty list, ", + code("#f"), " otherwise.", + ]), + ), + proc( + "cons", + (["a", "d"], "pair?"), + [("a", "any/c"), ("d", "any/c")], + text([ + "Returns a newly allocated pair where the first item is set to ", + var("a"), " and the second item set to ", var("d"), ".", + ]), + ), + proc( + "car", + (["p"], "any/c?"), + [("p", "pair?")], + text([ + "Returns the first element of the pair ", var("p"), ".", + ]), + ), + proc( + "cdr", + (["p"], "any/c?"), + [("p", "pair?")], + text([ + "Returns the second element of the pair ", var("p"), ".", + ]), + ), + proc( + "list?", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " if ", var("v"), + " is an empty list or a pair 
whose second element is a list.",
+ ]),
+ ),
+ proc(
+ "list",
+ (["v", "..."], "list?"),
+ [("v", "any/c")],
+ text(["Returns a list with ", var("v"), " in order."]),
+ ),
+ proc(
+ "list-ref",
+ (["lst", "pos"], "any/c"),
+ [("lst", "list?"), ("pos", "nonnegative-integer?")],
+ text([
+ "Returns the element of ", var("lst"), " at position ", var("pos"), ".",
+ ]),
+ ),
+ ],
+ "Ranges": [
+ proc(
+ "range?",
+ (["v"], "boolean?"),
+ [("v", "any/c")],
+ text([
+ "Returns ", code("#t"), " if ", var("v"), " is a range object, ",
+ code("#f"), " otherwise.",
+ ]),
+ ),
+ proc(
+ "in-range",
+ (["start", "stop", "[step]"], "range?"),
+ [
+ ("start", "integer?"),
+ ("stop", "integer?"),
+ ("step", "integer?", "1"),
+ ],
+ text(["Returns a range object."]),
+ ),
+ ],
+ "Generic Sequences": [
+ proc(
+ "iterable?",
+ (["v"], "boolean?"),
+ [("v", "any/c")],
+ text([
+ "Returns ", code("#t"), " if ", var("v"),
+ " is a vector, array, string, pair, or range, ", code("#f"),
+ " otherwise.",
+ ]),
+ ),
+ proc(
+ "length",
+ (["seq"], "nonnegative-integer?"),
+ [("seq", "iterable?")],
+ text(["Returns the length of ", var("seq"), "."]),
+ ),
+ proc(
+ "ref",
+ (["seq", "pos"], "any/c"),
+ [("seq", "iterable?"), ("pos", "integer?")],
+ text([
+ "Returns the element of ", var("seq"), " at position ",
+ var("pos"), ", where the first element is at position ", code("0"),
+ ". For sequences other than pair?, negative positions are allowed.",
+ ]),
+ ),
+ proc(
+ "slice",
+ (["seq", "start", "[stop]", "[step]"], "iterable?"),
+ [
+ ("seq", "iterable?"),
+ ("start", "integer?"),
+ ("stop", "integer?", "(length seq)"),
+ ("step", "integer?", "1"),
+ ],
+ text([
+ "Returns the elements of ", var("seq"), " from ", var("start"),
+ " inclusively to ", var("stop"), " exclusively. 
If ", var("step"), + " is negative, then ", var("stop"), " is inclusive and ", + var("start"), " is exclusive.", + ]), + ), + proc( + "reverse", + (["seq"], "iterable?"), + [("seq", "iterable?")], + text(["Returns ", var("seq"), " in reverse order."]), + ), + ], + "Contracts": [ + proc( + "any/c", + (["v"], "boolean?"), + [("v", "any/c")], + text([ + "Returns ", code("#t"), " regardless of the value of ", var("v"), ".", + ]), + ), + ], +} diff -Nru auto-editor-22w28a+ds/site/src/blog/index.html auto-editor-22w52a+ds/site/src/blog/index.html --- auto-editor-22w28a+ds/site/src/blog/index.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/blog/index.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,13 @@ +{{ comp.header "Auto-Editor - Blog Posts" }} + +{{ comp.nav }} +
+
+

Blog Posts

+
+

The --source Option October 16, 2022

+

Why It's Time to Remove --silent-threshold July 26, 2022

+
+
+ + diff -Nru auto-editor-22w28a+ds/site/src/blog/silent-threshold.html auto-editor-22w52a+ds/site/src/blog/silent-threshold.html --- auto-editor-22w28a+ds/site/src/blog/silent-threshold.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/blog/silent-threshold.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,26 @@ +{{ comp.header "Why It's Time to Remove --silent-threshold" }} + +{{ comp.blog-nav }} +
+
+ +

Why It's Time To Remove --silent-threshold

+

Author: WyattBlue

+

Date: July 26, 2022

+

TL;DR use --edit audio:threshold=NUM instead.

+
+

--silent-threshold has been with us since the literal first day of auto-editor's existence. However, its continued use has become problematic. It's finally time to put this aging option to rest.

+

Reason 1: It's Too Ambiguous

+

Every threshold controls what is considered silent and loud, yet --silent-threshold only controls the audio threshold. This made sense back when auto-editor had only one way of automatically editing files. But now it's not so obvious that it still only controls the audio threshold, or, really, that --silent-threshold only controls the default value of the threshold attribute of --edit, which leads us to...

+ +

Reason 2: --edit's Syntax Is So Much Nicer

+

With --edit's syntax, you can edit multiple tracks with different thresholds, something that was never possible with --silent-threshold.

+
auto-editor multi-track.mov --edit 'audio:stream=0,threshold=0.04 or audio:stream=1,threshold=0.09'
+

It's also much clearer how threshold impacts the editing process, while --silent-threshold is much more opaque about how exactly it works and how it interacts with --edit. Why bother explaining how --silent-threshold interacts with --edit to every user when --edit is better in every way?

+
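To make "threshold" concrete: the audio edit method boils down to comparing each frame's volume, normalized against the loudest frame, to the threshold. The sketch below is illustrative only; the name loud_frames is made up, and this is not auto-editor's actual implementation.

```python
# Sketch of what an audio threshold decides; illustrative only,
# not auto-editor's actual implementation.
def loud_frames(volumes: list[float], threshold: float = 0.04) -> list[bool]:
    # Normalize each frame's volume against the loudest frame, then
    # mark frames at or above the threshold (4% is the default) as loud.
    peak = max(volumes, default=0.0)
    if peak == 0.0:
        return [False] * len(volumes)
    return [v / peak >= threshold for v in volumes]
```

Under --edit, each audio stream can run this comparison with its own threshold, which is exactly what the single global --silent-threshold could never express.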

Reason 3: It’s Used Surprisingly Little Out in the Wild

+

--silent-threshold and its alias -t are used surprisingly little in scripts. I look at GitHub's Dependency Graph and watch various YouTube videos showcasing auto-editor to gauge how options are used in the real world, and in all the usages I can find, it's never mentioned at all. This might be because 4% is a very good default that doesn't need changing, because how audio threshold works is unintuitive and therefore never explained or used, or both. Whatever the case, this lack of usage means --silent-threshold can be removed without causing annoyance.

+
+

Appendix: Why Not Create a Macro?

+

When I removed the --export-to-premiere and --export-to-final-cut-pro options, I used a 'macro' that essentially silently treats '--export-to-premiere' like '--export premiere'. This allowed users to keep writing the option in the "old style", blissfully unaware of any changes, even though the option and its help text technically don't exist anymore. The reason I didn't use a similar strategy for --silent-threshold is that the script makers who feel the need to change the silent threshold are exactly the people who would benefit most from the flexibility of --edit. + +

+
+ +

The --source Option

+

Author: WyattBlue

+

Date: October 16, 2022

+ +
+

What Does the --source Option Do?

+ +

Auto-Editor allows you to create timeline objects from your files. However, typing or dragging in the file path every time an object is declared is a pain. What --source does is map a path to a short, reusable label. You can then use that label to reference the file without typing out its path.

+ +
# Map a path to the label "dog"
+--source dog:/Users/wyattblue/Downloads/dog-123.png
+
+ +

Right now, src only accepts source names and not the file path directly. This might change in the future.

+

Also, user-defined labels cannot:

+
    +
  • Be longer than 55 characters
  • +
  • Start with a digit or a dash (0 1 2 3 4 5 6 7 8 9 -)
  • +
  • Contain , = . : ; ( ) / \ [ ] { }' " | # < > & ^ % $ _ @ anywhere
  • +
  • Contain an invalid UTF-8 character
  • +
  • Contain the space character
  • +
+ +

This is partly because of limitations on file path names, *cough* Windows *cough*, but also to make parsing easy and forwards compatible.

+
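The label rules above can be sketched as a small validator. This is an illustrative sketch only; the name valid_label and the exact rule set are assumptions for the example, not auto-editor's actual parsing code.

```python
# Characters the rules above list as forbidden anywhere in a label.
FORBIDDEN = set(",=.:;()/\\[]{}'\"|#<>&^%$_@")

def valid_label(label: str) -> bool:
    # Hypothetical validator mirroring the listed rules.
    if not label or len(label) > 55:
        return False  # empty or longer than 55 characters
    if label[0].isdigit() or label[0] == "-":
        return False  # cannot start with a digit or a dash
    # No forbidden characters and no whitespace anywhere.
    return not any(ch in FORBIDDEN or ch.isspace() for ch in label)
```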

The labels are described as "user defined" because paths given to auto-editor without a label are automatically assigned the names 0, 1, 2, and so on. + +

# How you would use --source in a real situation
+auto-editor movie.mp4 movie2.mp4 --source dog:/Users/wyattblue/Downloads/dog-123.png \
+--add image:0,30,src=dog
+
+ +
+
+ + diff -Nru auto-editor-22w28a+ds/site/src/components/blog-nav.html auto-editor-22w52a+ds/site/src/components/blog-nav.html --- auto-editor-22w28a+ds/site/src/components/blog-nav.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/components/blog-nav.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,16 @@ + diff -Nru auto-editor-22w28a+ds/site/src/components/header.html auto-editor-22w52a+ds/site/src/components/header.html --- auto-editor-22w28a+ds/site/src/components/header.html 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/components/header.html 2022-12-31 17:05:14.000000000 +0000 @@ -6,7 +6,7 @@ - + diff -Nru auto-editor-22w28a+ds/site/src/components/index_header.html auto-editor-22w52a+ds/site/src/components/index_header.html --- auto-editor-22w28a+ds/site/src/components/index_header.html 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/components/index_header.html 2022-12-31 17:05:14.000000000 +0000 @@ -6,7 +6,7 @@ - + diff -Nru auto-editor-22w28a+ds/site/src/docs/gpu.html auto-editor-22w52a+ds/site/src/docs/gpu.html --- auto-editor-22w28a+ds/site/src/docs/gpu.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/docs/gpu.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,39 @@ +{{ comp.header "Auto-Editor - GPU Acceleration" }} + +{{ comp.nav }} +
+
+

GPU Acceleration

+

Does Auto-Editor support GPU acceleration?

+

Yes. Enable it by linking a version of FFmpeg with GPU acceleration to Auto-Editor and setting the appropriate video codec.

+

Use the --my-ffmpeg or --ffmpeg-location option for linking.

+

How do I enable GPU acceleration on FFmpeg?

+

Compile FFmpeg with the appropriate flags and follow the relevant instructions. +

+

+

Remember to set the export codec in auto-editor: auto-editor --video-codec

+

Note that the resulting build is legally undistributable.

+

Will enabling GPU acceleration make auto-editor go faster?

+

If you want to export to a certain codec that is compatible with your GPU, yes, in some cases, it will go noticeably faster, albeit with some slight quality loss.

+

However, in most other cases, GPU acceleration won't do anything, since analyzing media and creating new media files are mostly CPU-bound. Given how relatively complex enabling GPU acceleration is, it is not recommended for most users.

+
+

Further Reading

+ +
+
+ + diff -Nru auto-editor-22w28a+ds/site/src/docs/index.html auto-editor-22w52a+ds/site/src/docs/index.html --- auto-editor-22w28a+ds/site/src/docs/index.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/docs/index.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,16 @@ +{{ comp.header "Auto-Editor - Docs" }} + +{{ comp.nav }} +
+ +
+ + diff -Nru auto-editor-22w28a+ds/site/src/docs/range_syntax.html auto-editor-22w52a+ds/site/src/docs/range_syntax.html --- auto-editor-22w28a+ds/site/src/docs/range_syntax.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/docs/range_syntax.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,32 @@ +{{ comp.header "Auto-Editor - Range Syntax" }} + +{{ comp.nav }} +
+
+

Range Syntax

+

Range syntax is a common format used by the --add-in, --cut-out, --mark-as-loud, --mark-as-silent, and --set-speed-for-range options.

+

It describes a range in time based on frames starting from 0, the first frame.

+
# Cut out 1 frame
+auto-editor example.mp4 --cut-out 0,1
+
+# Cut out 60 frames
+auto-editor example.mp4 --cut-out 10,70
+
+# Cuts out no frames
+auto-editor example.mp4 --cut-out 0,0
+

Any number greater than or equal to 0 can be used. The constants `start` and `end` can also be used.

+
auto-editor example.mp4 --mark-as-loud 72,end
+

You can describe these ranges using seconds instead of frames.

+
auto-editor example.mp4 --cut-out 1secs,10secs
+

s, sec, secs, second, and seconds can be used interchangeably. Explicit frame units (f, frame, frames) may also be used. Constants do not accept units.

+

You can also use negative numbers (e.g. -60,end selects the last 60 frames, and -10s,-5s selects from 10 seconds before the end to 5 seconds before the end).

+
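The endpoint rules above (plain frames, second units, negatives, and the start/end constants) can be sketched as follows. resolve is a hypothetical helper for illustration, not auto-editor's actual parser.

```python
import re

def resolve(point: str, total: int, fps: float = 30.0) -> int:
    # Resolve one range endpoint to an absolute frame index.
    # Illustrative sketch only; auto-editor's real parser may differ.
    if point == "start":
        return 0
    if point == "end":
        return total
    m = re.fullmatch(r"(-?\d+(?:\.\d+)?)(s|secs?|seconds?)", point)
    if m:
        # Second units are converted using the timeline's fps.
        frames = round(float(m.group(1)) * fps)
    else:
        # Plain numbers and explicit frame units (f, frame, frames).
        frames = int(re.fullmatch(r"(-?\d+)(f|frames?)?", point).group(1))
    # Negative values count back from the end.
    return frames + total if frames < 0 else frames
```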

Range syntax has a nargs value of '*', meaning it can take any number of ranges.

+
auto-editor example.mp4 --cut-out 0,20 45,60 234,452
+
+

The --set-speed-for-range option has an additional argument for speed. The command:

+
auto-editor example.mp4 --set-speed-for-range 2,0,30
+

means set the speed of the video to twice as fast (2x) from the 0th frame to the 30th frame.

+
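The three comma-separated fields can be unpacked like this. parse_speed_range is a made-up name for illustration; auto-editor's real parsing also accepts constants and units, which this sketch ignores.

```python
def parse_speed_range(arg: str) -> tuple[float, int, int]:
    # "--set-speed-for-range 2,0,30" -> (speed, start, stop)
    speed, start, stop = arg.split(",")
    return float(speed), int(start), int(stop)
```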
+
+ + diff -Nru auto-editor-22w28a+ds/site/src/docs/subcommands.html auto-editor-22w52a+ds/site/src/docs/subcommands.html --- auto-editor-22w28a+ds/site/src/docs/subcommands.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/docs/subcommands.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,86 @@ +{{ comp.header "Auto-Editor - Subcommands" }} + +{{ comp.nav }} +
+
+

Subcommands

+

Subcommands are programs that have their own options, separate from the main auto-editor program. They typically serve an auxiliary function.

+

Examples:

+
auto-editor info example.mp4
+auto-editor grep "pattern" file_with_subtitles.mp4
+
+

Subcommands can also be called by themselves with the ae prefix:

+
aeinfo example.mp4
+aegrep "pattern" file_with_subtitles.mp4
+
+

Info

+

The info subcommand accepts filenames and outputs basic media info like duration, fps, etc.

+
auto-editor info example.mp4 resources/man_on_green_screen.gif
+
+file: example.mp4
+ - video tracks: 1
+   - Track #0
+     - codec: h264
+     - pix_fmt: yuv420p
+     - time_base: 1/30000
+     - fps: 30
+     - resolution: 1280x720 (16:9)
+     - bitrate: 240 kb/s
+ - audio tracks: 1
+   - Track #0
+     - codec: aac
+     - samplerate: 48000
+     - bitrate: 317 kb/s
+ - container:
+   - duration: 00:00:42.45
+   - bitrate: 569 kb/s
+
+file: resources/man_on_green_screen.gif
+ - video tracks: 1
+   - Track #0
+     - codec: gif
+     - pix_fmt: bgra
+     - time_base: 1/100
+     - fps: 30
+     - resolution: 1280x720 (16:9)
+ - container:
+   - duration: 00:00:24.41
+   - bitrate: 1649 kb/s
+
+

You can also check for additional information with options.

+
auto-editor info example.mp4 --has-vfr
+
+file: example.mp4
+ - video tracks: 1
+   - Track #0
+     - codec: h264
+     - pix_fmt: yuv420p
+     - time_base: 1/30000
+     - fps: 30
+     - resolution: 1280x720 (16:9)
+     - bitrate: 240 kb/s
+ - audio tracks: 1
+   - Track #0
+     - codec: aac
+     - samplerate: 48000
+     - bitrate: 317 kb/s
+ - container:
+   - duration: 00:00:42.45
+   - bitrate: 569 kb/s
+
+

Grep

+

The grep subcommand searches and displays regex matches in media subtitles.

+
# search for the pattern "the" in the movie.mkv subtitles.
+auto-editor grep the movie.mkv
+

List of all subcommands currently available:

+
    +
  • info
  • +
  • grep
  • +
  • levels
  • +
  • subdump
  • +
  • test (for devs, cannot be called with an ae prefix)
  • +
+
+
+ + diff -Nru auto-editor-22w28a+ds/site/src/docs/supported_media.html auto-editor-22w52a+ds/site/src/docs/supported_media.html --- auto-editor-22w28a+ds/site/src/docs/supported_media.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/docs/supported_media.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,22 @@ +{{ comp.index_header "Auto-Editor - Supported Media" }} + +{{ comp.nav }} +
+
+

Supported Media

+

What is allowed

+
    +
  • Video files with or without audio, subtitles, embedded images
  • +
  • Audio files (with or without multiple tracks)
  • +
+

What isn't allowed

+
    +
  • Bare subtitles (Auto-Editor doesn't know what to do with them)
  • +
  • Media greater than 24 hours may work but is not officially supported
  • +
+

Footnotes

+

Auto-Editor uses the terms "track" and "stream" interchangeably.

+
+
+ + diff -Nru auto-editor-22w28a+ds/site/src/docs/windows.html auto-editor-22w52a+ds/site/src/docs/windows.html --- auto-editor-22w28a+ds/site/src/docs/windows.html 1970-01-01 00:00:00.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/docs/windows.html 2022-12-31 17:05:14.000000000 +0000 @@ -0,0 +1,34 @@ +{{ comp.header "Auto-Editor - Using Auto-Editor on Windows" }} + +{{ comp.nav }} +
+
+

Using Auto-Editor on Windows

+

Recommended Shell and Terminal

+

Use the Windows Terminal app with PowerShell when running auto-editor commands. Using the CMD shell isn't impossible, but you'll have to write commands a little differently than the docs show.

+

Shell vs. Terminal

+

The shell is the program that actually does the work, while a terminal is a GUI program that displays what the shell is doing and is responsible for tasks like scrolling history and copy/paste.

+ +

CMD is the older shell that Windows keeps around for compatibility with old programs, while PowerShell is the newer shell that works with the examples in the docs. +

Traditionally, Windows has kept its terminals and shells tightly coupled. The CMD shell could only be accessed by the CMD terminal, and the PowerShell... well... shell could only be used by the PowerShell terminal. Windows Terminal, however, is independent of any one shell and supports both.

+ +

Running Auto-Editor on Many Files

+

Auto-Editor doesn't have an option that batch processes files, but you can achieve the same effect with a simple PowerShell script.

+

+    # Save with the ".ps1" extension
+    $files = "C:\Users\WyattBlue\MyDir\"
+    foreach ($f in Get-ChildItem $files) {
+      # $f.FullName is the file's absolute path
+      auto-editor $f.FullName
+    }
+    
+
+
+ + diff -Nru auto-editor-22w28a+ds/site/src/gpu.html auto-editor-22w52a+ds/site/src/gpu.html --- auto-editor-22w28a+ds/site/src/gpu.html 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/gpu.html 1970-01-01 00:00:00.000000000 +0000 @@ -1,39 +0,0 @@ -{{ comp.header "Auto-Editor - GPU Acceleration" }} - -{{ comp.nav }} -
-
-

GPU Acceleration

-

Does Auto-Editor support GPU acceleration?

-

Yes, enable it by linking a version of FFmpeg with GPU acceleration to Auto-Editor and setting the appropriate video codec

-

Use --my-ffmpeg or --ffmpeg-location option for linking.

-

How do I enable GPU acceleration on FFmpeg?

-

Compile FFmpeg with the appropriate flags and follow the relevant instructions. -

-

-

Remember to set the export codec in auto-editor. auto-editor --video-codec

-

Note that the resulting build is legally undistributable.

-

Will enabling GPU acceleration make auto-editor go faster?

-

If you want to export to a certain codec that is compatible with your GPU, yes, in some cases, it will go noticeably faster, albeit with some slight quality loss.

-

However, in most other cases, GPU acceleration won't do anything since analyze and creating new media files are mostly CPU bound. Given how relatively complex enabling GPU acceleration is, it is not recommend for most users.

-
-

Further Reading

- -
-
- - diff -Nru auto-editor-22w28a+ds/site/src/index.html auto-editor-22w52a+ds/site/src/index.html --- auto-editor-22w28a+ds/site/src/index.html 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/index.html 2022-12-31 17:05:14.000000000 +0000 @@ -15,9 +15,9 @@

See Installing for additional information.

Cutting

-

Change the pace of a video by using --frame-margin. -

--frame-margin will add small sections that are next to loud sections. A frame margin of 8 will add up to 8 frames before and 8 frames after.

-
auto-editor example.mp4 --frame-margin 8
+

Change the pace of the edited video by using --margin. +

--margin adds in some "silent" sections to make the editing feel nicer. Setting --margin to 0.2sec will add up to 0.2 seconds in front of and 0.2 seconds behind the original clip. +

auto-editor example.mp4 --margin 0.2sec

Set how cuts are made

Use the --edit option to change how auto-editor makes automated cuts.

@@ -25,11 +25,14 @@
# cut out sections where the percentage of motion is less than 2%.
 auto-editor example.mp4 --edit motion:threshold=2%
 
-# --edit is set to "audio" by default
-auto-editor example.mp4 --silent-threshold 4%
+# --edit is set to "audio:threshold=4%" by default
+auto-editor example.mp4
 
-# audio and motion thresholds are toggled independently
-auto-editor example.mp4 --edit 'audio:threshold=3% or motion:threshold=6%'
+# Different tracks can be set with different attributes
+auto-editor multi-track.mov --edit "(or audio:stream=0 audio:threshold=10%,stream=1)"
+
+# Different editing methods can be used together.
+auto-editor example.mp4 --edit "(or audio:threshold=3% motion:threshold=6%)"
 

Exporting to Editors

@@ -82,11 +85,19 @@ + +

Docs

+ + +

Copyright

Auto-Editor is under the Public Domain and includes all files besides the ones listed below. Auto-Editor was created by these people.

diff -Nru auto-editor-22w28a+ds/site/src/range_syntax.html auto-editor-22w52a+ds/site/src/range_syntax.html --- auto-editor-22w28a+ds/site/src/range_syntax.html 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/range_syntax.html 1970-01-01 00:00:00.000000000 +0000 @@ -1,32 +0,0 @@ -{{ comp.header "Auto-Editor - Range Syntax" }} - -{{ comp.nav }} -
-
-

Range Syntax

-

Range syntax is a common format used in --add-in, --cut-out, --mark-as-loud, --mark-as-silent, --set-speed-for-range options.

-

It describes a range in time based on frames starting from 0, the first frame.

-
# Cut out 1 frame
-auto-editor example.mp4 --cut-out 0,1
-
-# Cut out 60 frames
-auto-editor example.mp4 --cut-out 10,70
-
-# Cuts out no frames
-auto-editor example.mp4 --cut-out 0,0
-

Any number greater or equal to 0 can be used. The constants, `start` and `end` can also be used.

-
auto-editor example.mp4 --mark-as-loud 72,end
-

You can describe these ranges using seconds instead of frames.

-
auto-editor example.mp4 --cut-out 1secs,10secs
-

s, sec, secs, second, seconds can be used interchangeably. Explicit frame units may also be used. (f, frame, frames). Constants do not accept units.

-

You can also use negative numbers. (i.e -60,end selects the last 60 frames, -10s,-5s selects from 10 seconds to end to 5 seconds to the end)

-

Range Syntax has a nargs value of '*' meaning it can take any many ranges.

-
auto-editor example.mp4 --cut-out 0,20 45,60, 234,452
-
-

The --set-speed-for-range option has an additional argument for speed. The command:

-
auto-editor example.mp4 --set-speed-for-range 2,0,30
-

means set the speed of the video to twice as fast (2x) from the 0th frame to the 30th frame.

-
-
- - diff -Nru auto-editor-22w28a+ds/site/src/style.css auto-editor-22w52a+ds/site/src/style.css --- auto-editor-22w28a+ds/site/src/style.css 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/style.css 2022-12-31 17:05:14.000000000 +0000 @@ -4,8 +4,8 @@ } body, button, input { - font-family: BlinkMacSystemFont, -apple-system, "Segoe UI", Roboto, Oxygen, Ubuntu, - Cantarell, "Fira Sans", "Droid Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; + font-family: BlinkMacSystemFont, -apple-system, sans-serif, "Segoe UI", Roboto, Oxygen, Ubuntu, + Cantarell, "Fira Sans", "Droid Sans", "Helvetica Neue", Helvetica, Arial; } html, p, h1, h2, h3, h4, h5, html, blockquote { @@ -106,6 +106,11 @@ font-family: SFMono-Regular, ui-monospace, SF Mono, Menlo, Consolas, Liberation Mono, monospace; } +.mono { + font-family: SFMono-Regular, ui-monospace, SF Mono, Menlo, Consolas, Liberation Mono, monospace; + font-size: 110%; +} + pre { padding: 1.25rem 1.5rem; white-space: pre; @@ -149,7 +154,7 @@ justify-content: space-between; } -p, .paragraph {max-width: 75ch;} +p, .paragraph {max-width: 88ch;} p { display: block; margin-block-start: 0.4em; @@ -170,8 +175,8 @@ .huge {font-size: 5rem} .huge-lite {font-size: 4rem} h1, .bigger {font-size: 2.5rem} -h2, .big {font-size: 2rem} -h3, .medium {font-size: 1.65rem} +h2, .big {font-size: 2.1rem} +h3, .medium {font-size: 1.7rem} h4, .smedium {font-size: 1.5rem} p, li, th, pre, blockquote, .small {font-size: 1.28rem} .tiny {font-size: 1rem} @@ -180,14 +185,17 @@ .huge {font-size: 4.5rem} .huge-lite {font-size: 3.4rem} h1, .bigger {font-size: 2.3rem} - h2, .big {font-size: 1.8rem} - h3, .medium {font-size: 1.5rem} + h2, .big {font-size: 1.9rem} + h3, .medium {font-size: 1.6rem} h4, .smedium {font-size: 1.3rem} p, li, th, pre, blockquote, .small {font-size: 1.2rem} } @media (max-width: 55em){ .huge {font-size: 4rem} + .h1, .bigger {font-size: 2rem} + .h2, .big {font-size: 1.7rem} + .h3, .medium {font-size: 1.5rem} } #icon { 
@@ -205,6 +213,8 @@ h1, h2, th, .bold {font-weight: bold} .underline {text-decoration: underline} +.author, .date {line-height: 1} +.author {margin-bottom: 0.5rem} /* Column Styles */ @@ -228,12 +238,32 @@ } } +.palet-block { + padding-left: 7px; + padding-bottom: 7px; + padding-top: 7px; + margin-top: 30px; + margin-bottom: 6px; +} +.palet-block > p { + margin-top: 0; + margin-bottom: 0; +} + +.palet-var { + font-style: italic; +} + /* Light Theme Colors */ html {background-color: white} .hero {background-color: #EAEAEA} hr {background-color: #EAEAEA} -h1, h2, h3, p, code, pre, th {color: #323232} +h1, h2, h3, h4, p, code, pre, th {color: #323232} blockquote {border-left: 5px solid #EAEAEA} +.palet-block { + background-color: #F9F5F5; + border-top: 3px solid #EAEAEA; +} blockquote > p {color: #696969} pre {background-color: #F9F5F5} code {background-color: #EAEAEA} @@ -250,8 +280,12 @@ html {background-color: #262626} .hero {background-color: #4A4A4A} hr {background-color: #4A4A4A} - h1, h2, h3, p, code, pre, th {color: #F9F7F7} + h1, h2, h3, h4, p, code, pre, th {color: #F9F7F7} blockquote {border-left: 5px solid #4A4A4A} + .palet-block { + background-color: #323232; + border-top: 3px solid #4A4A4A; + } blockquote > p {color: #DBDBDB} pre {background-color: #323232} code {background-color: #4A4A4A} diff -Nru auto-editor-22w28a+ds/site/src/subcommands.html auto-editor-22w52a+ds/site/src/subcommands.html --- auto-editor-22w28a+ds/site/src/subcommands.html 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/subcommands.html 1970-01-01 00:00:00.000000000 +0000 @@ -1,86 +0,0 @@ -{{ comp.header "Auto-Editor - Subcommands" }} - -{{ comp.nav }} -
-
-

Subcommands

-

Subcommands are programs that have their own options, separate from the main auto-editor program. They typically serve an auxiliary function.

-

Examples:

-
auto-editor info example.mp4
-auto-editor grep "pattern" file_with_subtitles.mp4
-
-

Subcommands can also be called by themselves with the ae prefix:

-
aeinfo example.mp4
-aegrep "pattern" file_with_subtitles.mp4
-
-

Info

-

The info subcommand accepts filenames and outputs basic media info like duration, fps, etc.

-
auto-editor info example.mp4 resources/man_on_green_screen.gif
-
-file: example.mp4
- - video tracks: 1
-   - Track #0
-     - codec: h264
-     - pix_fmt: yuv420p
-     - time_base: 1/30000
-     - fps: 30
-     - resolution: 1280x720 (16:9)
-     - bitrate: 240 kb/s
- - audio tracks: 1
-   - Track #0
-     - codec: aac
-     - samplerate: 48000
-     - bitrate: 317 kb/s
- - container:
-   - duration: 00:00:42.45
-   - bitrate: 569 kb/s
-
-file: resources/man_on_green_screen.gif
- - video tracks: 1
-   - Track #0
-     - codec: gif
-     - pix_fmt: bgra
-     - time_base: 1/100
-     - fps: 30
-     - resolution: 1280x720 (16:9)
- - container:
-   - duration: 00:00:24.41
-   - bitrate: 1649 kb/s
-
-

You can also check for additional information with options.

-
auto-editor info example.mp4 --has-vfr
-
-file: example.mp4
- - video tracks: 1
-   - Track #0
-     - codec: h264
-     - pix_fmt: yuv420p
-     - time_base: 1/30000
-     - fps: 30
-     - resolution: 1280x720 (16:9)
-     - bitrate: 240 kb/s
- - audio tracks: 1
-   - Track #0
-     - codec: aac
-     - samplerate: 48000
-     - bitrate: 317 kb/s
- - container:
-   - duration: 00:00:42.45
-   - bitrate: 569 kb/s
-
-

Grep

-

The grep subcommand searches and displays regex matches in media subtitles.

-
# search for the pattern "the" in the Movie.mkv subtitles.
-auto-editor grep the_movie.mkv
-

List of all subcommands currently available:

-
    -
  • info
  • -
  • grep
  • -
  • levels
  • -
  • subdump
  • -
  • test (for devs, cannot be called with an ae prefix)
  • -
-
-
- - diff -Nru auto-editor-22w28a+ds/site/src/supported_media.html auto-editor-22w52a+ds/site/src/supported_media.html --- auto-editor-22w28a+ds/site/src/supported_media.html 2022-07-14 04:19:35.000000000 +0000 +++ auto-editor-22w52a+ds/site/src/supported_media.html 1970-01-01 00:00:00.000000000 +0000 @@ -1,23 +0,0 @@ -{{ comp.index_header "Auto-Editor - Supported Media" }} - -{{ comp.nav }} -
-
-

Supported Media

-

What is allowed

-
    -
  • Video files with or without audio, subtitles, embedded images
  • -
  • Audio files (with or without multiple tracks)
  • -
-

What isn't allowed

-
    -
  • Bare subtitles (Auto-Editor doesn't know what to do with them)
  • -
  • Auto-Editor does not allow an input's framerate to be less than 1
  • -
  • Media greater than 24 hours may work but is not officially supported
  • -
-

Footnotes

-

Auto-Editor uses the terms "track" and "stream" interchangeably.

-
-
- -