
[misc] Add hatch, ruff, pre-commit and improve dev docs (#7409)

Authored by: bashonly, seproDev, Grub4K

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
Committed by Simon Sawicki (via GitHub) on 2024-05-26 21:27:21 +02:00
commit e897bd8292 (parent a2e9031605)
264 changed files with 1224 additions and 1014 deletions

@@ -28,7 +28,6 @@ Fixes #
 ### Before submitting a *pull request* make sure you have:
 - [ ] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
 - [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
-- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
 ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
 - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)

@@ -53,7 +53,7 @@ jobs:
         with:
           python-version: ${{ matrix.python-version }}
       - name: Install test requirements
-        run: python3 ./devscripts/install_deps.py --include dev --include curl-cffi
+        run: python3 ./devscripts/install_deps.py --include test --include curl-cffi
       - name: Run tests
         continue-on-error: False
         run: |

@@ -15,13 +15,13 @@ jobs:
         with:
           python-version: '3.8'
       - name: Install test requirements
-        run: python3 ./devscripts/install_deps.py --include dev
+        run: python3 ./devscripts/install_deps.py --include test
       - name: Run tests
         run: |
           python3 -m yt_dlp -v || true
           python3 ./devscripts/run_tests.py core
-  flake8:
-    name: Linter
+  check:
+    name: Code check
     if: "!contains(github.event.head_commit.message, 'ci skip all')"
     runs-on: ubuntu-latest
     steps:
@@ -29,9 +29,11 @@ jobs:
      - uses: actions/setup-python@v5
        with:
          python-version: '3.8'
-     - name: Install flake8
-       run: python3 ./devscripts/install_deps.py -o --include dev
+     - name: Install dev dependencies
+       run: python3 ./devscripts/install_deps.py -o --include static-analysis
      - name: Make lazy extractors
       run: python3 ./devscripts/make_lazy_extractors.py
-     - name: Run flake8
-       run: flake8 .
+     - name: Run ruff
+       run: ruff check --output-format github .
+     - name: Run autopep8
+       run: autopep8 --diff .

.gitignore (2 changes)

@@ -67,7 +67,7 @@ cookies
 # Python
 *.pyc
 *.pyo
-.pytest_cache
+.*_cache
 wine-py2exe/
 py2exe.log
 build/

.pre-commit-config.yaml (new file, 14 lines)

@@ -0,0 +1,14 @@
+repos:
+- repo: local
+  hooks:
+  - id: linter
+    name: Apply linter fixes
+    entry: ruff check --fix .
+    language: system
+    types: [python]
+    require_serial: true
+  - id: format
+    name: Apply formatting fixes
+    entry: autopep8 --in-place .
+    language: system
+    types: [python]

.pre-commit-hatch.yaml (new file, 9 lines)

@@ -0,0 +1,9 @@
+repos:
+- repo: local
+  hooks:
+  - id: fix
+    name: Apply code fixes
+    entry: hatch fmt
+    language: system
+    types: [python]
+    require_serial: true

@@ -134,18 +134,53 @@ We follow [youtube-dl's policy](https://github.com/ytdl-org/youtube-dl#can-you-a
 # DEVELOPER INSTRUCTIONS
-Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases) or get them via [the other installation methods](README.md#installation).
+Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases), get them via [the other installation methods](README.md#installation) or directly run it using `python -m yt_dlp`.
-To run yt-dlp as a developer, you don't need to build anything either. Simply execute
-    python3 -m yt_dlp
-To run all the available core tests, use:
-    python3 devscripts/run_tests.py
+`yt-dlp` uses [`hatch`](<https://hatch.pypa.io>) as a project management tool.
+You can easily install it using [`pipx`](<https://pipx.pypa.io>) via `pipx install hatch`, or else via `pip` or your package manager of choice. Make sure you are using at least version `1.10.0`, otherwise some functionality might not work as expected.
+
+If you plan on contributing to `yt-dlp`, best practice is to start by running the following command:
+
+```shell
+$ hatch run setup
+```
+
+The above command will install a `pre-commit` hook so that required checks/fixes (linting, formatting) will run automatically before each commit. If any code needs to be linted or formatted, then the commit will be blocked and the necessary changes will be made; you should review all edits and re-commit the fixed version.
+
+After this you can use `hatch shell` to enable a virtual environment that has `yt-dlp` and its development dependencies installed.
+
+In addition, the following script commands can be used to run simple tasks such as linting or testing (without having to run `hatch shell` first):
+* `hatch fmt`: Automatically fix linter violations and apply required code formatting changes
+    * See `hatch fmt --help` for more info
+* `hatch test`: Run extractor or core tests
+    * See `hatch test --help` for more info
 See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.
+
+While it is strongly recommended to use `hatch` for yt-dlp development, if you are unable to do so, alternatively you can manually create a virtual environment and use the following commands:
+
+```shell
+# To only install development dependencies:
+$ python -m devscripts.install_deps --include dev
+
+# Or, for an editable install plus dev dependencies:
+$ python -m pip install -e ".[default,dev]"
+
+# To setup the pre-commit hook:
+$ pre-commit install
+
+# To be used in place of `hatch test`:
+$ python -m devscripts.run_tests
+
+# To be used in place of `hatch fmt`:
+$ ruff check --fix .
+$ autopep8 --in-place .
+
+# To only check code instead of applying fixes:
+$ ruff check .
+$ autopep8 --diff .
+```
 If you want to create a build of yt-dlp yourself, you can follow the instructions [here](README.md#compile).
@@ -165,12 +200,16 @@ After you have ensured this site is distributing its content legally, you can fo
 1. [Fork this repository](https://github.com/yt-dlp/yt-dlp/fork)
 1. Check out the source code with:
-        git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git
+    ```shell
+    $ git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git
+    ```
 1. Start a new git branch with
-        cd yt-dlp
-        git checkout -b yourextractor
+    ```shell
+    $ cd yt-dlp
+    $ git checkout -b yourextractor
+    ```
 1. Start with this simple template and save it to `yt_dlp/extractor/yourextractor.py`:
@@ -217,21 +256,27 @@ After you have ensured this site is distributing its content legally, you can fo
             # TODO more properties (see yt_dlp/extractor/common.py)
         }
     ```
-1. Add an import in [`yt_dlp/extractor/_extractors.py`](yt_dlp/extractor/_extractors.py). Note that the class name must end with `IE`.
+1. Add an import in [`yt_dlp/extractor/_extractors.py`](yt_dlp/extractor/_extractors.py). Note that the class name must end with `IE`. Also note that when adding a parenthesized import group, the last import in the group must have a trailing comma in order for this formatting to be respected by our code formatter.
-1. Run `python3 devscripts/run_tests.py YourExtractor`. This *may fail* at first, but you can continually re-run it until you're done. Upon failure, it will output the missing fields and/or correct values which you can copy. If you decide to add more than one test, the tests will then be named `YourExtractor`, `YourExtractor_1`, `YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not included in the count. You can also run all the tests in one go with `YourExtractor_all`
+1. Run `hatch test YourExtractor`. This *may fail* at first, but you can continually re-run it until you're done. Upon failure, it will output the missing fields and/or correct values which you can copy. If you decide to add more than one test, the tests will then be named `YourExtractor`, `YourExtractor_1`, `YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not included in the count. You can also run all the tests in one go with `YourExtractor_all`
 1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running.
 1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L119-L440). Add tests and code for as many as you want.
-1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
-        $ flake8 yt_dlp/extractor/yourextractor.py
+1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions), passes [ruff](https://docs.astral.sh/ruff/tutorial/#getting-started) code checks and is properly formatted:
+    ```shell
+    $ hatch fmt --check
+    ```
+    You can use `hatch fmt` to automatically fix problems.
 1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.8 and above. Backward compatibility is not required for even older versions of Python.
 1. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files, [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
-        $ git add yt_dlp/extractor/_extractors.py
-        $ git add yt_dlp/extractor/yourextractor.py
-        $ git commit -m '[yourextractor] Add extractor'
-        $ git push origin yourextractor
+    ```shell
+    $ git add yt_dlp/extractor/_extractors.py
+    $ git add yt_dlp/extractor/yourextractor.py
+    $ git commit -m '[yourextractor] Add extractor'
+    $ git push origin yourextractor
+    ```
 1. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
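To illustrate the trailing-comma note added in the `_extractors.py` step above, here is a minimal sketch. It uses real `yt_dlp.utils` names purely as stand-ins (a new extractor would instead list `YourExtractorIE` etc. in `yt_dlp/extractor/_extractors.py`), and assumes yt-dlp is installed:

```python
# Sketch only: the imported names are stand-ins, not part of this commit.
from yt_dlp.utils import (
    int_or_none,
    url_or_none,  # trailing comma keeps the parenthesized group multi-line under the formatter
)

print(int_or_none('42'), url_or_none('https://example.com'))
```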

@@ -27,7 +27,7 @@ clean-dist:
         yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS
 clean-cache:
         find . \( \
-                -type d -name .pytest_cache -o -type d -name __pycache__ -o -name "*.pyc" -o -name "*.class" \
+                -type d -name ".*_cache" -o -type d -name __pycache__ -o -name "*.pyc" -o -name "*.class" \
         \) -prune -exec rm -rf {} \;
 completion-bash: completions/bash/yt-dlp
@@ -70,7 +70,8 @@ uninstall:
         rm -f $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish
 codetest:
-        flake8 .
+        ruff check .
+        autopep8 --diff .
 test:
         $(PYTHON) -m pytest
@@ -151,7 +152,7 @@ yt-dlp.tar.gz: all
         --exclude '*.pyo' \
         --exclude '*~' \
         --exclude '__pycache__' \
-        --exclude '.pytest_cache' \
+        --exclude '.*_cache' \
         --exclude '.git' \
         -- \
         README.md supportedsites.md Changelog.md LICENSE \

@@ -42,17 +42,25 @@ def parse_args():
 def main():
     args = parse_args()
     project_table = parse_toml(read_file(args.input))['project']
+    recursive_pattern = re.compile(rf'{project_table["name"]}\[(?P<group_name>[\w-]+)\]')
     optional_groups = project_table['optional-dependencies']
     excludes = args.exclude or []
+
+    def yield_deps(group):
+        for dep in group:
+            if mobj := recursive_pattern.fullmatch(dep):
+                yield from optional_groups.get(mobj.group('group_name'), [])
+            else:
+                yield dep
+
     targets = []
     if not args.only_optional:  # `-o` should exclude 'dependencies' and the 'default' group
         targets.extend(project_table['dependencies'])
     if 'default' not in excludes:  # `--exclude default` should exclude entire 'default' group
-        targets.extend(optional_groups['default'])
+        targets.extend(yield_deps(optional_groups['default']))
     for include in filter(None, map(optional_groups.get, args.include or [])):
-        targets.extend(include)
+        targets.extend(yield_deps(include))
     targets = [t for t in targets if re.match(r'[\w-]+', t).group(0).lower() not in excludes]
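For clarity, a minimal standalone sketch of the recursive group expansion introduced above, using the optional-dependency groups this commit defines in pyproject.toml (the surrounding argparse/TOML plumbing is omitted):

```python
import re

# Simplified copy of the relevant optional-dependency groups from pyproject.toml
optional_groups = {
    'test': ['pytest~=8.1'],
    'static-analysis': ['autopep8~=2.0', 'ruff~=0.4.4'],
    'dev': ['pre-commit', 'yt-dlp[static-analysis]', 'yt-dlp[test]'],
}
recursive_pattern = re.compile(r'yt-dlp\[(?P<group_name>[\w-]+)\]')


def yield_deps(group):
    # Entries like `yt-dlp[test]` expand into the referenced group's dependencies
    for dep in group:
        if mobj := recursive_pattern.fullmatch(dep):
            yield from optional_groups.get(mobj.group('group_name'), [])
        else:
            yield dep


print(list(yield_deps(optional_groups['dev'])))
# ['pre-commit', 'autopep8~=2.0', 'ruff~=0.4.4', 'pytest~=8.1']
```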

@@ -4,6 +4,7 @@ import argparse
 import functools
 import os
 import re
+import shlex
 import subprocess
 import sys
 from pathlib import Path
@@ -18,6 +19,8 @@ def parse_args():
         'test', help='a extractor tests, or one of "core" or "download"', nargs='*')
     parser.add_argument(
         '-k', help='run a test matching EXPRESSION. Same as "pytest -k"', metavar='EXPRESSION')
+    parser.add_argument(
+        '--pytest-args', help='arguments to passthrough to pytest')
     return parser.parse_args()
@@ -26,15 +29,16 @@ def run_tests(*tests, pattern=None, ci=False):
     run_download = 'download' in tests
     tests = list(map(fix_test_name, tests))
-    arguments = ['pytest', '-Werror', '--tb=short']
+    pytest_args = args.pytest_args or os.getenv('HATCH_TEST_ARGS', '')
+    arguments = ['pytest', '-Werror', '--tb=short', *shlex.split(pytest_args)]
     if ci:
         arguments.append('--color=yes')
-    if pattern:
-        arguments.extend(['-k', pattern])
     if run_core:
         arguments.extend(['-m', 'not download'])
     elif run_download:
         arguments.extend(['-m', 'download'])
+    elif pattern:
+        arguments.extend(['-k', pattern])
     else:
         arguments.extend(
             f'test/test_download.py::TestDownload::test_{test}' for test in tests)
@@ -46,13 +50,13 @@ def run_tests(*tests, pattern=None, ci=False):
         pass
     arguments = [sys.executable, '-Werror', '-m', 'unittest']
-    if pattern:
-        arguments.extend(['-k', pattern])
     if run_core:
         print('"pytest" needs to be installed to run core tests', file=sys.stderr, flush=True)
         return 1
     elif run_download:
         arguments.append('test.test_download')
+    elif pattern:
+        arguments.extend(['-k', pattern])
     else:
         arguments.extend(
             f'test.test_download.TestDownload.test_{test}' for test in tests)
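A small sketch of how the new `--pytest-args`/`HATCH_TEST_ARGS` plumbing composes the final pytest invocation; the argument values here are illustrative stand-ins, not taken from the commit:

```python
import os
import shlex

# Illustrative stand-ins: `cli_pytest_args` plays the role of --pytest-args, and
# HATCH_TEST_ARGS stands in for whatever `hatch test` exports into its environment.
cli_pytest_args = None
os.environ.setdefault('HATCH_TEST_ARGS', '-x -q')

pytest_args = cli_pytest_args or os.getenv('HATCH_TEST_ARGS', '')
arguments = ['pytest', '-Werror', '--tb=short', *shlex.split(pytest_args)]
print(arguments)
# ['pytest', '-Werror', '--tb=short', '-x', '-q']
```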

@@ -66,9 +66,16 @@ build = [
     "wheel",
 ]
 dev = [
-    "flake8",
-    "isort",
-    "pytest",
+    "pre-commit",
+    "yt-dlp[static-analysis]",
+    "yt-dlp[test]",
+]
+static-analysis = [
+    "autopep8~=2.0",
+    "ruff~=0.4.4",
+]
+test = [
+    "pytest~=8.1",
 ]
 pyinstaller = [
     "pyinstaller>=6.3; sys_platform!='darwin'",
@@ -126,3 +133,146 @@ artifacts = ["/yt_dlp/extractor/lazy_extractors.py"]
 [tool.hatch.version]
 path = "yt_dlp/version.py"
 pattern = "_pkg_version = '(?P<version>[^']+)'"
+
+[tool.hatch.envs.default]
+features = ["curl-cffi", "default"]
+dependencies = ["pre-commit"]
+path = ".venv"
+installer = "uv"
+
+[tool.hatch.envs.default.scripts]
+setup = "pre-commit install --config .pre-commit-hatch.yaml"
+yt-dlp = "python -Werror -Xdev -m yt_dlp {args}"
+
+[tool.hatch.envs.hatch-static-analysis]
+detached = true
+features = ["static-analysis"]
+dependencies = []  # override hatch ruff version
+config-path = "pyproject.toml"
+
+[tool.hatch.envs.hatch-static-analysis.scripts]
+format-check = "autopep8 --diff {args:.}"
+format-fix = "autopep8 --in-place {args:.}"
+lint-check = "ruff check {args:.}"
+lint-fix = "ruff check --fix {args:.}"
+
+[tool.hatch.envs.hatch-test]
+features = ["test"]
+dependencies = [
+    "pytest-randomly~=3.15",
+    "pytest-rerunfailures~=14.0",
+    "pytest-xdist[psutil]~=3.5",
+]
+
+[tool.hatch.envs.hatch-test.scripts]
+run = "python -m devscripts.run_tests {args}"
+run-cov = "echo Code coverage not implemented && exit 1"
+
+[[tool.hatch.envs.hatch-test.matrix]]
+python = [
+    "3.8",
+    "3.9",
+    "3.10",
+    "3.11",
+    "3.12",
+    "pypy3.8",
+    "pypy3.9",
+    "pypy3.10",
+]
+
+[tool.ruff]
+line-length = 120
+
+[tool.ruff.lint]
+ignore = [
+    "E402",  # module level import not at top of file
+    "E501",  # line too long
+    "E731",  # do not assign a lambda expression, use a def
+    "E741",  # ambiguous variable name
+]
+select = [
+    "E",  # pycodestyle errors
+    "W",  # pycodestyle warnings
+    "F",  # pyflakes
+    "I",  # import order
+]
+
+[tool.ruff.lint.per-file-ignores]
+"devscripts/lazy_load_template.py" = ["F401"]
+"!yt_dlp/extractor/**.py" = ["I"]
+
+[tool.ruff.lint.isort]
+known-first-party = [
+    "bundle",
+    "devscripts",
+    "test",
+]
+relative-imports-order = "closest-to-furthest"
+
+[tool.autopep8]
+max_line_length = 120
+recursive = true
+exit-code = true
+jobs = 0
+select = [
+    "E101",
+    "E112",
+    "E113",
+    "E115",
+    "E116",
+    "E117",
+    "E121",
+    "E122",
+    "E123",
+    "E124",
+    "E125",
+    "E126",
+    "E127",
+    "E128",
+    "E129",
+    "E131",
+    "E201",
+    "E202",
+    "E203",
+    "E211",
+    "E221",
+    "E222",
+    "E223",
+    "E224",
+    "E225",
+    "E226",
+    "E227",
+    "E228",
+    "E231",
+    "E241",
+    "E242",
+    "E251",
+    "E252",
+    "E261",
+    "E262",
+    "E265",
+    "E266",
+    "E271",
+    "E272",
+    "E273",
+    "E274",
+    "E275",
+    "E301",
+    "E302",
+    "E303",
+    "E304",
+    "E305",
+    "E306",
+    "E502",
+    "E701",
+    "E702",
+    "E704",
+    "W391",
+    "W504",
+]
+
+[tool.pytest.ini_options]
+addopts = "-ra -v --strict-markers"
+markers = [
+    "download",
+]
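The `markers = ["download"]` entry registers the marker that `devscripts/run_tests.py` selects with `-m "not download"` / `-m download`. A minimal sketch of how such a marked test looks (the test functions themselves are hypothetical):

```python
import pytest


@pytest.mark.download  # selected by `-m download`, excluded by `-m "not download"`
def test_download_something():
    assert True  # placeholder for a download test


def test_core_logic():
    # unmarked tests are what the core selection (`-m "not download"`) runs
    assert 1 + 1 == 2
```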

@@ -14,12 +14,6 @@ remove-duplicate-keys = true
 remove-unused-variables = true
-[tool:pytest]
-addopts = -ra -v --strict-markers
-markers =
-    download
 [tox:tox]
 skipsdist = true
 envlist = py{38,39,310,311,312},pypy{38,39,310}

@@ -93,6 +93,7 @@ if urllib3:
         This allows us to chain multiple TLS connections.
         """
+
         def __init__(self, socket, ssl_context, server_hostname=None, suppress_ragged_eofs=True, server_side=False):
             self.incoming = ssl.MemoryBIO()
             self.outgoing = ssl.MemoryBIO()

(File diff suppressed because it is too large.)

@@ -6,10 +6,10 @@ import time
 from .common import InfoExtractor
 from ..compat import compat_str
 from ..utils import (
-    dict_get,
     ExtractorError,
-    js_to_json,
+    dict_get,
     int_or_none,
+    js_to_json,
     parse_iso8601,
     str_or_none,
     traverse_obj,

@@ -12,20 +12,21 @@ import urllib.parse
 import urllib.request
 import urllib.response
 import uuid
-from ..utils.networking import clean_proxies
 from .common import InfoExtractor
 from ..aes import aes_ecb_decrypt
 from ..utils import (
     ExtractorError,
+    OnDemandPagedList,
     bytes_to_intlist,
     decode_base_n,
     int_or_none,
     intlist_to_bytes,
-    OnDemandPagedList,
     time_seconds,
     traverse_obj,
     update_url_query,
 )
+from ..utils.networking import clean_proxies
 def add_opener(ydl, handler):  # FIXME: Create proper API in .networking

@@ -3,10 +3,10 @@ from ..utils import (
     float_or_none,
     format_field,
     int_or_none,
-    str_or_none,
-    traverse_obj,
     parse_codecs,
     parse_qs,
+    str_or_none,
+    traverse_obj,
 )

@@ -10,18 +10,18 @@ from ..aes import aes_cbc_decrypt_bytes, unpad_pkcs7
 from ..compat import compat_b64decode
 from ..networking.exceptions import HTTPError
 from ..utils import (
+    ExtractorError,
     ass_subtitles_timecode,
     bytes_to_intlist,
     bytes_to_long,
-    ExtractorError,
     float_or_none,
     int_or_none,
     intlist_to_bytes,
     long_to_bytes,
     parse_iso8601,
     pkcs1pad,
-    strip_or_none,
     str_or_none,
+    strip_or_none,
     try_get,
     unified_strdate,
     urlencode_postdata,

@@ -4,11 +4,11 @@ import re
 from .common import InfoExtractor
 from ..compat import compat_str
 from ..utils import (
+    ISO639Utils,
+    OnDemandPagedList,
     float_or_none,
     int_or_none,
-    ISO639Utils,
     join_nonempty,
-    OnDemandPagedList,
     parse_duration,
     str_or_none,
     str_to_int,

@@ -5,7 +5,7 @@ from ..utils import (
     int_or_none,
     mimetype2ext,
     parse_iso8601,
-    traverse_obj
+    traverse_obj,
 )

@@ -12,7 +12,6 @@ from ..utils import (
 )
 from ..utils.traversal import traverse_obj
 _FIELDS = '''
     _id
     clipImageSource

@@ -1,9 +1,9 @@
 from .common import InfoExtractor
 from ..utils import (
-    parse_iso8601,
+    int_or_none,
     parse_duration,
     parse_filesize,
-    int_or_none,
+    parse_iso8601,
 )

@@ -1,17 +1,13 @@
 import re
 from .common import InfoExtractor
-from ..compat import (
-    compat_urlparse,
-)
+from ..compat import compat_urlparse
 from ..utils import (
+    ExtractorError,
+    clean_html,
+    int_or_none,
     urlencode_postdata,
     urljoin,
-    int_or_none,
-    clean_html,
-    ExtractorError
 )

@@ -1,6 +1,6 @@
 from .common import InfoExtractor
-from .youtube import YoutubeIE
 from .vimeo import VimeoIE
+from .youtube import YoutubeIE
 from ..utils import (
     int_or_none,
     parse_iso8601,

@@ -1,7 +1,7 @@
 from .common import InfoExtractor
 from ..utils import (
-    determine_ext,
     ExtractorError,
+    determine_ext,
     int_or_none,
     mimetype2ext,
     parse_iso8601,

@@ -5,7 +5,7 @@ from ..utils import (
     int_or_none,
     str_or_none,
     traverse_obj,
-    unified_timestamp
+    unified_timestamp,
 )

@@ -1,7 +1,7 @@
 import re
 from .common import InfoExtractor
-from ..utils import url_or_none, merge_dicts
+from ..utils import merge_dicts, url_or_none
 class AngelIE(InfoExtractor):

@@ -1,8 +1,5 @@
 from .common import InfoExtractor
-from ..utils import (
-    str_to_int,
-    ExtractorError
-)
+from ..utils import ExtractorError, str_to_int
 class AppleConnectIE(InfoExtractor):

@@ -1,5 +1,5 @@
-import re
 import json
+import re
 from .common import InfoExtractor
 from ..compat import compat_urlparse

@@ -4,8 +4,8 @@ from ..compat import (
     compat_urllib_parse_urlparse,
 )
 from ..utils import (
-    format_field,
     float_or_none,
+    format_field,
     int_or_none,
     parse_iso8601,
     remove_start,

@@ -2,10 +2,10 @@ import datetime as dt
 from .common import InfoExtractor
 from ..utils import (
+    ExtractorError,
     float_or_none,
     jwt_encode_hs256,
     try_get,
-    ExtractorError,
 )

@@ -2,8 +2,8 @@ import base64
 from .common import InfoExtractor
 from ..compat import (
-    compat_urllib_parse_urlencode,
     compat_str,
+    compat_urllib_parse_urlencode,
 )
 from ..utils import (
     format_field,

@@ -2,12 +2,12 @@ import math
 from .common import InfoExtractor
 from ..compat import (
-    compat_urllib_parse_urlparse,
     compat_parse_qs,
+    compat_urllib_parse_urlparse,
 )
 from ..utils import (
-    format_field,
     InAdvancePagedList,
+    format_field,
     traverse_obj,
     unified_timestamp,
 )

@@ -2,11 +2,11 @@ import json
 from .common import InfoExtractor
 from ..utils import (
-    try_get,
-    int_or_none,
-    url_or_none,
     float_or_none,
+    int_or_none,
+    try_get,
     unified_timestamp,
+    url_or_none,
 )

@@ -1,5 +1,4 @@
 from .common import InfoExtractor
 from ..utils import (
     int_or_none,
     str_or_none,

@@ -1,5 +1,5 @@
-from .common import InfoExtractor
 from .amp import AMPIE
+from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,

@@ -1,3 +1,4 @@
+from .common import InfoExtractor
 from ..utils import (
     mimetype2ext,
     parse_duration,
@@ -5,7 +6,6 @@ from ..utils import (
     str_or_none,
     traverse_obj,
 )
-from .common import InfoExtractor
 class BloggerIE(InfoExtractor):

@@ -1,7 +1,6 @@
 import re
 from .common import InfoExtractor
 from ..utils import (
     extract_attributes,
 )

@@ -1,9 +1,5 @@
 from .common import InfoExtractor
-from ..utils import (
-    js_to_json,
-    traverse_obj,
-    unified_timestamp
-)
+from ..utils import js_to_json, traverse_obj, unified_timestamp
 class BoxCastVideoIE(InfoExtractor):

@@ -6,7 +6,7 @@ from ..utils import (
     classproperty,
     int_or_none,
     traverse_obj,
-    urljoin
+    urljoin,
 )

@@ -12,10 +12,11 @@ from ..compat import (
 )
 from ..networking.exceptions import HTTPError
 from ..utils import (
+    ExtractorError,
+    UnsupportedError,
     clean_html,
     dict_get,
     extract_attributes,
-    ExtractorError,
     find_xpath_attr,
     fix_xml_ampersands,
     float_or_none,
@@ -29,7 +30,6 @@ from ..utils import (
     try_get,
     unescapeHTML,
     unsmuggle_url,
-    UnsupportedError,
     update_url_query,
     url_or_none,
 )

@@ -5,14 +5,14 @@ from .youtube import YoutubeIE
 from ..utils import (
     ExtractorError,
     extract_attributes,
+    find_xpath_attr,
     get_element_html_by_id,
     int_or_none,
-    find_xpath_attr,
     smuggle_url,
-    xpath_element,
-    xpath_text,
     update_url_query,
     url_or_none,
+    xpath_element,
+    xpath_text,
 )

@@ -1,4 +1,5 @@
 import json
+
 from .common import InfoExtractor
 from ..networking.exceptions import HTTPError
 from ..utils import (

@@ -1,11 +1,11 @@
+import re
 from .common import InfoExtractor
 from ..utils import (
     parse_iso8601,
     qualities,
 )
-import re
 class ClippitIE(InfoExtractor):

@@ -1,5 +1,6 @@
 import base64
 import collections
+import functools
 import getpass
 import hashlib
 import http.client
@@ -21,7 +22,6 @@ import urllib.parse
 import urllib.request
 import xml.etree.ElementTree
-from ..compat import functools  # isort: split
 from ..compat import (
     compat_etree_fromstring,
     compat_expanduser,

@@ -1,7 +1,7 @@
 from .theplatform import ThePlatformFeedIE
 from ..utils import (
-    dict_get,
     ExtractorError,
+    dict_get,
     float_or_none,
     int_or_none,
 )

@@ -6,6 +6,7 @@ import time
 from .common import InfoExtractor
 from ..networking.exceptions import HTTPError
 from ..utils import (
+    ExtractorError,
     determine_ext,
     float_or_none,
     int_or_none,
@@ -13,7 +14,6 @@ from ..utils import (
     parse_age_limit,
     parse_duration,
     url_or_none,
-    ExtractorError
 )

@@ -1,10 +1,12 @@
 import re
 from .common import InfoExtractor
+from .senategov import SenateISVPIE
+from .ustream import UstreamIE
 from ..compat import compat_HTMLParseError
 from ..utils import (
-    determine_ext,
     ExtractorError,
+    determine_ext,
     extract_attributes,
     find_xpath_attr,
     get_element_by_attribute,
@@ -19,8 +21,6 @@ from ..utils import (
     str_to_int,
     unescapeHTML,
 )
-from .senategov import SenateISVPIE
-from .ustream import UstreamIE
 class CSpanIE(InfoExtractor):

@@ -1,6 +1,6 @@
 from .common import InfoExtractor
-from ..utils import unified_timestamp
 from .youtube import YoutubeIE
+from ..utils import unified_timestamp
 class CtsNewsIE(InfoExtractor):

@@ -1,8 +1,8 @@
 from .common import InfoExtractor
 from ..compat import compat_str
 from ..utils import (
-    int_or_none,
     determine_protocol,
+    int_or_none,
     try_get,
     unescapeHTML,
 )

@@ -1,8 +1,8 @@
 import re
 from .common import InfoExtractor
-from ..utils import ExtractorError, clean_html, int_or_none, try_get, unified_strdate
 from ..compat import compat_str
+from ..utils import ExtractorError, clean_html, int_or_none, try_get, unified_strdate
 class DamtomoBaseIE(InfoExtractor):

@@ -1,11 +1,11 @@
-import re
 import os.path
+import re
 from .common import InfoExtractor
 from ..compat import compat_urlparse
 from ..utils import (
-    url_basename,
     remove_start,
+    url_basename,
 )

@@ -1,5 +1,4 @@
 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     parse_resolution,

@@ -2,9 +2,9 @@ import re
 from .common import InfoExtractor
 from ..utils import (
+    ExtractorError,
     determine_ext,
     extract_attributes,
-    ExtractorError,
     int_or_none,
     parse_age_limit,
     remove_end,

@@ -2,10 +2,10 @@ import re
 from .common import InfoExtractor
 from ..utils import (
-    int_or_none,
-    unified_strdate,
     determine_ext,
+    int_or_none,
     join_nonempty,
+    unified_strdate,
     update_url_query,
 )

@@ -1,5 +1,5 @@
-import time
 import hashlib
+import time
 import urllib
 import uuid

@@ -4,8 +4,8 @@ import uuid
 from .common import InfoExtractor
 from ..networking.exceptions import HTTPError
 from ..utils import (
-    determine_ext,
     ExtractorError,
+    determine_ext,
     float_or_none,
     int_or_none,
     remove_start,

@@ -2,8 +2,8 @@ import re
 from .common import InfoExtractor
 from ..utils import (
-    int_or_none,
     NO_DEFAULT,
+    int_or_none,
     parse_duration,
     str_to_int,
 )

@@ -5,9 +5,9 @@ import urllib.parse
 from .common import InfoExtractor
 from ..compat import compat_urlparse
 from ..utils import (
+    ExtractorError,
     clean_html,
     extract_attributes,
-    ExtractorError,
     get_elements_by_class,
     int_or_none,
     js_to_json,

@@ -2,15 +2,15 @@ import re
 from .common import InfoExtractor
 from ..utils import (
-    determine_ext,
     ExtractorError,
+    determine_ext,
     int_or_none,
     join_nonempty,
     js_to_json,
     mimetype2ext,
+    parse_iso8601,
     try_get,
     unescapeHTML,
-    parse_iso8601,
 )

@@ -1,10 +1,10 @@
 from .common import InfoExtractor
+from ..compat import compat_urlparse
 from ..utils import (
     int_or_none,
     unified_strdate,
     url_or_none,
 )
-from ..compat import compat_urlparse
 class DWIE(InfoExtractor):

@@ -4,15 +4,15 @@ import re
 from .common import InfoExtractor
 from ..compat import compat_str
 from ..utils import (
+    ExtractorError,
     clean_html,
     determine_ext,
-    ExtractorError,
     dict_get,
     int_or_none,
     merge_dicts,
-    parse_qs,
     parse_age_limit,
     parse_iso8601,
+    parse_qs,
     str_or_none,
     try_get,
     url_or_none,

@@ -8,7 +8,7 @@ from ..utils import (
     qualities,
     traverse_obj,
     unified_strdate,
-    xpath_text
+    xpath_text,
 )

@@ -1,8 +1,7 @@
 from .common import InfoExtractor
 from ..utils import (
-    parse_duration,
     js_to_json,
+    parse_duration,
 )

@@ -1,8 +1,8 @@
 from .common import InfoExtractor
 from ..utils import (
-    xpath_text,
-    parse_duration,
     ExtractorError,
+    parse_duration,
+    xpath_text,
 )

@@ -1,12 +1,6 @@
 from .common import InfoExtractor
 from ..compat import compat_str
-from ..utils import (
-    parse_iso8601,
-    ExtractorError,
-    try_get,
-    mimetype2ext
-)
+from ..utils import ExtractorError, mimetype2ext, parse_iso8601, try_get
 class FancodeVodIE(InfoExtractor):

@@ -3,9 +3,9 @@ import re
 from .common import InfoExtractor
 from ..compat import compat_etree_fromstring
 from ..utils import (
+    int_or_none,
     xpath_element,
     xpath_text,
-    int_or_none,
 )

@@ -1,7 +1,7 @@
 from .common import InfoExtractor
 from ..utils import (
-    int_or_none,
     float_or_none,
+    int_or_none,
 )

@@ -1,5 +1,4 @@
 from .common import InfoExtractor
 from ..utils import (
     int_or_none,
     traverse_obj,

@@ -2,10 +2,10 @@ from .common import InfoExtractor
 from ..compat import compat_str
 from ..networking.exceptions import HTTPError
 from ..utils import (
+    ExtractorError,
+    int_or_none,
     qualities,
     strip_or_none,
-    int_or_none,
-    ExtractorError,
 )

@@ -7,7 +7,7 @@ from ..utils import (
     parse_codecs,
     parse_duration,
     str_to_int,
-    unified_timestamp
+    unified_timestamp,
 )

@@ -10,7 +10,7 @@ from ..utils import (
     int_or_none,
     str_or_none,
     traverse_obj,
-    try_get
+    try_get,
 )

@@ -1,4 +1,5 @@
 import re
+
 from .common import InfoExtractor
 from ..utils import (
     float_or_none,

@@ -4,7 +4,7 @@ import types
 import urllib.parse
 import xml.etree.ElementTree
-from .common import InfoExtractor  # isort: split
+from .common import InfoExtractor
 from .commonprotocols import RtmpIE
 from .youtube import YoutubeIE
 from ..compat import compat_etree_fromstring

@@ -1,7 +1,7 @@
 from .common import InfoExtractor
 from ..utils import (
-    bool_or_none,
     ExtractorError,
+    bool_or_none,
     dict_get,
     float_or_none,
     int_or_none,

@@ -1,5 +1,4 @@
 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     urlencode_postdata,

@@ -3,9 +3,9 @@ import urllib.parse
 from .common import InfoExtractor
 from ..utils import (
+    ExtractorError,
     determine_ext,
     extract_attributes,
-    ExtractorError,
     int_or_none,
     parse_qs,
     smuggle_url,

@@ -3,16 +3,16 @@ import re
 from .adobepass import AdobePassIE
 from ..compat import compat_str
 from ..utils import (
-    int_or_none,
-    determine_ext,
-    parse_age_limit,
-    remove_start,
-    remove_end,
-    try_get,
-    urlencode_postdata,
     ExtractorError,
-    unified_timestamp,
+    determine_ext,
+    int_or_none,
+    parse_age_limit,
+    remove_end,
+    remove_start,
     traverse_obj,
+    try_get,
+    unified_timestamp,
+    urlencode_postdata,
 )

@@ -4,7 +4,7 @@ from ..utils import (
     determine_ext,
     str_or_none,
     unified_timestamp,
-    url_or_none
+    url_or_none,
 )
 from ..utils.traversal import traverse_obj

@@ -1,10 +1,7 @@
 import hashlib
 from .common import InfoExtractor
-from ..utils import (
-    ExtractorError,
-    try_get
-)
+from ..utils import ExtractorError, try_get
 class GofileIE(InfoExtractor):

@@ -1,11 +1,8 @@
+import json
+
 from .common import InfoExtractor
 from ..compat import compat_str
-from ..utils import (
-    try_get,
-    url_or_none
-)
-import json
+from ..utils import try_get, url_or_none
 class GoToStageIE(InfoExtractor):

@@ -2,11 +2,11 @@ import re
 from .common import InfoExtractor
 from ..utils import (
-    xpath_text,
-    xpath_element,
     int_or_none,
     parse_duration,
     urljoin,
+    xpath_element,
+    xpath_text,
 )

@@ -1,7 +1,7 @@
 from .common import InfoExtractor
 from ..utils import (
-    determine_ext,
     KNOWN_EXTENSIONS,
+    determine_ext,
     str_to_int,
 )

@@ -1,8 +1,8 @@
 from .common import InfoExtractor
 from ..compat import compat_str
 from ..utils import (
-    clean_html,
     ExtractorError,
+    clean_html,
     int_or_none,
     merge_dicts,
     parse_count,

@@ -4,8 +4,8 @@ from .common import InfoExtractor
 from ..networking import Request
 from ..networking.exceptions import HTTPError
 from ..utils import (
-    clean_html,
     ExtractorError,
+    clean_html,
     int_or_none,
     parse_age_limit,
     try_get,

@@ -2,8 +2,8 @@ import hashlib
 import random
 import re
-from ..compat import compat_urlparse, compat_b64decode
+from .common import InfoExtractor
+from ..compat import compat_b64decode, compat_urlparse
 from ..utils import (
     ExtractorError,
     int_or_none,
@@ -13,8 +13,6 @@ from ..utils import (
     update_url_query,
 )
-from .common import InfoExtractor
 class HuyaLiveIE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.|m\.)?huya\.com/(?P<id>[^/#?&]+)(?:\D|$)'

@@ -1,6 +1,6 @@
 from .common import InfoExtractor
-from ..utils import ExtractorError, str_or_none, traverse_obj, unified_strdate
 from ..compat import compat_str
+from ..utils import ExtractorError, str_or_none, traverse_obj, unified_strdate
 class IchinanaLiveIE(InfoExtractor):

@@ -1,3 +1,4 @@
+from .bokecc import BokeCCBaseIE
 from ..compat import (
     compat_b64decode,
     compat_urllib_parse_unquote,
@@ -6,10 +7,9 @@ from ..compat import (
 from ..utils import (
     ExtractorError,
     determine_ext,
-    update_url_query,
     traverse_obj,
+    update_url_query,
 )
-from .bokecc import BokeCCBaseIE
 class InfoQIE(BokeCCBaseIE):

@@ -3,12 +3,12 @@ import time
 from .common import InfoExtractor
 from ..utils import (
+    ExtractorError,
     determine_ext,
     js_to_json,
-    urlencode_postdata,
-    ExtractorError,
     parse_qs,
-    traverse_obj
+    traverse_obj,
+    urlencode_postdata,
 )

@@ -4,20 +4,16 @@ import re
 import time
 from .common import InfoExtractor
-from ..compat import (
-    compat_str,
-    compat_urllib_parse_urlencode,
-    compat_urllib_parse_unquote
-)
 from .openload import PhantomJSwrapper
+from ..compat import compat_str, compat_urllib_parse_unquote, compat_urllib_parse_urlencode
 from ..utils import (
+    ExtractorError,
     clean_html,
     decode_packed_codes,
-    ExtractorError,
     float_or_none,
     format_field,
-    get_element_by_id,
     get_element_by_attribute,
+    get_element_by_id,
     int_or_none,
     js_to_json,
     ohdave_rsa_encrypt,

@@ -1,12 +1,11 @@
 import re
 from .common import InfoExtractor
 from ..utils import (
     int_or_none,
     str_or_none,
     traverse_obj,
-    urljoin
+    urljoin,
 )

@@ -1,23 +1,22 @@
 import json
-from .common import InfoExtractor
 from .brightcove import BrightcoveNewIE
+from .common import InfoExtractor
 from ..compat import compat_str
 from ..utils import (
+    JSON_LD_RE,
+    ExtractorError,
     base_url,
     clean_html,
     determine_ext,
     extract_attributes,
-    ExtractorError,
     get_element_by_class,
-    JSON_LD_RE,
     merge_dicts,
     parse_duration,
     smuggle_url,
     try_get,
-    url_or_none,
     url_basename,
+    url_or_none,
     urljoin,
 )

@@ -1,9 +1,9 @@
 import functools
-import urllib.parse
-import urllib.error
 import hashlib
 import json
 import time
+import urllib.error
+import urllib.parse
 from .common import InfoExtractor
 from ..utils import (

@@ -1,8 +1,8 @@
 import hashlib
 import random
-from ..compat import compat_str
 from .common import InfoExtractor
+from ..compat import compat_str
 from ..utils import (
     clean_html,
     int_or_none,

@@ -1,5 +1,6 @@
 import re
+from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     clean_html,
@@ -9,9 +10,8 @@ from ..utils import (
     smuggle_url,
     traverse_obj,
     try_call,
-    unsmuggle_url
+    unsmuggle_url,
 )
-from .common import InfoExtractor
 def _parse_japanese_date(text):

@@ -1,8 +1,5 @@
 from .common import InfoExtractor
-from ..utils import (
-    ExtractorError,
-    unified_strdate
-)
+from ..utils import ExtractorError, unified_strdate
 class JoveIE(InfoExtractor):

@@ -1,6 +1,6 @@
 import base64
-import re
 import json
+import re
 from .common import InfoExtractor
 from ..utils import (

@@ -3,8 +3,8 @@ from ..networking.exceptions import HTTPError
 from ..utils import (
     ExtractorError,
     int_or_none,
-    strip_or_none,
     str_or_none,
+    strip_or_none,
     traverse_obj,
     unified_timestamp,
 )

@@ -4,18 +4,18 @@ import re
 from .common import InfoExtractor
 from ..compat import (
-    compat_urlparse,
     compat_parse_qs,
+    compat_urlparse,
 )
 from ..utils import (
-    clean_html,
     ExtractorError,
+    clean_html,
     format_field,
     int_or_none,
-    unsmuggle_url,
+    remove_start,
     smuggle_url,
     traverse_obj,
-    remove_start
+    unsmuggle_url,
 )

@@ -1,7 +1,7 @@
-import time
+import hashlib
 import random
 import string
-import hashlib
+import time
 import urllib.parse
 from .common import InfoExtractor

(Some files were not shown because too many files have changed in this diff.)